
AI ruined bug bounties

  • Published on Mar 16, 2026
  • 🏫 MY COURSES
    Sign up for my FREE 3-Day C Course: lowlevel.academy
  • 🧙‍♂️ HACK YOUR CAREER
    Wanna learn to hack? Join my new CTF platform: stacksmash.io
  • 🔥 COME HANG OUT
    Check out my other stuff: lowlevel.tv

Comments •

  • @LowLevelTV
    @LowLevelTV  Month ago +840

    its okay low level don't cry

    • @LowLevelTV
      @LowLevelTV  Month ago +168

      ok i will be strong for daniel

    • @3RR0RNULL
      @3RR0RNULL Month ago +12

      ibs or something idk i forgot the acronym

    • @themsayl0
      @themsayl0 Month ago +117

      @LowLevelTV why is bro talking to himself

    • @kayleekayt3306
      @kayleekayt3306 Month ago +28

      The only way to be sure the other participant isn't just statistical weights.

    • @LowLevelTV
      @LowLevelTV  Month ago +110

      its not talking to yourself if you have ✨multiple personalities ✨

  • @richardedmondson9434
    @richardedmondson9434 Month ago +1685

    I feel like we're only moments away from "CVE 10/10: sudo allows root to impersonate arbitrary users with impunity"

    • @AnovSiradj
      @AnovSiradj Month ago +7

      Lmao

    • @OhNotThat
      @OhNotThat Month ago +253

      "major vulnerability: if an attacker somehow figures out the password for the root account, sudo could be used to bypass security!"

    • @ftoh
      @ftoh Month ago +120

      «HID devices allow control over your computer and must be removed.»

    • @El-Aminoh
      @El-Aminoh Month ago +41

      The root vulnerability here is that the computer even turned on

    • @GodlyTank
      @GodlyTank Month ago +4

      lmao

  • @noanyobiseniss7462
    @noanyobiseniss7462 Month ago +1292

    That is a social engineering attack on security safeguards.

    • @highdefinist9697
      @highdefinist9697 Month ago +9

      Unfortunately (or fortunately?), that does not even seem to be the case here...
      As in, bogus reports in the absence of any reward are very plausibly an "attack" as such - and that is definitely a huge issue overall that will become even larger in the future. But in this specific case, the reward itself created bad incentives, leading people to simply chase it in a selfish and destructive way.

    • @noanyobiseniss7462
      @noanyobiseniss7462 Month ago +15

      @highdefinist9697 Could be both. An auto-reply system needs to be installed at the HackerOne level to detect and blackhole these, and that should get rid of the slop once they are baited into wasting a month on similarly nonsensical replies.

    • @Adiee5Priv
      @Adiee5Priv Month ago +5

      ​@noanyobiseniss7462 really easier said than done though

    • @JohnDoe-my5ip
      @JohnDoe-my5ip Month ago +27

      It’s just spam tbh. Same problem that’s plagued the internet since forever.

    • @AmIThePresident
      @AmIThePresident Month ago +29

      More of a DOS

  • @alexsanderrain2980
    @alexsanderrain2980 Month ago +740

    The Sloppa claims another soul.

    • @instantchow
      @instantchow Month ago

      Slopia? Never heard of her.

    • @pri19055
      @pri19055 Month ago

      @instantchow they're talking about ai slop

    • @tentaklaus9382
      @tentaklaus9382 Month ago

      @pri19055 So instantchow actually said "SlopIA" while the OP said "Sloppa". If anything, instantchow changed it to include A and I in the name, as a nod to understanding what the OP meant along with the AI ties.

    • @ludmilopez6982
      @ludmilopez6982 Month ago

      I cried XD

    • @Darthilandia
      @Darthilandia Month ago

      @instantchow you must be slow

  • @Digan
    @Digan Month ago +408

    Really wish you would link articles that you utilize in your videos so the creators of them can get traffic.

    • @data043
      @data043 Month ago +22

      This

    • @Dm3qXY
      @Dm3qXY Month ago +9

      ideas like this died in the early 2000s, maybe the 2010s... you're about 20 years late for proposing such things

    • @peq42_
      @peq42_ Month ago

      up

    • @НААТ
      @НААТ Month ago

      🤣🤣

    • @Malaphor2501
      @Malaphor2501 Month ago +31

      @Dm3qXY Nah, there's a big movement for crediting the creators out there. You can thank all the freebooters for that.

  • @danardalin
    @danardalin Month ago +129

    "AI" has enabled a whole new generation of script kiddies. The crash can't come fast enough.

    • @jolomuhmad2925
      @jolomuhmad2925 Month ago +15

      This generation makes the past generation look like geniuses by comparison.

    • @thefrub
      @thefrub Month ago +6

      Even if the trillion dollar companies crash, the tools are still out there. Pandora is out of the box, the cat's eaten the bag

    • @GigaChad-d3e9j
      @GigaChad-d3e9j Month ago

      @jolomuhmad2925 yep, am from this generation.

    • @LOGOS33LOVE
      @LOGOS33LOVE Month ago +1

      its crashing soon!11

    • @thefrub
      @thefrub Month ago +4

      @LOGOS33LOVE Two more weeks bro, trust me

  • @phillipsilva6290
    @phillipsilva6290 Month ago +117

    Hackerone accounts that send ai slop should get three strikes and then blacklisted forever

    • @Chrisxantixemox
      @Chrisxantixemox Month ago +20

      Nah, one strike and you’re done. AI slop should be outright banned.
      If you use AI to help you test your ideas, or build your case, that’s fine. If scammers just want to release agents to fully autonomously submit slop, then nah.

    • @Liriq
      @Liriq Month ago +21

      Accounts aren't people. Banning accounts has no real effect; it can be worked around. Millions of accounts are banned every day.

    • @Chrisxantixemox
      @Chrisxantixemox Month ago +1

      @Liriq Sure, but what you are describing is illegal, which raises the risk for individuals/crews to run these types of bots considerably.

    • @Liriq
      @Liriq Month ago +1

      ​@Chrisxantixemox to great effect.....

    • @TheCyberDiary
      @TheCyberDiary Month ago

      ban india and 90% of this would stop

  • @highdefinist9697
    @highdefinist9697 Month ago +134

    "I understand you are upset. I'm happy to listen" - Well... few sentences make me more upset, and less willing to listen.

    • @zea_64
      @zea_64 Month ago +20

      "I understand you are upset. I will now attempt to make that upsetness worse by not actually understanding why and just continuing to waste your time."

    • @Imperial_Squid
      @Imperial_Squid Month ago +6

      That and the "think of

    • @pcachu
      @pcachu Month ago

      There are few more reliable ways to make me go full Hard-R Luddite. "SILENCE CLANKER, I AM A DIVINE BEING, YOU ARE A FILTHY MACHINE, DO NOT SOIL MY HOLY TONGUE WITH YOUR SLOP."

    • @dhmacher
      @dhmacher Month ago

      "You've never seen me upset"

    • @serg_sel7526
      @serg_sel7526 Month ago

      @Imperial_Squid It can't. The "educated adult" style of writing happens to be in the database less often than reddit posts and trendy articles.

  • @ToyKeeper
    @ToyKeeper Month ago +241

    On my latest open-source project, so far, 100% of all patches I've been sent were AI-generated and nowhere near the quality required to get merged. One broke within 3 keystrokes, another made the version numbers go in random order instead of counting up, another added "unit tests" which just played an animation of "tests" turning green but didn't actually test a single line of the project's code. And the slop patches just go on and on like that. It's a real problem.

    • @ksln
      @ksln Month ago +7

      I'm in pain. Oh.... F....

    • @moonasha
      @moonasha Month ago

      that sounds really annoying to deal with. Guess yet another thing Jon Blow was right about...

    • @444knuffelmac
      @444knuffelmac Month ago +1

      That sounds horrible. I don't have anything online yet (because I like coding myself), but now I don't think I want to (or at least I won't make merge requests possible). I hope you get some actual useful patches and not just AI slop. I wish you good luck with your project(s).
      Do you know how to block them? Like a filter, before you even have to look that deeply? And did any of the AI slop, if developed by an actual human (so better coded for your project), have anything useful? Like the one that broke very fast: if made not to break in that way, would it have been useful, or just complete slop all the way through?

    • @nitherin6440
      @nitherin6440 Month ago +24

      Tests turning green sounds like a real winner! 💚

    • @pwrdprompt
      @pwrdprompt Month ago +4

      I honestly haven't had or seen aigen submissions that bad on that level. Sorry about that man, I bet it sucks.

  • @Expllosaoriginal
    @Expllosaoriginal Month ago +1667

    We need a "pay to report, given back if confirmed" option in these platforms

    • @dannyarcher6370
      @dannyarcher6370 Month ago +46

      That's a great idea.

    • @dwncasted
      @dwncasted Month ago +10

      True.

    • @Reelix
      @Reelix Month ago

      Polymarket tried that.
      What happens is that you have a paid third-party arbitration service that votes whichever way will get them the most money.
      Eg:
      Bug Bounty: $1,000 for Criticals (And $500 buy in requirement).
      Person: Hey - Arbitration Service? I'll give you $600 if you vote this true.
      Arbitration Service: Deal!
      Person: This (Obviously not a bug) issue is Critical - Here's my $500 buy in requirement.
      Service: There is no way this is a bug - You just threw paint against a wall.
      Person: Arbitration?
      Arbitration: This is definitely a Critical bug - No doubt about it!

    • @autohmae
      @autohmae Month ago

      I think it should still be a bug bounty, but yes, let's start with charging some money up front, but reward more when it's a real bug

    • @ZuZaFamily
      @ZuZaFamily Month ago +84

      i just suggested that! YOU PAY 10 usd to report and get the bounty if confirmed! and if its slop you lose it

  • @daniels-mo9ol
    @daniels-mo9ol Month ago +267

    "AI will take our jobs", not because AI is any good at anything, but because AI just causes a work overload on humans and they hit a wall and can't work anymore.

    • @data043
      @data043 Month ago +5

      people are so brain broken from bad uses of ai they just apply it to "AI" in a general sense

    • @data043
      @data043 Month ago +4

      also all of this could literally be 99% mitigated with a simple AI text filter check and ip ban system

    • @infinitivez
      @infinitivez Month ago

      @data043 That's the thing I don't understand - why isn't HackerOne policing their platform better? It's absurd to allow this behavior from any AI agent. There should be ZERO explanation given when needing to remove an AI submission, and AI agents should be forced to identify themselves. Sorry bots, we're not going to go into a deep dive as to why you're wrong, so that you can better yourselves using our skills. You've already stolen infinite amounts of code and comments from us, to do that. Now you want more of our labor, and bounty payouts? HA! get bent

    • @Timka09
      @Timka09 Month ago +13

      @data043 "AI text filter check" doesn't exist, not a reliable one anyway. While it's pretty easy to realize that you're talking to a popular LLM from a short conversation, detecting any LLM at all from the initial message alone isn't really possible.

    • @ElMarcoh
      @ElMarcoh Month ago

      If you have ever worked with agents, they also cause work overload among themselves and cause lots of tokens to be wasted on retarded AI discussions about the most basic things, obviously someone has to pay for those tokens.

  • @lemagreengreen
    @lemagreengreen Month ago +17

    It's the blatant use and copy pasting of the entire ChatGPT output that really gets me lol.

  • @minirop
    @minirop Month ago +46

    it's funny that hackerone's main page shows left, right and center "humans + AI"

  • @SXZ-dev
    @SXZ-dev Month ago +1121

    Another day of AI destroying software and tech bros calling it a fuckwin

    • @felixjohnson3874
      @felixjohnson3874 Month ago +35

      BS reports, PRs, etc. have literally always been a problem. AI just allowed more people to make them in the same way it's allowed more people to produce code in general.
      The issue isn't "AI", it's people's inherent overconfidence and/or the lack of penalty for submitting bad issues/PRs.
      For most people, even if the code is just barely functional, that's still a win because now they can create code that works and they literally did not know how before. Only in the context of professional projects/organizations does low code-quality present real, meaningful issues.
      It seems like a very easy fix here would be to have every account have a running "reputation" that starts very low, is increased when the project accepts your report/request as valid, is decreased when your report/request is dismissed on account of it being poorly thought out or clearly BS, and then your reputation affects the payout you get from bounties. Then you could even say "our project should not be visible to anyone with a reputation below 30" so you don't even need to look at low quality reports because they literally cannot be submitted.
      It's honestly not a particularly hard problem to solve (conceptually at least, I recognize something like this has its own implementation work behind it) but people need to be honest about it instead of just throwing their hands up and blaming AI.
      Realistically, the "most correct" system would involve a security-deposit. You are paid for finding bugs because that takes time, effort, and skill. But, so does reviewing bug reports. So, realistically, you should need to pay the project to consider your report, and then be paid yourself if it's considered valid. This would (likely) be enough to solve the problem. Project creators decide what their time is worth and get compensated for it, anyone of any reputation can submit a report to any project and still get a full payout, and people blowing out hundreds of bogus AI reports a day go bankrupt. And, again, this is completely justified because triage takes time & effort just like making bug reports or PRs does. There is value being created & lost in both sides of this trade so money technically *_should_* be flowing both ways.
      Buuuuuut people like to see money as something unique or special (& not just an abstract way of exchanging value that itself has nebulously strong correlation with time & effort) and would probably throw a fit at the idea of needing to make a deposit to file a bug report. (Even tho, again, triage isn't free! That's real work & effort that *_you_* are demanding someone *_else_* put in!)
      Either way, AI isn't unique in that it makes bad reports, it just enables people to do what they've already been doing but more. And since making reports has basically always been "free", people don't care about submitting bad ones. Whether you make reports "not free" with a reputation system or a deposit system, either way you'd force people to incur personal loss (either lost money today or lost potential money tomorrow) from submitting bad reports.
      AI isn't special or unique, it just lets people do more of the shit they were already doing.

    • @TheDOSGamer
      @TheDOSGamer Month ago +61

      @felixjohnson3874 That's an absolutely ridiculous take. The issue is 100% AI.

    • @cauthrim4298
      @cauthrim4298 Month ago +65

      @felixjohnson3874 "The problem was always there but AI exacerbated it to an unmanageable degree. AI is not at fault". So much text for so much lunacy.

    • @felixjohnson3874
      @felixjohnson3874 Month ago +11

      @TheDOSGamer If you think humans weren't creating their own shitty reports just fine before AI, you haven't been paying attention. Maybe try actually reading the post instead of just desperately finding an excuse to say "AI bad" for internet points.
      The issue is that there's no penalty for making bad reports. Realistically this is an issue which should have already been solved, but since so (relatively) few people even knew enough to make a report at all, it was small enough to be ignored.

    • @felixjohnson3874
      @felixjohnson3874 Month ago +11

      @cauthrim4298 See previous. Actually read the post before finding an excuse to yell "AI bad" and bleach your knickers.
      The issue behind bad AI reports and bad human reports are identical; there is no penalty to making a bad report. There is literally no downside to submitting a low-quality report but there IS a potential return.
      If there were a penalty then en masse bad reports, by people OR AI, would *_both_* be either unprofitable or outright bankrupting.
      You don't need a special "AI focussed solution" because AI isn't special. Bad AI reports are a result of the same exact problem as bad human reports.
      You can either accept that and fix both problems, or just whine about how "AI bad" and fix nothing.
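The deposit-plus-reputation scheme proposed in this thread can be sketched in a few lines. Everything below (the fee, thresholds, and payout rules) is illustrative, not anything HackerOne or any platform actually implements:

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    """A submitter's standing on a hypothetical bounty platform."""
    reputation: int = 10   # new accounts start low
    balance: float = 0.0   # net earnings, deposits included

DEPOSIT = 20.0        # refundable triage fee (illustrative amount)
MIN_REPUTATION = 30   # a project could hide itself from accounts below this

def submit_report(reporter: Reporter, bounty: float, is_valid: bool) -> float:
    """Charge the deposit up front; settle once triage finishes.

    Valid reports refund the deposit, pay the bounty, and raise
    reputation. Invalid reports forfeit the deposit (paid to the
    project for its triage time) and lower reputation, so mass
    slop steadily bleeds both money and standing.
    """
    reporter.balance -= DEPOSIT
    if is_valid:
        reporter.balance += DEPOSIT + bounty
        reporter.reputation += 5
    else:
        reporter.reputation = max(0, reporter.reputation - 10)
    return reporter.balance
```

Under this model, one valid report nets the full bounty, while a run of bogus submissions loses $20 apiece and quickly drops the account below any project's visibility threshold.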

  • @aoi_mizma
    @aoi_mizma Month ago +352

    Every bug report should probably require some deposit money to submit at this point...

    • @lazyh0rse
      @lazyh0rse Month ago +6

      that's a really great way to gatekeep it. otherwise this just doesn't work...

    • @concinnus
      @concinnus Month ago +7

      By deposit you mean to be refunded on confirmation that it's valid? Sounds like a great idea.

    • @amunak_
      @amunak_ Month ago +12

      @concinnus Yeah. Or even when the authors just feel like it (i.e. someone who seems like an actual person makes an honest mistake when figuring out the code). But there should be some investment required imo.

    • @zea_64
      @zea_64 Month ago +37

      Daniel talked about that idea, but it has the issue of either not being a high enough barrier to entry in richer countries or being a prohibitively high barrier to entry in poorer countries.

    • @DustinMaki
      @DustinMaki Month ago +5

      The legit time spent finding real bugs, researching potential exploits, and reporting through proper channels IS an investment, undertaken without any guarantee of payment. What deposit amount is low enough that it won't ever make a researcher feel unappreciated and push them over the edge into becoming an attacker? They are holding an unknown viable exploit, remember. Why did bug bounties become a thing in the first place? To avert this situation.

  • @_GntlStone_
    @_GntlStone_ Month ago +17

    6:08 the AI followup "I feel disrespected" is the best part.

  • @electrified0
    @electrified0 Month ago +97

    These programs have a dual purpose, as not only do they find and fix security exploits, they also provide a legitimate revenue stream for people with the skills to exploit vulnerable systems and effectively removes them from the threat pool. If bug bounties go away, not only will we have less fixes, we'll have more attackers.

    • @BitTheByte
      @BitTheByte Month ago

      True. It’s a shame unskilled utterly worthless “prompters” are ruining this avenue. Oh well.

    • @katanah3195
      @katanah3195 Month ago +1

      @electrified0 The entire reason bug bounties were started in the first place was because paying a bounty for reported exploits is cheaper than having the bugs exploited.

    • @Zaluskowsky
      @Zaluskowsky Month ago

      @electrified0 this.

  • @coliimusic
    @coliimusic Month ago +10

    9:37 This right here is the part that all the people in the back needed to hear but probably didn't watch long enough to see

  • @akaMasterSplinter
    @akaMasterSplinter Month ago +280

    Tale as old as time, this is why we can't have nice things.

    • @username_miller_1
      @username_miller_1 Month ago

      What exactly is the tale you are referring to? I'm not saying that I disagree, but I honestly don't know what you are talking about. As it stands right now, you're falling into the exact category that the video is criticizing: contributing stuff that is easily agreeable without really saying anything of substance.

    • @NibsNiven
      @NibsNiven Month ago +12

      ​@username_miller_1 Greed predates humanity and ruins a lot of nice things.

    • @ChadDoebelin
      @ChadDoebelin Month ago +5

      we don't hate clankers enough.

    • @username_miller_1
      @username_miller_1 Month ago +1

      @NibsNiven I doubt greed is behind those commits. If you're chasing money, there are more promising grifts (most of them also include AI). Bug bounties are not really lucrative. These guys are either (1) hostile, (2) naive (over-confident AI bros) or (3) after something they can list in their CV. I don't see money playing a big role here.

    • @potential900
      @potential900 Month ago +1

      Setup an LLM fueled spam filter?

  • @Uerdue
    @Uerdue Month ago +22

    I like to think it's the same on the darknet forums where they sell the exploits...

    • @PTS1337
      @PTS1337 Month ago +5

      That'll be poetic justice.

  • @KaX321
    @KaX321 Month ago +9

    We need offensive firewalls.
    A cookie if you get the reference.

  • @kc-fr3qp
    @kc-fr3qp Month ago +137

    I figured this was gonna happen. I remember when he first wrote about AI bug and slop reports. Honestly, good for him I guess.

    • @IceWotor
      @IceWotor Month ago +2

      Fr, I first found out about these ai slop reports when the reports themselves were reporting vulnerabilities that don't even exist

    • @CyanRooper
      @CyanRooper Month ago +1

      "Source?"
      "I made it the fuck up."

  • @junkname9983
    @junkname9983 Month ago +3

    slop doubling down with more slop without having someone look at it. it's incredible

  • @lizardkeeper100
    @lizardkeeper100 Month ago +78

    Is this the way humanity is going, trying to destroy all that is good in this world with laziness and AI?

    • @zea_64
      @zea_64 Month ago +5

      s/humanity/capitalism

    • @Sonny_McMacsson
      @Sonny_McMacsson Month ago +2

      @zea_64 Exactly the sort of lazy and useless response that demonstrates the problem. Still, humanity.

    • @jkobain
      @jkobain Month ago +1

      @zea_64 state the nature of your urge.

    • @potential900
      @potential900 Month ago +10

      Grifters have always been around.

    • @lizardkeeper100
      @lizardkeeper100 Month ago +7

      @potential900 Yeah I just fear we are making it easier to target the vulnerable. I can envision a world that prevents that sort of behavior in their innovations but alas I guess we aren't there.

  • @CRCinAU
    @CRCinAU Month ago +4

    In before Google submits a bug report and demands a fix asap....

  • @ttfh3500
    @ttfh3500 Month ago +160

    * "part of the bounty may be used to hire killers to go after those who made AI-slop posts"
    - The Curl staff

    • @maximilionus
      @maximilionus Month ago +12

      Justified

    • @kajoma1782
      @kajoma1782 Month ago +7

      Wait is this real if so then based

    • @ataarono
      @ataarono Month ago

      slop posting should be punishable by public execution change my mind

    • @LudicrosityIndustries
      @LudicrosityIndustries Month ago +2

      please do this!!!

    • @iennefaLsh
      @iennefaLsh Month ago

      ​@kajoma1782 Sounds too extreme for it to be real, but if it convinces you, then the world has gone mad (assuming it hasn't already).

  • @nufosmatic
    @nufosmatic Month ago +18

    6:37 - As someone who’s been introduced as “a talented troubleshooter” and who frequently submitted bug reports as part of my day job with no bonus remuneration except the adulation of my peers, I think I appreciate the financial incentive…

  • @BennyColyn
    @BennyColyn Month ago +10

    The slop shall continue until morale improves

  • @TheDOSGamer
    @TheDOSGamer Month ago +56

    Bug bounties will all go away. They were meant for human experts to try to help. Not for every person with a Claude terminal to try to make money despite having no idea what they're doing.

    • @thedave1771
      @thedave1771 Month ago +1

      I still get bounty requests sent to my security.txt contact because my SPF record ends in ?all rather than -all. Or because a particular domain doesn't use HSTS.
      They've partially died off, probably because nobody ever paid for anything this stupid, but a few still attempt it.
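For context, the `?all` vs `-all` difference is just the final qualifier in an SPF TXT record: `?all` is a neutral assertion that says nothing about unlisted senders, while `-all` tells receivers to hard-fail mail from them. A hypothetical record for an example domain (the `include:` target is made up for illustration):

```
; hypothetical SPF records for example.com, zone-file syntax
example.com. IN TXT "v=spf1 include:_spf.example.net ?all"   ; neutral: asserts nothing about other senders
example.com. IN TXT "v=spf1 include:_spf.example.net -all"   ; fail: receivers should reject unlisted senders
```

Whether `?all` is actually a vulnerability worth a bounty is exactly the kind of judgment these boilerplate reports skip.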

    • @tekchip
      @tekchip Month ago

      Nah, bug bounties will get spam filters. Probably LLMs checking the LLMs' work. Where would we be if everyone had just up and quit email the moment people started sending spam via email, text, or any number of other comms systems? Curl maintainer is just going full old-man-yells-at-cloud.

    • @TheDOSGamer
      @TheDOSGamer Month ago

      @tekchip Why waste money on an LLM to filter out LLM submissions? Bug bounties will just move in house, using AI to churn out reports that are reviewed internally.

    • @JamesPeters68
      @JamesPeters68 6 days ago

      @tekchip Then the LLMs will reject anything not written by an LLM. Louis Rossmann recently did a video about his website getting delisted until he changed his wording to what Gemini recommended.

  • @breezyx976
    @breezyx976 Month ago +114

    should make it cost a deposit to submit a bug in the first place, to cover the verification process. That will kill AI slop right away.

    • @meh.7539
      @meh.7539 Month ago +11

      If you've got some unlucky person who's living in poverty but finds a valuable vulnerability that could help change their position a little bit, do we really want to put _that_ kind of financial barrier in place?
      That seems like it's going to limit it to people that have the ability to pay to play. That doesn't really seem fair, either.

    • @unkarsthug4429
      @unkarsthug4429 Month ago +14

      It also might incentivize people to be much more careful, i.e. assuming "I must just be missing something." It's a catch-22.

    • @aaaasdfdsa
      @aaaasdfdsa Month ago

      @meh.7539 Either these people are drowned in slop / bounty programs cease to exist, or they have to pay a deposit of $5-10. In one case they at least have a chance.

    • @harleyspeedthrust4013
      @harleyspeedthrust4013 Month ago +1

      @meh.7539 yes, of course we do. i don't believe that a person with a computer can't scrounge together $5, which i think would be a reasonable submission fee. if a poor person really has found a valuable vulnerability then the $5 they put together somehow could turn into hundreds or thousands for them.

    • @deepspacewanderer9897
      @deepspacewanderer9897 Month ago

      @meh.7539 We may not *want* to do it, but... the only way to stop a for-profit activity is to make it economically inefficient. You can't defeat the market; the best you can do is try to point it in a certain direction.

  • @crusaderanimation6967
    @crusaderanimation6967 Month ago +12

    SLOP MUST FLOW !

  • @g.paudra28
    @g.paudra28 Month ago +5

    I just realized he's using the same web design as fitgirl

  • @steveftoth
    @steveftoth Month ago +11

    To me, LLMs have proven that they are better at causing conflict than at solving problems.

    • @JoshCP527
      @JoshCP527 3 days ago

      Neural net still has its limits being in a digital space. Always trying to imitate the analog world sold as artificial intelligence 😂

  • @SteltekOne
    @SteltekOne Month ago +58

    Essentially, Hackerone needs to ban every single one of these submitters for their bad faith submissions. (Especially the ones that double down after being told they are wrong.)
    Alternatively, some sort of collateral, like $20, that you do not get back if an independent panel finds you guilty of bad faith submissions. (Pay it to the project that was attacked in part or in full.)

    • @jkobain
      @jkobain Month ago +7

      Damage is done. With enough capacity, this all can easily go DDoS.

    • @potential900
      @potential900 Month ago +2

      Thoughts on LLM driven spam filters?

    • @uzlonewolf
      @uzlonewolf Month ago +10

      The problem there is those slop generators will just create new accounts and keep doing it. I liked an idea someone had upthread where it requires a $10 deposit to submit a bug report and you get it back if your submission wasn't slop.

    • @gorillaau
      @gorillaau Month ago +3

      @uzlonewolf Also, $20 is nothing if you have a vendetta or want to prank a project. A spare $1,000 gets you 50 bug reports, perhaps all under different names, each taking maybe five minutes to read through, 10 to understand what they are trying to say, and another five for an initial response. That's a thousand minutes wasted, or over 16 hours spent triaging.

    • @bytefu
      @bytefu Month ago +1

      @gorillaau 16 hours for $1000? I'd take that.

  • @robertf4832
    @robertf4832 Month ago +15

    Hooray new low level post

  • @Monster_Rancher
    @Monster_Rancher Month ago +63

    if you use curl to download ms-edge would that be a bug?

  • @OniiHvH
    @OniiHvH Month ago +18

    Pretty sure "XBow" is pronounced "Crossbow" or "Ex-bow"

    • @inverlock
      @inverlock Month ago

      It's pronounced sybau (sorry 😂)

  • @ragectl
    @ragectl Month ago +59

    this is why people who keep promoting vibe coding are destroying the environment for all future learners. A whole generation of content creators pulling the ladder up behind them as the walls against the AI slop they are promoting go up.

    • @KolbBlue
      @KolbBlue Month ago +3

      @ragectl *cough* *cough* ThePrimeagen

    • @irmofs
      @irmofs Month ago

      @KolbBlue I thought he was always against AI. I haven't watched a ton but I got the "vibe" that he doesn't like AI or vibe coding

    • @dycedargselderbrother5353
      @dycedargselderbrother5353 Month ago +5

      @irmofs He waffles on AI. He acknowledges the pitfalls but also realizes the potential gains. I think he's in the position a lot of people are: it can be a helpful tool if you're already an expert but it's not a replacement for expertise. However, because using AI to generate answers is low friction, it's going to be used to replace expertise. The worst part is, providing counters to AI slop takes a lot more effort than refuting "I don't know" or some code pasted from Stack Overflow without modifications for the issue at hand.

    • @refsOnReality
      @refsOnReality Month ago +1

      no content creator should make available the whole version of what they create. Post a demo on social media and give a link to the rest on your website to users who can solve a captcha or have a verified account. Make sure AI companies get no more than 10% of what you have created.

  • @UNgineering
    @UNgineering Month ago +26

    as soon as you see "thank you for your feedback. you're absolutely right" - ban that user immediately.

    • @prhasn
      @prhasn Month ago +1

      exactly. if platforms like hackerone ban those users, we can potentially reduce the slop.
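A crude first pass at the "ban on stock phrases" idea could look like the sketch below. The marker list is illustrative, and any such filter will have false positives, so it should flag reports for human review rather than auto-ban:

```python
# Phrases characteristic of unedited chatbot output. This list is
# illustrative and incomplete; real triage would need much more signal.
SLOP_MARKERS = (
    "you're absolutely right",
    "thank you for your feedback",
    "i understand you are upset",
    "i apologize for the confusion",
)

def looks_like_slop(report: str) -> bool:
    """Flag a report for human review if it contains stock
    LLM apology/affirmation phrases (case-insensitive)."""
    text = report.lower()
    return any(marker in text for marker in SLOP_MARKERS)
```

For example, `looks_like_slop("Thank you for your feedback. You're absolutely right...")` trips the filter, while a plain technical description does not.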

  • @schism15
    @schism15 Month ago +8

    4:40 Couldn't even be bothered to rewrite the response so it doesn't sound like it came directly from a chatbot.

    • @cianmoriarty7345
      @cianmoriarty7345 Month ago +1

      I know, it's wild, I had to do a double take. For a second I thought I was back talking to Chat GPT trying to get it to explain something and I've caught it out spinning absolute nonsense again.
      I'm not saying it isn't good at explaining things, but I am going to say that if you don't know a single thing about the topic you're trying to get it to teach you, and so can't spot the occasional mistake it makes, you can absolutely be misled quite far.

    • @FlatlandsSurvivor
      @FlatlandsSurvivor Month ago

      @cianmoriarty7345 Kurzgesagt has a video where they talk about trying to use AI to assist in research and learning, and literal astrophysicists asking the chat bot for facts about brown dwarfs got misled by convincing lies.
Moral of the story: no one is "smart enough" to tell when it's making something up or sourcing unreliable data, because of course it's all presented in exactly the same way.

  • @runningbird501
    @runningbird501 Month ago +3

    It's like thinking that everything a bomb sniffing dog sniffs is a bomb and not a urine stain on a tree.

  • @ewejinyeap
    @ewejinyeap Month ago +5

    The incentive structure will eventually change to accommodate. Pay to report and or reputation thresholds.

  • @qasimstatic
    @qasimstatic Month ago +3

    lowlevel we want more comedy videos like this!

  • @RamenEnjoyer404
    @RamenEnjoyer404 Month ago +4

bug bounty platforms really need to make creating an account much harder, paired with bans if you get too many false flags. There needs to be a way to hold the AI slop people accountable when they don't even do any due diligence. Also, that chain example you showed was absolutely insane

  • @DavidLindes
    @DavidLindes Month ago +15

    So, in summary, the "AI" platforms are a DDoS on society at large. Gotcha.
    Hmm, what do we do with a DDoS? Kick them off the Internet?? Arrest them? Other?

    • @tekchip
      @tekchip Month ago +1

      This isn't a DDOS this is spam. You get a spam filter. Probably LLMs checking other LLMs work and punting anything implausible. Doesn't seem like this should be a particularly hard problem to solve.

    • @KarmicMishap
      @KarmicMishap Month ago +1

      @tekchip an LLM is the stupidity that generated the original report. Why exactly would another stupid LLM suddenly have the ability to recognize such reports as slop?

    • @DavidLindes
      @DavidLindes Month ago +2

      @tekchip "this is spam" does not preclude "this is a DDoS". If you have some other argument for why you think it's not a DDoS, feel free to make it, but... also, "spam" is mostly a concept that's specific to e-mail, and certainly your solution of "get a spam filter" isn't something that one can do in the contexts the LLMs are getting used these days. How do I install, for example, a spam filter on the phone line for some business I call on the phone? And yet, some of them are being answered by systems that sure seem to be employing LLMs. Which _denies_ my ability to get the _service_ of actually talking to a human about something, and such systems are being _distributed_ all over the place. Ergo, a Distributed Denial of Service, or DDoS.

  • @glurp1er
    @glurp1er Month ago +1

One of the worst things about AI is its self-confidence, even when it's obviously wrong.

  • @kevincampos3418
    @kevincampos3418 Month ago +5

This isn’t just in security. Everyone in the office is trying to shove Copilot into everything. They think prompts are magic, that AI is never wrong, and that you can get guaranteed 100% accurate data…which is impossible

    • @rtharris84
      @rtharris84 Month ago

Just saw a commercial where M$ said that out loud (Copilot being able to make someone an expert).

  • @Strenkoo
    @Strenkoo Month ago +1

    AI continuing to only have the ability to make things worse.

  • @alexmipego
    @alexmipego Month ago +3

    Easy to fix: Every bug report must be accompanied by an exploit tool.

    • @oink-747
      @oink-747 Month ago +1

      There are vulnerability classes which are clearly dangerous, but not (yet) exploitable in the wild. Many of the real attacks are of swiss-cheese type - multiple unrelated vulnerabilities stack together, each one being non-exploitable in isolation.

  • @langnostic5157
    @langnostic5157 Month ago +16

    AI hallucinates, dont trust it. Ever.

    • @magicclippy101
      @magicclippy101 Month ago +3

      Does this include all of humanity or just the religious ones?

    • @langnostic5157
      @langnostic5157 Month ago +1

      ​@magicclippy101 I'm a misanthrope and Security adjacent SWE, you think i trust humans? I code all day for a reason.

  • @zimbu_
    @zimbu_ Month ago +1

    It's tough to run a message board whenever a new type of spambot gets released.

  • @graealex
    @graealex Month ago +4

    Slop is the right word. I would assume with the right prompts and tool access, you could probably find plenty of exploits - not in CURL maybe, since that's been audited for like a million years now already.

  • @BboyKeny
    @BboyKeny Month ago +11

    I remember when the bottleneck for AI was encoding expertise and ensuring precision. Who knew the bottleneck was actually the high standards people expected

  • @hugme9592
    @hugme9592 Month ago +1

I literally just put up a LinkedIn post about this. I'm a CISO and I am seeing them more and more every week. It's taking up way too much of my team's time.

  • @webdev1019
    @webdev1019 Month ago +5

I'm guessing XBow found a bug in the web framework being used, simply spammed every single web app that uses that framework, and then put them on their site. AI is slop. It's nothing else. AI understands nothing. It's just fancy math that guesses at everything it does. To put it one way, FUCK AI.

  • @lionlol
    @lionlol Month ago

    The fact that the slop is being generated, identified, and filtered is a huge win for the data scaling bottleneck of training data.

  • @crisenbici
    @crisenbici Month ago +6

AIs are made to answer with the most correct thing they can, meaning that if the answer is only 1% correct, that's the best they can do and that's what they'll answer. Anyone with minimal tech knowledge knows this, but vibe coders are now in every area, including security, which is bad.

  • @menloe471
    @menloe471 13 days ago

    I have only been following this channel for a few weeks, but something about how you explain things in your videos helps me understand things I never could before!

  • @OVERKILL_PINBALL
    @OVERKILL_PINBALL Month ago +2

    When AI cluttered the world... forever

  • @anon_y_mousse
    @anon_y_mousse Month ago

    I had not heard that definition for the acronym CEO before, but I like it, and I think I'm going to use it.

  • @jacob_90s
    @jacob_90s Month ago +1

    What we might need is to have it where in order to submit a report, you have to pay a dollar, and then if it gets verified, you get it back plus a reward. Doesn't fully solve it, but it at least puts an upfront cost to submitting a report.

    • @volkris
      @volkris Month ago

      More than a dollar. It needs to be enough to cover the cost of reviewing the report.

  • @bobdobalinaf3981
    @bobdobalinaf3981 Month ago +1

    Kinda feel bad for submitting a bug report that said "tendies" in every field now

  • @simo47768
    @simo47768 Month ago +17

Every AI post, picture, or movie should mention that it is AI; otherwise, a fine or jail

    • @johannweber5185
      @johannweber5185 Month ago

Where would you draw the line? In your opinion, should AI-supported de-noising of images or sound already be considered an AI image? This is a real question; I do not know where to draw the line.

    • @simo47768
      @simo47768 Month ago

      @johannweber5185
It is not easy.
It sends a message to users: mention AI when you use AI.
Also, people should verify themselves before they can participate in a bounty. It is very easy nowadays: you pay 0.01 cents to a bank and they know you are who you say you are.
Also, I think that with pictures and movies there should be some digital signing using a certificate. Every phone and camera should have it. Then you know for sure an image or movie is not AI generated.
I guess the line is when you know for sure, and it can be proven, that something is AI generated just to mislead people.
Fine: 1000 dollars.

    • @JustAGooseman
      @JustAGooseman Month ago

@johannweber5185 I think it should be required to be disclosed regardless. Even if it's being used as it should be (as a tool to help human users), I don't really care, but it should still be disclosed. Transparency is one of the best and most important things a company or vendor of goods should be practicing.

    • @aonodensetsu
      @aonodensetsu Month ago +1

@bagel_deficient AI image generation *is* inherently modification, for the currently popular models at least: a noise image is used and then the AI modifies it to 'be closer to what the prompt describes', a dozen times in a row

    • @aonodensetsu
      @aonodensetsu Month ago

@bagel_deficient the best kind of correct!

  • @gabetower
    @gabetower 4 days ago

If Monty Python were making stuff today, the word we would use instead of "Spam" would be "Slop"

  • @konkitoman
    @konkitoman Month ago +3

    AI is really useful for weather prediction, speech to text and any problem that you don't know how to solve, and using an approximation is good enough.
    But Large Language Models and General Purpose Transformers are hell, they should never exist!
    Any Generative AI from your own data set is also fine.

    • @kitsuneneko2567
      @kitsuneneko2567 Month ago

@konkitoman the problem is not the LLM, it's people who use them without understanding how to use them properly.

  • @uendarkarplips7263
    @uendarkarplips7263 Month ago +2

the India slop was bad enough; now the AI slop on top of it just makes it impossible to crowdsource security testing.

  • @joostlekkerkerker9182
    @joostlekkerkerker9182 Month ago +20

    Daniel did an AWESOME talk at FOSDEM last Sunday in Brussels! Definitely a must watch

    • @blackandcold
      @blackandcold Month ago +3

      this one? ruclips.net/video/6wYSwZ20NJU/video.html
      Open Source security in spite of AI - Daniel Stenberg

    • @joostlekkerkerker9182
      @joostlekkerkerker9182 Month ago

@blackandcold Yes!

  • @SMRTWorld
    @SMRTWorld Month ago

    Hiiii I have a binary file which I have downloaded from github and I want to know which algorithm is used in a binary file

  • @TheMyx231
    @TheMyx231 Month ago +10

    You gotta love that last comment "I used to love curl and recommended it to others, now I'm going to take my business elsewhere". Oh brother...you are absolutely right.

    • @SianaGearz
      @SianaGearz Month ago +3

      Business? How does one even buy a curl. Or sell a curl.

    • @TheMyx231
      @TheMyx231 Month ago

      @SianaGearz hmmm I've heard you can buy a curl at the hairdresser's, and I'm sure some people on the internet would be happy to buy one. Oh wait

    • @foobarf8766
      @foobarf8766 Month ago +2

      I demand all my money back. Every last cent of zero dollars

  • @xxlarrytfvwxx9531
    @xxlarrytfvwxx9531 Month ago

    It's like the snake farmers.

  • @themsayl0
    @themsayl0 Month ago +8

    holy sub5 nerd its over. brutal.

  • @benoitgaudeul4200
    @benoitgaudeul4200 Month ago +1

Would it be helpful if you had to pay to submit a bug report to a bug bounty program? Like 5 to 10% of the bounty.

  • @nufosmatic
    @nufosmatic Month ago +3

5:44 - This is not something that just happens with AIs. You’ve never had a junior developer who couldn’t see past the end of his nose and was insistent that a non-problem was a problem? AIs just do it faster and almost never take “no” for an answer…

    • @JürgenErhard
      @JürgenErhard Month ago

      A junior dev can (most of the time) learn. "AI" can't.

    • @nufosmatic
      @nufosmatic Month ago

      @JürgenErhard An AI can learn. It can't know that what it's learning is crap...

  • @H8KU
    @H8KU Month ago +2

    only 10% of India has an Internet connection, it would have happened one way or another

  • @Delicioushashbrowns
    @Delicioushashbrowns Month ago +9

    6:06 ain't no way brother no freaking way the idiot replied with "i used to love using curl" no fucking way brother i would've crashed out that moment that instant. I'm fuming right now even lmao

    • @trofl
      @trofl Month ago

      Hah, good catch! Any human security researcher that knows anything about cURL would understand that nearly every internet-connected device is using libcurl under the hood. So, uh, good luck definitely-real-person on avoiding using it 🙃

    • @Whitecroc
      @Whitecroc Month ago

      It's just an LLM response to perceived rudeness.

  • @jeremyb5619
    @jeremyb5619 Month ago

    It seems like the quality of everything will just continue to freefall. The enshittification will continue until morale improves

  • @JonahTsai
    @JonahTsai Month ago +13

"AI is dumb as hell." Finally somebody else said it publicly. I started calling it ANI (Artificial No Intelligence) before the term AGI was coined.

    • @meh.7539
      @meh.7539 Month ago +2

      we'll get GTA 6 before we get to AGI.

    • @kamoroso94
      @kamoroso94 Month ago +1

      When do you think the term AGI was coined?

    • @aonodensetsu
      @aonodensetsu Month ago +1

      @kamoroso94 like 3 hours ago in his worldview probably

    • @JonahTsai
      @JonahTsai Month ago

      @kamoroso94 late 90’s, quite a few 3 hrs before attack dogs show up.

  • @StarbucksCoffey5280

    Dead internet theory is getting more and more real and less theoretical.

  • @CC21200
    @CC21200 Month ago +5

    How about, you pay a deposit to submit, which you lose if you're too sloppy?

  • @gorillaau
    @gorillaau Month ago +1

    Can we get article links in the description, please?

  • @horryportier7539
    @horryportier7539 Month ago +4

if you constantly submit slop reports you should be blacklisted.
People use AI, and when something goes wrong they're like "but that's the AI's fault". Yeah, no shit, but you're still responsible for it.

    • @trofl
      @trofl Month ago

      And what global ID exists that you can blacklist to keep these authors from creating a new account?

    • @horryportier7539
      @horryportier7539 Month ago

@trofl it at least makes their life a bit harder

  • @RvLeshrac
    @RvLeshrac Month ago

    The problem is that those people don't get banned, with zero-tolerance.

  • @systemloc
    @systemloc Month ago +4

This AI slop bug bounty submission problem is a very easy fix: charge 20 bucks per submission.

    • @jonjonr6
      @jonjonr6 Month ago +2

      Yeah. This. If it's legit, they get their money back.
      Deter bad faith actors.
      Only people who genuinely believe they have something significant will submit.

  • @anise515
    @anise515 Month ago +1

    Worth mentioning this comes at the same time as AISLE's AI system discovered and patched 5 CVEs in curl and 12 out of the 12 CVEs patched in OpenSSL 3.6.1 including one that was high severity (CVE-2025-15467)

    • @simond.455
      @simond.455 Month ago

      According to AISLE's blog post, they manually verified and reproduced each vulnerability before submitting it.
      There's no mention of how many false positives the AI found. I'm absolutely certain it was a non-zero number. 🙂

  • @johanngambolputty5351

    Is there no way to whitelist only high reputation accounts? That have a track record of good reports?

    • @meh.7539
      @meh.7539 Month ago +7

      That's a bad way to handle it, too. How do you get noticed as a new bug bounty hunter if they're preferring people who've already been established?
      Not to say that I know how to fix this problem, either.

    • @triffid0hunter
      @triffid0hunter Month ago +1

      The issue there is that you end up excluding folk who stumbled across a bug while doing other stuff, and don't have a history of contributing to the project but do know how to write a proper bug report.

    • @johanngambolputty5351
      @johanngambolputty5351 Month ago

@triffid0hunter I mean rep across projects. It might be annoying to build rep in the first place, but that might be better than not accepting reports at all...

  • @lesath7883
    @lesath7883 Month ago +1

    6:07 That last reply from the AI slop reporter at the end is even more infuriating.
    Devs should have an option to close and block those timewasters.

  • @古狐
    @古狐 Month ago +4

That is so disappointing. People just abused AI to ruin the bug bounty, wasting developers' time and hoping for free money without doing any of the hard research work.

  • @AS-uo9dd
    @AS-uo9dd Month ago +1

Learning AI got me into cybersecurity. I built my own terminal so I could have a model run commands live in the terminal, just like Warp but still classic terminal style, without realizing the risk.

  • @SianaGearz
    @SianaGearz Month ago +3

    Bottom up open source, community run projects shouldn't offer bug bounties - that's what corporations do, figuring, to do the same kind of work they'd have to hire an engineer for X hours and that's worth money. If you have a legitimate bug, just report it normally and it will be fixed.

  • @jenesuispasbavard
    @jenesuispasbavard Month ago

    At this point Hackerone should just charge for every submission.

  • @jhonseagull5127
    @jhonseagull5127 Month ago

It can't see the strcpy guard clause, like it can't see the r's in strawberry

  • @CoreInterrupt
    @CoreInterrupt Month ago +3

    Yet another example of 'this is why we can't have nice things'.. incentives will always run the world. That isn't the issue. Setting up proper incentive structures is, and in this case better moderation. People don't do things that don't benefit them. They move on to something that does. Some people don't care who they hurt, so the answer can not be take away all good things due to some taking advantage. Everything will be gamed, every time. Humans like puzzles and some have no ethical boundaries or 2nd order understanding.
    Change the incentive structure. Make it punitive for those who game the system. Not in a whack a mole type of way but in a ground up design way. If the beneficial path is doing things the right way, it incentivizes only doing things the right way. That's not just true of this situation but every institution and even every social group.

  • @WackoMcGoose
    @WackoMcGoose Month ago +1

    "CEO: Code Emitting Organism"... I've gotta remember that one 🤣

  • @AndreGreeff
    @AndreGreeff Month ago +2

    this reminds me of a saying my Grandfather used to tell me all the time when I was younger: "be careful what you look for, because you might actually find it.."
    my point is, if someone opens an OSS project repo to use AI to look for a buffer overflow error, it will "read the intent" in a way, and hallucinate one into existence... convincingly, too.. at least, to the average person. *this* is why we will always need real "bug hunters", with real intelligence, and real creativity to think outside the box that is "the product". AI is just a tool. it's fine, *IF it's used appropriately*. 🤔

  • @faik...
    @faik... Month ago +1

I think the best way to fix this would be to put a price on submitting bug reports.
Not sure if anyone would be willing to take the risk when submitting, or if there would be people using the bug bounty as gambling,
but I think that would be better than completely shutting it down.

  • @amadensor
    @amadensor Month ago +4

    At work, we have AI code reviews to allow the developer to clean up most of the stuff before it wastes the time of other developers. Yesterday, it found 5 bugs. 1 was inconsistent capitalization in one variable name, so it was not like the others, and that made it harder to read. The other 4 were just wrong, like a variable was supposedly null because no value had been assigned to it, even though 8 lines before the bug, it was assigned a value. The other three were the same types. Race condition error: the data isn't written before a message is sent, even though 10 lines before the message send, it did the DB write. Usually it's better than this, but not yesterday. So, my point is: these things are useful for finding potential issues, but you really need to check the output and dig in, understanding the code to know if the reports are real.

    • @Luzilyo
      @Luzilyo Month ago

      Yeah, same as always. There's two ways that AI can be useful for complex topics. First, it can be good for discussing hypothetical ideas that don't really have any impact on the real world. Like me trying to design a more realistic genetically engineered superhuman type of thing - just for my own kind of, like, "enhanced daydreaming" type of stuff. It requires a lot of in-depth knowledge about human biology and anatomy that I simply don't have the time to learn, so it's better to just do it with AI, since it doesn't really actually matter if what the AI says is actually correct or not, as long as it at least sounds somewhat plausible. Second is, like you said, if the AI is being used by ppl who actually already have an in-depth understanding of the topic so they can quickly notice it when the AI says something that just isn't true.
      If someone who doesn't actually understand a topic tries to use AI in a more professional capacity with real world impact it's never gonna go well. That would be the equivalent of me just copypasting my "enhanced daydreaming" to some sort of biology research lab thing, like, bruh. They'd probably absolutely *destroy* literally everything I've come up with cuz, like, it sounds plausible but only because I don't really have any idea what I'm even talking about. I'm sure that someone who actually works with the topic of genes and hormones and all the other stuff would immediately notice issues that just make the entire thing completely impossible - not just because we don't yet have the neccessary tools for required fine-tuned gene editing but because I'm sure there's a lot of details that the AI actually missed.

  • @joshuabrunetti2001

    clankers gonna clank

  • @MattHudsonAtx
    @MattHudsonAtx Month ago +4

    The slop is a manifestation of economic disparity

  • @lurkydismal298
    @lurkydismal298 Month ago

but would Rust have avoided this?

  • @GodOfChaos_HeXa
    @GodOfChaos_HeXa Month ago +5

I don't know how big the bug-bounty payouts are, but why not just require a payment before reporting a bug? If the report is obvious slop, you lose the money; if it's a bug that wouldn't get a payout because it's not bad enough, you just get your money back; and if it's an actual bug, you get the bug bounty plus your initial payment. This would most likely stop AI slop reports, and if it doesn't, the excess money could be used to pay extra staff to review the reports.