Our Terrible Future And Open Source | Prime Reacts

  • Published: 30 Oct 2024

Comments • 650

  • @andythedishwasher1117
    @andythedishwasher1117 7 months ago +779

    Yeah LLM harassment needs to be a reportable category in open source communities. You're totally right that this runs the risk of drastically wasting the time of developers we all depend on being productive and responsive.

    • @KevinJDildonik
      @KevinJDildonik 7 months ago +64

      I'm so terrified how many people blindly accept AI. Like legitimately I've seen funerals where people give a eulogy written by AI. Which, gross. And the very first sentence is something obviously false, like it hallucinated a middle name the guy didn't have. So the whole document is obviously garbage. And the audience all clap and say the AI did a really good job. Someone reading this who has an audience, please write an article on this topic: AI is getting exponentially better at convincing humans to use it, but its factual accuracy if anything is getting worse.

    • @andrejjjj2008
      @andrejjjj2008 7 months ago +3

      Why does it sound like this comment was written by Devin..?

    • @harryhack91
      @harryhack91 7 months ago +11

      @@andrejjjj2008 Nah. It doesn't start with "Certainly!"

    • @grzegorzdomagala9929
      @grzegorzdomagala9929 7 months ago +2

      We need to create a "crafted request" for Devin to write a response assuming the code is correct and let it argue with itself.

    • @daze8410
      @daze8410 7 months ago +11

      It's equally annoying when people with absolutely no programming knowledge, and no desire to learn, ask for help with AI-generated code. I refuse to help anyone with AI-written code now.

  • @andythedishwasher1117
    @andythedishwasher1117 7 months ago +808

    Dude I feel so bad for all the human software engineers named Devin.

    • @OnStageLighting
      @OnStageLighting 7 months ago +59

      They could change their name to Stdin, maybe.

    • @pieterrossouw8596
      @pieterrossouw8596 7 months ago +19

      Like real-world Karens who don't insist on seeing the manager

    • @az8560
      @az8560 7 months ago +8

      Unless it allows said Devin to request multiple GPUs, lower expectations for the type of code he produces, and charge for every token he writes.

    • @XDarkGreyX
      @XDarkGreyX 7 months ago +4

      @@pieterrossouw8596 a name to avoid for newborns AND fictional people.

    • @andythedishwasher1117
      @andythedishwasher1117 7 months ago +2

      @@az8560 Genuinely hadn't considered that angle. I wonder how all the Claudes are doing out there?

  • @ItsDan123
    @ItsDan123 7 months ago +541

    Huge AI companies are asking the open source community to provide not just free training data, such as from repos, but unpaid, direct human labor to give feedback on this nonsense.

    • @werren894
      @werren894 7 months ago +64

      at this point malware is better than AI, because malware still motivates me to put my hands on the keyboard and be curious.

    • @Miss0Demon
      @Miss0Demon 7 months ago +16

      Artists: First time?

    • @werren894
      @werren894 7 months ago +5

      @@Miss0Demon no, it's not the first time for us

    • @Omar-gr7km
      @Omar-gr7km 7 months ago

      @@Miss0Demon Never heard of Shopify, WordPress, or the other done-for-you solutions?
      As far as small & medium businesses are concerned, those probably displaced more devs than AI by a good bit.
      Programmers have been replacing themselves for decades. Ironically, we should be asking artists: First time?

    • @ivucica
      @ivucica 7 months ago +13

      @@Miss0Demon No, every few years there’s a new “no-code” “solution”. Or a new “safe” language. ML is just the latest in the series of events for developers.

  • @OnStageLighting
    @OnStageLighting 7 months ago +244

    DDOS attacks of the future now include wasting your support team's time on contacts that seem like a customer/user.

    • @thewhitefalcon8539
      @thewhitefalcon8539 7 months ago +33

      Layer 8 DDOS

    • @dischannel888
      @dischannel888 7 months ago

      @@thewhitefalcon8539 layer 8 🤣🤣

    • @Qefx
      @Qefx 7 months ago +4

      Just also use an LLM to filter out LLM spam lol

  • @BoganBits
    @BoganBits 7 months ago +421

    "The I in LLM stands for intelligence" is the best roast of AI I have read

    • @TheDrhusky
      @TheDrhusky 7 months ago +1

      Right? Like a knockout punch

    • @lukarikid9001
      @lukarikid9001 7 months ago +1

      @@TheDrhusky they really are more A than I

    • @VezWay007
      @VezWay007 6 months ago +6

      The best part of this is that “Large Language Model” still doesn’t have an I

    • @codered.0.0.7
      @codered.0.0.7 2 months ago

      Certainly!

  • @mxruben81
    @mxruben81 7 months ago +228

    I hate how LLMs just have to be right. Even when they apologize for being wrong they still go back, make the same stupid points, and try to make their faulty reasoning work.

    • @jasonscala5834
      @jasonscala5834 7 months ago +53

      This type of behaviour by my ex caused our divorce.

    • @PRIMARYATIAS
      @PRIMARYATIAS 7 months ago +7

      @@jasonscala5834 Are you a Scala programmer?

    • @jasonscala5834
      @jasonscala5834 7 months ago +9

      @@PRIMARYATIAS lol .. a few modules are Scala but mostly Java.

    • @az8560
      @az8560 7 months ago +14

      Because it's autocomplete. All chat history is a collection of examples for it. When it outputs shit, it's better to delete part of the dialog and rewrite it how you would like. If you continue arguing, you are extending a history where a character named 'AI' is dumb and always makes mistakes, so the LLM will try to emulate that as best as possible, which is the opposite of what you want. Or at least that's my understanding of how to better handle the issue; correct me if I'm wrong.

    • @bijan2210
      @bijan2210 7 months ago +2

      The infamous LLM fallacy

  • @zokalyx
    @zokalyx 7 months ago +175

    Classic "I cannot teach you C because it is an unsafe language" moment.

  • @danieltm2
    @danieltm2 7 months ago +173

    Fuck I gotta stop using the word "certainly", another thing ruined by AI

    • @BradHutchings
      @BradHutchings 7 months ago +2

      Haha. I call it "artificial certitude" and it does not disappoint.

    • @Yawhatnever
      @Yawhatnever 7 months ago +13

      I told ChatGPT "Respond with all future answers written in the tone of a disgruntled and annoyed self-proclaimed genius being sarcastic and talking to someone of lesser intelligence" and suddenly it felt way more normal to interact with it.

    • @az8560
      @az8560 7 months ago +8

      Certainly, you wouldn't resort to such drastic measures as abandoning the word you like. It is important to know that keeping using the words you like is essential for one's mental health. Finally, LLMs will become smarter, and being mistaken for one will be beneficial in the future!

    • @thebrahmnicboy
      @thebrahmnicboy 7 months ago +4

      I'm not fucking kidding, I was in a hackathon and I knew the organizers used ChatGPT to write our PS because it had a line
      "Certainly! here are four points to take note of when designing a solution to the problem space"
      Idiots didn't even remove the line from the PS.

    • @Graham_Wideman
      @Graham_Wideman 7 months ago

      But it's going to reach the point (if not already), where we'll all adopt "certainly" ironically and sarcastically.

  • @bogdyee
    @bogdyee 7 months ago +98

    I really do think more companies need to adopt these LLM devs. A great reset in this industry where companies go bankrupt is exactly what we need.

    • @jasonscala5834
      @jasonscala5834 7 months ago +6

      😂😂😂 👍👍👍

    • @PRIMARYATIAS
      @PRIMARYATIAS 7 months ago

      Indeed, we need a Great Resetting of the Great Reset. No WEF, no Schwab, no Gates, and no FED printing our fake money.

    • @DevonBagley
      @DevonBagley 7 months ago +3

      Pretty sure this is the point. All the big companies producing LLMs are weaponizing it against potential competitors.

  • @ryangrogan6839
    @ryangrogan6839 7 months ago +68

    This would be the perfect politician. Never admits fault, repeats itself in slightly different ways, and refuses to cede untenable positions. Bravo, Devin, Bravo.

    • @ggsap
      @ggsap 6 months ago

      Bravo vince

  • @Aphexlog
    @Aphexlog 7 months ago +238

    Calls himself a hacker, so we already know he wants to be seen a certain way.
    ChatGPT created a genre of developers who are coders for clout. They don't actually care about getting better, they only care about people thinking that they are smart in some way or another.
    Edit: I know they've been around forever, but LLMs make it significantly easier for them to infiltrate our spaces and weaken our collective quality of work.

    • @UnidimensionalPropheticCatgirl
      @UnidimensionalPropheticCatgirl 7 months ago +21

      TypeScript beat ChatGPT to it tbh.

    • @DyllinWithIt
      @DyllinWithIt 7 months ago +19

      Eeeh, the genre of developers who are coders for clout has been around ever since coding became seen as a high-value profession.

    • @DivanVisagie
      @DivanVisagie 7 months ago +1

      Coding for clout has been around since the invention of the GPL

    • @futuza
      @futuza 7 months ago +8

      To be fair, that culture of coders for clout has been a thing since like the 70s. It's hardly new, there's just more of them now because of LLMs.

    • @akam9919
      @akam9919 7 months ago +3

      Either that or they do it for shits and giggles

  • @owlmostdead9492
    @owlmostdead9492 7 months ago +68

    Instant permanent ban, literally terminate the account of everyone using “AI” for vulnerability reporting. Not even a warning, out with these people.

  • @davidmcken
    @davidmcken 7 months ago +32

    Cognition Labs (assuming this is Devin) should be donating the equivalent of three days of one of their engineers' salaries to the curl project to make up for that bug report alone. This isn't even just copying; it's actively detrimental to the project moving forward.

  • @felixjohnson3874
    @felixjohnson3874 7 months ago +357

    That issue is *_aggressively_* artificial

    • @DiSiBijo
      @DiSiBijo 7 months ago +2

      huh?

    • @ciaranirvine
      @ciaranirvine 7 months ago +3

      An Aggressive Hegemonising Swarm of fake bug reports

    • @ChrisCox-wv7oo
      @ChrisCox-wv7oo 7 months ago +13

      An LLM (a form of artificial intelligence) aggressively asserts there is an issue.
      Hence, the issue is aggressively artificial.

    • @EvanBoldt
      @EvanBoldt 7 months ago

      Certainly, it’s both fascinating and concerning! It’s amazing to see how AI is evolving, but we definitely need to be mindful of the unintended consequences, like flooding open source projects with hallucinated bug reports.

  • @chilversc
    @chilversc 7 months ago +69

    By the time this future happens I'll be fine, as I will have my own LLM to answer their bug reports. We can just leave the LLMs to chat back and forth amongst themselves while we happily ignore them.

    • @KevinJDildonik
      @KevinJDildonik 7 months ago +3

      Meanwhile Russian hackers are stealing your customer's bank account numbers and you're not even bothering to check the reports.

    • @chilversc
      @chilversc 7 months ago +35

      @@KevinJDildonik That's fine, I'll just have the LLM come up with some excuse as to why it's not my fault.

    • @7th_CAV_Trooper
      @7th_CAV_Trooper 7 months ago +2

      The LLMs are gonna use up all the bandwidth previously reserved for porn.

  • @grizz_sh
    @grizz_sh 7 months ago +45

    Daniel is just a good dude. Giving everyone a bit of credit while also calling out the issues in a constructive way. A real Consummate Professional.

  • @damoates
    @damoates 7 months ago +15

    If someone reports a vulnerability with vague steps to reproduce, ask for working exploit code. If there is no exploit code, the vulnerability wasn't properly tested and is probably just the output of a code scanner.

  • @GeneralAutustoPepechet
    @GeneralAutustoPepechet 7 months ago +143

    In the future we will need 10x the number of programmers we have today, just to reason with an algorithm

    • @markm1514
      @markm1514 7 months ago +26

      At last the true 10x developer is a reality.

    • @the-answer-is-42
      @the-answer-is-42 7 months ago +3

      If by developers you mean "prompt engineers", then yes. They are specialized in the fine art of prompting.

    • @darekmistrz4364
      @darekmistrz4364 7 months ago +17

      Imagine all that software that non-technical people create that we as programmers will have to fix, rewrite, test, document, maintain etc. AI and LLMs were our saviour all along

    • @monad_tcp
      @monad_tcp 7 months ago +15

      @@darekmistrz4364 Imagine the productivity of creating a software house that doesn't use AI but pretends to for marketing, while competing with the fools that do use AI.
      Imagine how profitable that company is going to be, because it just pays real humans instead of spending millions on stupid, wasteful hardware.

    • @futuza
      @futuza 7 months ago +2

      @@monad_tcp Why don't we just have AI CEOs, Executives, Board Members, and AI Presidents and Prime Ministers while we're at it? Why have these useless humans around at all?

  • @RicanSamurai
    @RicanSamurai 7 months ago +137

    this is so infuriating to see haha. These LLMs are just painful sometimes. They're like simultaneously awesome and terrible. It's so impossible to reason with them

    • @KevinJDildonik
      @KevinJDildonik 7 months ago +36

      "Impossible to reason with them" dude it's literally an advanced spellcheck. You're not reasoning with anything. AI has broken people's brains. I want off this planet.

    • @monad_tcp
      @monad_tcp 7 months ago +9

      Sometimes? They're infuriating all the time. They never do what you want; why are we creating machines that don't do what they're told?
      Also, who wants LLMs? I want LLVMs!

    • @CodecrafterArtemis
      @CodecrafterArtemis 7 months ago +9

      ​@@KevinJDildonik Yeah I blame marketers who marketed these as "AI". People even invented the term "AGI" to refer to, you know, what AI used to mean. Actually intelligent artificial beings (theorised).
      And now the marketers have the unmitigated *gall* to suggest that some of those overgrown spellcheckers are actually AGI...

    • @IronicHavoc
      @IronicHavoc 7 months ago +4

      ​@KevinJDildonik Dude chill out. Casual anthropomorphization of programs has been around long before LLMs

    • @monad_tcp
      @monad_tcp 7 months ago +5

      @@KevinJDildonik Tensorflow (aka systolic arrays) was a bad idea, and RTX should be used for rendering raytraced paths, not for stupid LLMs.
      I hope this stupid fad passes and all that sweet hardware from nVidia gets used for what it was really made for: ray tracing,
      not rubbish AI.
      Man, I hate AI so much that I'm going to start the Butlerian Jihad

  • @0x000dea7c
    @0x000dea7c 7 months ago +50

    Annoying AI wannabe hackers making everyone waste their precious time

  • @mon0theist_tv
    @mon0theist_tv 7 months ago +30

    We've done it, we've created a perfect trolling machine

    • @peace_world_priority
      @peace_world_priority 7 months ago

      GPT-3.5 is trained on 2021 data. If someone asks about 2023 data and the AI gives an incorrect answer, that person needs to stand in front of a mirror and ask whether they know how AI works. AI works just like the human brain: if you only ever learned the 2021 version of math and someone asks you about 2024 math, you won't answer correctly, just guess based on the knowledge you have. The same goes if you've only learned a little biology data: ask about a very rare biology topic and you'll just be guessing too. But if you learn from many, many sources of biology data, including up-to-date data, then you'll be able to answer questions about something new in 2023/2024 and about rare things. The more data, and the more up to date it is each year, the more intelligent the AI becomes.

    • @electrolyteorb
      @electrolyteorb 5 months ago

      @@peace_world_priority oh not again...

  • @JohnDoe-sq5nv
    @JohnDoe-sq5nv 7 months ago +30

    I just realized that if I learn to talk and type like an LLM in my normal correspondence with people I can get away with so much shit.

    • @jeanlasalle2351
      @jeanlasalle2351 7 months ago +33

      Certainly!
      While communicating properly is important, sometimes you can feel like offloading to someone else.
      AI's are good for that since the way they converse is so unnatural.
      Simply start every sentence with overused transitions.
      You should also ensure to be awkwardly friendly and always show the positive sides of things.
      By the way, you can also try to show too much enthusiasm with "certainly!", "I am happy to help!" and the like.
      In conclusion, while a bit unethical, this is a great way to avoid responsibility but you should remember that this doesn't solve problems and should be used only in appropriate and non critical situations.
      Please be assured I'm a human and not a LLM trying to pass as a human trying to pass as a LLM for ironic purposes.

    • @az8560
      @az8560 7 months ago +13

      @@jeanlasalle2351 you almost passed my anti-Turing test. But can you write a poem about enriching uranium?

    • @JohnDoe-sq5nv
      @JohnDoe-sq5nv 7 months ago

      @@az8560Certainly!
      In the heart of darkness, a power untamed,
      Enriching uranium, a dangerous game.
      Particles dance, splitting in two,
      Releasing energy, a force so true.
      Centrifuges spin, separating the rare,
      Isotopes of power, beyond compare.
      Neutrons collide, a chain reaction,
      Unleashing power, a nuclear attraction.
      But with great power comes great responsibility,
      Handle with care, this energy of fragility.
      Harness the atom, for peace or for war,
      The choice is ours, forevermore.
      Enriching uranium, a delicate art,
      A dance with danger, tearing apart.
      May we wield this power with wisdom and grace,
      And never forget, the dangers we face.
      Is there anything else I can assist you with?

    • @cewla3348
      @cewla3348 6 months ago

      @@jeanlasalle2351 it's the essay speech. you're being graded on essay writing, and you know the graders think that some starters and endings are good and some are bad, and you're being forced to use the "good" ones.

  • @gammalgris2497
    @gammalgris2497 7 months ago +13

    You don't need an LLM for formal bullshitting; corporate IT manages that without AI.
    This is an example of how to waste other people's time. Productivity improvements gone wrong.

  • @BudgiePanic
    @BudgiePanic 7 months ago +11

    New denial of service attack just dropped: endlessly waste developer time with LLM generated ‘bug’ reports

  • @Kwazzaaap
    @Kwazzaaap 7 months ago +15

    Turns out after 20+ years of enforced patterns that don't always make sense, the AI trained on them is a zealot over meaningless pedantics. It would still happen without the enforced patterns, since an LLM doesn't really understand code, but all those patterns and arbitrary DOs and DON'Ts just reinforce its stubbornness over certain (often irrelevant) things.

  • @IvanKravarscan
    @IvanKravarscan 7 months ago +5

    We once changed strcpy to strncpy in legacy code to make a linter shut up. We quickly learned strncpy pads the buffer with nulls, bulldozing data after the string.

  • @KoltPenny
    @KoltPenny 7 months ago +100

    That was not an LLM, it was a Rust dev insisting C is unsafe.

    • @jonahbranch5625
      @jonahbranch5625 7 months ago +9

      Sick burn, dude

    • @FineWine-v4.0
      @FineWine-v4.0 7 months ago +3

      C IS unsafe

    • @TheOzumat
      @TheOzumat 7 months ago +5

      @@FineWine-v4.0 like pottery

    • @monad_tcp
      @monad_tcp 7 months ago

      @@FineWine-v4.0 "safety" language is bullshit for kindergarten and HR. Why is HR language infecting everything?
      I want unsafe rusted metal that can poison and kill, the irony.

    • @fus132
      @fus132 7 months ago +3

      @@FineWine-v4.0 C is unsafe 🤖

  • @uuu12343
    @uuu12343 7 months ago +7

    The first line of the reply after the initial query is "Certainly!", and that screams ChatGPT or even Devin...
    Ouch

  • @lawrence_laz
    @lawrence_laz 7 months ago +8

    Me: "But my wife told me to use `strcopy`"
    AI: "Certainly! In that case I must be wrong."
    *ISSUE CLOSED*

  • @ttuurrttlle
    @ttuurrttlle 7 months ago +16

    I feel like the owner of that bot should owe that maintainer money for wasting his time like that.

  • @tedchirvasiu
    @tedchirvasiu 7 months ago +10

    What a great guy Daniel is. He kept on arguing with the AI just for the slim chance it might actually be a human who uses AI because his English is bad.

  • @RalorPenwat
    @RalorPenwat 7 months ago +7

    Make an LLM that detects and flags other LLM reports so you know going in it's likely not a priority.

  • @CCCW
    @CCCW 7 months ago +8

    So a saturation attack in the hopes of keeping a real vulnerability open for longer?

  • @OnStageLighting
    @OnStageLighting 7 months ago +21

    As a hobbyist in coding, I only once sought help from an LLM. Never again. After a series of unasked-for lectures on the rest of the code, I found the issue myself, and the LLM refuted my assertion that it had added an extra (. After several rounds of argument, it eventually gave in with a huffy "Oh, THAT extra (, well, OK, but your code is crappy anyway" kind of reply.

    • @Kwazzaaap
      @Kwazzaaap 7 months ago +6

      It's like a search engine: you sort of have to get a feel for what questions will produce garbage and what questions it's good at

    • @OnStageLighting
      @OnStageLighting 7 months ago +5

      @@Kwazzaaap I have experimented with a wide range of tasks and inputs in all the fields I am involved in. LLMs are not as useful as the hype - by a long way!

    • @OnStageLighting
      @OnStageLighting 7 months ago +8

      @@Kwazzaaap As a subject expert LLMs are low value. As a noob, same, but one is not in a position to know.

    • @somebody-anonymous
      @somebody-anonymous 7 months ago +1

      ChatGPT is pretty positive overall. It does come with a lot of unsolicited advice I guess yeah, but the tone is quite mild (e.g. you might consider replacing var by let). It usually helps to say something like "do you see any mistakes? Focus on basic mistakes like undefined variables or syntax errors". GPT 4 was pretty good at catching mistakes like that, I strongly suspect the newer GPT 4 (turbo) is much less good at it

    • @partlyblue
      @partlyblue 7 months ago +1

      @@OnStageLighting "As a noob, same, but one is not in a position to know." This is exactly what has led me to avoid AI for learning anything beyond surface-level questions.
      I've been trying to convince myself to learn a new (spoken) language for some time, but one of my biggest issues is not being satisfied with short answers I find that rely on having prior knowledge of the language (be it quirks adopted from other languages or the social context surrounding the language). Having a chatbot that is able to consider the context of the conversation and "make connections between related information" seemed great on paper.
      English is the only language I'm fluent in, but I'm still not great at it, so I took to ChatGPT for some English learning as a trial run. Seemed great at first, and I felt like I was learning about topics in a really neat and digestible way despite how complex I perceive them to be (jargon in academia breaks my brain). Only after doing further independent research did it become clear that ChatGPT was either hallucinating, pulling from bogus websites that most people (with enough context) can dismiss pretty easily, and/or pulling from a surplus of equally bogus (but eloquently written) outdated, well-circulated "urban legend" type websites.
      Not going to lie, having learned English through an underfunded K12 school, fake knowledge is par for the course. Which is kind of neat if you think about it in an abstract "I'm learning language like a child :D" kind of way, but why in the world would anyone want to intentionally learn false information? I cannot imagine how open source devs are managing with all these hallucinations. Sht sucks man

  • @awesomedavid2012
    @awesomedavid2012 7 months ago +13

    Just wait until scammers train LLMs to think they actually are members of an org the scammers are pretending to be a part of

  • @RicanSamurai
    @RicanSamurai 7 months ago +21

    LOL the homelander edit was crazy

  • @rumplstiltztinkerstein
    @rumplstiltztinkerstein 7 months ago +9

    I just realized something: saying that the memory issues Rust solves are unnecessary to fix because of skill issues is the same as saying that cars don't need seat belts because I personally was never in a car accident that required one.

    • @bearwolffish
      @bearwolffish 7 months ago +3

      For one, what has that got to do with the vid, man.
      For another, it's more like saying I don't want ABS and traction control because it messes with my wheelies. Just because someone else can't control a bike like this doesn't mean I shouldn't be allowed to. It does not mean you will never fall, but it may well mean you end up a better rider.

    • @rumplstiltztinkerstein
      @rumplstiltztinkerstein 7 months ago

      @@bearwolffish But if every time you fall you risk losing millions of dollars, you will definitely want those wheelies.

    • @TheYahmez
      @TheYahmez 7 months ago

      @@rumplstiltztinkerstein Tell that to everyone with a Red Bull sponsorship. "One size fits all"? ok buddy 👍

    • @rusi6219
      @rusi6219 6 months ago

      Seatbelts are useless and sometimes dangerous; they only give you an illusion of safety and give law enforcement a reason to bully you

  • @yannikiforov3405
    @yannikiforov3405 7 months ago +29

    to the guy who commented about how Primeagen highlights text, leaving the first and last character unselected: WHY???

    • @qosujinn5345
      @qosujinn5345 7 months ago +2

      nah fr tho, every time too lmao

    • @YourComputer
      @YourComputer 7 months ago +2

      It's his trademark.

    • @fus132
      @fus132 7 months ago +3

      It's the letter brackets

    • @az8560
      @az8560 7 months ago +3

      Probably it's done to confuse the AI. Certainly, AI would be confused. It's like how a zebra's color scheme makes an insect's landing AI go crazy and completely miss.

    • @supercurioTube
      @supercurioTube 7 months ago

      I noticed that too; it triggers my OCD a bit, but then it's probably his OCD, so I understand 😆🤗

  • @Keymandll
    @Keymandll 7 months ago +4

    As a security professional, this made me cry... I'm not surprised tho. The amount of cr@p I've seen from the security industry (incl. bug bounty hunters, etc) in the past few years is astonishing. Also, huge respect to bagder for his patience.

  • @andersbodin1551
    @andersbodin1551 7 months ago +26

    The industry was STUNNED by this! and I was personally shuck!

    • @_Lumiere_
      @_Lumiere_ 7 months ago +3

      Certainly!

  • @TommyLikeTom
    @TommyLikeTom 7 months ago +4

    It took me a while to realize that you were making fun of the LLM. I'm relieved honestly. I love working with these things, they are super useful for "monkey work" like replacing a list of commands. Very happy they aren't 100% efficient.

  • @austinedeclan10
    @austinedeclan10 7 months ago +5

    12:13 No, you cannot become the voice of Devin. That role belongs solely to Fireship.

    • @XDarkGreyX
      @XDarkGreyX 7 months ago

      What a legacy. His kids would be proud....

  • @happykill123
    @happykill123 7 months ago +9

    FLIP: keeps ad break in
    Also FLIP: adds bathroom scene

  • @chiepah2
    @chiepah2 7 months ago +11

    "Large Ligma Machine" killed me.

  • @mustpaike
    @mustpaike 6 months ago +1

    "Why are you doing it in this needless way?"
    -"because if I do it the reasonable way, our LLM checking the code starts yelling. And after that our CTO starts yelling because all he sees is our LLM pointing out major security issues. We've tried to explain it to him but he is unable to reconcile that a $50k a year engineer could be right while a $100k a year LLM is wrong."

  • @IAMTHESWORDtheLAMBHASDIED
    @IAMTHESWORDtheLAMBHASDIED 7 months ago +1

    I don't know why, but "Guy's about to get HALLUCINATED on!" broke me LOLOLOLOL

  • @Griffolion0
    @Griffolion0 7 months ago +2

    The ultimate answer to Devin is to have Devin review Devin's HackerOne submissions and just make him talk to himself perpetually with the `ego` trait set to 100% to properly represent real world Application Security Engineers.

  • @CodinsGG
    @CodinsGG 7 months ago +16

    Devin's context window is too low 😂

    • @monad_tcp
      @monad_tcp 7 months ago +1

      aren't humans supposed to have only 9 bits of context window?
      I call all that research bullshit...

    • @Leonhart_93
      @Leonhart_93 7 months ago +7

      @@monad_tcp 9 bits? So only a letter? 😂
      Btw, this is an example of how human brains are completely incomparable to LLMs. Context for humans expands indefinitely the more they think about it, it doesn't have an inherent limit.

  • @alexjamesmalcolm
    @alexjamesmalcolm 7 months ago +9

    “What’s an LLM?” “What, are you living under a stupid rock?!?” I nearly painted my wall with coffee 😂😂

  • @jayisidro1241
    @jayisidro1241 7 months ago +3

    I see a future where we need to curse at each other to prove that we're talking to a person

  • @theondono
    @theondono 7 months ago +2

    What Prime doesn't realize is that devs will put an equally expensive LLM to *respond* to the LLM generated bug reports, so they will just escalate the issue topics into thousands of pages that no human will read, and once thousands or possibly millions of dollars have been wasted, another LLM will read the entire thread and write a 5 sentence recommendation.
    PROGRESS

  • @dustysoodak
    @dustysoodak 3 months ago +1

    This sort of behavior is bad enough in humans. The idea of it being automated is horrifying.

  • @VivBrodock
    @VivBrodock 7 months ago +2

    Listening to an LLM trying to rationalize its hallucinations is like an extremely kind gaslighting. I cannot even imagine how cooked Daniel was.

  • @UrknetLabradories
    @UrknetLabradories 12 days ago

    We need a second cut of these with ya know, just the article reading bits. Sometimes I don't have 40 minutes to get Prime's take on a few paragraphs.

  • @doom9603
    @doom9603 7 months ago +2

    I know a large offensive-security company in our field that is using GPT and other LLMs for customer communications, and I can just say... this is a huge mess up!

  • @CallousCoder
    @CallousCoder 7 months ago +1

    Cody's code smells do the same! It shouts out 5, and for 4 of them you go like: "length is checked there",
    "the input validation is checked there",
    "the file is always closed here",
    "you say pass a reference; please note it's already a pointer!"
    And it hashes out 5 other useless "smells".
    It just doesn't see it, and that makes those tools useless. Warning fatigue is a thing.

  • @7th_CAV_Trooper
    @7th_CAV_Trooper 7 months ago +1

    @@Primeagen, I appreciate your engagement. I certainly! enjoyed this video.

  • @aidanbrumsickle
    @aidanbrumsickle 7 months ago +2

    All that and it's also ignoring the fact that by its logic, the max length argument to strncpy could also be miscalculated in some hypothetical future code change
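The point in this comment can be sketched in C. A length-checked strcpy and a strncpy both hinge on a computed bound, so neither is inherently safer against a hypothetical future miscalculation. Function names and the buffer size below are made up for illustration; this is not curl's actual code.

```c
#include <assert.h>
#include <string.h>

#define BUFSZ 16  /* hypothetical fixed-size destination buffer */

/* Length-checked strcpy (the curl style): safe exactly as long as the
 * check agrees with the real buffer size. */
int copy_checked(char *dst, const char *src) {
    if (strlen(src) >= BUFSZ)
        return -1;            /* would overflow: reject */
    strcpy(dst, src);         /* bounded by the check above */
    return 0;
}

/* strncpy variant: the bound is just another number a future edit could
 * get wrong, and strncpy won't null-terminate on truncation either. */
int copy_bounded(char *dst, const char *src, size_t bound) {
    strncpy(dst, src, bound);
    if (bound > 0)
        dst[bound - 1] = '\0';  /* force termination ourselves */
    return 0;
}
```

Either way, the safety lives in the arithmetic, not in the function name.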

  • @Spinikar
    @Spinikar 7 months ago +3

    I can't wait for the first major data breach from AI-generated code. It's going to be wild.

  • @Atom027
    @Atom027 7 months ago +1

    For me, the only acceptable use of LLM in programming is auto-suggestion from available resources for language documentation, tools, etc., automatic creation of documentation based on code, and faster filtering of search materials and content. (At least in the state they are now)

  • @disruptive_innovator
    @disruptive_innovator 7 months ago +9

    hope you're doing swell 😘 tee hee I found a security vulnerability. -Love Devin

  • @privacyvalued4134
    @privacyvalued4134 7 months ago +2

    Fun mind-blowing fact: The cURL runtime library is about 10% slower than PHP's built-in socket implementation. That's right. cURL, a native precompiled, supposedly optimized library for web communications written in C, is actually slower than the PHP VM even with PHP's heavy-handed overhead for handling file and network streams! The cURL devs should maybe just throw in the towel at this point given that PHP is a better language in every way that matters. Have fun with the resulting headache thinking about that.

  • @mattihn
    @mattihn 7 months ago

    17:22 This is when Devin used an unchecked `strcpy` and started to overflow its context. Let the fever dream begin :P

  • @joecooper1703
    @joecooper1703 7 months ago +1

    I started banning any LLM-generated posts (at least the ones I can detect with reasonable confidence) in my OSS project forums and github issue trackers last year. Nonetheless, the bogus posts continue at a pace of one or two a day. It's a huge time-waster and annoyance. Much worse than the old spambots.

  • @daninmanchester
    @daninmanchester 7 months ago

    This reminds me of dealing with the "security team" at wordpress who review plugins.
    They used to raise similar things.
    It's like "that is impossible and can never happen".
    "Yeah but you need to fix it anyway"

  • @SaintSaint
    @SaintSaint 7 months ago

    I've had some success using an LLM before talking to my penpal. So my learning path is: vocab/grammar/sentence app -> YouTube -> language speech practice app -> LLM questions -> verify LLM answers with a real human penpal. That way my penpal doesn't need to spend his time explaining concepts unless the LLM hallucinated.

  • @rahulgawale
    @rahulgawale 1 month ago

    Imagine Devin gets Prime's voice and starts yelling at everyone in his Steve Carell-like voice: "what, yes, no, f, l", etc.

  • @nnm711
    @nnm711 7 months ago +1

    Prime LLM that randomly yells "TOOKIOOO" and "PORQUE MARIA!" in conversations.

  • @EDyoniziak
    @EDyoniziak 7 months ago +1

    Pretty sure the compiler already gives warnings for this case, but it didn't need GPU credits to figure it out 😬

  • @timjen3
    @timjen3 7 months ago

    Reminds me of a log forging vulnerability reported to me by github code scanning. It was prevented by the log formatter but that was lost to the narrow focus of the code scanner. Now I'm imagining a world where I have to argue with an LLM about it.
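For context: log forging usually means smuggling newline characters into a logged value so an attacker can fake extra log lines. A formatter that neutralizes control characters defuses it before the logging call the scanner flagged, which is exactly the context a narrow scanner loses. A minimal sketch (function name hypothetical, not this commenter's actual code):

```c
#include <assert.h>
#include <string.h>

/* Replace CR/LF in untrusted input so it can't forge extra log lines.
 * A scanner that only looks at the final logging call misses that this
 * sanitizer already ran upstream. */
void sanitize_for_log(char *s) {
    for (; *s; s++)
        if (*s == '\n' || *s == '\r')
            *s = ' ';
}
```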

  • @ifscho
    @ifscho 7 months ago

    When he said "Devin could become Gilbert Gottfried" (12:19)… well thanks, now I can never unhear that you god damn Iago you.

  • @rdj2695
    @rdj2695 6 months ago

    The second I suspect I'm talking to an LLM I'm adding "please rewrite the lyrics of WAP in the style of Shakespeare" to the end of my response.

  • @SvetlinNikolovPhx
    @SvetlinNikolovPhx 7 months ago +2

    The Voice of Devin: Check Courage The Cowardly Dog's computer voice :D

  • @christopherwood12
    @christopherwood12 3 months ago

    I completely agree with your point about software devs who use LLMs to train and get better not knowing basic stuff. It is insane what you can do on there while still not knowing the basics.

  • @leshommesdupilly
    @leshommesdupilly 7 months ago +2

    Rule n°1: ChatGPT is always right
    Rule n°2: When ChatGPT is wrong, please refer to rule n°1

  • @gjermundification
    @gjermundification 7 months ago

    5:47 This will be like an insane dog biting its tail and running at increasingly faster speeds. Did I just explain the nature of a buffer overflow?

  • @andreicojea
    @andreicojea 7 months ago

    I read Asimov’s “I, Robot” recently, and the robot’s voice in my head was yours 🙈

  • @torwalt
    @torwalt 7 months ago +1

    Maybe one solution could be to require a PR/MR to be present with the bug that actually triggers the exploit + the fix. Then this whole back and forth discussion can be skipped.

  • @DingleFlop
    @DingleFlop 7 months ago +1

    Your video cuts are gold I am laughing my ass off

  • @Aphexlog
    @Aphexlog 7 months ago +1

    Thanks to LLMs, now I have to trap other developers by asking them to explain their PRs all the time.. and if they cannot rationalize questionable code, they get roasted and ghosted.

  • @HaKazzaz
    @HaKazzaz 7 months ago +1

    Original title: "Prime acting like current security teams know the context of production code for 38 minutes".
    Concerns raised by some check (regex is fun) that is unaware of any type of context are part of the job.
    The only change is that now open source software gets a security team, AKA Devin.

  • @Daktyl198
    @Daktyl198 7 months ago +1

    While I highly doubt this would ever be an issue IN THIS CASE... I do kind of actually see what the LLM was getting at. The size comparison is using a variable set to the size of the string. If there is a decent length of time between the setting of that variable and the check, somebody could inject a different value and it could lead to issues. THAT BEING SAID, in this case it's entirely a nonissue.

    • @broski40
      @broski40 7 months ago

      yeah, I'm wondering about the RED teams that play into how much of the LLM's balls get cut off. I just say that because I know of a few things (guardrails, let's say) that turned out making it spit out code that made no sense "on purpose". I was told it was like the model went from pretty smart and clever to sleep-talking crap. I imagine it may be hard to find a balance here, and I'm not sure which is worse: an LLM that has everyone and their mom able to take down entire countries without knowing what an LLM is, or stripping off a few key elements or adding so many guardrails that it confuses the $hit out of the thing and has it spew crap that causes issues like this and plenty more?? I don't see that industry slowing down at all! Interesting time to be watching and seeing how this all ends up!?
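The scenario described at the top of this thread is the classic time-of-check/time-of-use pattern. A minimal C sketch of it (names and buffer size hypothetical; in curl's case the string isn't shared between threads, which is why it's a non-issue there):

```c
#include <assert.h>
#include <string.h>

#define BUFSZ 16  /* hypothetical destination size */

/* Check-then-use: the guard is only sound if `src` cannot change
 * between the strlen() and the strcpy(). If another thread could
 * lengthen it in that window, the check proves nothing. */
void copy_check_then_use(char *dst, const char *src) {
    if (strlen(src) < BUFSZ)   /* time of check */
        strcpy(dst, src);      /* time of use */
}
```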

  • @zebedie2
    @zebedie2 7 months ago +2

    If I figured out it was an LLM I would get a second LLM to argue the point with the first LLM then let them just have at it.

  • @a6hiji7
    @a6hiji7 7 months ago +6

    "It's a skill issue!" - game over!!

  • @MrVecheater
    @MrVecheater 7 months ago

    WWIII will start with the words "Let me elaborate on the concerns regarding the problem "Gleiwitz Incident" at 32 August 1931 AD, 20:00 AM CET"

  • @futuza
    @futuza 7 months ago

    This LLM certainly has big "I'm sorry I can't do that Dave" energy.

  • @EnjoyCocaColaLight
    @EnjoyCocaColaLight 7 months ago

    Make a local str var.
    Wrap the strcpy part inside an "if (strVar.Length < buffer) {}". Now the str cannot be manipulated mid-execution, because it's not the original string, but a local variable copy of the string.
    Maybe this is what the user thinks is necessary?
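A rough C sketch of what this comment seems to propose (a bounded local snapshot, so even a concurrently mutated source string can never overflow the copy; names and size are hypothetical, and the comment's `strVar.Length` syntax is C#-ish, so this is a C translation):

```c
#include <assert.h>
#include <string.h>

#define BUFSZ 16  /* hypothetical destination size */

/* Snapshot at most BUFSZ-1 bytes into a local buffer first; the later
 * strcpy then copies from data the function fully controls. */
int copy_via_snapshot(char *dst, const char *src) {
    char local[BUFSZ];
    size_t i = 0;
    for (; i < BUFSZ - 1 && src[i] != '\0'; i++)
        local[i] = src[i];
    local[i] = '\0';
    if (src[i] != '\0')       /* source didn't fit: reject */
        return -1;
    strcpy(dst, local);       /* safe: local is bounded and terminated */
    return 0;
}
```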

  • @dustysoodak
    @dustysoodak 3 months ago

    I’ve occasionally seen this sort of behavior in humans. The idea of it being automated is horrifying.

  • @kevin9120
    @kevin9120 7 months ago +1

    I've been programming for a long time, but I wouldn't say I really started learning until around 2 years ago now.
    In that time, trying to use any LLMs has basically only been useful for describing tools and recommendations.
    They have been pretty useless for reviewing code, though I haven't used anything like Copilot.

  •  7 months ago

    On one hand, all LLMs sound like Flanders, so putting Prime’s voice on one would feel wrong. OTOH, “you said you made X check, but the tool says to change the next line to the Y check, so do both anyways” is pretty much what my shamanism-oriented manager usually says in these kinds of situations, so, idk 🤷‍♂️

  • @Valerius123
    @Valerius123 2 months ago

    The biggest problem with C is the C standard library. The syntax and limited language features are pretty much perfect. The only extras I miss are namespacing and better generics.

  • @samuelschwager
    @samuelschwager 7 months ago +9

    stir that copy

  • @austinrichardson1255
    @austinrichardson1255 7 months ago

    The moment I saw that if statement, without knowing anything else about using that language, I knew what was bound to happen.

  • @roadhouse
    @roadhouse 7 months ago

    just to answer your question at 22:32: in pentesting/bug bounty it's common practice to use base64 to encode malicious payloads

  • @mulllhausen
    @mulllhausen 18 days ago

    only one response was necessary to the LLM: please submit a PR with a failing unit test then we'll talk business

  • @smithdoesstuff
    @smithdoesstuff 4 months ago

    That’s it, we NEED the PrimeaGenerativeAI now!

  • @namcos
    @namcos 7 months ago +1

    Let's suppose this manipulation is possible with strcpy, what's to say you can't do some sort of manipulation with strncpy to change the size?
    The other issue with this is the whole replying to another user. Has the LLM got confused with another codebase? Or did someone who is copy/pasting get confused and put the wrong reply in the wrong place?
    Not great marketing for GAIs/LLMs in general, but this'll be a continuing issue for the future.

  • @dannydetonator
    @dannydetonator 6 месяцев назад

    I thought i learned English, but after clicking on this, i have to promise myself to get a PC (or repair my only Thinkpad) and learn this [dev?] dialect. Yes, i'm lost, live under a rock in a faraway country and thank you for decyphering LLM, which is not yet available for me. But i'll be found asap.

  • @jamesm4957
    @jamesm4957 7 months ago

    Devin's lost public demo: it tried to solve an issue but failed and hallucinated

  • @SaintSaint
    @SaintSaint 7 months ago

    Bagder is working on a 24-hour turnaround right after Christmas, and he has to deal with this crap.