Linus On LLMs For Coding

  • Published: Jan 26, 2025

Comments • 790

  • @claaaaaire
    @claaaaaire 5 months ago +1211

    So basically "AI is fine because you all suck anyway"

    • @verrigo
      @verrigo 5 months ago +115

      Which is as Linus a take as a Linus take can be :D

    • @NiNgem-bb6lc
      @NiNgem-bb6lc 5 months ago +8

      🤣🤣

    • @waltercapa5265
      @waltercapa5265 5 months ago +5

      TRUUUUU

    • @navi2710
      @navi2710 5 months ago +4

      Bro why do you have to hit so hard.

    • @Karurosagu
      @Karurosagu 5 months ago

      A slap in da face

  • @Gregorius_
    @Gregorius_ 5 months ago +1350

    The I in LLM stands for intelligence.

    • @Kane0123
      @Kane0123 5 months ago +55

      The I in Linus stands for Intelligence.

    • @Wusaruful
      @Wusaruful 5 months ago +11

      I mean if you use a new library and search for a specific function it can save a few minutes

    • @hamm8934
      @hamm8934 5 months ago +14

      I too can copy and paste

    • @SoftBreadSoft
      @SoftBreadSoft 5 months ago +16

      The G in Tom stands for genius.

    • @MikkoRantalainen
      @MikkoRantalainen 5 months ago +16

      And the S in LLM stands for security.

  • @WillScrillz
    @WillScrillz 5 months ago +713

    Being hard on code and neutral on LLMs is not a contradiction. If someone submits terrible code it doesnt matter if they used AI or not, if they submit good code it also doesnt matter. The point is that you judge code on the merit of the code, not the source. Its honestly really strange to try so hard to apply a moral or even value judgement to something which is just an inanimate tool that can be used or misused.

    • @idkwhattonamethisshti
      @idkwhattonamethisshti 5 months ago +39

      Yeah, it's just a tool, but knowing how much Linus hated C++, it's kinda surprising that he is so neutral towards AI when there is so much garbage code produced by it.

    • @JorgetePanete
      @JorgetePanete 5 months ago

      doesn't*
      It's*

    • @freezingcicada6852
      @freezingcicada6852 5 months ago +10

      It's because if it's shit you don't really improve anything and have no idea where to even look to make it not shitty.
      It's fine if your end game is immediate satisfaction. But if you're aiming to exceed the average, then relying on a "tool" that averages isn't the way to go

    • @jean-michelgilbert8136
      @jean-michelgilbert8136 5 months ago +26

      You have to understand the context of the situation. This was a live interview; Linus couldn't afford to be as much of a keyboard warrior as he naturally is. I'm completely convinced that he would blow a fuse if someone submitted a patch to the kernel containing LLM code.

    • @k-yo
      @k-yo 5 months ago +1

      this

  • @keyboard_g
    @keyboard_g 5 months ago +208

    Linus goes off on developers cutting corners and breaking rules. Not respecting how important the kernel is.

  • @zsi
    @zsi 5 months ago +177

    I think Linus is ambivalent or neutral about LLM coding because he doesn't direct his anger towards unconscious, inanimate agents. What he gets upset about is when a human, who should know better, tries to merge garbage code generated by an LLM without understanding what they are attempting to merge.

    • @morezombies9685
      @morezombies9685 5 months ago +1

      I think that you are correct, but I'd also like to point out that getting worked up over LLMs is fruitless at its core. It's an exercise for people who don't understand the world.

    • @tukib_
      @tukib_ 5 months ago +1

      Yeah. There's a lot of money to be made in capturing attention by selling outrage stories of LLMs and just AI in general, when it's really just repackaging people problems into a new shiny exterior. That's not to say skepticism is unwarranted, but you're gonna have a more focused discussion once you isolate human decision making elements.

  • @Xankill3r
    @Xankill3r 5 months ago +269

    Re: LLMs - you should read/review the recent paper on LLMs and learning outcomes for students. They basically found that although LLMs helped students improve *while* they had access to them the overall learning outcomes were poorer when access was taken away vs when access was never provided. Basically students who learned with LLMs became poorer at learning things in general or at least didn't improve at learning things compared to their peers who didn't use LLMs.

    • @rnts08
      @rnts08 5 months ago +56

      The google/stackoverflow/Wikipedia effect. 😂

    • @Arcidi225
      @Arcidi225 5 months ago +89

      I mean, it's not surprising.
      You are learning to use a tool, and when the tool is taken away you are less productive. Simple as that.

    • @-weedle
      @-weedle 5 months ago +3

      What's the name of the paper? nvm got it

    • @Xankill3r
      @Xankill3r 5 months ago +2

      @@-weedle that's the weird bit. Just saw it reported on yesterday and now I can't find it. Actually have the video paused at ~7:30 because I'm off in another tab looking for the darned thing 🤣
      Will update as soon as I find it.

    • @MartynasNegreckis
      @MartynasNegreckis 5 months ago +17

      Correct, now write a React to-do app, pen and paper only.

  • @comradepeter87
    @comradepeter87 5 months ago +112

    The reason he's so chill about LLMs is because he trusts his review process. He nitpicks everything and takes it seriously. Therefore it doesn't matter if the code came from an LLM or from a human, it would still need to go through him or his trusted review body.

    • @ryanlee2091
      @ryanlee2091 5 months ago +6

      He’s chill cuz he already made millions and about to retire.

    • @AndrewMorris-wz1vq
      @AndrewMorris-wz1vq 5 months ago +8

      Right. If you aggressively fight regression, then every attempt at change either improves things or changes nothing.

    • @atiedebee1020
      @atiedebee1020 5 months ago +1

      But if more people start submitting AI crap, it's going to be a lot more work to find the patches that actually matter

    • @AndrewMorris-wz1vq
      @AndrewMorris-wz1vq 5 months ago

      @@atiedebee1020 Block them. If you submit bad code and don't correct it, I am very confident they'd just block you, or throw your patches in spam, or whatever.
      And maybe, just maybe, if AI leads to more new contributors who feel confident submitting code for the first time, create a learning channel for them if they need additional guidance on how to review LLM output before submitting it. You know, assuming ignorance but good faith.

    • @notusingmyrealnamegoogle6232
      @notusingmyrealnamegoogle6232 5 months ago +8

      @@ryanlee2091 That would make sense for most people, but he is not most people and still gets fired up about code pretty often

  • @neko6
    @neko6 5 months ago +46

    AI for coding is (currently) basically a replacement for Stack Overflow and Google
    If you just plug AI-generated code into your system, you're gonna have problems, just like you would if you copy code from SO as-is
    If you consult the AI, learn from it, and review what it produces and how it stacks up to your needs, then it becomes a net positive force that can both help you with trivial boring tasks, and also teach you things

    • @thorwaldjohanson2526
      @thorwaldjohanson2526 4 months ago +3

      I use it all the time for scripts and jumping off points or single functions. But it's just a start that speeds things up.

    • @DankMemes-xq2xm
      @DankMemes-xq2xm 4 months ago +2

      This is the way.

    • @Vangaurd_tiger
      @Vangaurd_tiger 4 months ago +1

      Yeah, I am currently using it to learn SFML game dev in C++.

  • @lucasvella
    @lucasvella 5 months ago +39

    I am objectively a great programmer (as judged by my peers over the years during my career), and I like Copilot very much. I don't think it made me better, quality-wise, but it made me faster on the boring tasks.

    • @MrVohveli
      @MrVohveli 5 months ago +2

      This. So much this. All of this.

    • @olabiedev5418
      @olabiedev5418 5 months ago

      link ur github great programmer

    • @Jasonlhy
      @Jasonlhy 5 months ago

      me too

    • @WhiteWolfsp93
      @WhiteWolfsp93 5 months ago +13

      I'm objectively a genius 100x programmer and i think copilot stinks.

    • @teaser6089
      @teaser6089 5 months ago

      Just wait, most people think like that and after a few months they realize they aren't faster

  • @TrancorWD
    @TrancorWD 5 months ago +153

    And in 3 years' time, we'll have neural networks in our compilers warning you about your novel solution because it deviates from the average quality of all the code they were trained on.

    • @martingisser273
      @martingisser273 5 months ago

      Yeah. What we will ultimately get is not AI, but artificial stupidity... at best, artificial superficiality.

    • @TheSulross
      @TheSulross 5 months ago +11

      We'll need a new category of attribute to sprinkle into code to disable AI analysis around actually innovative code.

    • @InforSpirit
      @InforSpirit 5 months ago +34

      Hybrid Ai Linter: " I'm sorry Dave, I cannot let you commit that"

    • @Happyduderawr
      @Happyduderawr 5 months ago

      Why would data scientists make average-quality code datasets? You have to assume that data scientists are complete imbeciles for them to purposely train LLMs to make dumb suggestions. If it's overly suggestive, then the dataset will be changed to make it stop suggesting so much, probably through RLHF.

    • @TrancorWD
      @TrancorWD 5 months ago +2

      @@TheSulross Copilot told me `# $NO-COPILOT$` would work to stop it from stomping on my code; it did not.

  • @mikew7171
    @mikew7171 5 months ago +100

    AI is going to be the corporate equivalent of buying a $3000 Gibson Les Paul thinking it's gonna make them a better player without learning how to actually play.

    • @oompalumpus699
      @oompalumpus699 5 months ago +4

      Even though I believe in the potential of AI, I am against corporations having a stranglehold on access to it.
      The future should be a place where we can all develop AI the same way we develop applications.
      Corporations apply the classic tactic of turning people into helpless consumers so they keep paying for whatever services are being peddled.
      Independently assembled AI should be the direction to move towards.

    • @seanwoods647
      @seanwoods647 5 months ago +2

      Actually it is more like buying a $3000 collectors edition of a Harley Davidson bike. In 1:24 scale. With "real working engine", that is literally just a translucent engine block that gyrates the pistons if you turn the wheel.

    • @unoriginal_name7091
      @unoriginal_name7091 5 months ago +4

      This analogy is even better when you consider Gibson's quality control has been trash for over a decade

    • @alpuhagame
      @alpuhagame 5 months ago

      At least with a guitar this expensive, the sunk cost fallacy would force you to at least try to improve, to justify the investment.

    • @thejeffyb9766
      @thejeffyb9766 5 months ago +1

      Remember the Gibson auto-tuning guitar? Lol.

  • @alvarojneto
    @alvarojneto 5 months ago +18

    I'm not a great or experienced coder, but one issue I already see with LLMs is that it breaks an important aspect of coding, which is the dissection of idea implementation.
    A huge benefit I get from coding is that it forces me to really think about what it is that I am trying to do.

  • @suchithsridhar
    @suchithsridhar 5 months ago +64

    Dude! The thumbnail looked like you interviewed him! I was so excited!

    • @Abu_Shawarib
      @Abu_Shawarib 5 months ago +10

      Baited (like me)

    • @amoghnk
      @amoghnk 5 months ago +3

      Same 😅

    • @sempiternal_futility
      @sempiternal_futility 5 months ago

      I got baited too

    • @theghost9362
      @theghost9362 5 months ago

      imma park here with yall

    •  4 months ago

      Skill issues

  • @Me-wi6ym
    @Me-wi6ym 5 months ago +49

    My general rule of thumb is to use LLMs to learn how to *approach* a problem, then go figure out the details myself. If I am ever asking it about specific numbers in a problem, I have strayed too far from its purpose (in my opinion).
    Something like: "I want to make ____ kind of project, how might I start that?",
    or even: "I am stuck on ____ step, what might be a few good things to try?" are both fine.
    But nowadays, as soon as I search anything like: "will this loop go out of bounds of this array?", I start a new chat because I shifted its focus too far in the original. Once the numbers are wrong, I don't think I've ever seen them correct themselves.
    In rare cases, I'll ask it to explain what some code will do, but that's only if the documentation is truly abysmal, which to be fair, sometimes it is. I just see it as a way to sift through all the more niche or hidden code discussions online.

    • @gljames24
      @gljames24 5 months ago +16

      It's a great rubber duck.

    • @snznz
      @snznz 5 months ago +3

      I agree, it's the best rubber duck short of an actual human subject-matter expert, which oftentimes you may not have access to depending on what kind of problem you are working on. Bouncing ideas off your significant other, for instance, is probably not going to be useful if you are trying to write something like an inverse fast Fourier transform, but the LLM will have the context needed to plan an approach. Using it to actually write code is iffy; it can often get you 50-70% there, but you may end up spending more time fixing the output than it would take you to just write it.

    • @Atomhaz
      @Atomhaz 5 months ago +1

      yeah this is how I've used it. I wanted to make a music app and all the code it gave me didn't work because the library had been updated but it suggested patterns I could choose to adopt or dismiss

    • @TheNewton
      @TheNewton 5 months ago +2

      @@gljames24 Basically, though it's a great rubber duck that LIES.
      And if someone doesn't know enough they are literally incapable of spotting the lies.
      And way too many treat these LLMs as sources of truth.
      In part maybe because a subconscious misconception views everything the LLM generates as based 100% on exact words a person has written in that sequence, like it's a search engine and not an ad-libs slot machine.

  • @ScottHess
    @ScottHess 5 months ago +20

    Kernighan's Law suggests that debugging is twice as hard as writing code. Letting the LLM write the code and then debugging the result is a direction with subtle issues. It probably means that you can crank out your low-end work even faster than before, but you may not be able to improve the quality of your high-end work at all.
    And Amdahl's Law would suggest that making your low-end work easier to do may not free much if any time up to put more hours into your hard jobs. The problem in that case isn't in having time to do the actual hard work, it's that your job involves grinding through boilerplate.
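A quick back-of-the-envelope sketch of the Amdahl's Law point above (the 30% / 3x numbers here are invented for illustration, not from the comment):

```python
def overall_speedup(fraction: float, factor: float) -> float:
    """Amdahl's Law: overall speedup when only `fraction` of the work
    gets `factor` times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# If boilerplate is 30% of your time and an LLM makes it 3x faster:
print(overall_speedup(0.3, 3.0))   # 1.25 -- only a 25% overall speedup

# Even an infinitely fast LLM can't beat 1 / 0.7:
print(round(overall_speedup(0.3, 1e9), 2))   # ~1.43
```

The hard 70% of the job bounds the whole thing, which is the comment's point: speeding up the easy part barely moves the total.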

  • @gljames24
    @gljames24 5 months ago +46

    LLMs are great for rubber-ducking or small snippets. They can't replace a human programmer.

    • @ch3nz3n
      @ch3nz3n 5 months ago +9

      Yet

    • @TheSulross
      @TheSulross 5 months ago +1

      I'm pretty sure you could get AI to generate an entire web-interface CRUD application - in my programming language and tech stack of choice.

    • @LS-qs9ju
      @LS-qs9ju 5 months ago +3

      @@TheSulross As it should be; CRUD has already been done for almost three decades, it would be weird if AI couldn't learn it with that amount of data. But last time I checked it was kinda shit at visual-to-code tasks (like asking it to generate HTML+CSS to a certain visual specification): it will do the bare minimum and then be unable to expand it into what you want.

    • @TheNewton
      @TheNewton 5 months ago

      @@TheSulross There are already a lot of projects for this; it's super optimistic to even refer to the output of such toolchains as "prototypes", considering the amount of unmaintainable garbage, nonsense code, and refactoring they need.

    • @aethreas
      @aethreas 5 months ago

      @@ch3nz3n It never will. The whole technology is a step in the wrong direction as far as real AI goes. LLMs by their very nature can only rearrange and regurgitate what already exists; they literally can't come up with something new, because the underlying algorithms just take what they've been trained on (stuff that already exists) and try to rearrange it in ways that best fit the prompt. That's a massive oversimplification, but at its core that's what it's doing

  • @faceofdead
    @faceofdead 5 months ago +2

    For me personally, as low/mid-level IT support, LLMs help me a lot, because I was always a quieter person and somewhat shy about asking the seniors at work questions...
    With LLMs there are no such issues and I get through the tasks quicker ^_^

  • @MikkoRantalainen
    @MikkoRantalainen 5 months ago +3

    5:45 I interpret Linus's opinion here as "an LLM can be a great code linter, but you should treat its output as an opinion about the code and then decide for yourself if you want to actually change the code".
    Though this obviously assumes that the developer's skill issues are more about the accuracy of the implementation rather than the overall algorithm, misunderstanding data structures, or thread locking.

  • @ProfRoxas
    @ProfRoxas 5 months ago +6

    I used Gemini for a short while while writing my thesis, but after it didn't help and I instead had to give it the answer, I stopped using it.
    I don't have much experience with LLMs, but I still think they can be useful for simple tasks or as a starting point for figuring something out; as a replacement, though, or trusting their output without confirmation, I don't think they're good enough.

  • @killzolot
    @killzolot 5 months ago +2

    As someone who is a novice with code, I concur with your opinion. It's vital to have a strong foundation of understanding, and an LLM should supplement this, not replace it. If you always take shortcuts you will never build up the knowledge and skills to do anything well, and I think this is true for everything, not just coding

  • @themartdog
    @themartdog 5 months ago +25

    He is outcome focused, he doesn't care how people get there.

    • @Nick-rs5if
      @Nick-rs5if 4 months ago +1

      I get that exact feeling as well. Linus just seems to treat LLMs as just another tool, which they currently are.

  • @conceptrat
    @conceptrat 5 months ago +1

    @12:00 This is the crux of the problem, per the 'always blow on the pie' quote: always create and run the tests, even on LLM-created/guided work. This isn't just field-specific.

  • @sirtobi6006
    @sirtobi6006 5 months ago +17

    I love having LLMs read documentation for me so I can quickly get started with new libraries.

    • @Karurosagu
      @Karurosagu 5 months ago +4

      Why not read the documentation itself? Every doc out there for a library or a framework has a "Getting started" section in its first pages

    • @HRRRRRDRRRRR
      @HRRRRRDRRRRR 5 months ago +5

      ​@@Karurosagu Because most of them are poorly written, and I'm not autistic enough to understand the author.

    • @Trahloc
      @Trahloc 5 months ago

      @@Karurosagu Linking the entire documentation and then asking your specific query is faster, as the "getting started" might not answer the thing you need.

    • @Karurosagu
      @Karurosagu 5 months ago +1

      @@HRRRRRDRRRRR Most of them? IDK, I think the quality depends on a lot of factors.
      And by the way, English is not my main language and I've read many docs without problems whether they are poorly written or not. Most problems I've had have been with: very new libraries and frameworks, very specific topics within existing docs that haven't been updated after a new release, misinterpreted features that turned out to be hotfixes and then got removed, and so on.
      TL;DR
      Skill issues

    • @Karurosagu
      @Karurosagu 5 months ago

      @@Trahloc "Getting started" is not a specific feature

  • @TheHackysack
    @TheHackysack 5 months ago +16

    holy heck I don't think I've ever seen you go more than a minute without stopping the video

  • @Aosome23
    @Aosome23 5 months ago +1

    LLMs are a great replacement for searching for obscure methods in API documents. When string matching doesn't cut it, I always resort to LLMs. And they find stuff that I couldn't find in a few minutes, with ~70% accuracy
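The non-LLM baseline for that kind of lookup is fuzzy string matching; a minimal sketch using Python's stdlib `difflib` (the misremembered name `splitextension` is made up for the example):

```python
import difflib
import os.path

# Fuzzy-match a half-remembered method name against a module's public API,
# the kind of lookup where exact string matching "doesn't cut it".
candidates = [name for name in dir(os.path) if not name.startswith("_")]
matches = difflib.get_close_matches("splitextension", candidates, n=3)
print(matches)   # ['splitext']
```

This only helps when you almost know the name; the comment's point stands that an LLM can also bridge from a *description* of the behavior to the method name, which no string matcher can do.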

  • @jean-michelgilbert8136
    @jean-michelgilbert8136 5 months ago +32

    I can't stand working with LLMs for coding. I spend more time correcting their mistakes than it would take me coding the things from scratch.

    • @abeidiot
      @abeidiot 5 months ago +3

      that just means you don't know how to use them or are bad at natural language. It's like being given freshman college interns and giving them tasks too hard for them

    • @l3lackoutsMedia
      @l3lackoutsMedia 5 months ago +2

      I think it's good for pointing vaguely towards something you can try to use

    • @Okabim
      @Okabim 5 months ago

      If I tell the LLM to write something it's usually bad (even GPT-4o struggles with regular expressions). But something like Codeium in vscode auto-completing a line I've started is almost always correct, saving on keystrokes.

    • @jean-michelgilbert8136
      @jean-michelgilbert8136 5 months ago

      Totally agreed. I tried an LLM stress test where I asked Mixtral 8x7b to make a console Hello World, but with a WinMain entrypoint, in C++, without using any function from the standard library - only functions from the Windows API. In my requirements, the code had to work properly whether the UNICODE macro was defined or not, and there had to be no #ifdef UNICODE in the LLM answer. Let's say that it was an abject failure. There are exemplars of how to do each of the specific tasks I asked for on GitHub and on Stack Overflow, but they are few and far between. To code it, you're better off just with the MSDN docs 😂

    • @Happyduderawr
      @Happyduderawr 5 months ago +3

      Then don't write prompts that produce mistakes... It's not hard to guesstimate the ability of an LLM and decide to only ask it questions within its range of ability.

  • @James2210
    @James2210 4 months ago +1

    Watching this with subtitles on is trippy

  • @notapplicable7292
    @notapplicable7292 5 months ago +112

    Saying you're 10x better with AI is like writing terrible code just to cite a massive performance improvement

    • @sownheard
      @sownheard 5 months ago +1

      or the person just makes spelling mistakes and the bot just corrects the spelling

    • @specy_
      @specy_ 5 months ago +12

      A lot of coding is repetitive work, and GPT is really good at repetitive stuff; that's where it helps. If you manage to build your code well enough that it is composable and reusable, the LLM will see the pattern and suggest ways to compose it correctly

    • @kaijuultimax9407
      @kaijuultimax9407 5 months ago +6

      @@specy_ But that isn't making my code 10x better, it's just getting it done faster. If all it's doing is recognizing the pattern of what I'm coding and completing it, then the AI isn't making me do my job better, it's just letting me do the same job but slightly faster.

    • @GackFinder
      @GackFinder 5 months ago +2

      @@specy_ "Reusable code" is in general a fallacy that will bite you in the tuchus sooner or later.

    • @specy_
      @specy_ 5 months ago

      @@kaijuultimax9407 yeah ofc it won't help u make better code, but it helps u make faster code. If u can save 50% of your time when writing one feature, you can use that time to make it better yourself.

  • @thk4711
    @thk4711 4 months ago

    I have been using LLMs for quite some time. I am not a full-time developer but have to write some Python code from time to time. It really helps me when I start a script. You just tell it: I need a class named XYZ with methods a, b, c, which take the following parameters and return that. The script has the following command line parameters … etc.
    And it will do that perfectly. Then you have to kick in and write what you need your script to do. From time to time you ask how you can get this and that done. At the end you can let it help you optimize your code in a short time, if you for instance have a little too much if-then-else stuff in your code. But in the end you have to understand each line and judge what is a good recommendation and what is not.
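As a sketch of the kind of scaffold this workflow starts from (the class and method names below are invented for illustration; the comment doesn't name its "XYZ" or its parameters):

```python
import argparse

class LineStats:
    """Hypothetical stand-in for the comment's class 'XYZ'."""

    def __init__(self, text: str):
        self.lines = text.splitlines()

    def count(self) -> int:
        # Stand-in for "method a": how many lines we were given.
        return len(self.lines)

    def longest(self) -> str:
        # Stand-in for "method b": the longest single line.
        return max(self.lines, key=len, default="")

def build_parser() -> argparse.ArgumentParser:
    # The command-line parameters the comment mentions would hang off here.
    parser = argparse.ArgumentParser(description="LLM-scaffold sketch")
    parser.add_argument("--verbose", action="store_true")
    return parser

stats = LineStats("short\na much longer line\nmid")
print(stats.count())     # 3
print(stats.longest())   # a much longer line
args = build_parser().parse_args([])
print(args.verbose)      # False
```

The LLM's value here is typing out the boilerplate; per the comment, the actual logic inside the methods is where you "kick in" yourself.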

  • @jony1710
    @jony1710 5 months ago

    I've pasted non-trivial code that didn't work into an LLM and it spotted a bug for me, where I got the memory ordering wrong on some atomic operations.
    I feel like this is the sort of thing where the right answer exists out there a multitude of times, and the LLM can pull together all these resources and explain why your code is broken. It's super useful for that stuff, and way better than trying to absorb it all from the docs alone. Also it's a great rubber duck.

  • @mohammadhalipoto
    @mohammadhalipoto 5 months ago +5

    There are 10 reasons LLMs spit out crap code and both of them are hard to fix

  • @Soldknight324
    @Soldknight324 5 months ago +4

    Linus had a valium this morning

  • @TonyDiCroce
    @TonyDiCroce 5 months ago +18

    I have been programming in C++ for 30+ years. I use LLMs in all their forms for coding. Using an LLM for coding successfully involves breaking off chunks of functionality that it can handle... and it usually involves defining function signatures for it. You'll only know what an LLM can handle by using it a lot. More complicated uses can only be tackled by providing it with extensive guidance in the form of pseudocode.
    Also, I never "trust" an LLM. I have to maintain the code, so I MUST understand it. Yes, they do make mistakes... but given the size of the functions I'm asking it to write, those mistakes are usually easily spotted.

    • @censoredeveryday3320
      @censoredeveryday3320 5 months ago +1

      Which LLMs do you use for C++ ?

    • @TonyDiCroce
      @TonyDiCroce 5 months ago

      @@censoredeveryday3320 I pay for and use ChatGPT & Claude, mostly for technical discussions and exploring ideas (though I sometimes use them for code generation as well). I use GitHub Copilot in vscode... and as of last night I use Cursor Pro.

    • @rich1051414
      @rich1051414 5 months ago +4

      It's ok for self-isolated functions, usually, but it falls apart when it needs to interface with multiple systems already designed. And I would NOT use it in memory-unsafe languages.

  • @0xCAFEF00D
    @0xCAFEF00D 5 months ago

    The best use I have for LLMs is as a user, integrating very basic features into websites through Greasemonkey.
    It doesn't take long to change a website that requires hovering on an icon to show a picture into one that shows them by default.
    It's not hard, and it's not code that will see reuse. It's just normally fiddly.
    And with ChatGPT you can just roughly ask it with the right info and receive a good-enough result.

  • @burger-guy-99
    @burger-guy-99 5 months ago

    I think that last bit is key. If you're in it to learn, turn off autocomplete at least.
    If you're just trying to ship your GPT wrapper, then go for it.

  • @imperius06
    @imperius06 2 months ago

    LLMs have helped me screw things up hard, and I've got to learn more to correct them

  • @darylclarino5439
    @darylclarino5439 5 months ago

    it might also be because even though you are using an LLM for help, the PR you produce will always depend on the person doing it. Whether they rely fully on it, or not.

  • @supercheetah0
    @supercheetah0 4 months ago

    2:17 I'm not quite sure which incident that you're referring to, but, if it's about the Bcachefs author, that didn't have anything to do with his code, but rather not following the rules of development and RC cycles where he was submitting 1000+ line patches during RC cycles. That got Torvalds pretty upset.

  • @anasmostafa1
    @anasmostafa1 5 months ago

    Off topic, before watching the video, just to say ThePrimeagen is a beautiful soul

  • @timturner7609
    @timturner7609 4 months ago

    I really like whatever Microsoft has baked into the new Visual Studio where, when you're refactoring a project and you make a similar change 2 or 3 times, VS will give you the red "press tab to update" the next time you move to a similar line.
    It sure beats trying to come up with a regular expression to search and replace. Sometimes.
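For contrast, the regex route might look like this in Python (the rename itself is made up; the `\b` anchors are the fiddly part the comment is alluding to):

```python
import re

src = "x = get_total(); y = get_totals(); print(get_total.__name__)"

# Rename get_total -> compute_total; the \b word boundaries keep the
# similarly named get_totals untouched.
renamed = re.sub(r"\bget_total\b", "compute_total", src)
print(renamed)
# x = compute_total(); y = get_totals(); print(compute_total.__name__)
```

Forget the `\b`s and you clobber `get_totals` too, which is why an IDE's repeated-edit detection can beat hand-written regexes. Sometimes.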

  • @parker7721
    @parker7721 5 months ago +4

    His view is probably not negative because the code he reviews is from developers who know how to use AI. Meaning they don't just tell an LLM to "make a driver in Rust"; they use it for tedious, repetitive code tasks.

  • @llpolluxll
    @llpolluxll 4 months ago

    I use llms to help me understand the problem I'm trying to solve. You can bounce ideas off of it to help you learn.

  • @christian15213
    @christian15213 3 months ago

    The argument that I feel you miss a little is that there are different ways you can use it. You are absolutely correct that knowing the code and being a good coder is a must, but the bits you can bounce off it, or the syntax you can gather quickly, are invaluable. Oftentimes I am going back and forth between docs and LLMs, and I will know when the LLM is bullshitting and I need to go to the docs. In a way you're right, but there are also other ways you can learn to use it that are more helpful, if any of that makes sense.

  • @alexandrecolautoneto7374
    @alexandrecolautoneto7374 5 months ago +30

    LLMs turn all bugs into subtle bugs. They turn compilation errors into syntactically correct bugs with logical flaws where it takes ages to discover what went wrong.

    • @LtdJorge
      @LtdJorge 5 months ago +7

      Heavily depends on the language. With Rust, they hallucinate shit that gets instantly caught by the compiler.

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 5 months ago +12

      @@LtdJorge But they are evolving to hallucinate as coherently as possible. Future models will just be better at tricking the compiler.

    • @TheNewton
      @TheNewton 5 months ago

      @@alexandrecolautoneto7374 User asks "make this" > LLM outputs > user copies and pastes > compiler fails > user gives LLM negative feedback > LLM model evolves to avoid negative feedback > stealth code.
      Currently reminds me of so many web performance "services" that just insert JavaScript to trick auditing tools into spitting out higher scores. Mission accomplished for everyone that doesn't understand the actual code.

    • @exec.producer2566
      @exec.producer2566 5 months ago +3

      This is a skill issue. In 10 years, the mark of a good programmer will be their ability to debug LLM code. Prime & others are coping because the introduction of LLMs caused a total paradigm shift in regards to writing good code quickly. NVIM and all this other DX shit they obsessed over is a brick compared to Cursor AI + Claude. These guys jerked each other off over their WPMs, but are trapped in their old ways when something better comes out.

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 5 months ago +2

      @@exec.producer2566 I love the theory, but the reality is that LLMs are just not the right model for coding. No matter how we improve them, they will always hallucinate; it's just how they work.

  • @matt_milack
    @matt_milack 5 months ago +29

    For me it's pretty simple. The day I meet a developer, software engineer, sysadmin, network admin, cloud admin, QA tester, data analyst, data engineer, DevOps engineer, or cybersecurity professional who lost their job because a random person who is not an IT/CS professional can do that job using LLMs, I'll be like "God exists, and it's AI."

    • @dmitriyrasskazov8858
      @dmitriyrasskazov8858 5 months ago +1

      If a random person can do this using an LLM, a special person can do it too.

    • @matt_milack
      @matt_milack 5 months ago +7

      @dmitriyrasskazov8858 A random person would be perfectly happy with a significantly lower salary than a special one.

    • @ottowesterlund
      @ottowesterlund 5 months ago +1

      I don't quite understand. Are you saying you've already seen this happen, or is it more of an "if/when it happens in the future..."?

    • @matt_milack
      @matt_milack 5 months ago +3

      @@ottowesterlund The latter option.

    • @deepspace9043
      @deepspace9043 5 months ago +3

      I think it'll be either domain professionals using LLMs to do their job, or it will just be entirely automated.
      If you have a random person who knows nothing about the domain using an LLM to do the job, then you can likely just automate the job at that point.

  • @msclrhd
    @msclrhd 5 months ago

    I've found LLMs mixed. The line-based auto-complete is 80-90% useful, especially writing similar repeated code. The other times it has got in the way, but on balance I generally prefer to have this functionality.
    Using LLMs to ask questions, I found it helpful when trying to identify a Bootstrap class -- my Google searching didn't find the class, but asking an LLM helped me find the class name that I then looked up in the docs. Some other approaches I've asked for I've ended up adapting the code to the way I wanted to write it, using the LLM code as a basis. In some other instances it didn't help me solve the problem so I used different approaches.

  • @pixelfingers
    @pixelfingers 5 months ago

    It’s interesting what you said about software development and understanding the problem domain, and bugs due to edge cases or things you’d not quite fully understood.
    That sounds like coming up with a solution to some kind of problem within a set of constraints (or trying to understand what the problem actually is and what the constraints are). It’s a level higher than a particular programming language; it’s more like designing, and being able to understand certain types of problems.
    So say you were using a language that just didn’t allow certain classes of bug (like memory errors), so it was high level and the LLM didn’t need to generate that kind of code, and it became more about expressing solutions to known problems (I want to say applying patterns, but I don’t mean design patterns; something probably more high level). If an LLM was working at this level, then I think they’d be really useful.
    “I can see you’re trying to do this kind of software; have you thought about applying technique / approach / algorithm X?”
    If you could somehow turn that knowledge about problems and their solutions into some abstract model that an LLM could use to spot patterns and suggest techniques, to help you understand the problem space, then that’d be good.
    I genuinely don’t know if LLMs work like that at the moment. 🤷‍♂️

  • @pm-dev
    @pm-dev 5 months ago

    You should do the recent François Chollet ARC Prize talk at some point.
    Getting takes on LLMs from engineers like Linus is more of a personality test than anything else at this point. You should listen to what actual AGI researchers think about LLMs.

  • @sarkedev
    @sarkedev 4 months ago +1

    7:10 so if we get LLMs to review and respond to these requests, we'll have LLMs arguing with each other.

  • @tutacat
    @tutacat 1 month ago

    It only works if you fully test it and submit good code. If you try to submit bad code, it will be reviewed the same way and will probably get rejected.

  • @monkishrex
    @monkishrex 5 months ago +1

    AI is great for remembering syntax with context. You don't ask it to build a house, which it's terrible at; you ask it to build a wall four times, a floor, a roof, etc., which it's actually pretty good at.

  • @sarkedev
    @sarkedev 4 months ago +1

    16:32 not negative, but he has very high standards. He's like Gordon Ramsay, who can be an asshole in the kitchen, but is a sweetheart when you see him outside the kitchen or interacting with children

  • @temari2860
    @temari2860 5 months ago +1

    Linus talked about LLMs in future tense. He never said anything about using them for programming here and now. I think he's just optimistic about their potential in the future.

  • @sierragutenberg
    @sierragutenberg 5 months ago +108

    "openAI.... f*ck you!"
    - Linus probably

    • @tedchirvasiu
      @tedchirvasiu 5 months ago +7

      There is an identical comment below you, I think you are a bot.

    • @483SGT
      @483SGT 5 months ago +7

      There is an identical comment below you, I think you are a bot.

    • @ahmeddeco7320
      @ahmeddeco7320 5 months ago +3

      There is an identical comment below you, I think you are a bot.

    • @xClairy
      @xClairy 5 months ago +4

      There is an identical comment below you, I think you are a bot.

    • @SgtVenom
      @SgtVenom 5 months ago +2

      There is an identical comment below you, I think you are a bot.

  • @pyajudeme9245
    @pyajudeme9245 5 months ago +1

    It seems like he sees it as the logical next step after C -> compiler magic -> assembly. Now it's AI -> some black-box magic -> C -> compiler magic -> assembly.

  • @turbokev3772
    @turbokev3772 5 months ago

    My experience with Copilot and with Cursor is that they are distracting, not particularly useful, and get in your way when you already know exactly what code you want to write.

  • @zzyzxyz5419
    @zzyzxyz5419 5 months ago +1

    Does the curl article have a video?

  • @tutacat
    @tutacat 1 month ago

    The curl bounty "bugs" are just submitted with no oversight, or by people who have no idea what they are doing.

  • @sarkedev
    @sarkedev 4 months ago +1

    15:50 maybe not dumber, but out of practice. I'm a full-stack independent developer who employed a frontend dev for a few years. I find that using LLMs has the same effect if you don't "take in" all the code that is produced.

  • @dmofOfficial
    @dmofOfficial 5 months ago +2

    how do you not hallucinate when you're playing statistical probability?

    • @lolerie
      @lolerie 4 months ago

      They are not. They produce text like we do, due to emergent abilities. And we hallucinate all the time too; we just catch ourselves often, while many people do not.

  • @Rignchen
    @Rignchen 3 months ago

    I think an LLM which would spot something that could be a bug, set up an environment to test the bug, run the program in order to trigger the bug, then check if it behaved as expected and hand this to the user, would be really interesting.

  • @tesuji-go
    @tesuji-go 4 months ago

    From an automation standpoint, I'm hoping/expecting AI to find a useful home in property-based testing. Helping to more quickly zero in on corner cases that the implementation missed.

  • @npip99
    @npip99 5 months ago +2

    LLMs have been trained on all of his rants, and the entire existing linux kernel. Hard to argue with that.

  • @UnFiltered1776
    @UnFiltered1776 5 months ago +1

    Linus is a simple creature who understands complex problems.

  • @Rohinthas
    @Rohinthas 5 months ago

    I think you are spot on about Linus being surrounded by good developers who have the competence and discipline to use LLMs responsibly. I am lucky that I trust at least half my team to use LLMs in a useful way. Not so much the other half though...
    I notice that my reviews for the people using LLMs in a way I don't necessarily approve of (aka Copilot autocompletes everywhere) have harsher wording. We can tell if you wrote it yourself, y'all. We know you don't write code like that, and if I find dumb shit in it, I will get madder at you for not discovering it yourself than I would for you making the error yourself, because you are pushing your job of reviewing the suggestions onto me!

  • @LusidDreaming
    @LusidDreaming 3 months ago

    People hallucinate too. We've all been in a meeting where someone swears the API does X, when in fact it does Y.

  • @vivekpraseed918
    @vivekpraseed918 4 months ago

    The question is how good multimodal LLMs or VLMs will be in the future, like 10 years from now... They are somewhat decent today but could get magically good as time goes by.

  • @laszlo3547
    @laszlo3547 5 months ago

    The current paradigm just fundamentally doesn't work for identifying bugs. The code available to train on was likely published after the big bugs were fixed.
    The small remaining ones are not identified as bugs in the training material.

  • @lemmeML
    @lemmeML 5 months ago

    You do need to be open to the possibility of things in order to truly understand what they are.

  • @gatisozols
    @gatisozols 5 months ago

    Maybe I am missing the point, but strlen will cause a buffer overrun if there is no terminating null char.

  • @comfixit
    @comfixit 5 months ago

    LLMs have come a long way for coding, even in the last few months. Sometimes it's about identifying the use cases they are good at. One of my best use cases is dumping an entire project, often with lots of spaghetti code, into context (which can now be as big as 2 million tokens and can be cached on repeated calls to save money) and asking it to locate the parts of the code that do X. It will surface the code, I can do a quick find, and I'm on my way. It probably grabs what I need 80-90% of the time on the first shot with modern models.
    It seems like common sense to use LLMs for the aspects of coding they are good at, and probably a bad idea to use them for coding tasks they underperform on. Unfortunately, things emerge and change so fast that what an LLM is good or bad at, coding-wise, is shifting quite a bit and is not obvious out of the box.

  • @maezrfulldive2770
    @maezrfulldive2770 4 months ago

    Thank you for making this kind of video again; my idiot friend thinks every week that the end of coders is here.

  • @trn450
    @trn450 4 months ago

    LLMs are very useful to people who know enough to check their work. They're a productivity multiplier. Additionally, they serve as a decent second set of eyes; they do catch bugs.

  • @DotaBlitzPicker-wn7oq
    @DotaBlitzPicker-wn7oq 2 months ago

    I think LLMs for coding are great, but turn off the 'auto-suggest' feature. That stuff messes up your thinking process: you have an idea of what the function is supposed to do, but at that exact moment a whole bunch of code appears and you've got to read through it and lose your train of thought, and sometimes it's what you want, and sometimes it's not.
    Instead, I think having that bound to a key is super nice. You write your function and code as you normally would; then you stop and realise it's OBVIOUS what you're about to do next. You know it, the LLM knows it, so save everyone time by hitting that 'manual' button and have the code pop up when YOU ask it to.
    That, I think, is just 'better auto-complete', and it's amazing. It's the perfect combination of automation and still having enough control to get where you need to go without ending up with a bunch of code you don't understand and can't debug.

  • @Titere05
    @Titere05 5 months ago

    Oh, an LLM-augmented compiler/linter could be quite cool actually.

  • @matta5749
    @matta5749 22 days ago

    There’s a huge range of tasks where 100% accuracy on the first response is not required. If an LLM is looking for bugs and finds 75% of them plus a 50% false positive rate, that’s still useful if you’re a competent programmer and know how to apply critical thinking… you just can’t be a robot and have a mindset where everything is either perfect or worthless.
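Putting illustrative numbers on that (recall $r = 0.75$, precision $p = 0.5$, and $B$ real bugs, all assumed purely for the sake of the arithmetic):

```latex
\text{true positives} = rB = 0.75B, \qquad
\text{total reports} = \frac{rB}{p} = \frac{0.75B}{0.5} = 1.5B
```

Reviewing $1.5B$ reports surfaces $0.75B$ real bugs, i.e. one real bug per two reports read, which can still beat unaided review if triaging a report is cheaper than finding the bug from scratch.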

  • @t1nytim
    @t1nytim 5 months ago

    Where I've gotten to is: when I query an LLM and it doesn't get it in one attempt, I move on and figure it out myself. The amount of time I would previously spend on some error caused by a single wrong character that wasn't picked up by the likes of a linter/LSP was frustrating. But beyond that use case, I've found it frustrating longer term, mostly because of its lack of quality. That, and learning Neovim: I think of something I haven't encountered before that I want to do, and it tells me if there is a hotkey I don't know about. Again, I ask it something and it gets one chance, but because it's essentially just answering from a manual, it's been like 99% accurate for the basics, which I would have just been googling anyway.

  • @AliKazai
    @AliKazai 4 months ago

    Using it as a pair programmer to explore code, or to get a better understanding of the flow etc., is awesome.
    But I've strictly told it I don't want it to code, as it goes off on a tangent and keeps repeating tons of code. Then it gets confused lol.

  • @kanescott1300
    @kanescott1300 3 months ago

    Ideally it should be used for internal pre-checks on a PR; using it for automated bug bounties is obviously bad. But if I can run it against my PR to check whether I missed anything before submission, that sounds good to me.

  • @Tony-dp1rl
    @Tony-dp1rl 5 months ago

    Somewhat ironically, the latest models actually do pick up on that missing strlen check before a strcpy far better.

  • @jaye5632
    @jaye5632 5 months ago

    There is a line with LLMs: helpful on one side, problematic on the other, and today they exist on both sides. You can use an LLM as a tool to help you work; some might like Copilot, others may like using LLMs to scope out a problem, and in other cases they may not be useful at all. I don't think they will be replacing devs anytime soon, but they will be relieving us of some tasks.
    Using an LLM is like being part of a team and having to read the code of some other developer: you can read it for structure, or you can read it for how it solves the problem it is trying to address.

  • @TheNewton
    @TheNewton 5 months ago

    8:50 "seleticve arena", Isn't the LLM take just going to be heavy survivor bias? Linus is an end maintainer so the amount of filtering that happens before every code review is massive.
    Meanwhile downstream see how intermediaries feel towards an increase in submissions because LLM's give people the idea they can code fast with no regard to quality.

  • @jamesmarx
    @jamesmarx 5 months ago +3

    I was fully expecting LLMs to get reckt

  • @polymetric2614
    @polymetric2614 5 months ago

    That's what I've been saying!! Even if you had a "perfect" AI that was always right, if you just use cheat codes for everything, you never learn. Learning is like half the fun in life.

  • @ItZxDraW
    @ItZxDraW 5 months ago +1

    When an LLM creates something, it's a guaranteed mess, but other than that it's OP.
    Especially for learning (real) languages.

  • @carlosmspk
    @carlosmspk 4 months ago +1

    8:30 I'm weirdly annoyed that Prime didn't react to the "humble" joke :(

  • @DE-sf9sr
    @DE-sf9sr 5 months ago

    100%. It still takes SMEs to be effective. The LLM depends on the inputs being perfect to be right.
    Copilot depends on inputs that are not always right, not always relevant, or not always applicable, or code from an older version, etc.
    It still takes insight and SMEs to be useful.

  • @batboy49
    @batboy49 4 months ago

    LLMs are useful, but they are not the be-all and end-all. I find them most useful for getting acquainted with a library quickly.

  • @Jasonlhy
    @Jasonlhy 5 months ago

    There is a Chinese term, 盲人摸象 ("blind men and an elephant"). That's what it feels like when I ask an LLM to generate code for something I have no prior experience working on.

  • @gnuemacs1166
    @gnuemacs1166 5 months ago

    How do you access the low-level details of a video card or AI hardware? That's the real Linux question.

  • @konrTF
    @konrTF 5 months ago

    That article about the dude telling the LLM that it isn't even answering questions and is just stating untruths repeatedly while prefixing everything with "Sorry,": that's happened loads of times in almost every LLM I've tested, on so many topics and uses.

  • @jdcodersteinersky7257
    @jdcodersteinersky7257 1 day ago

    I think Linus's ambivalence is because he's not focused on its current state (which has some value) but with how quickly it's improving and what its likely near-term capabilities will be -- even for low level code requiring greater depth in hardware and how to interact with it correctly.

  • @gmt-yt
    @gmt-yt 5 months ago

    In the early days of linux I downloaded slack, IIRC. But I got a kernel panic because I was supposed to change over from the boot floppy to the root floppy (something like that, maybe someone will remember how this worked better -- basically this was the first problem anyone trying it for the first time would likely have, and surely well documented). So, I e-mailed Linus. He explained via private e-mail that you had to change the floppy or whatever. Yeah, problem solved, I was "in!". Can't remember if I thought to thank him.

  • @trendingtopicresearch9440
    @trendingtopicresearch9440 5 months ago

    Maybe as a CI job for spotting errors or suggesting improvements.

  • @dev.sharif
    @dev.sharif 4 months ago

    The CrowdStrike bug was not because of a bad test; it was a release bug. I saw somebody say that some random file in the update had all zeros inside.
    So this is why we can't even trust the code that reviews our code.
    "Better safe than sorry," I guess!

  • @notapplicable7292
    @notapplicable7292 5 months ago +4

    I have been arguing since GPT-3 that AI will make amazing static analysis tools one day; awesome to see Linus agree. It makes perfect sense, as a good 30% of the bugs we catch in code reviews at work probably could have been caught by an AI (although maybe not by a large language model with the current design).

  • @emnul8583
    @emnul8583 5 months ago

    🚨🦀🗣 Rust mentioned 🗣🦀🚨
    LETS GOOOOOOOO

  • @tutacat
    @tutacat 1 month ago

    You can get neural net compilers

  • @TradieTrev
    @TradieTrev 5 months ago

    I really like your attitude as a dev.

  • @entelin
    @entelin 5 months ago

    The difference is this: he cares about the results that land in his inbox. What tools people use is beside the point. He will rake you over the coals if the patch you submit sucks, regardless of how you got there.