Torvalds Speaks: Impact of Artificial Intelligence on Programming

  • Published: 16 Jan 2024
  • 🚀 Torvalds delves into the transformative influence of Artificial Intelligence on the world of coding.
    🚀 Key Topics:
    * Evolution of programming languages in the era of AI.
    * Enhancements in development workflows through machine learning.
    * Predictions for the future of software development with the integration of AI.
  • Science

Comments • 1.6K

  • @modernrice · 5 months ago · +6138

    These are the true Linus tech tips

    • @ginogarcia8730 · 5 months ago · +31

      hahaha

    • @rooot_ · 5 months ago · +23

      so true lmao

    • @denisblack9897 · 5 months ago · +142

      This!
      Hate that lame wannabe dude pretending to know stuff

    • @authentic_101 · 5 months ago · +5

      😅

    • @viktorsincic8039 · 5 months ago · +152

      @denisblack9897 Don't hate anyone, man. The guy is responsible for countless kids getting into tech; people tend to sort out the educational "bugs" on the way up :)

  • @alakani · 5 months ago · +1652

    Man, Linus is always such a refreshing glimpse of sanity

    • @JosiahWarren · 5 months ago · +5

      His argumet was bugs are shallow .we have compliers for shallow bugs llm can gind not so shallow .he is not the brightest

    • @rickgray · 5 months ago · +152

      @JosiahWarren Try that again with proper grammar, chief.

    • @Ryochan7 · 5 months ago · +4

      He let his own kernel and dev community get destroyed. Screw him. RIP Linux

    • @alakani · 5 months ago · +72

      @Ryochan7 Fun fact: fMRI studies show trolling has the same neural activation patterns as psychopaths thinking about torturing puppies. It's very specific, right down to the part where they vacillate between thinking it's their universal right and that they're somehow helping someone.

    • @Phirebirdphoenix · 5 months ago · +2

      @alakani And some people who troll don't think about it at all. They're easier to deal with if we aren't ascribing beneficial qualities to them.

  • @the.elsewhere · 5 months ago · +1648

    "Sometimes you have to be a bit too optimistic to make a difference"

    • @bartonfarnsworth7690 · 5 months ago · +16

      -Stockton Rush

    • @harmez7 · 5 months ago

      @bartonfarnsworth7690 It's actually originally from William Paul Young, The Shack

    • @Martinit0 · 5 months ago · +4

      Understatement of the day, LOL.

    • @MommysGoodPuppy · 5 months ago · +8

      hell of a motivational quote

    • @harmez7 · 5 months ago · +4

      That is also what a scammer wants from you.
      Don't put everything that looks fancy in your mind, kiddo.

  • @lexsongtw · 5 months ago · +1967

    LLMs write way better commit messages than I do, and I appreciate that.

    • @SaintNath · 5 months ago · +200

      And they actually comment their code 😂

    • @Sindoku · 5 months ago · +83

      @SaintNath Comments are usually bad, though. They're good if you're learning, I suppose, but they can be out of date and thus misleading.

    • @steffanstelzer3071 · 5 months ago · +290

      @Sindoku I hope your comment gets out of date quickly, because it's already misleading

    • @NetherFX · 5 months ago · +144

      @Sindoku While I get your point, comments are definitely a good thing.
      Yes, code should be self-explanatory, and if it isn't, you try your best to fix that. But there are definitely cases where it's best to add a short comment explaining why you've done something. It shouldn't describe *what* but *why*.

    • @user-oj9iz4vb4q · 5 months ago · +56

      @NetherFX That's the point: a comment is worthless unless it touches on the why. A comment that just discusses the what is absolute garbage, because the code already documents the what.
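The "why, not what" guideline from this thread can be illustrated with a small sketch (the `retry_delay` function and the rate-limit rationale are invented for illustration):

```python
def retry_delay(attempt: int) -> float:
    # A "what" comment adds nothing: multiply the base delay by 2**attempt.
    # A "why" comment earns its place: exponential backoff capped at 30 s,
    # because the upstream API throttles clients that retry in rapid bursts.
    return min(0.5 * (2 ** attempt), 30.0)
```

`retry_delay(0)` gives 0.5 and `retry_delay(10)` hits the 30-second cap; the code alone says *what* happens, only the comment says *why* the cap exists.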

  • @Hobbitstomper · 5 months ago · +393

    The full interview is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel", on the Linux Foundation channel.

    • @mercster · 5 months ago · +2

      Where was this talk held?

    • @DavidHnilica · 5 months ago · +15

      Thanks "so" much! It's pretty appalling that these folks don't even cite the source.

    • @kurshadqaya1684 · 4 months ago · +2

      Thank you a ton!

    • @KaiCarver · 4 months ago

      Thank you ruclips.net/video/OvuEYtkOH88/видео.html

    • @captaincaption · 3 months ago · +1

      Thank you so much!

  • @porky1118 · 5 months ago · +622

    1:06 "Now we're moving on from C to Rust" This is much more interesting than the title. I always thought Torvalds viewed Rust as an experiment.

    • @feignit · 5 months ago · +95

      Rust just isn't his expertise. It's going in the kernel; he's just letting others oversee it.

    • @SecretAgentBartFargo · 5 months ago · +62

      @feignit It's already been in the mainline kernel for a while. It's very stable, and Rust just works really well now.

    • @yifeiren8004 · 5 months ago · +7

      I actually think Go is better than Rust

    • @speedytruck · 5 months ago · +212

      @yifeiren8004 You want a garbage collector running in the kernel?

    • @catmanmovie8759 · 5 months ago · +12

      @SecretAgentBartFargo Rust isn't even close to stable.

  • @ficolas2 · 5 months ago · +753

    Copilot has suggested an if statement that fixed an edge case I hadn't considered enough times for me to see that it could really shine at fixing obvious bugs like that.

    • @doodlebroSH · 5 months ago · +122

      Skill issue

    • @antesajjas3371 · 5 months ago · +301

      @doodlebroSH If you always think of every edge case in all the code you write, you aren't programming that much

    • @ficolas2 · 5 months ago

      @doodlebroSH I can tell you are new to programming and talking out of your ass just by that comment.

    • @markoates9057 · 5 months ago · +24

      @doodlebroSH :D yikes

    • @turolretar · 5 months ago · +3

      @antesajjas3371 I think you misspelled edge
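The kind of edge-case guard described at the top of this thread might look like the following sketch (the `average` function and its empty-list policy are invented for illustration):

```python
def average(values: list[float]) -> float:
    # Edge case that's easy to overlook: an empty list would raise
    # ZeroDivisionError below. An assistant will often suggest this guard.
    if not values:
        return 0.0
    return sum(values) / len(values)
```

`average([2.0, 4.0])` returns 3.0, and `average([])` returns 0.0 instead of crashing. Whether 0.0 or an exception is the right behaviour is still a design decision the human reviewer has to make.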

  • @MethodOverRide · 5 months ago · +363

    I am a senior software engineer, and I sometimes use ChatGPT at work to write PowerShell scripts. It usually provides a good enough start for me to modify to do what I want. That saves me time and lets me create more scripts to automate more. It's not my main programming task, but it definitely saves me time when I need to do it.

    • @falkensmaize · 5 months ago · +48

      Same. ChatGPT is great for throwing together a quick shell or Python script to do boring data tasks that would otherwise take much longer.

    • @alakani · 5 months ago · +19

      Yep, saves me so much time with data preprocessing, and adds nice little features that I wouldn't normally bother with for a one-time throwaway script

    • @jsrjsr · 5 months ago · +10

      Quit your job.

    • @alakani · 5 months ago · +32

      @jsrjsr And light a fart?

    • @jsrjsr · 5 months ago · +2

      @alakani He should do worse than that.

  • @mikicerise6250 · 5 months ago · +626

    If you let the LLM author code without checking it, you will inevitably just get broken code. If you don't use LLMs, you will take twice as long. If you use LLMs, review and verify what they say and propose, and use them, as Linus rightly suggests, as a code reviewer who will actually read your code and can guess at your intent, you get more reliable code much faster. At least that is the state of things as of today.

    • @keyser456 · 5 months ago · +33

      Perhaps anecdotal, but it (AI Assistant in my case; I'm using JB Rider, pretty sure that's tied to ChatGPT) seems to get better with time. After finishing a method, I have another method already in mind. I move the cursor and put a blank line or two under the method I just created, in prep for the new one. If I let it sit for just a second or two before any keystrokes, oftentimes it will predict the method I'm about to create all on its own, without me even starting the method signature. Yes, sometimes it gets it very wrong and I'll just hit escape to clear it, but sometimes it gets it right... and I mean really scary right. Like every line down to the keystroke, and even the naming is spot on, consistent with naming throughout the rest of the project. Yes, agreed, you still need to review the generated code, but I suspect that will only get better with every iteration. Rather than autocompleting methods, eventually entire files, then entire projects, then entire solutions. It's probably best for developers to learn to work with it in harmony as it evolves, or they will fall behind the peers who are embracing it. Scary and exciting times ahead.

    • @pvanukoff · 5 months ago · +18

      @keyser456 Same experience for me. It predicts what I was about to write next about 80% of the time, and when it gets it right, it's pretty much spot on. Insane progress just over the past year. Imagine where it will be in another year. Or five years. Coding is going to be a thing of the past, and it's going to happen very quickly.

    • @rayyanabdulwajid7681 · 5 months ago · +7

      If it is intelligent enough to write code, it will eventually become intelligent enough to debug complex code, as long as you tell it what issue arises

    • @CausallyExplained · 5 months ago · +11

      You are training the LLM for the inevitable.

    • @derAtze · 5 months ago · +2

      Oh man, now I really want to get into coding just to get that same transformative experience of a tool thinking ahead of you. I am a designer, and to be frank, the experience with AI in my field is much less exciting; it's just stock footage on steroids, and all the handiwork of editing and putting it together is sadly the same. But the models are evolving rapidly, and things like AI object select and masking, vector generation in Adobe Illustrator, transformative AI (turning a summer valley into a snow valley, e.g.), and motion-graphics AI are on the horizon or already there. Indeed, what a time to be alive :D Might get into coding soon, though

  • @Kaelygon · 5 months ago · +715

    While AI lowers the bar to start programming, I'm afraid it also makes writing bad code easier. But as with any other tool, more power brings more responsibility, and manual review should remain just as important.

    • @footballuniverse6522 · 5 months ago · +56

      As a cloud engineer I gotta say, ChatGPT with GPT-4 really turbocharges me for most tasks. My productivity shot up 100-200%, and I'm not kidding. You gotta know how to make it work for you, and it's amazing :)

    • @alexhguerra · 5 months ago · +15

      There will be more than one AI, one for each task: to create code and to validate code. Make no mistake, AGI is the final target, but the intermediate ones are good enough to speed up the whole effort.

    • @musiqtee · 5 months ago · +89

      Ok, speed, efficiency, productivity… All true, but to what effect? Isn't it so that every time we've had a serious paradigm shift, we thought we could "save time"?
      Sadly, since corporations are not 'human', we've ended up working *more*, not less, raising the almighty GDP, having less free time and not making significantly more money.
      Unless… you own shares, IP, patents and other *derivatives* of AI as capital.
      AI is a tool. A sharp knife is also one. This "debate" should ask "who is holding the tool, and for what purpose?". That question yields very different answers for a corporation, a government, a community or a single person.
      It's not what AI is or can do. It's more about what we are, and what we do with AI… 👍

    • @westongpt · 5 months ago · +15

      Couldn't the same be said of Stack Overflow? I'm not disagreeing with you, just adding an example to show it's not a new phenomenon.

    • @pledger6197 · 5 months ago · +18

      It reminds me of a talk on a podcast before LLMs, where the speaker said they had tried to use AI as an assistant for medical reports and ran into the following problem:
      sometimes people see that the AI gets the right answers, and then, when they disagree with it, they still choose the AI's conclusion, because "the system can't be wrong".
      So to fight this, they programmed the system to sometimes give wrong results and ask the person to agree or disagree, to force people to choose the "right" answer and not just accept whatever the system says.
      And this is what I believe is the weak point of LLMs.
      While they're helpful in some scenarios, in others they can give answers SO deceiving that they look exactly as they should, but describe something that doesn't even exist.
      E.g. I asked one about the best way to get an achievement in a game, and it came up with things that really exist in the game and sound like they should be related to the achievement, but in fact are not.
      And a friend tried to look up Windows error codes, and it came up with problems and descriptions that don't really exist either.

  • @PauloJorgeMonteiro · 5 months ago · +384

    Linus..... My man!!!
    I would probably hate working with him, because I am not a very good software engineer and he would go nuts over my time-complexity solutions... but boy, has he inspired me.
    Thank you!

    • @MrFallout92 · 5 months ago · +25

      bro do you even O(n^2)?

    • @PauloJorgeMonteiro · 5 months ago · +55

      @MrFallout92 I wish!!!
      These days I have a deep love for factorials!

    • @TestTest12332 · 5 months ago · +40

      I don't think he would. His famous rants on LKML, before he changed his tone, were at people who SHOULD HAVE KNOWN BETTER. I don't remember him going nuts at newbies for being newbies. He did go nuts at experts who tried to submit sub-par/lazy/incomplete work and should have known it was sub-par and needed fixing, but didn't bother. He was quite accurate and fair in that.

    • @Saitanen · 5 months ago · +4

      @TestTest12332 Has this ever happened? Do you have any specific examples?

    • @uis246 · 5 months ago · +8

      @Saitanen That time an fd-based syscall returned a file-not-found error code. Linus went nuts.

  • @vlasquez53 · 5 months ago · +165

    Linus sounds so calm and relaxed, until you see his comments on other people's PRs

    • @thewhitefalcon8539 · 5 months ago · +21

      That was a terrible PR, though

    • @Alguem387 · 3 months ago · +3

      I think he does it for fun, tbh

    • @gruberu · 3 months ago · +12

      Let whoever among us hasn't had a bad day because of a bad PR cast the first stone

    • @MechMK1 · 3 months ago · +2

      You gotta let off steam somehow

    • @__Henry__ · 3 months ago · +1

      Yeah :/

  • @ZeroPlayerGame · 5 months ago · +131

    Man, Linus looks noticeably older and wiser than in his older talks. More respect for the guy.

    • @RyanMartinRAM · 5 months ago · +11

      Great people often age like wine.

    • @ZeroPlayerGame · 5 months ago · +18

      @RyanMartinRAM I have another adage: with age comes wisdom, but sometimes age comes alone. Not this time, though!

    • @DielsonSales · 1 month ago

      I think age makes anyone more humble, but sometimes less open-minded. It's good to see Linus recognize that LLMs have their uses, while some projects like Gentoo have stood completely against LLMs. Nothing is black and white, and when the hype is over, I think LLMs will still be used as assistants to pay attention to the small stuff we sometimes neglect.

  • @heshercharacter5555 · 4 months ago · +11

    I find LLMs extremely useful for generating small code snippets very quickly, for example advanced regular expressions. They've saved me tons of hours.
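As a sketch of the kind of regular expression an LLM can draft in seconds (the semantic-version pattern and the sample text here are invented for illustration, and like any generated regex it is worth testing before trusting):

```python
import re

# Matches semantic-version strings such as 1.2.3 or 2.3.0-rc.1,
# with an optional pre-release tag after a hyphen.
SEMVER = re.compile(r"\b(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?\b")

text = "upgraded from 1.0.12 to 2.3.0-rc.1 yesterday"
print([m.group(0) for m in SEMVER.finditer(text)])
# ['1.0.12', '2.3.0-rc.1']
```

The capture groups also split out major, minor, and patch numbers if you need them individually.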

  • @illyam689 · 5 months ago · +445

    I think that Linus, in 2024, should run his own podcast

    • @TalsBadKidney · 5 months ago · +42

      and his first guest should be Joe Rogan

    • @SergioGomez-qe3kn · 5 months ago · +74

      @TalsBadKidney
      Linus: - "What language do you think should be taught first at elementary school, Joe?"
      Joe: - "Jujitsu"

    • @turolretar · 5 months ago · +4

      @TalsBadKidney This is such a great idea

    • @ton4eg1 · 5 months ago · +1

      And do stand-up.

    • @madisonhanberry6019 · 5 months ago · +4

      He's such a great speaker, but I doubt he would have much time between managing Linux, family life, and whatever else

  • @alcedob.5850 · 5 months ago · +293

    Wow, finally someone who acknowledges the options LLMs give without overhyping them or calling them an existential threat

    • @darklittlepeople · 5 months ago · +5

      yes, I find him very refreshing indeed

    • @MikehMike01 · 5 months ago

      LLMs are total crap, there's no reason to be optimistic

    • @deeplife9654 · 5 months ago · +30

      Yes. Because he is not a marketing guy or the CEO of a company.

    • @genekisayan6564 · 5 months ago · +2

      Man, they can't even count additions. Of course they are not a threat. At least not yet

    • @curious_banda · 5 months ago

      @genekisayan6564 Never used GPT-4 and other later models?

  • @Pantong · 5 months ago · +166

    It's another tool, like static and dynamic analysis. No programmer will follow these tools blindly, but they can use them for suggestions or to improve a feature. There have been times I've been stuck picking a good data structure, and GPT has given more insightful ideas or edge cases I wasn't considering. That's its most useful role right now: a rubber ducky.

    • @AM-yk5yd · 5 months ago

      >No programmer will follow these tools blindly
      My sweet summer child. The curl authors already have to deal with "security reports" because some [REDACTED]s used Bard to find "vulnerabilities" to collect a bug bounty. Wait for the next jam in the style of "submit N PRs and get our merch", and instead of PRs that fix a typo, you'll get even worse: code that doesn't compile.

    • @conrad42 · 5 months ago · +16

      I agree that it can help in these scenarios. People should be made aware of this, as the current discussion is way over the top and scares people into fearing for their jobs (and therefore their mental health). Another thing: since sustainability was a topic, I'm not sure the energy consumed by this technology justifies these trivial tasks. Talking with a colleague seems more energy efficient.

    • @LordChen · 5 months ago

      Aha, until it writes a Go GTK phone app (Linux phone) zero to hero with no code review and only UI design discussions.
      Six months ago. Just ChatGPT-4.
      Programming is dying and you people are dreaming.
      In 2023 there were 30% fewer new hires across all programming languages.
      For 2024, out of 950 tech companies, over 40% plan layoffs due to AI.
      A bit tired to link the source

    • @larryjonn9451 · 5 months ago · +23

      You underestimate the stupidity of people

    • @Gokuguy1243 · 5 months ago · +17

      Absolutely. I'm convinced the other commenters claiming LLMs will make programming obsolete in 3 years or whatever are either not programmers or bad programmers lol

  • @joemiller8409 · 5 months ago · +56

    the deafening silence when that phone alarm dared to go off mid-Torvalds-dialogue 😆

  • @vishusingh008 · 4 months ago · +1

    In such a short video, one can easily witness the brilliance of the man!!!

  • @elliott8596 · 5 months ago · +209

    Linus has really mellowed out as he has gotten older.

    • @duffy666 · 5 months ago · +42

      In a good way.

    • @Munchkin303 · 5 months ago · +61

      He became hopeful and humble

    • @mikicerise6250 · 5 months ago · +29

      The therapy worked. 😉

    • @Rajmanov · 5 months ago

      @mikicerise6250 No therapy at all, just wisdom

    • @darxoonwasser · 5 months ago · +23

      @Munchkin303 Linus "Hopeful and Humble" Torvalds

  • @ginebro1930 · 5 months ago · +57

    Smart answer from Linus.

  • @ChrisM541 · 5 months ago · +72

    For experienced programmers, most mistakes can be categorised as 'stupid', i.e. a simple oversight, where the fix is equally, trivially stupid. It's exactly the same with building a PC: you might have done it 'millions' of times, but forgetting something stupid in the build is always stupidly easy to do, and though you might not do it often, you inevitably still will, at some point. Unfortunately, the fixes always seem to take forever to find.

    • @Jonas-Seiler · 5 months ago · +16

      That's the only good take on AI in the video, and maybe the only truly helpful thing AI might ever be used for: finding the obvious mistakes humans make because they're thinking about more important shit.

    • @autohmae · 5 months ago · +5

      That's the problem with computers: you need to get it all 100% correct or it won't work.

    • @hallrules · 5 months ago · +6

      @autohmae That also doubles as the good thing about computers, because they will never do something you didn't tell them to do

    • @chunkyMunky329 · 5 months ago · +4

      I disagree with this. Simple bugs are easier to find, so we find more of them. The other bugs are more complex, which makes them harder to find, so we find fewer of them. For example, not realising that the HTTP protocol has certain ramifications that become a serious problem when you structure your web app a certain way.

    • @ChrisM541 · 5 months ago

      @chunkyMunky329 It's definitely true that there are always exceptions, though I'd politely suggest "not realising" is primarily a result of inexperience.
      A badly written and/or badly translated URS can lead to significant issues when the inevitable subsequent change requests flood in, especially if there's poor documentation in the code.
      Any organisation is only as good as its QA. We see this more and more in the games industry, where we increasingly, and deliberately, offload that testing onto the end consumer.
      Simple bugs should be easy to find, you'd think, but they're also very, very easy to hide, unfortunately.

  • @draoi99 · 5 months ago · +2

    Linus is always chill about new things.

  • @duffy666 · 5 months ago · +144

    "we are all autocorrects on steroids to some degree" - agree 100%

    • @alang.2054 · 5 months ago · +9

      Could you elaborate on why you agree? Your comment adds no value right now

    • @RFC3514 · 5 months ago · +13

      I think he really meant to say "autocomplete", because it basically takes your prompt and looks for the answer most likely to follow it, based on material it has read.
      Which _is_ indeed kind of how humans work... if you remove creativity and the ability to _interact_ with the world, and only allow them to read books and answer written questions.
      And by "creativity" I'm including the ability to spot gaps in our own knowledge and do experiments to acquire _new_ information that wasn't part of our training.

    • @sbqp3 · 5 months ago · +19

      The thing people with the interviewer's mindset miss is what it takes to predict correctly. The language model has to have an implicit understanding of the data in order to predict. ChatGPT uses a large language model to produce text, but you could just as well use one to produce something else, like actions in a robot. Which is kind of what humans do: they see and hear things, and act accordingly. People who dismiss the brilliance of large language models on the basis that they're "just predicting text" are really missing the point.

    • @RFC3514 · 5 months ago · +1

      @sbqp3 - No, you couldn't really use it to "produce actions in a robot", because what makes ChatGPT (and LLMs in general) reasonably competent is the huge amount of material it was trained on, and there isn't anywhere near the same amount of material (certainly not in a standardised, easily digestible form) of robot control files and outcomes.
      The recent "leap" in generative AI came from the volume of training data (and the ability to process it), not from any revolutionary new algorithms. Just more memory + more CPU power + easy access to documents on the internet = more connections & better weigh(t)ing = better output.
      And in any application where you just don't have that volume of easily accessible, easily processable data, LLMs are going to give you poor results.
      We're still waiting for remotely competent self-driving vehicles, and there are billions of hours of dashcam footage and hundreds of companies investing millions in it. Now imagine trying to use a similar machine-learning model to train a mobile industrial robot that has to deal with things like "finger" pressure, spatial clearance, humans moving around it, etc. Explicitly coded logic (possibly aided by some generic AI for object recognition, etc., which is already used) is still going to be the norm for the foreseeable future.

    • @duffy666 · 5 months ago · +4

      @alang.2054 I like his comment because most of the thinking humans do is in fact System 1 thinking, which is reflex-like and on a similar level to what LLMs do.
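For readers wondering what "just predicting the next token" means mechanically, here is a deliberately tiny sketch: a bigram table that "autocompletes" by returning the most frequent continuation seen in a made-up training corpus. Real LLMs learn vastly richer statistics over subword tokens, but the prediction loop has this basic shape:

```python
from collections import Counter, defaultdict

# Toy training corpus (invented for illustration).
corpus = "the kernel is stable the kernel is fast the code is stable".split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    # "Autocomplete": return the most frequent continuation seen so far.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'kernel' (seen twice, vs 'code' once)
print(predict("is"))   # 'stable' (seen twice, vs 'fast' once)
```

The point of the thread survives the toy: predicting well forces the model to encode regularities of the data, and the richer the data, the more "understanding" those counts have to approximate.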

  • @nathanmccarthy6209 · 5 months ago · +9

    There is absolutely no doubt in my mind that things like Copilot are already part of pull requests that have been merged into the Linux kernel.

  • @bergonius · 5 months ago · +10

    "You have to kinda be a bit too optimistic at times to make a difference" - This is profound

  • @srinivaschillara4023 · 1 month ago · +1

    So nice, and also the quality of comments on this video... there is hope for humanity.

  • @caesare1968 · 5 months ago

    How nice, leaving the advertisement until after the program. Applause!

  • @shroomer3867 · 5 months ago · +128

    At 1:10 you can see Linus locating the Apple user and considering killing him on the spot, but he decides against it and continues his thought

  • @vaibhawc · 5 months ago · +33

    Always love to hear Sir Linus Hopeful Humble Torvalds

    • @latt.qcd9221 · 5 months ago · +1

      Sir Linus Hopeful *_And_* Humble Torvalds

  • @nettebulut · 5 months ago · +2

    03:34 "Hopeful and humble, that's my middle name", and laughing.. :)

  • @TjPhysicist · 4 months ago · +1

    I love this little short. I think what both of them said is true. An LLM is definitely "autocorrect on steroids", as it were. But honestly, a lot of programming, and really a lot of jobs in general, don't require a higher level of intelligence. As Linus said, we are all autocorrect on steroids to some degree, because for the most part that's all a lot of what we do requires. The problem is knowing the limitations of such a tool and not attempting to subvert human creativity with it.

  • @WokerThanThou · 5 months ago · +5

    Man... I really wanted to see what would happen if that phone rang again.

  • @roylxp · 4 months ago · +3

    No one commenting on the moderator? He is doing a great job driving the conversation

  • @LiebeGruesse · 5 months ago · +1

    3:02 So true. And so rarely heard. 🙏

  • @pullingweeds · 5 months ago

    Great to hear Linus comment on the opening statement made by the interviewer. I think he may have expected Linus to agree with him.

  • @Willow1w · 5 months ago · +137

    AI is helpful with beginner programming tasks. It's fantastic for converting textual data between formats. But as soon as you ask for help with more advanced subjects, for example writing a KMDF driver or a bottom-up parser, it will spit out complete garbage. Training the model on text scraped from the internet will only take you so far.

    • @jumpstar9000 · 5 months ago · +13

      I'm pretty sure it can sketch out both, and then you can use the model to drill down and fill in the pieces. At least, that is how I use it. It does pretty well. I currently have to keep an eye on it, but it isn't stupid and is quite capable of writing novel code (with some prompting), or converting algorithms to AVX2, or writing CUDA, or...
      The value seems to be in the eye of the beholder. If you approach it with skepticism and cynicism and refuse to put some effort in, well, you get what you deserve, imho.

    • @thegoldenatlas753 · 5 months ago · +5

      Part of the issue is quantity. There are far fewer resources on the lower-level concepts, and that lack of resources hampers any chance of improving quality.
      AI in programming is essentially a programmer that's only ever done tutorials, and you'll be hard pressed to find enough tutorials for something low level like a driver compared to something like a website. So of course an AI will spit out gibberish for a driver.
      Personally, I've used AI mostly for quickly finding out whether something already exists for what I'm doing. For instance, if you didn't know the map function existed, you could ask the AI how to combine two sets of values, and it would tell you about map.

    • @Jcossette1 · 5 months ago · +19

      You have to cut the tasks into smaller individual prompts. You can't ask it to code an OS in one prompt.

    • @ukpropertycommunity · 5 months ago · +3

      @Jcossette1 It's a form of supervised learning, so you need enough knowledge to specify the expected behaviour, i.e. you could already write it yourself, such that it can just do autocorrect on steroids. As for long stretches of code, it might not hit the 128K context-window limit directly, but it will hit sparse self-attention issues that delete random lines of code well before that!

    • @IrregularPineapples · 5 months ago · +5

      You say that like you're an expert -- AI LLMs like ChatGPT have only been around for like 6-12 months
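The "didn't know map existed" example in this thread, in Python terms: combining two sequences element-wise is what `map` with two iterables (or `zip`) does. The prices and quantities below are made-up sample data:

```python
prices = [10.0, 20.0, 30.0]
quantities = [2, 1, 4]

# Combine two sets of values element-wise: map with two iterables
# pairs them up positionally, much like zip would.
totals = list(map(lambda p, q: p * q, prices, quantities))
print(totals)  # [20.0, 20.0, 120.0]
```

Asking an assistant "how do I combine these two lists value by value?" typically surfaces `map`/`zip` faster than scanning the docs would.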

  • @EdwardBlair
    @EdwardBlair 5 месяцев назад +3

“Auto correct on steroids” is what people who are experts in their field of engineering say when they aren't SMEs in ML. Human intelligence is just “autocorrect on steroids”: we predict what we believe is the most logical next step, just in a much more efficient manner than our current silicon hardware can execute.

  • @datboi449
    @datboi449 4 месяца назад

I have used LLMs to help me learn React when I was only familiar with Angular. I knew Angular jargon and could prompt for a React version of my Angular thought process, then take the response and pinpoint the features to research further.

  • @denisblack9897
    @denisblack9897 5 месяцев назад

    This made my day, thanks!

  • @alextrebek5237
    @alextrebek5237 5 месяцев назад +61

(Average typing speed × number of working days a year) / 6 words per line of code ≈ 1M LOC/year. But we don't write that much. Why? Most coding is just sitting and thinking, then writing a little.
LLMs are great for getting started with a new language or library, or for writing repetitive data structures or algorithms, but bad for production code or logic (design patterns such as the Strategy pattern) because they don't logically understand the problem domain, which our napkin math just showed is the largest part of coding, and the part coding assistants aren't improving.
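The napkin math above can be made concrete. The figures below (typing speed, hours per day, working days) are my own illustrative assumptions, not the commenter's:

```python
# Upper bound on how much code a developer could *type* in a year,
# if typing were the bottleneck. All constants are illustrative guesses.
words_per_minute = 40      # assumed average typing speed
minutes_per_day = 8 * 60   # an 8-hour working day
working_days = 230         # roughly a working year
words_per_loc = 6          # the comment's words-per-line-of-code ratio

loc_per_year = words_per_minute * minutes_per_day * working_days // words_per_loc
print(loc_per_year)  # 736000 with these figures, i.e. on the order of 1M LOC
```

Real output is far smaller, which is the comment's point: typing speed is not what limits programmers, thinking is.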

    • @antman7673
      @antman7673 5 месяцев назад +2

I wouldn't even agree.
Imagine yourself just getting the job to code project X.
In that case, you can rely on a very limited amount of information.
Within the right context, there are very few ways in which LLMs fail.

    • @coryc9040
      @coryc9040 5 месяцев назад +1

Maybe if many programmers sit down and explain their thought process on multiple different problems, it can learn to abstract the problem-solving method programmers use. While "autocorrect on steroids" might be technically accurate for what it's doing, the models it builds to predict the next token are extremely sophisticated and, for all we know, may have some similarity to our logical understanding of problem domains. Also, LLMs are still in their infancy; there are probably controls or additional complexity that could be added to address current shortcomings. I'm skeptical of some of the AI hype, and I'm equally skeptical of the naysayers. I tend to think the naysayers are wrong based on what LLMs have already accomplished. Plenty of people just 2-3 years ago would've said some of the things they are doing now are impossible.

    • @SimGunther
      @SimGunther 5 месяцев назад +5

      Read the original documentation and if there's something you don't understand, Google it and be social. Only let the LLM regurgitate that part of the docs in terms you understand as a last resort.
      I'm surprised at the creativity LLMs have in their own context, but don't replace reading the docs and writing code with LLMs. You must understand why the algo/struct is important and what problems each algorithm solves.
      If you think LLMs replace experience, you're surely mistaken and you'll be trapped in learned helplessness for eternity.

    • @mobbs8229
      @mobbs8229 5 месяцев назад +4

I literally asked ChatGPT today to explain the MVCC pattern (which I could've sworn was called the MVVC pattern, but it corrected me), and its explanation got worse with every attempt as I told it it was not doing a good job.

    • @RobFisherUK
      @RobFisherUK 5 месяцев назад +3

@@SimGunther Reading the docs only works if you know what you're looking for. LLMs are great at understanding your badly written question.
      I once proposed a solution to a problem I had to ChatGPT and it said: that sounds similar to the technique in statistics called bootstrapping. Opened up a whole new box of tricks previously unknown to me.
      I could have spent months cultivating social relationships with statisticians but it would have been a lot more work and I'm not sure they'd have the patience.

  • @mdimransarkar1103
    @mdimransarkar1103 5 месяцев назад +3

    could be a great tool for static analysis.

    • @chunkyMunky329
      @chunkyMunky329 5 месяцев назад +3

      If it was great at static analysis then people would probably already be using it for static analysis

  • @arindam-karmakar
    @arindam-karmakar 12 дней назад +1

    Where can I see the full video?

  • @Eimrine
    @Eimrine 5 месяцев назад

    I love the pun in the middle of the video.

  • @DAG_42
    @DAG_42 5 месяцев назад +21

    I'm glad he corrected the host. We are indeed all basically autocorrect to the extent LLMs are. LLMs are also creative and clever, at times. I get the feeling the host hasn't used them much, or perhaps at all

    • @kralg
      @kralg 4 месяца назад +13

      It _seems_ to be creative and it _seems_ to be clever especially to those who are not. The host was fully correct stating that it has nothing to do with "intelligence", it only _seems_ to be intelligent.

    • @doomsdayrule
      @doomsdayrule 4 месяца назад +3

      @@kralg If we made a future LLM that is indistinguishable from a human being, that answers questions correctly, that can solve novel problems, that "seems" creative... what is it that distinguishes our intelligence than the model's?
      It's just picking one token before the next, but isn't that what I'm also doing while writing this comment? In my view, there can certainly be intelligence involved in those simple choices.

    • @kralg
      @kralg 4 месяца назад +2

@@doomsdayrule Intelligence is much more than just writing text. Our decisions are based not only on lexical facts, but on our personal experiences, personal interests, emotions, etc. I cannot and am not going to go much deeper into that, but it must be way more complex than a simple algorithm over a bunch of data.
I am saying nothing less than that you will never ever be able to make a future LLM that is indistinguishable from a human being. Of course, when you are presented just a text written by "somebody" you may not be able to figure it out, but if you start living with a person controlled by an LLM you will notice much sooner than later. That is because the bunch of data these LLMs use is missing one important thing: personality. And that word is highly related to intelligence.

    • @KoflerDavid
      @KoflerDavid 4 месяца назад +5

      @@doomsdayrule As I am writing this comment, I'm not starting with a random word like "As" and then try to figure out what to write next. (Actually, the first draft started with "When")
      I have a thought in mind, and then somehow pick a sentence pattern suitable for expressing it. Then I read over (usually while still typing) and revise. At some point, my desire to fiddle with the comment is defeated by the need to do something else with my day, and I submit the reply. And then I notice obvious possibilities for improvements and edit what I just submitted.

    • @kralg
      @kralg 3 месяца назад

@@MarcusHilarius One aspect of this is that we are living in an overhyped world. Just in recent years we have heard so many promises like the ones you made; think about the promises made by Elon Musk and other questionable people. The marketing around these technologies is way "in front" of the reality. If there is just a theoretical possibility of something, the marketing jumps on it and creates thousands of believers, with the obvious aim of gathering support for further development. I think it is just smart to be cautious.
The other aspect is that many believers do not know the real details of the technologies they believe in. The examples you mentioned are not in the future; to some extent they are available now. We call it automation, and it does not require AI at all; instead it relies on sensor technology and simple logic. Put an AI sticker on it and sell a lot more.
Sure, machine learning will be a great tool in the future, but not much more. We are in the phase of admiration now, but soon we will face the challenges and disadvantages of it, and we will just live with them as we did with many other technologies from the past.

  • @nox5282
    @nox5282 5 месяцев назад +6

I use AI as a learning tool: if I get stuck I bounce ideas off it like I would with a person, then use that as a basis to keep going. I discover things I didn't consider and continue reading other sources. Right now AI is not good at teaching you, but it is great for getting directions to explore, or a map of things and concepts to look up.
That being said, the next generation will be unable to form thoughts without AI. How many people still know how to do long division by hand?

  • @alleged_STINK
    @alleged_STINK 17 дней назад

    "This pattern doesn't look like the usual pattern, are you sure?" awesome

  • @TheNimaid
    @TheNimaid 5 месяцев назад +132

As someone with a degree in Machine Learning, hearing him call LLMs "autocorrect on steroids" gave me catharsis. The way people talk and think about the field of AI is totally absurd and grounded in SciFi only. I want to vomit every time someone tells me to "just use AI to write the code for that" or similar.
    AI, as it exists now, is the perfect tool to aid humans (think pair programming, code auto-completion for stuff like simple loops, rough prototypes that can inspire new ideas, etc.) Don't let it trick you into thinking it can do anyone's job though. It's just a digital sycophant, never forget that.

    • @vuralmecbur9958
      @vuralmecbur9958 4 месяца назад +12

      Do you have any valid arguments that make you think that it cannot do anyone's job or is it just your emotions?

    • @legendarymortarplayer9453
      @legendarymortarplayer9453 3 месяца назад

@@vuralmecbur9958 If your job relies on not thinking and copy-pasting code, then yes, it can replace you. But if you understand code and can modify it properly to your needs and specifications, it cannot replace you. I work on AI as well.

    • @user-zf4nq1dy2n
      @user-zf4nq1dy2n 3 месяца назад

@@vuralmecbur9958 It's not about AI not being an "autocorrect on steroids". It's that there are a lot of jobs out there that could be done by autocorrect on steroids.

    • @DDracee
      @DDracee 3 месяца назад

@@vuralmecbur9958 Do you have any valid arguments as to why people will get laid off instead of companies scaling up their projects? A 200-300% increase in productivity simply means a 200-300% increase in future project sizes; the field you're working in is already dying anyway if scaling up isn't possible, and you're barking up the wrong tree.
Where I'm working, we're constantly turning down projects because there's too much to do and no skilled labour to hire (avionics/defense).

    • @jeromemoutou9744
      @jeromemoutou9744 3 месяца назад

@@vuralmecbur9958 Go prompt it to make you a simple application and you'll see it's not taking anyone's job anytime soon.
If anything, it's an amazing learning tool. You can study code, and anything you don't understand it will explain in depth. You don't quite grasp a concept? Prompt it to explain further.

  • @wabbajocky8235
    @wabbajocky8235 5 месяцев назад +5

    linus with the hot takes. love to see it

  • @msromike123
    @msromike123 2 дня назад +1

Well, my first Arduino project went very well: a medium-complexity differential temperature project with 3 operating modes, hysteresis, etc. I know BASIC and the 4NT batch language. Microsoft Copilot helped me produce tight, memory-efficient, buffer-safe, and well-documented code. So, AI for the win!
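The hysteresis in a differential-temperature controller like that is a nice self-contained idea: the output only switches once the temperature differential crosses a band, so it doesn't chatter around the setpoint. A rough sketch of the logic in Python (the thresholds are invented for illustration; the commenter's actual Arduino code is not shown here):

```python
# Differential thermostat with hysteresis: switch the pump/fan on when the
# collector is well above the tank, and off only once the gap closes further.
ON_DELTA = 5.0   # switch on above this differential (assumed value, degrees)
OFF_DELTA = 2.0  # switch off below this differential (assumed value, degrees)

def update_output(collector_temp, tank_temp, output_on):
    """Return the new output state given the current differential."""
    delta = collector_temp - tank_temp
    if not output_on and delta >= ON_DELTA:
        return True
    if output_on and delta <= OFF_DELTA:
        return False
    return output_on  # inside the dead band: keep the previous state

state = False
state = update_output(60.0, 50.0, state)  # delta 10 -> switches on
state = update_output(54.0, 50.0, state)  # delta 4, inside band -> stays on
state = update_output(51.0, 50.0, state)  # delta 1 -> switches off
assert state is False
```

The dead band between OFF_DELTA and ON_DELTA is what prevents rapid on/off toggling when the differential hovers near a single threshold.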

  • @7rich79
    @7rich79 5 месяцев назад +21

Personally I think that while it will be extremely useful, over time there will also be a belief that "the computer is always right". In that sense we will surely end up with a scandal like Horizon again, but this time it will be much harder to prove that there was a fault in the system.

    • @arentyr
      @arentyr 5 месяцев назад

      Precisely this. With Horizon it took years of them being incredulous that there were any bugs at all, that it must be perfect and that instead thousands of postmasters were simply thieves. Eventually the bugs/errors became so glaring (and finally maybe someone competent actually looked at the code) that it was then known that the software was in fact broken. What then followed were many many more years of cover ups and lies, with people mainly concerned with protecting their own status/reputation/business revenue rather than do what was right and just.
      Given all this, the AI scenario is going to be far worse: the AI system that “hallucinates” faulty code will also “hallucinate” spurious but very plausible explanations.
      99.99% won’t have the requisite technical knowledge to determine that it is in fact wrong. The 0.01% won’t be believed or listened to.
      The terrifying prospect of AI is in fact very mundane (not Terminator nonsense): its ability to be completely wrong or fabricate entirely incorrect information, and then proceed to explain/defend it with seemingly absolute authority and clarity.
      It is only a matter of time before people naturally entrust them far too much, under the illusion that they are never incorrect, in the same way that one assumes something must be correct if 99/100 people believe it to be so. Probability/mathematics is a good example of where 99/100 might think something is correct, but in fact they’re all wrong - sometimes facts can be deeply counterintuitive, and go against our natural intelligence heuristics.

    • @mattmaas5790
      @mattmaas5790 4 месяца назад

Maybe. But it depends on what we allow AI to be in charge of. Remember, if we vote out the GOP we can pass laws again to do things for the benefit of the people, including AI regulations if needed.

  • @memaimu
    @memaimu 5 месяцев назад +5

    "Linus Benedict Torvalds is a Finnish-American software engineer who is the creator and lead developer of the Linux kernel, used by Operating Systems such as Chrome OS, Android, and GNU/Linux distributions such as Debian and Arch. He also created the distributed version control system Git."

  • @fafutuka
    @fafutuka 5 месяцев назад +1

The fact that you can talk to him about code reviews is just humbling; the man hasn't changed at all.

  • @johncompassion9054
    @johncompassion9054 Месяц назад

    This is why Linus is Linus. Just look at his intelligence, attitude to life and optimism. No negativity, rivalry or hate. My respect.

  • @lmamakos
    @lmamakos 5 месяцев назад +15

    Is cut-and-paste from StackOverflow that far from asking the LLM for the answer?

    • @derekhettinger451
      @derekhettinger451 4 месяца назад +17

I've never been insulted by GPT.

    • @David-gu8hv
      @David-gu8hv 4 месяца назад +1

      @@derekhettinger451 Ha Ha!!!!!

    • @VoyivodaFTW1
      @VoyivodaFTW1 4 месяца назад

      Lmao. Well, a senior dev is likely on the other end of a stack overflow answer, so basically yea

    • @pauldraper1736
      @pauldraper1736 2 месяца назад

      @@VoyivodaFTW1 optimistic I see

    • @mongoosae
      @mongoosae Месяц назад

      Any help forum is just a distributed neural net when you think about it

  • @user-rh2xc4eq7d
    @user-rh2xc4eq7d 5 месяцев назад +27

    A responsible programmer might use AI to generate code, but they would never submit it without understanding it and testing it first.

    • @traveller23e
      @traveller23e 5 месяцев назад +14

      Although by the time you read and fully understand the code, you may as well have written it.

    • @user-rh2xc4eq7d
      @user-rh2xc4eq7d 5 месяцев назад +2

      @@traveller23e if the code fails for some reason, I'll be glad I took the time to understand it.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 5 месяцев назад +4

@@traveller23e Actually true. If you understand every aspect of the code, why wouldn't you just have written it yourself? At some point, when using LLMs, these people will get used to the answers being mostly correct, so they'll stop checking. "Productivity 200%", blah blah, yeah sure dude. Man, LLMs will ruin modern software even more; today's releases are already full of bugs.

    • @MrHaggyy
      @MrHaggyy 5 месяцев назад

@@traveller23e Well, the same goes for the compiler: if you "fully understand" the code there should never be a warning or error. Most tools like GitHub Copilot require you to write anyway, but they give you the option of writing a few dozen chars with a single keystroke. This is pretty nice if most of your work is assembling different algorithms or data structures, not creating new ones.

    • @Mpanagiotopoulos
      @Mpanagiotopoulos 5 месяцев назад

I submit code I don't understand all the time; I simply ask the LLM in English to explain it to me. I have written a whole app in JavaScript without having learned JS in my entire life.

  • @mingzhu8093
    @mingzhu8093 5 месяцев назад +2

Program-generated code goes back decades; if you've ever used an ORM, almost all of them generate tables from classes and SQL and vice versa. But I don't think anybody just takes it as-is without reviewing.

    • @caLLLendar
      @caLLLendar 5 месяцев назад

      Reviewing can be automated.

  • @cesarlapa
    @cesarlapa 5 месяцев назад +1

    That Canadian guy was lucky enough to be given the name of a true tech genius

  • @br3nto
    @br3nto 5 месяцев назад +15

LLMs are interesting. They can be super helpful for writing out a ton of code from a short description, allowing you to formulate an idea really quickly, but often the finer details are wrong. That is, using an LLM to write unique code is problematic. You may want the basic structure of idiomatic code, but then introduce subtle differences. When doing this, the LLM seems to struggle, often suggesting methods that don't exist, or used to exist, or mixing methodologies from multiple versions of the library in use. E.g. trying to use WebApplicationFactory in C#, but introducing some new reusable interfaces to configure the services and WebApplication that can be overridden in tests: it couldn't find/suggest a solution. It's a reminder that it can only write code it's seen before. It can't write something new. At least not yet.

    • @elle305
      @elle305 5 месяцев назад +9

      you'll spend more time making sure it didn't add confident errors than it would take to write the code in the first place. complete gimmick only attractive to weak programmers

    • @br3nto
      @br3nto 5 месяцев назад +3

@@elle305 I don't think that's accurate. Sure, you need the expertise to spot errors. Sure, you need the expertise to know what to ask for. But I don't agree with the idea that you'll take more time with LLMs than without. It's boosted my productivity significantly. It's boosted my ability to try new ideas quickly and iterate quickly. It's boosted my ability to debug problems in existing code. It's been incredibly useful. It's a sounding board. It's like doing pair programming, but you get instant code. I want more of it, not less.

    • @elle305
      @elle305 5 месяцев назад +3

      @@br3nto i have no way to validate your personal experience because i have no idea of your background. but I'm a full time developer and have been for decades, and I'm telling you that reviewing llm output is harder and more error prone than programming. there are no shortcuts to this discipline and people who look for them tend to fail

    • @Jonas-Seiler
      @Jonas-Seiler 5 месяцев назад

@@elle305 It's no different for any other discipline. But sometimes doing it the hard way (fucking around trying to make the AI output work somehow) is more efficient than doing it the right way, especially for one-off things, like trying to cobble together an assignment. And unfortunately, more often than not, weak programmers (writers, artists, ...) are perfectly sufficient for the purposes of most companies.

    • @elle305
      @elle305 5 месяцев назад

      @@Jonas-Seiler i disagree

  • @kibiz0r
    @kibiz0r 5 месяцев назад +23

    As a central figure in the FOSS movement, I'm surprised he doesn't have any scathing remarks about OpenAI and Microsoft hijacking the entire body of open source work to wrap it in an opaque for-profit subscription service.

    • @nothingtoseehere93
      @nothingtoseehere93 3 месяца назад

      He has to be careful now that the SJWs neutered him and sent him to tolerance camp. Thank the people who wrote absolute garbage like the contributor covenant code of conduct

    • @haroldcruz8550
      @haroldcruz8550 2 месяца назад +1

      Then you're not in the loop. Linus was never the central figure of the FOSS movement. While his contribution to the Linux Kernel is appreciated he's not really considered one of the leaders when it comes to the FOSS movement.

    • @jasperdevries1726
      @jasperdevries1726 Месяц назад

      @@haroldcruz8550 Well said. I'd expect stronger opinions from Richard Stallman for instance.

  • @sj6986
    @sj6986 9 дней назад

It's been a long time since I have seen such a hard argument; both are very right. They will have to master the equivalent of unit testing to ensure that LLM-driven decision-making doesn't become a runaway train. Even if you put a human in to actually "pull the trigger", if the choices are provided by an LLM then they could be false choices. On the other hand, there is likely a ton of low-hanging fruit that an LLM could mop up in no time. There could be enormous efficiencies in entire stacks, and in all the associated compute in terms of performance and stability, if code is consistent.

  • @timothybruce9366
    @timothybruce9366 3 месяца назад

    My last company started using AI over a year ago. We write the docblock and the AI writes the function. And it's largely correct. This is production code in smartphones and home appliances world-wide.
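The workflow described (human writes the docblock, the model writes the function) looks roughly like this; the example function and its body are hypothetical stand-ins, not the company's actual code:

```python
def clamp(value, low, high):
    """Return value limited to the inclusive range [low, high].

    In the workflow the commenter describes, this docstring is the part
    the human writes; the one-line body below stands in for what the
    model generates, which a reviewer then checks before it ships.
    """
    return max(low, min(value, high))

assert clamp(15, 0, 10) == 10
assert clamp(-3, 0, 10) == 0
assert clamp(7, 0, 10) == 7
```

The docstring acts as the contract: if the generated body fails the reviewer's reading of it (or its tests), it goes back for another pass.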

  • @aniellodimeglio8369
    @aniellodimeglio8369 5 месяцев назад +3

LLMs are certainly useful and can very much assist in many areas. The future really is open-source models which are explainable and share their training data.

  • @Standbackforscience
    @Standbackforscience 5 месяцев назад +9

    There's a world of difference between using AI to find bugs in your code, vs using AI to generate novel code from a prompt. Linus is talking about the former, AI Bros mean the latter.

  • @frantisek_heca
    @frantisek_heca 5 месяцев назад +1

Where is the full talk, please?

    • @Hobbitstomper
      @Hobbitstomper 5 месяцев назад +1

      Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

  • @LuicMarin
    @LuicMarin 3 месяца назад

It is already helping review code; just look at Million Lint. It's not all AI, but it has aspects where it uses LLMs to help you find performance issues in React code. A similar thing could be applied to code reviews in general.

  • @CausallyExplained
    @CausallyExplained 5 месяцев назад +8

Linus is definitely not a sheep; you can tell just how different he is from the general crowd.

    • @chunkyMunky329
      @chunkyMunky329 5 месяцев назад +2

He is different, but something I've noticed is that smart people are great at understanding things the rest of us struggle with, yet can be kind of dumb when it comes to simple common sense. For him not to understand the downside of an AI writing bad code for you is just kind of silly. It should be obvious that a more reliable tool is better than a less reliable one.

    • @justsomerandomnesss604
      @justsomerandomnesss604 5 месяцев назад +3

@@chunkyMunky329 There is no "more reliable tool", though.
It's about the tools in your toolbox in general.
Just because your hammer is really good at hammering in a nail, you're not going to use it to saw a plank.
Same with programming: you use the tools that get the job done.

    • @pauldraper1736
      @pauldraper1736 2 месяца назад

      @@chunkyMunky329 You have an implicit assumption that people are more reliable tools than LLMs. I think that is up for debate.

    • @chunkyMunky329
      @chunkyMunky329 2 месяца назад

@@pauldraper1736 "People" is a vague term. Also, I never said it was a battle between manual effort and LLMs. It should be a battle between an S-tier human invention such as a compiler and an LLM. Great human-built software will make ChatGPT want to delete itself.

    • @pauldraper1736
      @pauldraper1736 2 месяца назад

@@chunkyMunky329 A linter is only one possible use of AI.

  • @AlbertCloete
    @AlbertCloete 5 месяцев назад +93

    Those subtle bugs are what LLMs produce copious amounts of. And it takes very long to debug. To the degree where you probably would have been better off if you just wrote the code by hand yourself.

    • @xSyn08
      @xSyn08 5 месяцев назад +17

      @@user-qd4xs8zb8sWhat, like a "Prompt Engineer"? It's ridiculous that this became a thing given how LLMs work.
      It's all about intuition that most people can figure out if they spend a day messing around with it.

    • @joshmogil8562
      @joshmogil8562 5 месяцев назад +4

      Honestly this has not been my experience using GPT4

    • @tbunreall
      @tbunreall 5 месяцев назад +2

Disagree. Humans constantly create bugs when coding themselves, even subtle ones, even the best of the best. LLMs are amazing. I realized my Python code needed to be multi-threaded; I fed it my code, and it multi-threaded everything. They are incredible, and this is just the beginning. Five years will blow people's minds, completely. People who don't see how amazing LLMs are just aren't that bright, in my opinion.
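The "multi-thread my Python code" conversion described above usually amounts to swapping a sequential loop for a thread pool, which pays off mainly for I/O-bound work (for CPU-bound pure-Python loops, the GIL limits the gain). A minimal sketch of that transformation, with a placeholder task:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for an I/O-bound task; a real version would make a request.
    return len(url)

urls = ["https://a.example", "https://bb.example", "https://ccc.example"]

# Sequential version:
sequential = [fetch(u) for u in urls]

# Thread-pool version: same results, with the calls overlapped across threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(fetch, urls))

assert sequential == threaded
```

`pool.map` preserves input order, so the refactor keeps the program's observable behavior while letting slow calls run concurrently.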

    • @asterinycht5438
      @asterinycht5438 5 месяцев назад +2

That's why you must give the LLM pseudocode as input, to control the output and be more precise about what you want.

    • @gabrielkdc17
      @gabrielkdc17 5 месяцев назад +7

      It's amusing how we, as programmers, often tell users that if they input poor quality data into the system, they should expect poor quality results. In this case, the fault lies with the user, not the system. However, now we find ourselves complaining about a system when we input low-quality data and receive unsatisfactory results. This time, though, we blame the system instead of ourselves

  • @ITentrepreneur
    @ITentrepreneur 4 месяца назад

    *_What did Linus Torvalds say in summary?_*

  • @BCOOOL100
    @BCOOOL100 4 месяца назад

    Link to the original?

  • @samson_77
    @samson_77 5 месяцев назад +44

Good interview, but I disagree with the introduction, where it is said that LLMs are "auto-correction on steroids". Yes, LLMs do next-token prediction, but that's just one part. The engine of an LLM is a giant neural network that has learned a (more or less sophisticated) model of the world. During inference, input information is matched against that learned world model, and based on those correlations new output information is created, which leads, in an iterative process, to a series of next tokens. That matching of input against the learned world model is where the magic happens.

    • @thedave0004
      @thedave0004 5 месяцев назад +19

      Agreed! This is the type of thing people say somewhat arrogantly when they've only had a limited play with the modern LLMs. My mind was blown when I wrote a parser of what I would call medium complexity in python for a particular proprietary protocol. It worked great but it was taking 45 mins to process a days worth of data, and I was using it every day to hunt down a weird edge case that only happened every few days. So out of interest I copied and pasted the entire thing into GPT4 and said "This is too slow, please re-write it in C and make it faster" and it did. Multiple files, including headers, all perfect. It compiled first time, and did in about 30s (I forget how long exactly but that ballpark) what my hand written python program was doing in 45 mins. I don't think I've EVER written even a simple program that's compiled first time, let alone something medium complicated.
      To call this auto complete doesn't give it the respect it deserves. GPT4 did in a few seconds what would have taken me a couple of days (if I even managed it at all, I'm not an expert in C by a long stretch).

    • @davidparker5530
      @davidparker5530 5 месяцев назад +9

      I agree, the reductionist argument trivializes the power of LLMs. We could say the same thing about humans, we "just predict the next word in a series of sentences". That doesn't capture the power and magic of human ingenuity.

    • @thegoncaloalves
      @thegoncaloalves 5 месяцев назад +4

      Even Linus says that. Some of the things that LLMs produce are almost black magic.

    • @mitchhudson3972
      @mitchhudson3972 5 месяцев назад +9

      So... Autocorrect

    • @mitchhudson3972
      @mitchhudson3972 5 месяцев назад +6

@@davidparker5530 Humans don't just predict the next word, though; LLMs do. Neural networks don't think: all they do is guess based on some inputs. Humans think about problems and work through them; LLMs by nature don't think about anything more than what they've seen before.

  • @sidharthv
    @sidharthv 5 месяцев назад +19

I learned Python on my own from RUclips and online tutorials. Recently I started learning Go the same way, but this time also with the help of Bard. The learning experience has been nothing short of incredible.

    • @Spacemonkeymojo
      @Spacemonkeymojo 5 месяцев назад +3

      You should pat yourself on the back for not asking ChatGPT to write code for you.

    • @incremental_failure
      @incremental_failure 5 месяцев назад

      @@Spacemonkeymojo Only my RUclips comments are written by ChatGPT, not my code.

    • @etziowingeler3173
      @etziowingeler3173 5 месяцев назад

      Bard and code, only for simple stuff

  • @laughingvampire7555
    @laughingvampire7555 4 месяца назад

I'm more interested in code synthesizers, which is something the PLT folks are working on: using a sophisticated type system and a theorem prover to generate code that fits the given criteria.

  • @raielschwartz6837
    @raielschwartz6837 5 месяцев назад +2

    It's truly fascinating to hear Torvalds' insightful perspective on how Artificial Intelligence is molding the programming landscape. This video does a commendable job of breaking down complex concepts into understandable dialogue for the viewers. AI's potential in automating tasks and improving efficiency is a game-changer, and it's exciting to see what the future holds in this sphere. Thank you for sharing such an enlightening discussion. Looking forward to more content like this.

  • @calmhorizons
    @calmhorizons 5 месяцев назад +3

There is a fundamental philosophical difference between the type of wrong humans do and the type AI does (in its present form). I think programmers are in danger of seriously devaluing the difference between incidental errors and constitutive errors: humans are wrong accidentally, while LLMs are wrong by design. While we know we can train people to reduce the former, it remains to be seen whether the latter will stay inherent in the implementation realities of LLMs, i.e. relying on statistical inference as a substitute for reason.

    • @caLLLendar
      @caLLLendar 5 месяцев назад

      You got stuck in your own word salad. Start over; Think like a programmer. Break the problem down. How would you go about proving the LLM's code is correct using today's technology?

    • @calmhorizons
      @calmhorizons 5 месяцев назад +1

      ​@@caLLLendar
      First, I don't appreciate your tone. I know this is RUclips and standards of discourse here are notoriously low, but there is no need to be rude.
      I wasn't making a point about engineering.
      The issue is not the code, code can of course be Unit Tested etc. for validity.
      The issue is that the method of producing the code is fundamentally statistical, and not arrived at through any form of reason. This means there is a ceiling of trust that we must impose if we are to avoid the obvious pitfalls of such an approach.
      As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data - and if you, as the developer, do not privilege your own problem-solving skills, you are increasingly relegated to the role of code babysitter. This is not something to be treated casually.
      Early research is now starting to validate this concern: visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx
      These models have their undeniable uses, but I find it depressing how many developers are rushing to proclaim their own obsolescence in the face of a provably flawed (though powerful) tool.

    • @caLLLendar
      @caLLLendar 5 months ago

      @@calmhorizons Have one developer draft pseudocode that is transformed into whatever scripting language is preferred, and then use a boatload of QA tools. The output from the QA tools prompts the LLM. Look at Python Wolverine to see automated debugging. Google the loooooonnnnng list of free open source QA tools that can be wrapped around LLMs. The LLMs can take care of most of the code (like writing unit tests, type hinting, documentation, etc.).
      The first thing you'd have to do is get some hands on experience in writing the pseudocode in a style that LLMs and non-programmers can understand.
      From there, you will get better at it and ultimately SEE it with your own eyes. I admit that there are times that I have to delete a conversation (because the LLM seems to become stubborn). However, that too can be automated.
      The result?
      19 out of 20 developers fired. LOL I definitely wouldn't hire a developer who wouldn't be able to come up with a solution for the problems you posed (even if the LLM and tools are doing most of the work).
      Some devs pose the problem and cannot solve it. Other devs think that the LLM should be able to do everything (i.e. "Write me a software program that will make me a million dollars next week).
      Both perceptions are provably wrong. As programmers it is our job to break the problem down and solve it.
      Finally, there are ALREADY companies doing this work (and they are very easy to find).
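The QA-tools-prompt-the-LLM loop described in this exchange can be sketched roughly as follows. `ask_llm_to_fix` is a hypothetical stand-in for a real model call (here it just corrects one known typo so the loop runs end to end), and the "QA tool" is simply executing the snippet:

```python
# Sketch of a QA-feedback loop: run a checker over generated code and feed
# its complaints back to a fixer until the code passes all checks.

def qa_check(code: str):
    """Toy 'QA tool': execute the snippet, return an error report or None."""
    try:
        exec(code, {})
        return None
    except Exception as exc:
        return f"{type(exc).__name__}: {exc}"

def ask_llm_to_fix(code: str, report: str) -> str:
    # Hypothetical LLM call; a real pipeline would send `code` and `report`
    # as a prompt. This stub fixes the one bug the demo contains.
    return code.replace("lenght", "length")

def repair_loop(code: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        report = qa_check(code)
        if report is None:
            return code          # all QA checks pass
        code = ask_llm_to_fix(code, report)
    raise RuntimeError("QA loop did not converge")

buggy = "length = len([1, 2, 3])\nassert lenght == 3"  # NameError: 'lenght'
print(repair_loop(buggy))
```

In a real setup the checker would be a battery of linters, type checkers, and test runners, and convergence is not guaranteed, which is why `max_rounds` caps the loop.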

    • @vibovitold
      @vibovitold 1 month ago

      @@calmhorizons exactly. Agreed, and very well put. Respect for taking time to reply to a rather shallow and asinine comment.
      "As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data "
      I would add that this will likely be exacerbated once more and more AI-generated code makes its way into the training datasets (and good luck filtering it out).
      We already know that it has a very deteriorating effect on the quality (already proven for the case of image generation), because all flaws inherent to the method get amplified as a result.

  • @avananana
    @avananana 5 months ago +27

    I personally believe, much like many others, that AI/ML will only speedup the rate at which bad programmers become even worse programmers. Part of the art of writing software is writing it efficiently, and you can't do that if you always use tools to solve your problems for you. You need to experience the failures and downsides in order to fully understand how it works. There is a line when it turns from an efficient tool to a tool used to avoid actually thinking about solutions. I fully believe that there is a place for AI/ML in making software, but if people blindly use them to write software for them it'll just lead to hard-to-find bugs and code that nobody knows how it works because nobody actually wrote it.

    • @cookie_space
      @cookie_space 5 months ago +7

      You don't always have to reinvent the wheel when it comes to learning how to code.
      Everyone starts by copying code from Stack Overflow and many still do that for novel concepts they want to understand.
      It can be pretty helpful to ask AI for specific things instead of spending hours trying to search for something fitting...
      Sure thing, if you just stop at copying you don't learn anything

    • @conchitacaparroz
      @conchitacaparroz 5 months ago

      @@cookie_space But I think that's the thing: the risk of "just copying" will be higher, because all the AI tools and AI features in our IDEs will make it a lot easier and more likely to get the code handed to you ready-made.

    • @Markus-iq4sm
      @Markus-iq4sm 5 months ago +1

      @@cookie_space Everyone? Man, don't throw everyone in the same bucket. Are you the guy who can't even write a bubble sort from memory and needs to google every single solution? Well, that is sad.

    • @cookie_space
      @cookie_space 5 months ago +4

      @@Markus-iq4sm I wasn't aware that your highness was born with the knowledge of every programming language and concept imprinted in your brain already. It might be hard to fathom for you, but some of us actually have to learn programming at some point

    • @Markus-iq4sm
      @Markus-iq4sm 5 months ago +1

      @@cookie_space You learn nothing by copy-pasting; it will actually even make you worse, especially as a beginner.

  • @agdevoq
    @agdevoq 4 months ago +2

    The "I" in "LLM" stands for "Intelligence"

  • @Mari_Selalu_Berbuat_Kebaikan
    @Mari_Selalu_Berbuat_Kebaikan 4 months ago

    Let's always do a lot of good 🔥

  • @nissimtrifonov5314
    @nissimtrifonov5314 5 months ago +5

    Somehow, Palpatine returned 😯😯😯

  • @sfacets
    @sfacets 5 months ago +20

    If programmers aren't debugging their own work, then they will gradually lose the ability to do so. Just like when a child learns to multiply with a calculator and not in their mind - they lose the ability to multiply and become reliant on the machine.
    Programmers learn as they program. It is mind-expanding work. Look at Torvalds and you see a person who is highly intelligent, because he has put the work in over many years.
    We can become more efficient programmers using AI tools - but it will come at a cost.
    "Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology." - Martin Heidegger
    When a programmer, for example, is asked to check a solution given by AI and lacks the competency to do so (because, like the child, they never learned the process), then this is a dangerous position we as humans are placing ourselves in - caged in inscrutable logic that will nonetheless come to govern our lives.

    • @JhoferGamer
      @JhoferGamer 5 months ago

      yep

    • @dan-cj1rr
      @dan-cj1rr 5 months ago +2

      Yep, but companies don't care; they want the feature as fast and as cheap as possible.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 5 months ago

      nicely put

  • @tecTitus
    @tecTitus 3 months ago

    If you don't mind being a prompt engineer and a code reviewer of LLMs, they are great

  • @starmap
    @starmap 5 months ago +1

    When Linus speaks I listen.

  • @MrVampify
    @MrVampify 5 months ago +24

    I think LLM technology will make bad programmers faster at being bad programmers, and hopefully push them to become better programmers faster as well.
    LLMs, I think, will make good programmers more efficient at writing the good code they would probably already write.

    • @melvin6228
      @melvin6228 5 months ago +7

      LLMs remove the need to remember how you write things. You still have to be able to read the code and have good judgement about where it is subpar.

    • @skyleite
      @skyleite 5 months ago +7

      @@melvin6228 This is nonsense. How can you audit code that you yourself don't remember how to write?

    • @yjlom
      @yjlom 5 months ago +2

      @@skyleite is that function you use twice a year called "empty_foo_bar" or "clear_foo_bar"? Or maybe "foo_bar_clear"? Those kinds of questions are very important and annoying to answer when writing, useless when reading.

    • @unkarsthug4429
      @unkarsthug4429 5 months ago +5

      ​@@yjlom Or even just something as simple like the question of how you get the length of an array in the particular language you are using. After using enough languages, they kind of all blend together, and I can't remember if this one is x.length, x.length(), size(x), or len instead of length somewhere. I'm used to flipping between a lot of languages quickly, and it's really easy to forget the specifics of a particular one sometimes, even if I understand the flow I would like the program to follow. Essentially, having an AI that can act as a sort of active documentation can really help.
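For what it's worth, Python is one of the uniform cases the commenter has to keep straight: the built-in `len()` dispatches to the object's `__len__` method, so the same call works across container types:

```python
# Python's answer to "how do I get the length?": the len() built-in,
# which dispatches to the object's __len__ method.
items = [1, 2, 3]
text = "hello"
mapping = {"a": 1}

print(len(items))    # 3
print(len(text))     # 5
print(len(mapping))  # 1

# Works for any object implementing __len__:
class Deck:
    def __len__(self):
        return 52

print(len(Deck()))   # 52
```

Other languages pick `x.length`, `x.length()`, or `x.size()` instead, which is exactly the detail that blurs when you switch between them often.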

    • @RobFisherUK
      @RobFisherUK 5 months ago

      I was using ChatGPT to help me write code just today. I'm making a Python module in Rust and I'm new to Rust.
      I wanted to improve my error handling. I asked how to do something and ChatGPT explained that I could put Results in my iterator and just collect at the end to get a vector if all the results are ok or an error if there was a problem. I didn't understand how that worked and asked a bunch of follow-up questions about various edge cases. ChatGPT explained it all.
      Several things happened at once: I got an immediate, working solution to my specific problem. I didn't have to look up the functions and other names. And I got tutored in a new technique that I'll remember next time I have a similar situation.
      And it's not just the output. It's that your badly explained question, where you don't know the correct terminology, gets turned into a useful answer.
      On a separate occasion I learned about the statistical technique of bootstrapping by coming up with a similar idea myself and asking ChatGPT for prior art. I wouldn't have been able to search for it without already knowing the term.

  • @lindhe
    @lindhe 5 months ago +3

    "Hopeful and humble" sounds like a good name for a Linux release. Just saying…

  • @chrisakaschulbus4903
    @chrisakaschulbus4903 5 months ago

    Just pasting some Java code I wrote and asking a stupid question like "why does it overflow?" has saved me many headaches.
    Googling these problems can lead to a lot of unrelated info, and then it's just other people pasting their own code and asking questions.

  • @user-qz6em2ss4n
    @user-qz6em2ss4n 4 months ago

    You're right. But we're also hearing some negative stories in terms of teamwork. For example, there are situations where a junior developer sits and waits for AI code that keeps giving different answers instead of writing code themselves, or where it takes more time to analyze why the code was written the way it was. On the other hand, it still helps to gain insight or a new approach, even if it's a completely different answer.

    • @tapetwo7115
      @tapetwo7115 3 months ago

      That junior coder needs more GitHub projects so we can bring them on as a lead dev to work with AI. Middle management and entry level are over in the future.

  • @shobanchiddarth_old
    @shobanchiddarth_old 5 months ago +3

    Link to original?

    • @Hobbitstomper
      @Hobbitstomper 5 months ago +1

      Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

  • @hyperthreaded
    @hyperthreaded 5 months ago +3

    I love how Hohndel disses AI as "not very intelligent" / "just predicts the next word" and Linus retorts that it's actually pretty great lol

    • @EdwardBlair
      @EdwardBlair 5 months ago +1

      Because the guy doesn’t know what he is talking about with respect to LLMs and ML. He is clearly intelligent but has only surface level knowledge in that field.

    • @hyperthreaded
      @hyperthreaded 5 months ago

      @@EdwardBlair Yeah, I also found it curious that, as he was about to ask Linus about AI in kernel development, he apparently felt an overwhelming need to first vent his own opinion on AI in general, even though that wasn't the topic at hand and he wasn't the one being interviewed.

    • @GSBarlev
      @GSBarlev 5 months ago +3

      I'm an expert in the field and I _still_ think it's "autocorrect on steroids." It's just that I think that autocorrect was a revolutionary tool, even when it was just Markov chains.
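The "autocorrect on steroids" point is easy to demo: a bigram Markov chain is the simplest next-word predictor, counting which word most often follows which. A minimal sketch (corpus and names made up for illustration):

```python
# Minimal bigram Markov chain: the "autocorrect" ancestor of LLM-style
# next-word prediction. Counts which word follows which in a corpus,
# then predicts the most frequent successor.
from collections import Counter, defaultdict

def train(corpus: str):
    words = corpus.split()
    successors = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def predict_next(successors, word: str):
    if not successors[word]:
        return None  # word never seen, or never followed by anything
    return successors[word].most_common(1)[0][0]

model = train("the kernel is free the kernel is open the code is free")
print(predict_next(model, "kernel"))  # -> "is"
print(predict_next(model, "is"))      # -> "free" (seen twice vs "open" once)
```

An LLM replaces the lookup table with a learned distribution over vastly longer contexts, but the interface, predicting the next token, is the same.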

  • @piotrek7633
    @piotrek7633 4 months ago +2

    You people don't understand: it was never about whether AI would replace programmers; it was always about whether AI will reduce job positions by a critical amount, so that it's hard to get hired.

  • @terry-
    @terry- 4 months ago

    Great!

  • @hyphenpointhyphen
    @hyphenpointhyphen 5 months ago +9

    I think some humans would be glad if they still had the time to hallucinate, dream or imagine things from time to time.

    • @asainpopiu6033
      @asainpopiu6033 5 months ago +1

      good point xD

    • @verdiss7487
      @verdiss7487 5 months ago

      I think most project leads would not be glad if one of their devs submitted a PR for code they hallucinated

    • @hyphenpointhyphen
      @hyphenpointhyphen 5 months ago

      @@verdiss7487 Not what I am talking about.

    • @pueraeternus.
      @pueraeternus. 5 months ago

      late stage ca-

    • @asainpopiu6033
      @asainpopiu6033 5 months ago

      @@pueraeternus. cannibalism?

  • @Kersich86
    @Kersich86 5 months ago +4

    My main fear is that this is something we will start relying on too much. Even autocompletion can become a crutch, so much so that a developer becomes useless without it. Imagine that, but for thinking about code. We are looking at a future where all software will be as bad as modern web development.

    • @kevinmcq7968
      @kevinmcq7968 5 months ago

      technology as an idea is reliable - a hammer will always be a hard thing + leverage. We have relied on technology since the dawn of mankind, so I'm not sure what you're saying here.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 5 months ago

      @@kevinmcq7968 llms are reliable? how so? can you name a technology that we have relied on in the past that is as random as llms? I am genuinely curious

    • @diadetediotedio6918
      @diadetediotedio6918 4 months ago

      @@kevinmcq7968
      I think you are just intentionally misunderstanding what he is saying. He is not saying tools are not useful; he is saying that if a tool starts to replace the use of your own mind, it can make you dependent to the point that it impairs your own reasoning skills (and we have some evidence that this is happening - that's why some schools are going back to handwriting, for example; Miguel Nicolelis also has some takes on this matter).

  • @gleitonfranco1260
    @gleitonfranco1260 3 months ago

    There are already lint tools for several languages that kind of do this work.

  • @TehIdiotOne
    @TehIdiotOne 5 months ago

    I'm actually surprised that he seems quite open to it, but his points do make a lot of sense.

  • @roaringdragon2628
    @roaringdragon2628 5 months ago +9

    I find that in their current state, these models tend to make more work for me deleting and fixing bad code and poor comments than the work they save. It's usually faster for me to write something and prune it than to prune the AI code. This may be partially because it's easier for me to understand and prune my own code, but there is usually a lot less pruning to do without AI.

    • @voltydequa845
      @voltydequa845 5 months ago

      No. Your comment was for me like a breath of fresh air in the middle of all this pseudo-cognitive farting about so-called AI. And no, it is not only you. Those who say otherwise are just posers, actors, mystifying parrots repeating the instilled marketing hype.

  • @DemPilafian
    @DemPilafian 5 months ago +4

    Auto-correct can cause bugs like tricking developers into importing unintended packages. I've seen production code that should fail miserably, but pure happenstance results in the code miraculously not blowing up. AI is a powerful tool, but it will amp up these problems.

    • @caLLLendar
      @caLLLendar 5 months ago

      No. Thinking like a programmer, are you able to come up with a solution?

  • @kaikulimu
    @kaikulimu 4 months ago

    Linus Torvalds is so smart!

  • @RayDusso
    @RayDusso 3 months ago

    I like how he didn't fall into the trap of AI bashing the host was trying to lead him into. That's how you can tell a trend follower from a visionary.

  • @nati7728
    @nati7728 5 months ago +3

    I already feel helpless without intellisense. I can imagine how future developers will feel banging their head against their keyboard because their LLM won't load with the right context for their environment.

    • @willsamadi
      @willsamadi 5 months ago +1

      I use IntelliSense daily, but I know people who code in raw Vim and get more done in a day than I do. AI is going to make typical things easier, and it is going to have limitations for a long time; to do anything outside those limitations we'll need actual programmers.

  • @flokar6197
    @flokar6197 5 months ago +14

    I have never programmed before in my life, and with GPT-4 I have written several little programs in Python - from code that helps me rename large numbers of files to more advanced stuff. LLMs give me the opportunity to play around. The only thing I need to learn is how to prompt better.
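A bulk-rename helper of the kind described is only a few lines of Python. This is a hedged sketch (the directory name and prefix in the example are made up); it defaults to a dry run so it only reports what it would do:

```python
# Sketch of a small renaming helper: add a prefix to every file in a
# directory. dry_run=True returns the plan without touching the filesystem.
from pathlib import Path

def add_prefix(directory: str, prefix: str, dry_run: bool = True):
    renamed = []
    for path in sorted(Path(directory).iterdir()):
        if path.is_file() and not path.name.startswith(prefix):
            target = path.with_name(prefix + path.name)
            if not dry_run:
                path.rename(target)  # actually perform the rename
            renamed.append((path.name, target.name))
    return renamed

# Example (hypothetical directory): preview the renames first
# for old, new in add_prefix("photos", "trip2024_"):
#     print(f"{old} -> {new}")
# ...then apply them with add_prefix("photos", "trip2024_", dry_run=False)
```

The `startswith` guard makes the operation idempotent: running it twice will not stack the prefix.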

    • @kevinmcq7968
      @kevinmcq7968 5 months ago +2

      you're a programmer in my eyes!

    • @twigsagan3857
      @twigsagan3857 5 months ago +4

      "Only thing I need to learn is how to prompt better."
      This is exactly the problem, especially when you scale. You can't prompt your way to a change in an already complex system. It then becomes easier to just code or refactor it yourself.

    • @chunkyMunky329
      @chunkyMunky329 5 months ago

      The fact that anybody needs to "prompt better" suggests that LLMs are not very good yet.

    • @flokar6197
      @flokar6197 5 months ago

      @@twigsagan3857 The only problem is when the code exceeds the token limit. Otherwise I can still let the LLM correct my code. Takes a while to get there, but it works. And no, I am not at all a programmer xD

    • @flokar6197
      @flokar6197 5 months ago +1

      @@chunkyMunky329 Huh? LLMs predict the most likely answer, so the way you describe the task is the most important thing in dealing with them.