I Didn’t Believe that AI is the Future of Coding. I Was Right.

  • Published: 20 Nov 2024

Comments • 6K

  • @frankhabermann9083
    @frankhabermann9083 1 month ago +2469

    Amazon AI: You just bought a fridge? You probably need another fridge, let me show you some more fridges.

    • @SkorjOlafsen
      @SkorjOlafsen 1 month ago +161

      To be fair, there's a 50% chance you'll need another fridge in a few months.

    • @frankhabermann9083
      @frankhabermann9083 1 month ago +221

      @@SkorjOlafsen I don't ever question the AI overlord, I just became a proud fridge collector with a monthly subscription.

    • @SkorjOlafsen
      @SkorjOlafsen 1 month ago +41

      @@frankhabermann9083 Can't stop laughing.

    • @blucat4
      @blucat4 1 month ago +8

      @@SkorjOlafsen 😄

    • @youdontneedit9114
      @youdontneedit9114 1 month ago +9

      @@SkorjOlafsen Sure, you'll either need it, or not.

  • @AdamAlpaca
    @AdamAlpaca 1 month ago +2723

    As a software developer I can say with certainty that my productivity has gone up. I still write my own code but guess who writes my emails 😂 …

    • @zoeherriot
      @zoeherriot 1 month ago +105

      @@AdamAlpaca lol - yes, I’ve used it to break the deadlock on ideas, or find new avenues for looking for a solution, but it’s not great at some problems. 90% of the bugs I run into, it will never have been trained on, because they’re in the heart of a proprietary game engine with a custom build system.
      However, corporate crap never changes. :)

    • @skamithi
      @skamithi 1 month ago +41

      Agreed, it is very helpful for crafting business email drafts; it saved me time in that area too. In coding, the AI tools are like a Stack Overflow aggregator: the same stuff as on that platform, but with better summaries.

    • @user-pt1kj5uw3b
      @user-pt1kj5uw3b 1 month ago

      Yes these things don't have to cure cancer to be useful. Even if they don't superscale I think there's still more juice to squeeze.

    • @Messmers_flame
      @Messmers_flame 1 month ago

      I don't write anything. All is done by llms. 10x developer is an understatement

    • @Aaron-wg6ft
      @Aaron-wg6ft 1 month ago +16

      This is the way

  • @peterprokop
    @peterprokop 1 month ago +938

    Having 45 years of coding experience and written well over a million lines of code in my life in a dozen programming
    languages, I like using AI for specific purposes:
    - generating boilerplate code
    - refactoring existing code
    - translating code from one programming language to another
    - exploring APIs unfamiliar to me
    - prototyping
    Generally the AI code quality is between bad and mediocre, but if you know how the result should look, you can get there. However, there are limits in terms of code amount, complexity and sophistication current AI just can't cope with.
    Once you approach these limits, it indeed takes more time to understand and debug the code than writing it yourself.
    In some cases, code turned out to be practically not debugable, for example code that plays connect four, despite the AI "knowing" about minimax algorithms and alpha-beta pruning etc. Also, once your code reaches more than a few hundred lines, AI just can't handle it all at once.
    This will probably improve, but not by orders of magnitude, at least not with current LLM technology.
    I see and use current transformer-based AI basically as a very powerful text editor for now.
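The minimax with alpha-beta pruning mentioned above is a standard game-tree search. A minimal generic sketch in Python follows; the `evaluate`, `moves`, and `apply_move` callbacks are hypothetical placeholders that a connect-four implementation would supply, not the commenter's code:

```python
def minimax(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Return the best achievable score for `state` under minimax search.

    `evaluate(state)` scores a position, `moves(state)` lists legal moves,
    and `apply_move(state, move)` returns the resulting position.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, False, evaluate, moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff: the minimizer will avoid this line
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, True, evaluate, moves, apply_move))
            beta = min(beta, best)
            if beta <= alpha:  # alpha cutoff: the maximizer will avoid this line
                break
        return best
```

The pruning only discards branches that cannot affect the final result, which is exactly the kind of subtlety that is hard to debug when generated code gets it slightly wrong.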

    • @artur9782
      @artur9782 1 month ago +41

      Yep, it's very good at doing dumb and boring shit

    • @grapy83
      @grapy83 1 month ago +9

      Wow. That's a really useful reply. It all ties in so well with the underlying concept of an LLM.

    • @13137713
      @13137713 1 month ago +45

      And again, you can only recognize that AI gave you bad code if you know better. For me, it does help me debug and reminds me of some method implementations I might have forgotten. Oh, and it does help the learning process when you can ask any number of questions about the concepts you learn.

    • @xNWDD
      @xNWDD 1 month ago +10

      This.
      I've also found it very useful for transforming bullet points into an email and when designing intuitive API / interfaces, to get a second opinion and generating some test cases.
      I love giving it some function name (or signature) and project explanation and asking it if it can guess what the function does, much better than asking other people for the few first iterations.
      Definitely doesn't increase the throughput of my work but allows me to focus better on the big problems and give a better polish to the code in the same amount of time.

    • @AdelaideBen1
      @AdelaideBen1 1 month ago +6

      And what are these new models going to be trained on? The problem, and this is unavoidable, is that the code base will become poorer quality, and the effort to improve it will become uneconomic (because the people who could, won't exist). The idea of the dead internet is rapidly becoming the zombie workforce. Except we're the ones becoming the zombies, because we're offered a few novel treats... some artificial brains.

  • @vrmartin202
    @vrmartin202 1 month ago +203

    Retired software developer here: reminds me of how outsourcing code development was supposed to increase productivity. It didn’t. Produced costly poor quality software.

    • @miklosprisznyak9102
      @miklosprisznyak9102 1 month ago +29

      Very true. 😂 The "good" ol' days when I had to explain stuff to the outsourced team in such detail that I almost should have done it myself... and they still delivered substandard code.

    • @osvaldogarcia4245
      @osvaldogarcia4245 1 month ago +1

      Actually it depends

    • @antoniorocha9438
      @antoniorocha9438 1 month ago +1

      The difference between delivering a product and a solution. Outsourcing requires different disciplines that many customers are not aware of.

    • @robwillett4960
      @robwillett4960 29 days ago +3

      Me too, plus there has always been the pipe dream of not needing to be able to code - just specify what you want. Since the 80s.

    • @amorelus
      @amorelus 27 days ago +9

      There are quality outsourcing teams. Unfortunately, managers who want to outsource are not looking for expensive quality, just the cheapest people who can "do the job".

  • @Nicoya
    @Nicoya 1 month ago +2294

    The worst thing you can put in front of a programmer (of any skill level) is code that looks correct but is subtly wrong. And AI is especially efficient at generating code that looks correct but is subtly wrong.

    • @MrWizardGG
      @MrWizardGG 1 month ago +43

      AI actually can detect and solve that, which is why the best coding tools now use AI to lint and iterate on the output.

    • @llamaelitista
      @llamaelitista 1 month ago +178

      ​@@MrWizardGG I think that's exactly the problem, code that looks correct (no errors and perfectly delinted) but with bugs that are not so easy to detect.

    • @thtcaribbeanguy
      @thtcaribbeanguy 1 month ago +84

      At an extremely fast glance, yes, it "looks correct",
      but once you start reading line by line you can see that it's guessing

    • @amigalemming
      @amigalemming 1 month ago +75

      @@llamaelitista You may write software that is so simple that it obviously has no errors, or you may write software that is so complex that it has no obvious errors. (after Hoare)

    • @MrWizardGG
      @MrWizardGG 1 month ago +1

      @@llamaelitista good point. Our job security lifeline for now. But to be clear AI can fix bugs you identify... we will be planning engineers now.

  • @CristianGeorgescu
    @CristianGeorgescu 1 month ago +2411

    The future of coding is not AI, but the future is coding with AI... There are already tools to help writing code, generate tests, automate repetitive tasks... AI is another tool. Spell check helps writers, makes them more productive, but it won't replace writers.

    • @no_name4796
      @no_name4796 1 month ago +146

      Which are actually less efficient than writing things yourself, unless you are writing HTML or CSS, where even if you are wrong you don't risk exploding a computer somewhere lol

    • @krielsavino5368
      @krielsavino5368 1 month ago +11

      Well said! It's that simple

    • @MrVohveli
      @MrVohveli 1 month ago +99

      AI won't replace you, but it will make the best so efficient you won't be needed anyway.

    • @NanheeByrnesPhD
      @NanheeByrnesPhD 1 month ago +5

      Precisely.

    • @Khomyakov.Vladimir
      @Khomyakov.Vladimir 1 month ago +7

      Deeper but smaller: Higher-order interactions increase linear stability but shrink basins
      SCIENCE ADVANCES

  • @ScottHess
    @ScottHess 1 month ago +1056

    Overestimating the short term and underestimating the long term is the broad shape of every tech revolution.

    • @15wileyr
      @15wileyr 1 month ago +28

      A wise thought

    • @SignumEternis
      @SignumEternis 1 month ago +87

      Well said. A common sentiment I see with AI right now is something along the lines of: "it's not doing every single thing it was hyped up to do perfectly right now, so it's worthless and should be disregarded". Not much better than those at the opposite extreme thinking AI will destroy humanity in a few years.

    • @adamskrodzki6152
      @adamskrodzki6152 1 month ago +6

      So here is the trick. Most software is not like a road; it is not meant to stay in shape for generations or centuries. It is meant to do what it's meant to do and be replaced in 2 years.

    • @drillerdev4624
      @drillerdev4624 1 month ago +7

      In the case of coding, it's definitely not going to do your job for you, but it's a great assistant. I can't predict the future of such a technology, but in the near future at least, the trend will be toward assist tools for the experts in the respective fields. Same with image-generation AIs: they are being integrated as tools into the software professionals use, instead of substituting for them, and many people are using them for "the boring parts" (the way masters of the past used their students)

    • @ianr4222
      @ianr4222 1 month ago +49

      Reading a lot of these comments reminds me of last year, when people were talking about how AI pictures of people all had six fingers on a hand, as though this would never improve.

  • @drkalamity4518
    @drkalamity4518 1 month ago +236

    To me, the real value of AI with coding has been in learning. I’m a self taught programmer and learning a new language with the help of an AI to ask questions specifically about the code I’m working with is CRAZY valuable. This, like the advent of the IDE, is a TOOL to assist programmers, not a replacement.

    • @Kevin_Street
      @Kevin_Street 1 month ago +14

      It's kind of encouraging, actually. Helping people to learn more easily is better than doing the job for them.

    • @jeanchindeko5477
      @jeanchindeko5477 1 month ago

      Too many people forget that current LLM technology, based on the Transformer architecture, is just a token predictor! These things don't think yet! But the research is moving fast, and we might eventually find a way to make them do something which will look like, or be close enough to, how humans think.
      The key part missing in all these videos for or against current AI technology is context!
      If we take the ChatGPT release as T0 (and this is intentional here), we could say current LLM technology (and I intentionally don't call it AI) is just 2 years old. Who remembers the first generation of iPhone computational photography, the Portrait Mode? Look how good it is today (not saying it's perfect, because perfection doesn't exist or is really subjective to context).
      So let's give it a few years and have this discussion again.
      Don't get me wrong, I agree with her and the results of this research: current LLM technology is not necessarily that great at coding, at least for what I'm working on, which is not web development.

    • @ripleyhrgiger4669
      @ripleyhrgiger4669 1 month ago +10

      I encourage you not to assume that it's correct, though.
      ChatGPT gets a lot wrong, and that's the point of this video and other videos from experts showing how poor it is at programming. It can define things well enough, but the code it writes is often wrong and incomplete. I made a lengthy post about this already, so I'll give you the TLDR version: it could not detect why one version of a program worked and the other version didn't. It had no idea how to iterate through .xml files, find a field, change it, and then write it back without truncating the file.
      These are tasks I require all my programmers to understand, and that a lot of people assume are easy. There is a reason why it is not easy. You'll find out in due time if you keep at it.

    • @drkalamity4518
      @drkalamity4518 1 month ago

      @@ripleyhrgiger4669 Just like using Google to look stuff up, I never assume it's correct, but way more often than not even the incorrect answers can still shed some light on the problem at hand. Like I said, it's an excellent learning tool; using it to write your code for you is misusing AI in its current state IMO.

    • @drkalamity4518
      @drkalamity4518 1 month ago

      @@ripleyhrgiger4669 I think you're conflating learning with doing; I'm talking about using AI as a learning tool. If you're asking the AI to write code for you, you're doing it wrong IMO; I'm not talking about copy-pasting here. I don't assume it's correct, just like I don't assume a Google search result is. However, even the wrong answers more often than not shed some light on the issue at hand.
      There are so many times where I got stuck and ChatGPT simply pointed me in the right direction. Simply asking it to explain a piece of code and what it's doing can be invaluable to a newbie. Sometimes I might not even know what or how to google for my particular issue, but with AI I can get an answer since it has the context of my problem.
      If I could have an experienced programmer sitting next to me that would be great, but this is the next best thing, and you'd be naive to ignore that potential. I think even for experts this can be quite an advantage when used with caution.

  • @Dimkar3000
    @Dimkar3000 1 month ago +186

    Professional software developer here. From experience, AI is really useful to get you through the door on a concept you have no idea of. But after that introduction, the error rate goes through the roof. So I don't use it at all for work, where I know what I am doing. But when I explore a new domain I may use it to get some introductory information. Then I go back to reading documentation; it's always better.

    • @MrWizardGG
      @MrWizardGG 1 month ago +2

      You are not using an error correcting tool like Aider Chat.

    • @Dimkar3000
      @Dimkar3000 1 month ago +6

      @@MrWizardGG I don't care to. I am not a native English speaker.

    • @moonasha
      @moonasha 1 month ago +10

      I sometimes use GPT to teach me documentation by asking it questions, it works out

    • @mrpocock
      @mrpocock 1 month ago +3

      It is also pretty good after that initial sketch at digesting the documentation. So you move from having it sketch code, to talking with it about code with a prompt to suspend generation, or shift it from generating implementations to generating tests and assertions. And it absolutely slays at generating documentation.

    • @Dimkar3000
      @Dimkar3000 1 month ago +8

      @@mrpocock If AI is writing your tests, then you have no tests. Maybe you're in a slightly different situation than me, but there is no way an AI can understand enough to write them. The type of test they can write, I don't care for.

  • @MachineYearning
    @MachineYearning 1 month ago +741

    Highly recommend Peter Naur's 1985 paper, "Programming as Theory Building". He argues that code itself is not the value produced by programmers. The value is in modeling of real-world problems (what I'd call "theorycrafting") which is extremely difficult to represent as code.
    Theory, as Naur explains, is, "the knowledge a person must have in order not only to do certain things intelligently but also to explain them, to answer queries about them, to argue about them, and so forth".

    • @l2k55
      @l2k55 1 month ago +24

      Best comment on this subject.

    • @ref8893
      @ref8893 1 month ago +2

      ...exactly!!! For the other stuff, AI will be invaluable.

    • @ericpmoss
      @ericpmoss 1 month ago +39

      My corollary to that is that the first and foremost task of a programmer is to understand the problem at least as well as the subject matter experts do. It sounds obvious, but it is shocking how often this does not happen, and how often projects fail as a result.

    • @ysf-d9i
      @ysf-d9i 1 month ago +14

      But AI can do that too. Perhaps not to the degree that one would like, yet. But it can do that to a degree we never could have imagined 5 years ago. What will happen in 5 years time?

    • @MrWizardGG
      @MrWizardGG 1 month ago +4

      @leob3447 No.... Also, you can just attach giant fully written plans written by a team of experts, and AI can execute them. I don't know what you guys still don't believe.

  • @richardchaney7295
    @richardchaney7295 1 month ago +538

    As a programmer myself, there's nothing more annoying than trying to find and fix bugs in someone else's code. I don't want my whole job to become fixing bugs in AI generated code instead.

    • @PatrickPoet
      @PatrickPoet 1 month ago +38

      I'm retired now, but finding hard bugs in others' code was often a favorite thing of mine.

    • @dwaneanderson8039
      @dwaneanderson8039 1 month ago +10

      They need to design an AI that finds bugs. What process do you use to find bugs? It should be possible for an AI to follow the same procedure and do it faster.

    • @k-c
      @k-c 1 month ago +10

      Just feed it to AI and ask it to find bugs

    • @burgerbobbelcher
      @burgerbobbelcher 1 month ago +8

      Once you think of how said AI was built, in a roundabout way you're now fixing EVERYONE's bugs.

    • @nigelrhodes4330
      @nigelrhodes4330 1 month ago +5

      I have news for you: your job will soon be mostly fixing bugs in AI code.

  • @kickinit333
    @kickinit333 1 month ago +14

    Been coding for 4 decades. AI has most definitely made me more able to explore things (languages and tasks) I've never done: converting code to other languages, doing mundane tasks like figuring out what files are doing, asking what they should be doing, creating documentation, and then generating unit and integration tests. AI fills gaps that many people don't want to think about and therefore don't do, which impacts performance, function and security by omission.

  • @icyghost7561
    @icyghost7561 1 month ago +273

    A big issue with studies that measure improvements in productivity of programmers, is that no one really knows how to properly measure this productivity.
    Metrics like number of commits, amount of code written or pull requests opened are obviously not very useful.

    • @mrpocock
      @mrpocock 1 month ago +30

      And if you target something like issues or features delivered, then people game the triviality of issues and features. You want a code-base that is as small as possible, that does as much of what is needed as possible, with as few bugs as possible. But how on earth do you quantify that without introducing obvious exploits?

    • @PhillipRhodes
      @PhillipRhodes 1 month ago +4

      Bingo!

    • @flyingwambulance
      @flyingwambulance 1 month ago +1

      Which makes it funnier when you have real-life experience, because I know so many people who use AIs like Cursor to just spit out pristine code. It seems like the only people complaining are already bad coders. Good coders realize that code is just a means to an end and there's nothing special about it.

    • @nedames3328
      @nedames3328 1 month ago +6

      You don't want to miss the most important metric: lines of old, buggy, confusing code retired and removed.

    • @MephiticMiasma
      @MephiticMiasma 1 month ago +17

      "measuring a code base by number of lines is like measuring an aircraft's performance by weight"

  • @stefan_becker
    @stefan_becker 1 month ago +33

    Another big problem is that as a software developer, you are usually not allowed to give an AI like ChatGPT or Gemini access to the source code of your product, as this would mean it would also be available to the company that created the AI. So you would need your own "corporate AI" to be able to use AI at all. But most companies do not have their own "corporate AI".

    • @gaboralmasi1225
      @gaboralmasi1225 1 month ago +6

      There are corporate licences, for which OpenAI / Github / Google make a promise to protect your data and not to store or use your data. This way employees are allowed to give the AI access to the source code of the product and ask for insights, suggestions or modifications.

    • @Skozerny
      @Skozerny 1 month ago +21

      @@gaboralmasi1225 These promises mean nothing.

    • @oompalumpus699
      @oompalumpus699 1 month ago +8

      @@gaboralmasi1225 A promise. Sure. The same way social media platforms promise that they don't misuse any of the data they scrape from their users.
      Totally trustworthy, for certain (sarcasm).

    • @mikebarushok5361
      @mikebarushok5361 1 month ago

      It's entirely feasible and somewhat common to run AI like ChatGPT or Gemini locally on a network that doesn't give access to or from the Internet.

    • @IslandPlumber
      @IslandPlumber 1 month ago +1

      @@stefan_becker that's why you host your own.

  • @_Mentat
    @_Mentat 1 month ago +101

    As a software engineer, I agree with the other commenters here that coding is the least part of the job. The problem solving and research beforehand is a greater part.
    Only the devs break problems down into their elemental parts, because they have to in order to code them. Non-devs give up as our discussions get too intricate for them to follow; their eyes glaze over as we say: what if these things happen in this order, and halfway through this one thing this other thing jumps in? Soon they just want us to go away and make it work.
    That said, I have tried AI for coding and it can be quite helpful. It can also be completely and ridiculously wrong.

    • @rawwrrob9395
      @rawwrrob9395 1 month ago +5

      I think it's a good learning tool for beginners. It allows you to take an output in your head and get an idea of what the code needed to get that outcome would look like. It's probably not correct, but it can at least get you started. I certainly wouldn't trust it for anything advanced.
      I've used it for some basic Python and HTML in conjunction with online documentation and it was very helpful. As long as you're very specific with your prompts, you often get a decent framework for your code that you can correct and build on. It also teaches you how to debug, because it's often outputting incorrect things in obvious ways lol.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 1 month ago +2

      Yes, AI can definitely be a useful tool for coding, but it’s not perfect. It can provide helpful suggestions, boilerplate code, or guide you through debugging. However, it’s important to critically evaluate what it generates because sometimes it can produce incorrect or nonsensical outputs, especially if the problem is complex or the requirements aren't clear. Have you found any particular coding tasks where AI has been especially helpful or unhelpful for you?

    • @Ainglish-qj5bb
      @Ainglish-qj5bb 1 month ago

      It seems to me that the more you try to get it to do, the worse it does. If you say, "Write me a webpage that does XYZ using JavaScript and has a kitten image for the background," yeah, you're not gonna get a good result.
      If you say, "Write CRUD methods using Dapper in C# for a class with properties X, Y, Z," then congratulations! You just saved yourself about 15 minutes of typing, and maybe a google search.
      Depending on what I'm working on, it's perfectly possible for me to save a couple hours in a day using a bot.
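The CRUD boilerplate described above can be sketched in Python with the standard sqlite3 module (an analogous sketch, not C#/Dapper; the `item` table and its `x`, `y`, `z` columns are invented stand-ins for the comment's X, Y, Z properties):

```python
import sqlite3

def connect():
    """Open an in-memory database and create the hypothetical `item` table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, x TEXT, y TEXT, z TEXT)")
    return conn

def create(conn, x, y, z):
    cur = conn.execute("INSERT INTO item (x, y, z) VALUES (?, ?, ?)", (x, y, z))
    return cur.lastrowid

def read(conn, item_id):
    return conn.execute("SELECT id, x, y, z FROM item WHERE id = ?", (item_id,)).fetchone()

def update(conn, item_id, x, y, z):
    conn.execute("UPDATE item SET x = ?, y = ?, z = ? WHERE id = ?", (x, y, z, item_id))

def delete(conn, item_id):
    conn.execute("DELETE FROM item WHERE id = ?", (item_id,))
```

This is exactly the kind of mechanical, well-trodden code the comment says an assistant can save typing time on, and it is easy to review for correctness.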

    • @X3S000
      @X3S000 1 month ago

      So can humans though, the biggest difference here is that it can’t search the web and find solutions to fix the bugs or solve a problem, YET.
      Y’all are comparing the current state of this technology to the top engineers, researchers, and scientists in the field but in reality these models are already more capable than a considerable percentage of the human population.
      You really think this technology won’t advance significantly in the next 10-20 years?
      Also I’ve done some research and the engineering industry can be lazy sometimes. They prefer to use convenient tools instead of improving powerful low level languages and tools because they “cause too many issues” 🫤 This slows things down in the long run because too many languages have specific requirements and updates often make the tools incompatible with other tools/libraries.
      I think the scientific community as a whole needs to take a more standardized approach to building tools and frameworks but that would require them to stop being lazy which probably won’t happen 😂

    • @joecarioti629
      @joecarioti629 1 month ago +1

      I built an entire SaaS website with AI Codegen in under 2 months by myself. When the codegen is wrong, you just say, "hey, that's wrong, do it more like " and have it try again.
      This idea that you just submit your prompt and accept whatever comes out of it first time is why people think AI code gen sucks, IMO. It's a tool, you have to iterate with it just like you would with a junior consultant.

  • @Someoneelse_XD
    @Someoneelse_XD 1 month ago +3

    It does help a lot, especially for prototyping and for some repetitive tasks. It does not (yet) help with advanced or difficult tasks in large code bases

  • @mettaursp309
    @mettaursp309 1 month ago +21

    In a professional environment, if what you need is structured code generation, then we've had tools for doing that perfectly error free for decades. Structured code generation is not a task that requires AI to do. It's also the only thing that generative AI is even remotely good at with any level of acceptable quality. It also happens to provide worse quality by introducing chances for hallucinations and other miscellaneous errors that simply don't happen with the classical tools.
    The other thing that generative AI is good at is regurgitating the most common practices, hallucinations and errors included, but outside of novice level programming that doesn't have as much usefulness. It's the kind of stuff that a well experienced programmer can type out in the same time it takes to type out a prompt and wait for a response, so returns diminish the better you get.
    For anything beyond that, architectural decisions are best made with the intent and the experience of the software developer, not by what some AI finds to be statistically relevant. Software development is an ever evolving field, and most jobs are on a case by case basis. Usually the actual hard part is handled long before you go in to write the code, and the code is just a reflection of the solution that was found. Using generative AI to write the code first as the solution is putting the cart before the horse.
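The classical, deterministic code generation the comment contrasts with AI can be sketched as simple template expansion: the same field list always yields the same output, with no chance of hallucination. The `Product` class and its fields below are invented for illustration:

```python
# Hypothetical field list driving the generator: (name, type) pairs.
FIELDS = [("name", "str"), ("price", "float"), ("stock", "int")]

def generate_dataclass(class_name, fields):
    """Emit Python source for a dataclass from a (name, type) field list."""
    lines = ["from dataclasses import dataclass", "", "@dataclass",
             f"class {class_name}:"]
    lines += [f"    {fname}: {ftype}" for fname, ftype in fields]
    return "\n".join(lines) + "\n"

print(generate_dataclass("Product", FIELDS))
```

Because the template is fixed, the output is reproducible and syntactically valid by construction, which is the "perfectly error free" property the comment attributes to classical tools.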

  • @SivaranjanGoswami
    @SivaranjanGoswami 1 month ago +131

    I am a software engineer and I have written an extensive amount of code in the last 10 years, for both academic research and businesses. I do use AI to write code. As a programmer, I can break a problem down into well-defined steps. I provide sufficient low-level detail in my prompt to ensure that the AI fully understands the problem. For a non-programmer, this is impossible. Even with my experience, I can trust AI only if the problem is simple enough and does not require a lot of background information. As the complexity of the problem increases, it becomes more and more difficult to get correct code from AI.

    • @ausnetting
      @ausnetting 1 month ago +13

      @@SivaranjanGoswami right? I can say “write me a function that does this …” and it will likely give good output but I can’t say “write me a browser” let alone “write me an air traffic control system”

    • @doogie812
      @doogie812 1 month ago +2

      Yes but at that point are you not just about done?

    • @ausnetting
      @ausnetting 1 month ago +8

      @@doogie812 having the simple steps completed is like getting the ingredients out for a recipe. You’re not almost done - you’ve just started - but having the ingredients assembled for you is a helpful start

    • @SivaranjanGoswami
      @SivaranjanGoswami 1 month ago +3

      @@ausnetting Sometimes I also include the exact library to be used, examples of input and output, or 2-3 lines of code as a hint.

    • @vlax12
      @vlax12 1 month ago +6

      @@SivaranjanGoswami so, it types instead of you because it can generate text faster than you? If you have to know technology to make AI code correctly that means that you could write code yourself and omit possible hallucinations. I use AI to learn new stuff mostly, and for boiler plate code. I've tried to use it for some serious work but at the end I've spent more and more time correcting the code and AI so I've stopped using it. The thing that irritates me the most is that it does not provide all possible solutions even when I ask for them. That leads me to google information even if AI gave me the answer because it is not reliable enough.

  • @MaxMustermann-vu8ir
    @MaxMustermann-vu8ir 1 month ago +11

    Software developer here, with some experience in natural language processing (aka AI before ChatGPT). I do use tools such as GitHub Copilot at work and they do help, but the productivity gains are far lower than the 50% claimed by MS. It's more like 5-10%, depending on the setting. I use the tool as a better version of auto-complete on a line-by-line basis. But I would never use it to generate larger amounts of code I cannot oversee.
    Sabine gets it right. LLMs are not very useful where precision is important. And coding is all about precision.

  • @roy-helgesamny2550
    @roy-helgesamny2550 13 days ago

    "This sounds like an unpleasant conversation with a dentist" Gold nuggets like these delivered in such a dry way. I love it.

  • @Diogenes76
    @Diogenes76 1 month ago +555

    People who think AI will replace devs just don't realize that for a lot of us, coding itself is not the hardest part of our job. Going to meetings, getting the requirements right, setting up the env, digging through ancient code and DBs to get the info we need etc takes a lot more time than coding.

    • @paddy9609
      @paddy9609 1 month ago +70

      All stuff that AI can do or will be able to do.

    • @pottedrosepetal6906
      @pottedrosepetal6906 1 month ago +5

      which will all be superhuman once agents drop in a meaningful way...

    • @aarionsievo
      @aarionsievo 1 month ago +11

      AI will probably not do it better, but much cheaper.
      Therefore programmers will not die out but will become a luxury.

    • @andrasbiro3007
      @andrasbiro3007 1 month ago +5

      And those can be done by AI too.
      Also it depends on the job, for me it's 90% coding.

    • @rogerphelps9939
      @rogerphelps9939 1 month ago

      Correct

  • @MrThomashorst
    @MrThomashorst 1 month ago +216

    As a coder who has done this for 34 years now, I can tell you that efficient coding left the scene a long time ago. Besides real-time applications (such as games), it's all about the shine and no longer about efficiency.

    • @stoneneils
      @stoneneils 1 month ago

      That's why I quit. I was coding in 79...the idea was always a few guys working 16 hours a day to make it work as quickly and simply as possible. Now its about 100 people and three years on gitfuckinghub....no way. The first time they hired a 'css' guy for me I wanted to kill him lol. I didn't need him...then they added six other dudes..wtf.

    • @beauthestdane
      @beauthestdane 1 month ago +34

      Yep, very much seeing a drop in said efficiency. For modern applications people just don't care because the computing power and memory available are such that it is not as important. I still do embedded firmware, and I do still have to be careful about efficiency.

    • @Asdayasman
      @Asdayasman 1 month ago

      That attitude is the reason why every piece of modern software sucks ass. Fuck you.
      Sincerely, a programmer who remembers using winAMP 2.

    • @HaydenHatTrick
      @HaydenHatTrick 1 month ago +45

      We have seen the electricity costs of training AI models, but what is the electrical cost of garbage code? I never fail to be amazed how much computer hardware has improved and never fail to be disappointed with performance due to bloatware.
      Honestly, it boggles my mind. I understand how it happens, but with the number of certificates available in IT, there should be something they can put on software to encourage consumers to pay for optimised code and removal of bloat. Like certified vegan but for code... I donno.

    • @timsn274
      @timsn274 1 month ago +9

      True, and a pity. I remember desperately trying to save space and gain efficiency, as some of our programs ran for hours on a Cray Y-MP

  • @generessler6282
    @generessler6282 1 month ago +41

    Yeah. I'm a software engineer. The current AIs are good at generating code that looks right but isn't quite. Consequently, when I want to include it in a project, time needed to verify correctness tends to offset any time saved writing. There are exceptions, but as you said, they're not common.

    • @PhillipRhodes
      @PhillipRhodes 1 month ago

      > time needed to verify correctness tends to offset any time saved writing.
      That does not jibe with my experience, FWIW. Now I mainly use Github CoPilot for AI coding assistance, and I do use it as an *assistant* - I don't ask it to generate entire programs for me. But used this way, it absolutely saves me time. How much? Hard to quantify exactly, but depending on what I'm doing anywhere from 0% to maybe 30% of the time I'd otherwise spend, is shaved off.
      Now as far as asking the AI to build an entire app for me... meh. From what I've seen you might get something halfway usable, or you might not. But consider... depending on one's age and status vis-à-vis retirement, the issue isn't "how good are the AIs today?" but rather "how good will the AIs be in 2 years? 5 years? 10 years? 20 years?"
      I'd struggle with recommending a young person take up computer programming today with the intent of making that their career, TBH.

    • @simonmackenzie6230
      @simonmackenzie6230 1 month ago +7

      @@CeresOutpost I believe if they looked up Dunning Kruger they would find it doesn't really apply to predictions about the future.

    • @PeterAllen09
      @PeterAllen09 1 month ago

      Do you review the code before hitting Approve on a PR?

  • @valis992000
    @valis992000 1 month ago +2

    I always forget how to get started on something, no matter how many times I've done it. 25% actually sounds about right, it just keeps me from looking up things I have forgotten. I've never had AI create truly usable code, but it gives me a starting point. But then again maybe it was just as quick when I used to look it up.

  • @MatthewMartinDean
    @MatthewMartinDean 1 month ago +227

    I'm a professional software developer & I've found that using AI is a whole new workflow. It took me half a year to get to the point where I felt good at AI-assisted development. If you give a person a code writing bot and they continue with their same workflow, I'd expect results to not be very different from before. The stats show the AI helped the juniors more. That seems wrong because only a senior is going to be able to read the code and put the bot back on track when it makes mistakes. AI is like a free intern, you need to be skilled in getting work out of interns before you can get the most out of AI assisted development. And measuring developer productivity is hard. Who knows what these studies really measured.

    • @carlosgomezsoza
      @carlosgomezsoza 1 month ago +11

      Fully agree. I see how impactful these technologies are, and on my team at least I observe that senior engineers get the most value out of these tools.

    • @jonatanwestholm
      @jonatanwestholm 1 month ago +8

      Juniors were probably more willing to learn something new.

    • @benpierce2150
      @benpierce2150 1 month ago +8

      It's because the seniors are already used to using juniors as their AI: they already have a workflow for assigning simple tasks and fixing them up before incorporating them.
      The juniors actually needed inspiration and a little guidance; the seniors just see a bad idea generator.

    • @VoltLover00
      @VoltLover00 1 month ago +6

      If AI is helping you, you're a terrible developer solving trivial problems

    • @carlosgomezsoza
      @carlosgomezsoza 1 month ago +18

      @@VoltLover00 most problems are trivial anyway if you are smart enough. I would rather say that usefulness of a tool is a combination of the qualities of a tool plus the proficiency of the user for that particular tool.

  • @TerryBollinger
    @TerryBollinger 1 month ago +70

    Sabine, in the two years since businesses began pushing the use of LLM-based AI, I have seen nothing but degradation in software tools I used to trust. The forms of degradation are often small, but they are also profoundly annoying.
    I suspect, but cannot prove, that some of this tool degradation comes from people relying too much on LLM code generation. LLM is inherently sloppy and always adds some noise each time it reuses human-verified code.
    The deeper problem is that by its nature, LLM tech has no comprehension of what it's doing. For software, that kind of creeping random damage is deadly in the long term. It may take us decades to recover from the undocumented damage that overreliance on auto-generation code has already done to our software infrastructure. I hate to think what may happen if this situation continues for years.
    AI and robotics have been my day job for many years, and I've been interested in this topic since high school, which was somewhere back in the late Neolithic era.
    I've been busy doing a deep dive on critical LLM tech papers lately. Some of the issues cannot be resolved by slathering on even more LLM tech, for pretty much the same reason that you can't build a fully adequate dam out of tissue paper by adding more tissue paper. It's just the wrong starting point.
    LLM is a fantastic technology when used correctly, such as in many physics examples given for the Physics Nobel Prize this year. That is why I supported research in the neural net technologies behind LLMs when most researchers looked down on neural nets as fuddy-duddy, obsolete 1950s technology.
    Now, bizarrely, people get Physics Nobel Prizes for neural net research, though the selection of which researchers should get the prize was a bit odd.
    For example, I now suspect Yann LeCun inadvertently blew his chance to share the Nobel Prize this year because he complained a few months ago about new European Union rules protecting consumers against kleptomaniac LLM systems.
    Hinton, in contrast, established himself as a concerned person by quitting Google and giving a pointed 60 Minutes interview a year ago.
    Now, wouldn't that be ironic? Imagine missing out on a Nobel Prize because you complained publicly about a topic that may have offended the Nobel Prize committee - one that you had no idea was considering you. Ouch!
    Perhaps a Nobel Peace Prize might have been more appropriate.
    However, Hinton and company are wrong about the AGI risks of LLM tech. The astronomical energy consumption alone tells us that their approach is like trying to recreate photosynthesis by paying close attention to which green pigments you use to paint a picture of a tree. As Richard Feynman once noted, this is Cargo Cult science: too much attention to the superficial appearance and zero insight into the internal mechanisms of energy-efficient intelligence in biological systems.
    LLM tech alone will never get us to AGI. I'm still involved with research efforts that may get us to AGI someday. Unlike this LLM insanity, these research areas view the energy issue as a critical clue.
    I remain unsettled about this issue and sometimes wonder if I should just shut my mouth and let folks who are fooling themselves burn up the world in pursuit of a strategy that will never give them the AGI results they crave. Argh.
    The world is complicated.

    • @traumflug
      @traumflug 1 month ago

      I've seen this degradation since Integrated Development Environments (IDEs) appeared. They make things easier in the beginning, but just more complicated long term.
      The next step backwards was the appearance of Github. Git itself is a great tool; Github makes a huge mess out of it. Duplicating all the code just to commit a patch? Only complicated ways outside Github to update such patches. What a mess.
      The conclusion is, humanity is destined to reinvent its technology over and over again. 6 steps forward, 5 steps backwards, repeat. Kept alive only by faster and faster hardware.
      Oh, and then there's Rust. Finally a huge step forward. Still a number of flaws, like the avoidance of shared libraries, yet writing code is fun again. Flawless code as soon as the compiler gives its OK, never seen this before.

    • @deanparker7867
      @deanparker7867 1 month ago +4

      With 30 years as a professional under my belt I find myself creating training material for devs new to gen ai for some very large and famous companies…. But yet I am swiftly coming to the same conclusion you have reached. This situation will improve over time but the current manic fervor driving it is making me deeply uneasy on every level. It is fundamentally tech based on sheer probabilities that often creates an incredibly difficult problem to solve at enterprise scale. Not impossible, but it needs truly skilled folks at the helm to even have a chance at success at scale. That said, it saves me time on the odds and ends of being a software architect each day but is not core to my actual success. Not yet anyway.

    • @RealtyWebDesigners
      @RealtyWebDesigners 1 month ago +2

      Nice essay :p

    • @RealtyWebDesigners
      @RealtyWebDesigners 1 month ago +2

      The “bug creep” you mention is real and a very good point.

    • @endintiers
      @endintiers 27 days ago +1

      The LLMs are only one component of the new models.

  • @HughCStevenson1
    @HughCStevenson1 1 month ago +42

    I am an engineer who used to write code years ago but has not been in it for a few decades. Not since Pascal was cool! I recently needed to write some Python for a Project - I gave the requirements for each tiny block to AI and it gave me some code that I could then adapt for my requirements. It meant that I didn't have to spend ages trawling the net to understand what functions were available and what the syntax was. Because I took a lot of effort to define what I wanted the AI gave me precisely what I wanted. For me it was excellent. Just don't give it too big a problem! Little by little...

    • @joecarioti629
      @joecarioti629 1 month ago +2

      I've been coding since starting my career 15 years ago and I don't understand the other engineers who think AI code gen is some awful or useless tool. It gives me the same vibes as musicians who don't think DJs are creating music because they don't play a traditional instrument.

    • @kerose
      @kerose 1 month ago +2

      Same. I’m no software engineer but I often need to use different languages I’m not familiar with. Toss it some English algorithm and I quickly have code that I can work with. I’m then competent enough to take over from there.
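The workflow described in this thread (handing the AI one tiny, well-specified requirement at a time, then adapting the result) can be illustrated with a sketch. The function below is hypothetical, not from any commenter's project; it is the kind of small, self-contained Python block that a precise prompt tends to produce and that a non-specialist can still verify by hand:

```python
# A sketch of the kind of small, well-specified block the comments describe:
# "read a CSV of measurements and return the mean of one numeric column".
# The function name, file layout, and column name are illustrative assumptions.
import csv

def column_mean(path: str, column: str) -> float:
    """Return the arithmetic mean of a numeric CSV column."""
    with open(path, newline="") as f:
        # DictReader maps each row to {header: value}, so the column
        # can be selected by name rather than by position.
        values = [float(row[column]) for row in csv.DictReader(f)]
    if not values:
        raise ValueError(f"no data rows found in {path}")
    return sum(values) / len(values)
```

The point is the granularity: a single function with a clear contract is easy to review and adapt, while a whole generated application is not.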

  • @GlobalScienceNetwork
    @GlobalScienceNetwork 1 month ago

    I use Claude with the professional plan to program and help write scripts in Python. I love it! You can copy and paste any error into Claude and it is way faster than looking for a forum that might have the answer to the error. I would say it makes me 10-30X faster at programming. Sure it makes mistakes but sometimes writes the entire code that would take a few hours in 1 minute. I used to be an old-school programmer who would not even use built-in functions because I wanted to code everything from scratch. Sometimes this is needed but for many small projects, I love having Claude do most of the work with minimal effort on my end. It is also very good at commenting the code and explaining what the code is doing. It is better than a hidden built-in function in MATLAB that you can not see how it is working. Open-source Python programming with AI changed the game for the better.

  • @PierreMullin
    @PierreMullin 1 month ago +30

    You are absolutely correct. GenAI is often a smarter Google to generate code snippets, but it doesn't address the most fundamental root cause of many failed IT projects: poorly articulated requirements. GenAI simply takes vaguely articulated specifications and stacks on some hallucinations.

    • @RangeWilson
      @RangeWilson 1 month ago +6

      "simply takes vaguely articulated specifications and stacks on some hallucinations"... to be fair, that sounds like what a whole bunch of human programmers do for a living.

    • @Mark-s7d6l
      @Mark-s7d6l 1 month ago +1

      "I'll know it when I see it." How many task requirements get this response when you ask for definitive explanations?

    • @SkorjOlafsen
      @SkorjOlafsen 1 month ago +3

      To be fair, most of the requirement documents I've seen could only be satisfied with hallucinations, so maybe the AI is ahead of us all.

    • @PeterAllen09
      @PeterAllen09 1 month ago

      ​@@RangeWilsonExactly, lol. How do software engineers, of all people, not get that?

    • @RealtyWebDesigners
      @RealtyWebDesigners 1 month ago

      @@RangeWilson It’s just cuz we’re programming drunk or high. Don’t blame us! 😂

  • @tj2375
    @tj2375 1 month ago +50

    0:12 That's been my opinion since the beginning of the hype. The LLMs can be as large as they want; they will generally create more work correcting the errors they generate than doing the task without AI would.

    • @geekswithfeet9137
      @geekswithfeet9137 1 month ago +7

      Have you actually used it recently or are you relying on an experience from the distant past?

    • @41-Haiku
      @41-Haiku 1 month ago +4

      They didn't invent a broken technology and then give up and just try to make it bigger. They invented a technology that they discovered would always get better as long as they made it bigger, so they're making it bigger, and it's getting better.

    • @mickearrow8035
      @mickearrow8035 1 month ago

      I will say it's a decent assistant when you start learning a new language. If you already have coding experience, you can use it to understand the small things, like setting up a for loop and other quality-of-life stuff, as not all languages use exactly the same syntax.
      I wouldn't use it to write a full program. It often gives better results than spending an hour googling a solution, especially for the more complicated tasks you want done. With a Google search you often find only generic, less complex answers.

    • @juliand3630
      @juliand3630 1 month ago

      If you ever explored a new programming language and did it with AI using Cursor (highly recommend that VS Code fork), the claim that AI is overhyped seems pretty far-fetched…

    • @programmingwithbaker-codin953
      @programmingwithbaker-codin953 22 days ago

      AI doesn’t write good or working code in my experience
      It makes pretty good porn

  • @playingwithdata
    @playingwithdata 1 month ago +287

    It's the StackOverflow effect again, but on steroids. After SO got big there was a trend towards people writing code by essentially googling and copy-pasting whatever they found. There's a whole section of devs coming from that background that have a minimal understanding of half of the code they "write". AI coding tools are doing the same, but now they have business backing the idea because they imagine they're going to be able to cut out a load of expensive engineers. It's going to take some painful failures before people realise that there's a huge gap between AI being able to usefully spit out chunks of customised boilerplate within narrow constraints and AI being able to turn business and consumer needs into fully realised software.

    • @Consoneer
      @Consoneer 1 month ago +16

      We got a lot of people in the industry saying "I don't know why it works but if it works it works"😂

    • @venanziadorromatagni1641
      @venanziadorromatagni1641 1 month ago +25

      There is nothing wrong with consulting StackOverflow per se. Many times it's given me good hints. The problem is using it in a copy-paste way, instead of taking it as a hint for how the problem could be tackled.

    • @EbenBransome
      @EbenBransome 1 month ago +23

      Nothing new. I remember "5th generation languages" and programs that would write programs. From the 1970s-80s. There are always snake oil merchants in the programming industry.

    • @JimBob1937
      @JimBob1937 1 month ago +8

      @@venanziadorromatagni1641 , agree, SO is a resource, and why not use resources available to you. The issue is definitely in the usage. You should never blindly copy/paste code from any resource, you should understand the code and how to best integrate it within your code base. This also lets you catch bugs and/or security problems that may exist in the SO code.

    • @kyriosity-at-github
      @kyriosity-at-github 1 month ago +2

      @@venanziadorromatagni1641 However, the enthusiasm to contribute to StackOverflow has diminished. Could I skip the explanation?

  • @jjvp1249
    @jjvp1249 24 days ago

    As a senior software engineer, I can assure you it's a large time gain to use LLMs for many tasks. What those studies might have missed is the real time the devs needed for what they had to code: probably way lower. However, the total amount of work could be the same, simply because once the job is done, nothing pushed them to request more work.

  • @KwizzyDaAwesome
    @KwizzyDaAwesome 1 month ago +13

    You can tell who actually knows how to code and the complexity and non-trivial decisions involved in creating usable software by how fiercely/enthusiastically they are hyping AI code (including the ones that might themselves just be copy-pasted Ai responses).

    • @MrWizardGG
      @MrWizardGG 1 month ago +5

      I actually find that engineers are the ones who believe in AI, and random kids on YouTube are the ones forming opinions about things they have no experience with.

    • @zachduff6018
      @zachduff6018 1 month ago +5

      The truth is somewhere in the middle. As a professional developer, I find it extremely useful for knocking out mundane tasks and for minimizing the number of Google searches I have to perform. But I can also attest that the correctness of the AI output decreases with the complexity of the prompt/task.
      This in the hands of a good dev is good; in the hands of a bad dev... it's messy, possibly dangerous. But who allows junior devs high privileges in high-value applications anyway? That's a bigger problem than AI.

    • @PeterAllen09
      @PeterAllen09 1 month ago +6

      You can tell who's actually written code by whether or not they expect AI to generate perfect, bug-free code the first time. Do you write perfect, bug-free code the first time? Me neither

  • @tortenschachtel9498
    @tortenschachtel9498 1 month ago +16

    When I heard the only significant change was more pull requests, I immediately thought: they produce more bugs that need to be fixed after they check their code in (instead of testing it locally first, because someone was apparently pretty sure the AI wouldn't produce any bugs...).
    Kids these days ...

    • @id01_01
      @id01_01 1 month ago

      This is such a good point! You need extra pull requests to fix extra bugs

    • @RealtyWebDesigners
      @RealtyWebDesigners 1 month ago

      That’s the key. AI magnifies your ability - It doesn’t GIVE you ability.

  • @JamesD837c
    @JamesD837c 1 month ago +5

    I want that spinning globe at 3:30 for the extra monitor on my desk. That way people know I’m smart.

  • @mariofrancocarbone7593
    @mariofrancocarbone7593 9 days ago +1

    From "The case for targeted regulation" on the site of Anthropic:
    "In the realm of cyber capabilities, models have rapidly advanced on a broad range of coding tasks and cyber offense evaluations. On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)."

  • @robertfindley921
    @robertfindley921 1 month ago +58

    One of the painful rules I learned several times during my career is: If you don't know how it works, it doesn't. But I guess if you're not writing code that anyone's life depends on, go for it. Most of what makes the market today is garbage anyway.

    • @arandomstreetcat
      @arandomstreetcat 1 month ago

      Sure, they're not killing people, but have you heard of the cheap robot cat litter boxes? Apparently they killed people's cats because they used ChatGPT code.

    • @johnbrobston1334
      @johnbrobston1334 1 month ago

      @@arandomstreetcat Cheap robot cat litter boxes were injuring or killing cats long before ChatGPT was a thing.

    • @johnk6757
      @johnk6757 1 month ago +1

      Software engineering is engineering, "good enough" was always the name of the game.
      And also maybe I'm dumb but "understanding how it works" is possible if you have enough time, but non-trivial software is simply too complex to really take that time. That's what unit tests and continuous integration are for, because you CAN'T be so big-brained as to hold the whole thing in your head at the same time.
      For the most part LLMs give me code that generally works better than my own first draft would anyways. It's simply very useful and I don't see how software devs would not be using it at this point. But interfacing with it is a skill unto itself, it's not like you don't need to "understand what's going on" you just need to operate on the level of managing the LLM rather than being the code-monkey. It's kind of like managing people; I think the near-term future of software development involves humans acting more as project managers to a team of AIs

    • @andrewclimo5709
      @andrewclimo5709 1 month ago +2

      Which engineering discipline taught you 'good enough' is okay? The mantra in both engineering disciplines I have worked in was "If it doesn't work as per the spec, it's not fit for purpose."

    • @larryblumerjr
      @larryblumerjr 1 month ago

      @@andrewclimo5709 The variation I've heard is "If it doesn't work on paper, it doesn't work."
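One reply in this thread points to unit tests and CI as the practical answer to "if you don't know how it works, it doesn't": you pin the required behavior down with assertions instead of trying to hold the whole system in your head. A minimal sketch of that guardrail, with a hypothetical helper standing in for AI-suggested code (the function and cases are illustrative, not from any real code base):

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens (URL-style slug)."""
    # Replace every non-alphanumeric, non-space character with a space,
    # then split on whitespace and rejoin with hyphens.
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

def test_slugify():
    # Each assertion encodes a requirement the (possibly generated) code
    # must keep satisfying, no matter who or what rewrites it later.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-fine") == "already-fine"

test_slugify()
```

Run under a test runner in CI, a suite like this is what lets "good enough" be checked mechanically rather than taken on faith.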

  • @jimsackmanbusinesscoaching1344
    @jimsackmanbusinesscoaching1344 1 month ago +46

    The basic problem is that with any significant coding project the real issue is NOT coding but architecture. AI might help code a specific algorithm, but we are decades away from proper architectures. That is because architecture is a non-linear problem: it is envisioned, not analyzed. And that is the problem with AI, particularly generalized AI. Imagine owning a general AI robot that gets charged overnight. Its alarm activates it at 7 AM. How does it decide what to do on any given day? If you say this needs to be determined by a person (like tasking and priority setting), then I will claim it is not a generalized AI.

    • @poppers7317
      @poppers7317 1 month ago

      It's great for using AI within game engines. Normal software architecture principles aren't that important there because, for example, in Unity Engine there are just some scripts that are used in game objects in the editor. It really reduces the tedium.

    • @MaakaSakuranbo
      @MaakaSakuranbo 1 month ago +4

      @@poppers7317 Architecture is still important there if you want your game to not turn into tech-debt hell, where any change you try to make breaks tons of other things, or you can't find the piece of code to change because you have 500 duplicated pieces of code.

    • @poppers7317
      @poppers7317 1 month ago

      @@MaakaSakuranbo But why wouldn't AI help there? Of course you need to manage your game objects, scripts, assets, etc., but AI is a big help in reducing the tedious poopoo, like for example creating pathfinding algorithms. I've used it for 1.5 years now and it's just way more fun to quickly prototype ideas.

    • @jimsackmanbusinesscoaching1344
      @jimsackmanbusinesscoaching1344 1 month ago +4

      @@poppers7317 Because Architecture sets the core structures that become immutable as a product ages and essentially sets what can reasonably be done within that structure. There is not a right answer but there are lots of wrong ones.

    • @MaakaSakuranbo
      @MaakaSakuranbo 1 month ago +1

      @@poppers7317 Your engine probably has a pathfinding system.
      But I mean, you're saying the architecture doesn't matter since everything is just some scripts in game objects, but your scripts need to interact, so they still form an architecture. Your guns need to shoot bullets, those need to collide with stuff and hurt things, etc.
      So unless you funnel your whole project back into the AI each time, when you ask it to make something new it might not know of those systems and how they operate, etc.

  • @SkyNhett
    @SkyNhett 1 month ago +46

    As a software engineer, I was always worried about AI taking my job, but after my therapist told me to repeat 10 times every day, 'AI will never replace me,' I feel so much better, and it now seems far less likely to happen.

    • @NokiaTablet-pl7vt
      @NokiaTablet-pl7vt 1 month ago +16

      Your therapist should be replaced by ai

    • @MNbenMN
      @MNbenMN 1 month ago +6

      ​@@NokiaTablet-pl7vtTherapist probably was AI already.

    • @interstellarsurfer
      @interstellarsurfer 1 month ago +2

      Terminators aren't real, they can't hurt you. 🤖🤞

    • @MNbenMN
      @MNbenMN 1 month ago +1

      @@interstellarsurfer ...period, semicolon, null byte!
      Bwuuuhahaha! Happy spooktober!

    • @gcewing
      @gcewing 1 month ago +2

      Of course that's what the AI that replaced your therapist would say.

  • @82vojtech
    @82vojtech 1 month ago +4

    Fixing bugs is only possible if the general architecture of the solution is good enough. You cannot make the Titanic longer to carry more people; sometimes you have to build a better ship...

  • @dmitryburlakov6920
    @dmitryburlakov6920 1 month ago +6

    3:50 That's probably the most accurate summary I could have given after working with Copilot practically from when it appeared. It's impressive that you could get to that just by reading 1.5 papers (half of the first seems like an ad rather than a scientific paper).

  • @mglmouser
    @mglmouser 1 month ago +16

    I was downsized last year, after 27 years. I took a six-month sabbatical to publish a novel and have now been looking for a suitable job to resume my 35-year career as a software developer. The amount of delusional expectation placed on devs, caused by this AI BS, is astounding. They're listing full-stack jobs requiring proficiency across platforms and every listable technobabble buzzword HR can find, at a fraction of the salary that was once the minimum for senior jobs on a single platform, expecting AI to fill in the gap.
    I finished the second novel and headed to literary review. The third one is in planning. Definitely adding a Dr. Sabine cameo in it.

    • @AdelaideBen1
      @AdelaideBen1 1 month ago +1

      The problem I see - no one cares... we're sleep-walking into inconsequentiality. And I think it has to do with the work model now (not just in coding)... why pay for 80% even, when customers still keep buying at 40%. Quality and optimisation isn't important... but the weird thing is... for all the emphasis on pushing product to market, I've seen a lot of evidence of coding/creative endeavors taking longer, and with lower quality. I often get confused by the state of things...

    • @mikicerise6250
      @mikicerise6250 1 month ago +5

      That's not the fault of AI. It's the fault of HR recruitment. They're not even delusional, just incompetent. It's as if you put me in charge of hiring a physics professor. What do you expect me to do beyond say, 'you have to have published 50 papers in Nature'? I *can't* evaluate the competency of a physicist.

  • @kh9242
    @kh9242 1 month ago +38

    I swear to god some people will just not allow us to have the full dystopian sci-fi future that we were all promised as kids.

  • @figthorn
    @figthorn 28 days ago +1

    I’ve been seeing this happen for years in the translation industry. The technology got stuck at some point and didn’t progress further. What it did was create more work for human translators, in the form of “post-editing” (fixing up machine translations), which is paid at a much lower rate.

  • @jpt3640
    @jpt3640 1 month ago +11

    I recently ordered some car parts from a company that advertises massively in Germany.
    They accidentally swapped the billing and shipping addresses. This was annoying: I had to wait another week to get my parts. I tried to request a corrected invoice. I had to do it via a chat bot. This wasn't really an AI; it was more like navigating through a phone menu. Very annoying. After I successfully submitted the billing address on the second try, I received an email asking me to pay a few euros extra, as they had accidentally applied 23% VAT instead of 19%. I complained.
    I got a very friendly email that did not help at all.
    I asked again for a corrected receipt.
    A few days later they sent the initial bill. I again explained my problem and asked for a new receipt.
    Let's see if they finally succeed.
    Somebody there has completely broken software.

    • @SkorjOlafsen
      @SkorjOlafsen 1 month ago +4

      So the company took your money, didn't actually ship you the parts, and raised a near-impossible barrier to fix the problem? Sounds like the software is doing exactly what the company wants it to do.

    • @jpt3640
      @jpt3640 1 month ago

      @@SkorjOlafsen Well, yesterday I received my parts. But they are still not able to fix the problems caused by their bad software.

  • @RFC3514
    @RFC3514 1 month ago +11

    2:00 - That's what happens when you let AI edit videos.

  • @FrancescoDiMauro
    @FrancescoDiMauro 1 month ago +13

    one thing is for sure, AI will never be able to reproduce the smugness of human coders

    • @ssz-zd3mz
      @ssz-zd3mz 1 month ago

      while post == true return 'whatever.' 😄

    • @FutureBusinessTech
      @FutureBusinessTech 25 days ago

      Until the day humans decide to program the smugness trait into AI. 😳

  • @LiveLifeWithLove
    @LiveLifeWithLove 28 days ago +1

    Also, in addition to what others have said: make no mistake, AI can retrieve, process, and serve information in hours where a developer would take days. For example, if I ask it to write code for options trading in the US market with an equity underlying, print the risk, and connect to a Reuters page, the AI knows, retrieves, and processes these terms, while a developer might not know what is going on.

  • @theyruinedyoutubeagain
    @theyruinedyoutubeagain 1 month ago +35

    As someone with over a decade of experience as a programmer, LLMs are tremendously useful if you know what you're doing. The idea that they can replace a solid understanding of the many prerequisites to be an effective programmer is laughable, though.
    PS: it's weird (and reassuring) to see Sabine have such a decent take on a subject she's not an expert in

    • @Chief-wx1fj
      @Chief-wx1fj 1 month ago +13

      Why is it laughable? It's only going to get better from here; two years ago we didn't even have this available publicly. Imagine the coming 10 years: it will definitely outsmart any human. But keep coping.

    • @jimmysyar889
      @jimmysyar889 1 month ago +1

      @@Chief-wx1fj exactly

    • @PeterAllen09
      @PeterAllen09 1 month ago +7

      ​@@Chief-wx1fjTotally agree. The average software engineer has a massive ego, and can't imagine anyone taking their job

    • @codingcrashcourses8533
      @codingcrashcourses8533 1 month ago +2

      @@PeterAllen09 Well, if you work in an office, your job will probably be replaced years before it's the software engineers' turn.

    • @PeterAllen09
      @PeterAllen09 1 month ago

      @@codingcrashcourses8533 Not mine. I'm the best software engineer in the whole company. This place would instantly collapse the second I left.

  • @Lhorez
    @Lhorez 1 month ago +10

    Here's the thing. In a nutshell, the Luddites protested against mechanization not because the machines were better but because they cost jobs and produced worse quality. Why did the owners use mechanized equipment when the products were worse? Because it was cheaper and the results were deemed 'good enough'.
    For a large enough portion of society AI generated solutions will be deemed 'good enough' and a bit more profit will be extracted.

    • @thenonsequitur
      @thenonsequitur 1 month ago +2

      The problem with this analogy is that LLMs are still not up to the task of producing good enough solutions for anything non-trivial.

    • @Lhorez
      @Lhorez 1 month ago +6

      @@thenonsequitur I didn't say they _are_ good enough. I said they're _deemed_ good enough. Trivial or not people are being laid off because of AI. And that's my point.

    • @thenonsequitur
      @thenonsequitur 1 month ago +2

      @@Lhorez Ah, gotcha.

    • @SkorjOlafsen
      @SkorjOlafsen 1 month ago +5

      @@Lhorez Sure, it's been clear for some time that AI would be disruptive because if there's one thing worse than AI hallucinations, it's executive delusions. But the question is: will the companies that throw out their skilled workers for AI survive, or collapse due to not being able to make a product that customers actually want?

  • @Kohlenstoffkarbid
    @Kohlenstoffkarbid 1 month ago +16

    I have written a fractal explorer in Python with the support of AI, without much knowledge of Python. The AI helped me by writing code for functions which I could not write myself. It was way faster than learning and googling. The AI even brought in lots of new ideas, which took the program far beyond what I had imagined.

    • @exscape
      @exscape 1 month ago +3

      That matches fairly well with what Sabine says, though. If you were a Python expert, it wouldn't have helped as much.
      That's also my personal experience. ChatGPT has helped me write and translate code into languages I rarely use (most notably Powershell), but when I try to use it for languages and projects I'm well versed in, it rarely gives me any useful help; any problem I can't solve easily myself, it gives back incorrect code (and often logically incorrect reasoning).
      Basically, it's not a better programmer than I am, even though I've never worked as a programmer professionally; but even though it doesn't really understand programming languages, it knows the basics of almost every computer language that exists.

    • @FrancoJ-c7p
      @FrancoJ-c7p 1 month ago

      👍

    • @raul36
      @raul36 1 month ago +3

      Well, technically speaking, you haven't written anything. Don't say "it helped me write". Probably, very probably, ChatGPT has done more than 90% of the work. Saying "I have written..." is certainly insulting to people who really know how to use Python.

    • @TheSteveTheDragon
      @TheSteveTheDragon 1 month ago +1

      Ok, so he should have used the word develop, but I understood that was implied.

  • @drancerd
    @drancerd 22 days ago +1

    I'm a programmer: AIs are fed with 'demo code' from forums and platforms (copy-pastes); they don't 'think' in code and certainly don't solve problems

  • @AmandaFessler
    @AmandaFessler 1 month ago +8

    "Using a chainsaw to cut butter". No way ChatGPT could come up with such a hilarious yet accurate analogy. Not in its current state.

    • @Thomas-gk42
      @Thomas-gk42 1 month ago +1

      She's always good for another inspiration 😉

  • @christopherlawley1842
    @christopherlawley1842 1 month ago +17

    In the early 1980s there was a program called "The Last One" which claimed to be the last program you'd ever need.

  • @robgoffroad
    @robgoffroad 1 month ago +5

    I hope you're right. As a programmer of nearly 30 years, I kinda need to keep going for another 3-5 years so I have some hope of retiring. And I've played with ChatGPT and the code it writes is useful but has yet to be "ready to run." All of it has had issues that I've had to correct. BUT it will get better with time. That's the problem.

    • @thenonsequitur
      @thenonsequitur 1 month ago

      I don't think LLMs can continue getting better indefinitely. They are running out of good sources of training data. LLMs will reach a plateau eventually, and I think we are already approaching that limit.

    • @robgoffroad
      @robgoffroad 1 month ago

      @@thenonsequitur I hope so!! All those managers and executives that think they're going to fire all their developers to "save money" are in for a surprise.

    • @CrazyHorseInvincible
      @CrazyHorseInvincible 1 month ago

      It's unclear how it will get better. They used nearly all information available on the Internet to train it; the well is dry. On top of that, the amount of AI generated content is growing exponentially, which means the Internet is now full of synthetic data that can create a feedback loop that increases hallucinations and losses of context.
      The very first thing businesses tried to do was hire people with basic English skills to tell ChatGPT to write software, as early as December of 2022. The second it ventured past the "getting started" tutorial's level of depth, they would run into their first bug, and it was all over. ChatGPT either has suggestions that only make sense to a programmer, or it hallucinates.
      To this day, with the latest version of ChatGPT, if you ask it to build a react app it will give you a CRA command, even though CRA is an obsolete turd that the react team doesn't refer to anymore in its official documentation.

  • @burnt1ce85
    @burnt1ce85 27 days ago

    Wow, great video. As a software developer, I agree with what you said. I have been using LLMs for years and I notice that they make simple mistakes on tasks they were never trained on. Where LLMs shine is in replacing most of my basic Google queries.

  • @dimitriemilinovich7247
    @dimitriemilinovich7247 1 month ago +27

    As an experienced software engineer I found this video a bit disappointing in terms of accuracy. Developer productivity is notoriously difficult to measure and counting pull requests is a dreadfully bad way to do so. AI code gen tools absolutely increase productivity when used properly and can even decrease bug frequency. The issue is that they have a steep learning curve and often slow developers down at first.

    • @Joao-uj9km
      @Joao-uj9km 1 month ago +2

      Utterly agree. I have been programming in C++ since 2007, and AI is an amazing tool for me.

  • @moonasha
    @moonasha 1 month ago +42

    Anyone who has actually coded complex things beyond web front ends knew AI replacing programmers was always a pipe dream. I'm a hobbyist programmer, but I program games, which are very complex with many interlocking systems. I've used AI code before (well, I put my own code in, then have it refactor it) and it works out all right. But the moment, the very nanosecond, you get to something complex like graphics programming, the entire illusion goes up in flames and the LLM can't handle it. It produces insidious hallucinated code that looks like it should work, but doesn't.

    • @jimmysyar889
      @jimmysyar889 1 month ago +2

      AI will replace it eventually sooner rather than later

    • @hugolindum7728
      @hugolindum7728 1 month ago +5

      “Computers will never play chess.”

    • @anm3037
      @anm3037 1 month ago +5

      @@jimmysyar889 No, people do more than website development, weather apps, and database management 😅. That's what AI will replace. AI will not get close to solving the problems I deal with by programming… not even the internet can help you.

    • @bradanderson4589
      @bradanderson4589 1 month ago +6

      "anyone who actually coded complex things beyond web front ends knew AI replacing programmers was always a pipe dream."
      Hmm... *always* a pipe dream? This is a failure of imagination, in my opinion. I have a degree in Computer Science, and seeing what generative AI can *already* accomplish has really shocked me in recent years. An artificial intelligence that meets or exceeds the thresholds of human intelligence feels less like a distant dream, and more like an inevitable nightmare, these days.
      In fact, some days I am so freaked out by what the emergence of Artificial General Intelligence would mean for humanity that I find myself hoping that global thermonuclear war breaks out soon - destroying all of the world's advanced chip fabrication plants in the process - because I believe that nothing short of that will stop this technology from advancing. And the biosphere in the Southern Hemisphere seems like it *might* be able to outlast the consequences of a nuclear war in the Northern Hemisphere.

    • @LorenzVdv
      @LorenzVdv 1 month ago +6

      @@hugolindum7728 Chess is very simple compared to coding. Your actions are limited and progress can be measured in chess. Even if you have 100 million calculations for your next move, that's a small amount for a computer. Code can't be measured easily because every problem is different, and therefore the AI has no way to know whether its actions are correct. It can only do this by comparing them to already existing code or by measuring them against predefined results, neither of which will help you solve complex coding problems.

  • @Jppnametaken
    @Jppnametaken 1 month ago +5

    It is definitely helping me a lot (I'm not a coder, but I do occasionally need to dabble in some coding). It is so much faster to ask the AI how to do something than to look for it online and most of the time the basic level advice works (I can't say whether there would be a better solution or not, but for my use case it doesn't matter as all my projects are so small code optimization is not something that needs to be thought about). However I have noticed that when I want to do something relatively complex it struggles a lot with helping, usually just suggesting something that doesn't work at all.

  • @Maibes
    @Maibes 1 month ago

    I find that it's good for interactive note taking that can help you learn new ways of doing things, simple reformatting (like turn this json into xml), as an alternative to Google for obtaining customized highly googlable results, trying to decipher less familiar or context specific error messages, generating customized boilerplate, writing some quick tests for when you don't have time otherwise, and even some more creative tasks with the o1 models. It's actually an extremely useful tool as long as you understand that it's not a replacement for understanding your own code.

  • @JRyomaru
    @JRyomaru 1 month ago +39

    I don't think many devs had the belief that we were getting replaced. Most of us found it absurd that CEOs thought they would.

    • @dhirajpallin2572
      @dhirajpallin2572 1 month ago +9

      Although if your company has 100 coders, and they gain 25% efficiency, then that's 25 coder jobs replaced.

    • @thenonsequitur
      @thenonsequitur 1 month ago +7

      Yeah, it's not replacing jobs per se, but it is allowing coding teams to be smaller, thus reducing the number of available jobs (mostly junior positions).

    • @BlueBeam10
      @BlueBeam10 1 month ago +5

      Well, in order to get replaced by AI you would have to get hired in the first place...

    • @Volkbrecht
      @Volkbrecht 1 month ago +3

      @@BlueBeam10 Not really, no. In a lot of situations, existing positions simply don't get refilled when natural fluctuation occurs. I'm not working in software development, but I have seen how my company handled automation gains over the years. Flexible people get shifted around, temp contracts aren't prolonged, new business activities absorb some heads... but an initial team of 5 people may now be running with 3, with the same output, without anyone ever being fired.

    • @BlueBeam10
      @BlueBeam10 1 month ago

      @@Volkbrecht That's exactly what I was saying. People talk so much about losing jobs while it's almost impossible to get one in the first place :))

  • @br3nto
    @br3nto 1 month ago +9

    0:48 The AI code responses are getting worse. The first versions of ChatGPT were great: I was able to replicate a basic prototype of an advanced thin-film optimisation algorithm within a few minutes and iterate on the design to add more advanced features. I haven't been able to replicate that success for a while now. They are also only as good as the API documentation… and if there are lots of versions, the model will very likely get confused, and even invent functions that don't exist lol.

    • @ransentheberge2233
      @ransentheberge2233 1 month ago +1

      Have you tried using o1 mini/preview? From the people I've heard of who have tried it it was able to do things *on the first try* that all other LLMs the users tried either would take many rounds of back and forth debugging, or just be unable to produce properly functioning code

    • @manishm9478
      @manishm9478 1 month ago

      I use copilot at work and it's great at generating sensible sounding method calls - for methods that don't exist. -_-
      Which does highlight how many APIs have non-sensible endpoints or methods. But I have to check AI-generated code carefully. And it sometimes fails in subtle ways that aren't as obvious as a non-existent method (which my IDE can detect easily enough).

  • @stackowoflow
    @stackowoflow 1 month ago +21

    Professional software engineer here with over two decades of experience. While LLMs may not yet completely replace developers, they are already replacing more junior roles and allowing seniors to prototype and build MVPs more quickly. I'm able to do in a few days with AI tools what it would take a team of 3 a month to do without them. That's quite significant. If you have enough experience to know what good design and code should look like, these tools are especially helpful.

    • @kurku3725
      @kurku3725 1 month ago

      OK, but then we reach the point where people can no longer get that experience of "good design and code", because you are not interested in hiring them and showing them how to do the thing properly, and your legacy will eventually... be lost? We already have the problem of passing knowledge on to newer generations, because there are little to no natural incentives to do so. A lot of stuff was written in the '90s, and a lot of the knowledge we had just... got lost, because the landscape of computing changed too quickly for us to preserve it, and that is a shame. Many software people today have no clue how their machine really operates, how things really work, what the problems and limitations are. Can it be bettered? ruclips.net/video/ZSRHeXYDLko/видео.html And from your words it seems like AI would accelerate that process, which is very bad.

  • @perozointo
    @perozointo 1 month ago +2

    As the manager of 3 software teams and a seasoned dev who uses AI code generation daily, I can confirm it saves tons of time! However, you must consider the use cases. For example, AI is trained on existing code, meaning it's great at generating boilerplate code that works with a few adjustments. But iterating on code requires providing some feedback; for this you need a framework like Autogen, which can execute the code and check the results, at a minimum.

  • @krakulandia
    @krakulandia 1 month ago +8

    Basically, within 5 years AI will degrade the skills and know-how of coders to a non-existent level if they use it consistently.

    • @mostexcellentlordship
      @mostexcellentlordship 1 month ago

      This is indeed a major problem and that's something I am actually worried about, not AI itself. We are anxiously awaiting the moment we can throw our hard-won skills as humans in the dustbin, but I have a feeling we failed to properly think through what this would entail in the long term. Perhaps AI can do that thinking for us..

  • @elizakimori8720
    @elizakimori8720 1 month ago +9

    It's not that the bear dances well, it's that the bear dances

    • @RealStonedApe
      @RealStonedApe 1 month ago

      Holy fucking hell this is the best comment I've read in quite some time 😂😂😂 I fucking love this! What I've been losing my mind over too - like, fucking hell LLMS are Nutsos!! But nope, people just keep raising that fucking bar, higher and higher and higher.
      "Yea the Bear can dance, but its footing is shit, it's awkward, got no rhythm, can't breakdance, can't Riverdance, can barely Square Dance!" 😂😂

  • @wookang3466
    @wookang3466 25 days ago +3

    In terms of fact-checking, AI has only been disappointing for me. I asked AI to summarize several biological research papers I already know (including my own publications), and its summaries were completely off point on many of them and failed to capture the nuanced messages on ALL of them. Maybe it will get better in the future.

  • @brunoboksic9696
    @brunoboksic9696 1 month ago

    Many comments mentioned that coding isn't just "coding," it's theorycrafting and a bunch of other stuff. It's the same thing with writing. Writing consists of 4 elements:
    -Idea generation
    -Research
    -Writing
    -Editing
    AI can technically help you out in all of these, but it can't replace any of it.

  • @Jeremyak
    @Jeremyak 1 month ago +16

    AI does one thing insanely well, it raises capital.

  • @xasm83
    @xasm83 1 month ago +4

    Good example of a non-software-engineer reasoning about AI. All the input for software requirements is "blurry, vague human language" anyway, and it is already possible to feed tables and lists in as requirements, so the future of coding IS plain English, the same way it is currently used to write requirements.

  • @boo766
    @boo766 1 month ago +3

    How is AI meh? I wish it were meh. But I find its abilities astounding 🤔

  • @MauroRincon
    @MauroRincon 1 month ago

    As a coder myself: it saves time on simple tasks, like reading from a table, performing some regression, and giving back a plot. But you must be very precise in the way you ask for this.

  • @chrisdrake4692
    @chrisdrake4692 1 month ago +5

    @3:10 I think that +41% extra bugs is not high enough - I've been using AI every day for code for a year, and it NEVER gets it properly right. There are ALWAYS edge cases and other subtle things it messes up (no matter how much you tell it what mistakes it made). I suggest that the 41% number is just the bugs they *noticed* - the true number is way higher.
    It's actually a de facto new way to quickly tell if a programmer is skilled or not: if they think AI is more productive, they're not a (good) coder.

    • @codingcrashcourses8533
      @codingcrashcourses8533 1 month ago +1

      I would rather argue that people who just copy-paste code from LLMs (which would lead to numbers higher than 41%) without double-checking and reprompting are not that good at working in a pair with AI. You don't just copy and paste from an LLM; it's an iterative process with manual intervention. Saying people who increase their productivity with AI are bad coders is just a nice cope for people trying to avoid adopting new technology.

    • @chrisdrake4692
      @chrisdrake4692 1 month ago

      @@codingcrashcourses8533 Having done your "iterative process with manual intervention" a few times per week, for the last year at least, including every day last week, I can say with ABSOLUTE certainty that it's both safer and faster to not use AI in the first place. If you've not discovered this yourself yet, it's because you're not doing real work. Put down the silly trivial examples and try doing something real for production. You'll see really fast. It requires _intelligence_ to do logic, not pandimensional lookup tables!

  • @phoenixamaranth
    @phoenixamaranth 1 month ago +3

    The intro comment about companies making models larger in response to errors showcases that you don't understand AI models and machine learning. Hardware limitations were the biggest hurdle to AI being able to form "contexts" like humans do so they can follow along with topics correctly and find relationships. Basically, much like the hardware difference between a human brain and a dog's, if you want the evolved complexity you have to give it the hardware. A dog could likely be as smart as a human if they could hold more contexts and were able to relate more areas simultaneously.
    We currently expect general AI to understand a massively broad array of human topics, and they give back above 90% accuracy. Humans don't even come close, so being dismissive of their current performance is silly. They will already give you more accurate answers than half the human population, depending on context and their training.

    • @bionic_batman
      @bionic_batman 28 days ago +1

      >The intro comment about companies making models larger in response to errors showcases that you don't understand AI models
      The fact that Google alone needs to build 7 new nuclear plants to supply power for their future AI projects proves that the author of the video actually gets it right
      > if you want the evolved complexity you have to give it the hardware.
      Complexity does not mean efficiency. Who cares if something is super complex when you can achieve the same results far more cheaply?

    • @RR-ds4sd
      @RR-ds4sd 28 days ago

      AI is only as intelligent as the humans that created the content it was trained on. But humans will cease to give AI content for free; only dumb humans will. Then AI will get dumb as well, or will remain forever only as smart as mankind was in the 2020s.

    • @phoenixamaranth
      @phoenixamaranth 28 days ago

      @@bionic_batman Not even close. Google's AI serves BILLIONS of people. Keep that in mind. This isn't a service for a handful of people. Your argument is like complaining that we have several power plants serving an entire country. By your argument we should go ahead and shut down all the data centers, including the ones you're using right now because they use so much power...surely all that could be done cheaper, right???
      And no one said complexity means efficiency. The point is growing hardware allows AI to hold more context relationships and give even more accurate and useful answers and results. Something she directly complained about even though AI has over 90% accuracy rates already for a broad array of subjects.
      She doesn't get it and neither do you. We couldn't solve protein folding until AI, so no, there's not cheaper, easier ways to do what AI does. At least not yet.

  • @undercrackers56
    @undercrackers56 1 month ago +13

    I have been writing commercial code since 1979, architecting and writing multi-user database applications and embedded software. I have lost count of the times that the press/media has claimed that professional programmers are no longer needed because of some hot new technology. Good and reliable software is not just about lines of code.

    • @axle.student
      @axle.student 1 month ago +1

      I agree. Just because it runs(executes) doesn't mean that it is correct lol

    • @williambranch4283
      @williambranch4283 1 month ago

      We never get rid of bureaucrats or managers ... bwahaha.

    • @mpsmith35
      @mpsmith35 1 month ago +2

      In my experience, figuring out what is wanted is the problem. In practice, the job is only partially specified at the start of a project and humans fill in the missing bits later. Using AI means you have to get the requirements figured out at the beginning and then it can generate the code - something I have never seen happen in 40 years when the AI were humans!

  • @epicmap
    @epicmap 29 days ago

    As a software developer, I use it only to automate some routines, like mapping tens of fields to other fields one by one, and to generate a skeleton of future code, but I always end up rewriting it. It's just easier to do it right when you can see a wrong solution.

  • @victorkrawchuk9141
    @victorkrawchuk9141 1 month ago +31

    All code is bad code until it's been made to work. If AI doesn't test code, but simply writes sections of code for humans to debug and integrate, then there is no good mechanism for it to learn from its mistakes. This is how I learned it's a bad idea to pull an all-nighter to make system code work, after taking a break to have a few drinks in a bar with some friends.

    • @christoffer886
      @christoffer886 1 month ago +1

      And what if there's an AI model that could cycle through testing software code and make improvements and fix bugs based on its analysis of that testing? We're just at the start of using these tools, and we've yet to see advanced software that utilizes AI in specialized functions like that. Right now we just have the large models, LLMs, which mostly act by being mediocre at a large set of tasks, rather than being specialized for a specific purpose. A model that is specifically made for writing software code might have a lot of further parameters to mitigate errors. And those models will also be improved upon further.
      This is why I look at studies like these, which conclude that performance is bad, with some skepticism, because such conclusions can't really be made about the future if they're based on outdated models. The o1 model from OpenAI is, for example, a lot better than what's been used so far, and it's just the preview version. I'd like to see similar studies that use the o1 model when it's out and in use.

    • @MrWizardGG
      @MrWizardGG 1 month ago

      I think you might just be a bad AI user. I have had AI build several complex web apps and a MAUI multi-platform smartphone app, completely automated.

    • @victorkrawchuk9141
      @victorkrawchuk9141 1 month ago

      @@MrWizardGG I didn't say that AI is poor in software development, just that it would be much better if it could learn from its mistakes by being able to test the code it wrote. I'd say the same thing for any human programmer. I actually think that AI has a bright future. Lacking a status-seeking herd instinct, AI might teach humans to be less divisive in their communications and less inclined to jump to conclusions.

    • @victorkrawchuk9141
      @victorkrawchuk9141 1 month ago

      @@christoffer886 I didn't say that AI is ineffective or has no future in software development, just that a crucial ability to learn from mistakes might be missing until AI can actually test the code that it writes. I'm wondering if AI might someday be used to help convert code from one language to another in an intelligent manner that at least includes a thorough desk-check of the logic of the original code and of the converted code. For example, I think there is still a lot of business software written in the old IBM COBOL language which few people now know and for which maintenance can be difficult. There is probably code-generation software that can convert from COBOL to some other language, but the main part of the work would still involve testing and in preventing long-standing bugs and idiosyncrasies from being carried over from the old code to the new code. If AI-based software development could be applied in this direction then the benefits might be very significant.

    • @nathanbanks2354
      @nathanbanks2354 1 month ago +1

      I often get AI to write tests to verify the code it wrote actually works. Not sure why it doesn't do this by default--I also often have to ask for comments. But yeah, AI code is generated faster, sometimes prettier, but usually not better.

  • @bigutubefan2738
    @bigutubefan2738 1 month ago +5

    Raising PRs is trivial (there's even an API). Getting PRs accepted and merged by a maintainer is not.

    • @Talon5516-tx3ih
      @Talon5516-tx3ih 1 month ago

      Should use an AI for the maintainer. Problem solved.

  • @GeoEmertech
    @GeoEmertech 1 month ago +10

    I own a web dev company and we actually found the opposite. LLMs can be used only by seniors to increase their productivity. Because they can review the code and catch the errors and optimize the code. The juniors will just accept the garbage and waste a lot of time going back and forth with the QA team. But even the seniors won't catch the most insidious errors because LLMs are pretty good at generating errors a human won't think of checking. And then good luck catching it in QA as well. So yeah, for seniors about 20% increased productivity. At the cost of additional risks in operations later on when the surviving bugs make it into production. From the business perspective I don't think I want to accept the extra liabilities.

  • @stillmattwest
    @stillmattwest 1 month ago

    As a software engineer, I think AI tools are sometimes useful, but the bit about it often being easier to write your own code is spot on. A workflow I've gotten into lately is to copy and paste a few things into an LLM and ask it for a function that does X. Now, I know out of the gate that it's not going to produce working code unless I'm doing something trivial, but it's something to get me started. Even if that start is sometimes "um, yeah, no."

    • @I_am_Raziel
      @I_am_Raziel 1 month ago +1

      I tried it in 05/23 and saw that it would take me hours to explain to it what I need. So why learn to talk to "AI" and forget actual coding in the process? I will write the code myself, get a better result, and be faster.
      But that's in a specific environment; I am pretty sure it can be very useful for certain tasks.

  • @blendedplanet
    @blendedplanet 1 month ago +10

    My friend asked how the weather would be today. My answer : That's a great question! Weather affects all of our lives and knowing more about weather patterns can help keep us safe when strong weather patterns threaten our daily lives. After my former friend clarified his question (and I picked myself up off the floor), I said : warm, then repeated everything else. We're no longer friends.

  • @carnezz4384
    @carnezz4384 1 month ago +30

    Good points. I still believe AI is the worst it will ever be. Just 3 years ago if you mentioned AI writing any code at all you'd be laughed at.

    • @raul36
      @raul36 1 month ago +2

      You didn't remember anything. That's exactly what Altman said about AI, so don't take credit for it. 😂

    • @carnezz4384
      @carnezz4384 A month ago +5

      @@raul36 We went from it not existing -> "don't worry guys it won't take your jobs" within 3 years. It's an observation. Everyone can take credit for it.

    • @RealtyWebDesigners
      @RealtyWebDesigners A month ago

      It’s getting kinda awesome.

    • @oompalumpus699
      @oompalumpus699 A month ago

      That's because we haven't hit the limit yet and we are getting there.
      According to Goldman Sachs, AI investments have consumed billions with very little profit.
      Furthermore, that's just blind optimism on your part.
      For example, way back when launching spacecraft was the hottest trend, people thought:
      If we can send spacecraft out into the cosmos with just a few megabytes of RAM, imagine what we can do with gigabytes of it!
      Fast forward to today and a gigabyte of RAM can barely run Google Chrome.
      The future of AI is not centralized AI that you have to pay subscriptions to access.
      The future of AI is being able to create your own the same way we create applications.
      In-house, without requiring internet access, and without being subject to price gouging from big corpos.
      LLMs are boring. The AI development that excites me is wetware.
      There's a company that uses brain organoids combined with AI.
      Currently, only select people have access to it. Subscription model, though.
      But if the tech becomes public, we should be able to grow our own cyborgs in a garage soon.

    • @levilukeskytrekker
      @levilukeskytrekker A month ago +2

      Actually, based on research out of the UK and Japan, AI is getting steadily worse. It's a mathematical hat trick that relies on scraping stupidly huge quantities of data from the internet, and it has also been used to flood the internet with AI-generated content. Training it on AI-generated content automatically makes it worse (an unavoidable downside of the hat trick), and since Silicon Valley moves fast and breaks things without bothering to label billions of images and lines of text as AI-generated, the (probably illegal) magic trick of pirating the internet to feed a torturously misapplied sorting algorithm is becoming inevitably less and less effective. This is the best "AI" will ever be (at least if we continue to use a sci-fi term for mistakenly employed sorting algorithms we've had for well over a decade).

  • @GabrielFranciscoss
    @GabrielFranciscoss A month ago +26

    As an AI researcher and senior software engineer, I would say it’s hard to confirm anything for now. I do think we’ll achieve fully autonomous coding sometime in the near future, but probably not with a single model. That’s the key: specialized agents can potentially tackle specific problems better than a single large model, and certainly better than many junior and mid-level engineers. I can say with 100% confidence that I spend considerably more time reviewing code from junior engineers than fixing what an LLM did wrong, and that’s what scares me. I believe that once we achieve multi-agent autonomy, it’ll be a no-brainer for companies to hire only senior engineers and use agents as their junior engineers

    • @ANTICHRIS619
      @ANTICHRIS619 A month ago +7

      Exactly my point. Whenever someone says "oh, nothing to worry about with AI, just keep grinding leetcode and keep learning the MERN stack, there are tons of jobs for freshers and junior software developers in the future"...
      The development of AI doesn't mean developers' jobs will go extinct, but the field will slowly but steadily shrink to just senior-level jobs, leaving lots of newcomers to software feeling betrayed or cheated.

    • @vicdreyer6413
      @vicdreyer6413 A month ago +8

      And how long will it be, exactly, before there are no more juniors progressing through the ranks to become seniors, to fill the requirement for all those seniors?

    • @davidmackie3497
      @davidmackie3497 A month ago +2

      @@vicdreyer6413 It's the tragedy of the commons. In the old system, as you point out, coders progress from junior to senior, in a sort of apprenticeship system. If a system exists to do without the apprentices, then any business that still uses the old apprenticeship system will go out of business. But of course, as you point out, the new system is doomed in the long term. We've seen this already in many construction and factory jobs, as robots replace the apprentices.

    • @katrinabryce
      @katrinabryce A month ago +2

      I'm not quite old enough to remember when they made the same claims for COBOL. But they seem to make the same claims for every new computer-related technology that comes out, and I am old enough to remember a lot of them.

    • @Justashortcomment
      @Justashortcomment A month ago

      This will possibly never occur, but what will occur is this: the laws of supply and demand imply that demand for juniors will go down, and so will pay.

  • @iguanaamphibioustruck7352
    @iguanaamphibioustruck7352 14 days ago

    I agree with you totally. It is like setting up an R&D company in the Library of Congress. They do not invite or deal with new technology. Every year they survive, they fall exponentially further behind.

  • @VaShthestampede2
    @VaShthestampede2 A month ago +12

    From my perspective, it's a large productivity enhancer. It's like having a domain expert for pair-programming sessions, one who would cost hundreds of thousands of dollars to have on staff, available for $20 a month.

  • @ClockworkGearhead
    @ClockworkGearhead A month ago +5

    Not the first AI winter, won't be the last.

    • @41-Haiku
      @41-Haiku A month ago

      We are in an AI spring. It will not be winter next year, as much as I would hope so.

  • @treelineresearch3387
    @treelineresearch3387 A month ago +11

    I'm not disillusioned because this is pretty much how AI has gone for the last 50+ years. A big boom cycle where it's gonna TaKe OvEr ThE WoRlD and then an "AI winter" where the megafunding dries up and it just goes back to a research thing for a decade or two. Specifically with code, I just see it as another automation/boilerplate generator, but one that has to be hand-checked because it can't be relied on to always generate a correct answer. In particular it's way too likely to just make up API endpoints with plausible names that don't actually exist in the library you asked about.
    The best use for it as it exists now, I think, is loading documentation for what you're working with into the model's context and using it as a search/summary engine, since what we're calling "AI" is much closer to a search engine than anything else.

    • @satanpills6232
      @satanpills6232 A month ago

      The difference is that now hundreds of billions of dollars are being spent, more than ever.
      Also, even Joe Biden told the UN that in the next 2 years we will see more development than in the last 50.
      If the Senate and all this money are behind something, it's at least going to deliver more than you think.

    • @spaceowl5957
      @spaceowl5957 A month ago

      I agree it’s great for giving you an intro to a concept or pointing you in different directions but I rarely let it code except for very simple and generic subroutines.

    • @MrWizardGG
      @MrWizardGG A month ago

      @spaceowl5957 I've let it code entire web apps and even a multi-platform smartphone app with .NET MAUI. It literally does the entire job for you.

    • @squamish4244
      @squamish4244 A month ago +2

      The past of AI is no guide to the future anymore. Many things are different, including the information available to train the datasets, the volume, power and efficiency of the compute, the money being invested etc. Just because there have been booms and busts in the past doesn't mean that cycle will continue indefinitely.

  • @demovideos187
    @demovideos187 18 days ago

    It’s about reducing toil. Like how word processors reduced the toil in desktop publishing. Machines won’t code for machines, just like machines didn’t publish newsletters for machines to read. Even if technically, sure, they could.

  • @slothmasterjack9646
    @slothmasterjack9646 A month ago +22

    Idk… I am not an expert coder but I do have to code a lot for my research (applied physics in medical devices), and in the last 4 months have started using AI to great effect. The CS professionals I work with use it a lot more than I do. Perhaps we are outliers, or perhaps these studies are already outdated.

    • @Lukas-hb2dk
      @Lukas-hb2dk A month ago +12

      It's just another one of Sabine's blind-spot videos. Just give it 1-2 years and she'll upload a vid saying she was wrong.

    • @nikolayhidalgodiaz9463
      @nikolayhidalgodiaz9463 A month ago +2

      Or the tool used in the study was inferior to the best available on the market.

    • @MrNocturne260
      @MrNocturne260 A month ago +2

      AI code does indeed suck, even at the highest level. It's just good for beginners. Experienced programmers use it because it's faster (and that's fine in some cases), not because it's better.

    • @carlosgomezsoza
      @carlosgomezsoza A month ago +4

      I think it is just bias. There are also studies showing AI's impact as productive and impactful, but in this video only the negative ones were covered. I also use AI daily and lead a small software engineering team, and overall AI helps us a lot.

    • @raybod1775
      @raybod1775 A month ago +1

      @@carlosgomezsoza Glad that AI is helping; it seems like a lot of people are finding AI useful for many tasks, and you give me hope. My problem is my need to have a full grasp of how to do AI before jumping in. Fortunately, there are a lot of YouTube videos, research papers, etc. available… getting there.

  • @dilaisy_loone2846
    @dilaisy_loone2846 A month ago +17

    I don't understand how people who write code are excited about it. Like, I've studied algorithm for 6 years and I was never surprised nor impressed by generative AI. After all, it's always been a thing. Now it's just bigger.

    • @StankHunt42
      @StankHunt42 A month ago +1

      You might be surprised by the next product release from ChatGPT, as it's moving to chain-of-thought (CoT) prompting, which mirrors human reasoning by solving problems systematically through logical deductions. So abstractions and hallucinations will end up being solved.

    • @honestlocksmith5428
      @honestlocksmith5428 A month ago +3

      Try studying programming and find out.

    • @dilaisy_loone2846
      @dilaisy_loone2846 A month ago +1

      @@StankHunt42 That's still not impressive. It's a complex algorithm.

    • @RealtyWebDesigners
      @RealtyWebDesigners A month ago +1

      It’s because you’ve only studied algorithm. One algorithm. 😂

  • @ramshambo2001
    @ramshambo2001 A month ago +4

    This is vastly oversimplifying the situation; like any tool, it really matters how you use it. I think it's kind of ridiculous to make these widespread, vague conclusions so early based on so few studies.

  • @fxbehr
    @fxbehr A month ago

    2:39 This was by far the easiest and most complete explanation of a pull request I have come across 👍

  • @PredaBoon
    @PredaBoon A month ago +9

    As a software engineer, this has been my experience with Copilot. I ask it to generate a piece of code for a method that calls a library with certain parameters. It gives me the code, I put it in, and the compiler gives an error about how the method doesn't take such parameters. I tell Copilot that, and it apologizes and gives another one. Also a failure.
    Where it DID help, though, is with something new that I have no idea about, where it gives me a summary instead of my having to read pages of documentation.

    • @Nasox
      @Nasox A month ago +3

      It's also very useful for more complex but well-known algorithms. One time I coded a ray tracing algorithm and made a mistake in my calculations. After hours of looking, I threw everything into ChatGPT and it found my mistake immediately: I had switched the x and the y.
      It's good as long as you don't try to do your entire job with AI. Just use it for error handling, code reviewing, and simple algorithms.
      PS: The new voice chat of ChatGPT is extremely good for learning a new language.
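
      A swapped x and y is exactly the kind of slip that compiles, runs, and renders something almost right. Here is a minimal, hypothetical illustration (not the commenter's actual code) of a pixel-to-camera-plane mapping, with and without the swap:

```python
def ray_direction(px, py, width, height):
    """Map pixel (px, py) to camera-plane coordinates in [-1, 1]."""
    x = 2.0 * (px + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (py + 0.5) / height  # flip so +y points up
    return (x, y)

def ray_direction_buggy(px, py, width, height):
    """Same mapping, but with px and py accidentally swapped."""
    x = 2.0 * (py + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (px + 0.5) / height
    return (x, y)

# Off the diagonal the two disagree; on the diagonal of a square
# image they agree, so a quick glance at a render can miss the bug.
assert ray_direction(1, 0, 2, 2) != ray_direction_buggy(1, 0, 2, 2)
assert ray_direction(1, 1, 2, 2) == ray_direction_buggy(1, 1, 2, 2)
```

      Because the buggy image is merely transposed rather than garbled, this is the sort of mistake that can survive hours of visual inspection.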

    • @MrWizardGG
      @MrWizardGG A month ago

      Copilot isn't the only Ai coding tool. Aider Chat makes fully functioning apps.

  • @annaczgli2983
    @annaczgli2983 A month ago +42

    This is great news. Means that developers jobs aren't going away - they'll be fully employed fixing all the bugs introduced by AI.

    • @strumyktomira
      @strumyktomira A month ago +5

      I may disappoint you, but there are already more AIs that fix the mistakes of the first AIs :D

    • @psychohist
      @psychohist A month ago +2

      Oh great, fewer jobs coding, more jobs debugging.

    • @strumyktomira
      @strumyktomira A month ago

      @@psychohist Even better! No jobs at all! :D

    • @semkjaer3581
      @semkjaer3581 A month ago

      @@strumyktomira They will just use a multi-agent system where one instance of GPT generates the code and another checks/runs it and sends it back if it's wrong. Pretty sure that's essentially what ChatGPT o1 is. Problem is it might still end up not working.

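      The generate-and-check loop described above can be sketched in a few lines. Everything here is hypothetical scaffolding: `generate` and `check` stand in for a real model call and a real test runner, and the toy versions below are rigged to converge after one round of feedback.

```python
def refine(generate, check, max_rounds=5):
    """Loop: generate code, check it, feed the critique back."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(feedback)   # stand-in for an LLM call
        ok, feedback = check(code)  # stand-in for compile + test run
        if ok:
            return code
    return None  # as the comment says: it might still not work

# Toy stand-ins: the "generator" only fixes its bug after feedback.
def toy_generate(feedback):
    if feedback is None:
        return "def add(a, b): return a - b"  # first, buggy attempt
    return "def add(a, b): return a + b"      # corrected attempt

def toy_check(code):
    ns = {}
    exec(code, ns)
    if ns["add"](2, 3) == 5:
        return True, None
    return False, "add(2, 3) should be 5"

assert refine(toy_generate, toy_check) == "def add(a, b): return a + b"
```

      The caveat in the comment is exactly the `return None` branch: nothing guarantees the loop converges within the round budget.
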
    • @Mallchad
      @Mallchad A month ago

      @@strumyktomira Great. Now you need a person who knows how to set up two AIs on a codebase.

  • @TheBalbrigganTelescope
    @TheBalbrigganTelescope A month ago +4

    Studies on AI have all had the same flaw so far: they're not keeping up with the models. With a new model from multiple companies every six months while studies take at least a year, I'm led to one conclusion - only AI can collect and analyze data fast enough to make a relevant peer-reviewed study on AI.

    • @-PureRogue
      @-PureRogue A month ago

      True. That is actually my concern with AI: while everyone is afraid of how AI will build robots to conquer the world, I am more worried about whether people are psychologically able to keep up with the change and follow it, and in general what it will do to society.

    • @ifcoltransg2
      @ifcoltransg2 A month ago +2

      Specific models change so quickly, but the claims accompanying them are remarkably consistent: AI will replace everyone at everything ever. GPT-3, GPT 3.5, GPT 4, o1... There's no doubt models are getting more impressive, although until the papers are in, I'm going to remain sceptical of the claim.

    • @TheBalbrigganTelescope
      @TheBalbrigganTelescope A month ago

      @@-PureRogue Yes, there was a photo the other day of a giraffe getting rescued, posted on FB. It was obviously AI-generated to me, but not to baby boomers; it got 20k likes lol

  • @roccociccone597
    @roccociccone597 28 days ago

    Yeah, as a software engineer I've known this for a long time… it's funny to watch everyone catch up with that piece of knowledge.

  • @goodspellr1057
    @goodspellr1057 A month ago +15

    In my experience, junior coders are less likely to thoroughly test their code before issuing a pull request. They think that if it runs, it must be working. They perform fewer "sanity checks" on known inputs/outputs, and do not consider as many edge cases as senior coders.
    This means that the number of pull requests from junior coders scales with the speed at which they write untested code. It makes sense that the ability to copy-and-paste code (even bad, wrong code) would increase the number of their pull requests. That's not really "productivity", as bad code has complicated bugs that end up requiring fixes that can take way more time.

    • @traumflug
      @traumflug A month ago +1

      Didn't you know? "Works for me!" is the highest level of quality imaginable.

    • @mrpocock
      @mrpocock A month ago +2

      OK, so one way to address this is for the tooling to automate test case generation. Make it part of the IDE that as you type, it suggests invariants and edge case values, and makes them part of a test suite as you type. Make it difficult to write untested code.
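
      As a rough sketch of what "suggest invariants and edge cases as you type" could look like underneath, here is a hand-rolled checker using only the standard library. The function under test, the invariants, and the edge-case pool are all illustrative assumptions, not any particular IDE's feature:

```python
def check_invariants(fn, cases, invariants):
    """Run fn over a pool of inputs and report violated invariants."""
    failures = []
    for case in cases:
        result = fn(case)
        for name, holds in invariants:
            if not holds(case, result):
                failures.append((name, case))
    return failures

# Function under test: scale a list of numbers so it sums to 1.
def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]

edge_cases = [[1, 2, 3], [5], [0.1, 0.9]]
invariants = [
    ("sums to ~1", lambda xs, r: abs(sum(r) - 1.0) < 1e-9),
    ("same length", lambda xs, r: len(r) == len(xs)),
]

assert check_invariants(normalize, edge_cases, invariants) == []
# Adding [] or [0, 0] to the pool would crash with ZeroDivisionError:
# exactly the kind of case an auto-generated suite should include.
```

      Property-based testing libraries such as Hypothesis automate the case-generation half of this; the hard part the comment points at is suggesting the invariants themselves.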

    • @traumflug
      @traumflug A month ago +2

      @@mrpocock Good point. For testing and analysis an AI would make sense.

    • @igori3532
      @igori3532 A month ago +1

      Why are devs not generating tests using AI? Why are PRs accepted without tests?

    • @traumflug
      @traumflug A month ago

      @@igori3532 Writing tests is just as much work as writing the application itself. Tests are also just as buggy as the tested application. Tests can test only trivial things. A meaningful test would e.g. evaluate whether a login mechanism is secure: pretty much impossible to write. In short: tests are totally overhyped.

  • @johngiraldi1150
    @johngiraldi1150 A month ago +36

    ChatGPT apologized to me after providing code that didn't work because of a non-existent method in a standard Python library.
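
    A typical instance of this failure mode (the specific method here is my example, not necessarily the one the commenter hit): a plausible-sounding call like `Path.read_json()` does not exist in the standard library, while the real spelling combines two APIs that do exist:

```python
import json
import tempfile
from pathlib import Path

path = Path(tempfile.mkdtemp()) / "config.json"
path.write_text('{"retries": 3}')

# A hallucination-style one-liner might be:
#     data = path.read_json()
# which raises AttributeError: Path has no read_json() method.

# The actual standard-library spelling:
data = json.loads(path.read_text())
assert data["retries"] == 3
```

    The invented name is perfectly sensible, which is exactly why it slips past a quick review and only fails at runtime.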

    • @harmless6813
      @harmless6813 A month ago +10

      Yeah, it likes to make things up. That's not limited to coding, but it's not exempt either. In its defense, the functions it invents are well-named and would be really useful if they existed. ;-)
      But I hear the latest o1 model is a lot better at coding. (Can't test because I'm not paying for it.)

    • @thenonsequitur
      @thenonsequitur A month ago +7

      @@harmless6813 I pay for o1-preview. It does produce better output than 4o, but it is still very prone to make stuff up. It still calls functions that don't exist and it produces broken code for anything non-trivial.

    • @nathanbanks2354
      @nathanbanks2354 A month ago

      @@harmless6813 Yeah, o1-preview is slightly better. Claude 3.5 Sonnet is better than GPT-4o, and it's also free. I think o1-preview may be better than Claude, but a couple weeks ago I turned to my free Claude account because I had used my 50 queries to o1-preview that week. It solved the issue I was having with emulating an ARM processor with QEMU; GPT-4o couldn't figure it out.

    • @Ivan.Wright
      @Ivan.Wright A month ago +3

      @@thenonsequitur The worst part about o1-preview in my experience so far is how much extra stuff it tries to implement without me asking. It's very eager to go above and beyond but it puts the cart before the horse half the time.

    • @alexanderpoplawski577
      @alexanderpoplawski577 A month ago +1

      I had the same experience trying to modify a CUPS config file. It came up with non-existent parameters. I told it this parameter doesn't exist, and the answer was: yes, this parameter is not defined for the CUPS config.

  • @TheDidier1969
    @TheDidier1969 A month ago +7

    Hi Sabine, another great video, thanks for all the previous ones 🤓
    Personally, I use artificial intelligence to help me research algorithms for specific contexts. I explain the ins and outs to it, which allows me to obtain keywords for future searches on Wikipedia, for example.
    But indeed, the first few times I copy-pasted the code of such an algorithm, I realized that if it was buggy, I still had to understand it from A to Z in order to debug it. So not much benefit there.
    On the other hand, reading is, in my opinion, easier than writing. Therefore, a code proposal allows me to evaluate it if I take the time, and possibly identify bugs preemptively.
    So I definitely think it is beneficial, but good luck to companies hiring copy-pasters!😁

    • @SabineHossenfelder
      @SabineHossenfelder  A month ago +4

      Thanks for sharing your experience. I see what you mean, I think, which is why I said it probably has some good use cases; it's just that currently people are using it both for cases where it makes sense and where it does not make sense. So I would suspect the benefits will increase in the future when they find out how to use it better. What do you think?

    • @aaronjennings8385
      @aaronjennings8385 A month ago +1

      @SabineHossenfelder There's room for improvement!

    • @3DisFuntastic
      @3DisFuntastic A month ago

      Same experience on my side

    • @harmless6813
      @harmless6813 A month ago

      @@SabineHossenfelder It is already massively useful. I wonder what the expectation here was. I mean, I can explain a problem to a machine in plain text and it responds with (almost) working code. That would have seemed like a miracle twenty years ago.
      Also, the improvements between AI versions are immense, so I really don't get the pessimism.

  • @DevidasBhobe
    @DevidasBhobe 25 days ago +1

    I love coding along with AI; it helps me focus on the strategy and delegate the tactics.