Large Language Models and The End of Programming - CS50 Tech Talk with Dr. Matt Welsh

Поделиться
HTML-код
  • Опубликовано: 21 ноя 2024

Комментарии • 2,1 тыс.

  • @amansahani2001
    @amansahani2001 Год назад +657

    "People, writing in C is a federal crime in 2023" is the most misleading statement, Man how you design low latency embedded systems without C? Lot of low level devices are depenedent on C. Even Tesla FSD or Autopilot uses C++. IOT devices use C.

    • @happywednesday6741
      @happywednesday6741 Год назад +59

      No one cares bro

    • @anilgandhi
      @anilgandhi Год назад

      Tesla is going to rewrite 300k lines of code using neural networks, no more C or C++.

    • @easygreasy3989
      @easygreasy3989 Год назад +28

      I bet u I can get my gran to type that into GPT4 and would do better than what ur whole team could do 2 years ago. U better hold on bra, I don't think ur ready. 😶

    • @amansahani2001
      @amansahani2001 Год назад +160

      @@easygreasy3989 bruh, go and ask your GPT Boi to write assembly code for newly designed chips from any vendor. Those LLMs can't generate code outside of the scope of training data. If you've written the LLMs from scratch or at least read the paper then you know what I'm talking about. Else I strongly suggest you go and study CS 182.

    • @happywednesday6741
      @happywednesday6741 Год назад +18

      @@amansahani2001 God of the gaps my guy, soon an AI will be better at that too, why wouldn't they?

  • @miraculixxs
    @miraculixxs Год назад +367

    'See I don't know how it works and I'm ok with that' - that pretty much sums up the presentation.

    • @LarisaPetrenko2992
      @LarisaPetrenko2992 9 месяцев назад +23

      Yeah, you don't have to know every detail of a Honda, just buy it and drive it

    • @rmsoft
      @rmsoft 9 месяцев назад +15

      Well, you can get pieces of code and I've done it already, chatting with chatgpt helps a lot to get inside once you ask right questions. This presentation is just babbling, I'm waiting for full useful application development presentation using AI.

    • @contanoiutube
      @contanoiutube 9 месяцев назад

      @@LarisaPetrenko2992but then don’t call yourself a car engineer

    • @CapeSkill
      @CapeSkill 9 месяцев назад

      @@LarisaPetrenko2992 you can drive it, but you cannot lecture people about how it works and how it's going to revolutionize the ''future''

    • @davidlee588
      @davidlee588 9 месяцев назад

      @@LarisaPetrenko2992 but people who built Honda know every detail of a Honda.

  • @linonator
    @linonator Год назад +537

    I get the clickbait title but it can be really discouraging to people who are thinking about getting into software engineering. “Like why even try if ai is gonna do it?”
    Mainly because it’s coming from an institution like this. I know it’ll take time to eventually get there but A lot of people have already lost hope and new students thinking about joining may just turn a different direction
    Note: I’m not speaking of myself here, I’m a senior engineer and I volunteer at coding camps on weekends and tutor online and I get this sentiment from the people I coach and teach. When you’re completely new to a field and you see things like this from a reputable institution along with all the hoopla of tech bloggers online, it does discourage many people from trying to enter this field.

    • @samk6170
      @samk6170 Год назад +50

      perhaps, but such is reality.

    • @sineadward5225
      @sineadward5225 Год назад +38

      Still, 'everyone should learn to code' is valid. Just do it anyway for your own intellectual development. No point in trying to blame a video title for not doing something. Just do it.

    • @Boogieeeeeeee
      @Boogieeeeeeee Год назад +7

      It's the presentation name, bud. Don't get discouraged, presenters often put a clickbaity title but then debunk said title during the presentation. In any case, it's what this guy wanted to call his presentation, can't really fault Harvard for it.

    • @fintech1378
      @fintech1378 Год назад +10

      we've got to face this 'harsh' reality head on, there is nothing you can do

    • @phsopher
      @phsopher Год назад +76

      Somwhere in 1889: Welcome to my talk titled "Cars and the end of horse carriages".
      Someone in the audience: Very mean and dicouraging title, dude, what about all the people who want to become a horse carrage driver?

  • @donesitackacom
    @donesitackacom Год назад +1102

    "AI will replace us all, anyway here's my startup"
    Exactly 8 days later, OpenAI released a single feature (GPTs) that solved the entire premise of his startup.

    • @CGiess
      @CGiess Год назад +61

      So true hahahaha

    • @tomasurbonas5835
      @tomasurbonas5835 Год назад +26

      Oh my god, thought exactly the same!

    • @miguelfernandes6533
      @miguelfernandes6533 Год назад +78

      Funny thing is he said programming will die but it was exactly through programming that the new feature that solved the premise of his startup was created

    • @KP-sg9fm
      @KP-sg9fm Год назад +93

      Which just further reaffirmed everything else he said. Too many people are coping right now, LLM's are gonna put a lot of people out of work, not just programmers. I work customer service and internally I am freaking out right now.

    • @ste1zzzz
      @ste1zzzz Год назад +29

      so he was correct, AI will replace us all ))

  • @fredg8328
    @fredg8328 Год назад +362

    That reminds me when I was in middle school. My teacher had to teach us how to program in Basic but he really didn't want to. So he simply told us "in 2 or 3 years we will have speech recognition so you don't need to learn programming". That was 35 years ago... That's a bit bold to tell that programming languages have not improved the way we code in 50 years and to think AI will save us.

    • @dansmar_2414
      @dansmar_2414 Год назад +15

      one day they will get it right

    • @vladimir945
      @vladimir945 Год назад +37

      I remember one of my teacher, while not been bold enough to speak about speech recognition in the early 90-s, saying that there are _already_ only system programmers left, the application programmers have been made obsolete by - are you ready for it? - SuperCalc, a spreadsheet software for MS-DOS and such. Makes me wonder, now that I think of it, why would there still be a need for system programmers if MS-DOS was already a sufficient operating system for the only applied task that was left - the one of running SuperCalc...

    • @edmundkudzayi7571
      @edmundkudzayi7571 Год назад +4

      You've clearly not used Grimoire. It's game over.

    • @IAAM9
      @IAAM9 Год назад +8

      Most probably You have not used AI enough, its magical in some sense. Soon you will realize give it a year or two

    • @raylopez99
      @raylopez99 Год назад +3

      But speech recognition is really good these days...it just took about 10-35 years, depending on how 'good' you think 'good' is (I recall speech recognition that was decent about 25 years ago).

  • @imba69420
    @imba69420 Год назад +402

    LLMs are going to replace idiots doing stupid talks 100%.

    • @DarthKumar
      @DarthKumar 9 месяцев назад +2

      Lmao 😂😂😂

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un 9 месяцев назад +5

      natural language programming is a thing now accept it

    • @gdwe1831
      @gdwe1831 9 месяцев назад +9

      ​@@DipeshSapkota-lo3un natural language is imprecise and makes a poor programming language.

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un 9 месяцев назад +2

      Yes i get it but which basically means we don't need to have software cycle anymore. all those clean code rules for dev to dev visibility is not required now since just need to understand what is the function doing and for that dev will be there 😉 what matters now is input output and definition of function and that's what the business wants too !

    • @imba69420
      @imba69420 9 месяцев назад +9

      @@DipeshSapkota-lo3un Tell me you've never touched code without telling me.

  • @ryanxaiken
    @ryanxaiken Год назад +43

    Do not be discouraged.
    Enjoy life and study what you are interested in. Everything else will fall into its rightful place. Tomorrow is not guaranteed, do not fret about things beyond your control.

    • @pradhyumansolanki6509
      @pradhyumansolanki6509 6 месяцев назад

      correct because i thinks its dumb to think so far ahead when we don't even understand how ai work internally or how we are going to take data or if more computing is actually going to help, Dr. Matt Welsh does not know how the algorithm( the most important part) is going to be created ,there are a lot of other thing where he says i believes which is not so reliable (specially when choosing your career )

    • @SaurabhSingh-fr8yi
      @SaurabhSingh-fr8yi Месяц назад

      Story of the Chinese farmer........AlanWatts

  • @fayezhesham1057
    @fayezhesham1057 Год назад +264

    I think it's time for Dr. Matt and his team to pivot away from fixie's custom chat GPT idea after OpenAI released GPTs.
    How unexpected!

    • @castorseasworth8423
      @castorseasworth8423 Год назад +15

      I was thinking the same. It is basically the GPTs concept, although Fixie’s AI.JSX still offers seamless integration into a react app. Let’s see OpenAI’s response to that

    • @merridius2006
      @merridius2006 Год назад +23

      @@rahxl while you are right it doesn't mean he's wrong

    • @brandall101
      @brandall101 Год назад

      @@castorseasworth8423So you can just use their Assistant's API and create a React front-end on your own.

    • @TransgirlsEnjoyer
      @TransgirlsEnjoyer Год назад +16

      @@rahxl whether he does it or somebody else, it is immaterial, openAI just proved his concept was right and worthy. he is already successful while u need to find a good job

    •  Год назад

      @@merridius2006 ​ @TheObserver-we2co this is not scientifically correct, a program written for a given task X can be written (and exist in hardware) so its the theoretical most performant solution, while an AI can cost a million times more to run the same task, take for example "2+2", at the same time, a program is a crystallized form of ontology and intelligence, that means, instead of reasoning the solution on every execution, programs grow as a library of efficient solutions that dont need to be thought over and over again, in the future is programming languages what will remove the need to write code, as we aproach an objective description of computable problems that we will be able to write for the last time, in a way we already did this with libraries (in a disorganized way), and obviously we will use AI to help write these programs, but because we will solve these problems a single time for the infinite we will review and read and write them ourselves as a way of verification, just as today. After that we will use an optimized form of AI that maps these solved solutions on user request, but interfaces will also be mature enough (think of spatial gesture and contextual interfaces) to make speech obsolete. Current LLMs are more a trend of our current times than the ideal, efficient, unfallibe solution we need to standarize on all aspects of society from IT.
      If all the software thats already running in your computers would run using AI, it would cost thousands more in energy and time, software is already closer to the theoretical maximal efficiency, the ideal software is closer to solved math than to stochastic biology or random neuron dynamics. Training better a model wont solve any of these things.
      And AIs that evolve into more performant solutions are statistical models programmed into known subsets of the problem after the mathematical model of the problem is understood enough to do that, is the same as we have already done since forever, statistics like that used in modern LLM have always been used in computers and are part of what programs are required to do.
      Just imagine if every key we pressed were interpreted by AI just to reach your browser.
      Along all these, we still have a lot of work to do, i would say we have only written a third of all the software that we need in the world, and at the same time, almost all the software that already exists needs to be rewritten in new languages more closer to the new level of abstraction and ontological organization described here, given time all code in c++ will be moved to rust, and rust will be replaced by an even better language, and no institution will just let you do it with AI and not read or understand what it did.
      Just go study and stop being silly thinking you know what programming is without any real experience in the field, all these opinions come from marketers, hustlers, wannabes, teenager ai opiniologists and doomers.

  • @kpharck
    @kpharck Год назад +53

    Law is written in plain English too. For reproducible results, the limit of input precision will lie where the modern legal jargon reaches it's least understandable form. You will be left with an input that is still as hard to comprehend as a programming language text, but much less precise. Good for RUclips descriptions perhaps, but not for avionics.

    • @oldspammer
      @oldspammer 9 месяцев назад +1

      The constitution and most contracts are in legalese which looks like English but is strictly NOT. To know and appreciate fully what is said in legal documents, you must use a legal dictionary. Capitalization is often key. Amature researchers have uncovered much-hidden history by seeing what is said and meant in older legal documents. The world turns out to be more nuanced than I thought by the lectures by these legal scholars telling us what the elite have in store for us.
      Here is an example,
      London the strawman identity youtube
      You have a person, you are not a person. A person is a legal fiction--legal paperwork of identification issued by the government. Ergo, you have a person, you are not a person. That is why a corporation is considered a person and has personhood--it is all about legal fictions written in all capital letters--in the dead handwritten on an individual's tombstone.
      Some tricky legislation was at one time written in a hidden way in some foreign language so that the public would be much less likely to discover what trickery was being done by their so-called elected officials. This was in the 1600s in order to reduce the power of the church and increase that of the crown which turns out to be the inns of court of the crown temple in the City of London that is a separate state than England or UK similar to how the Vatigan in Rome is its own city-state, and that of Washington DC that is its own city-state.
      This was all explained years ago in a video on RUclips that gave away many secrets so likely it is banned now. but few watched the entire video because of TLDR.
      I found a copy still on RUclips:
      Ring of power - Empire of the city [Documentary] [Amen Stop Productions]

    • @mikecole2837
      @mikecole2837 8 месяцев назад +3

      ie if Product Managers could specify what they wanted with enough precision to create a product, they would be coders.

    • @gaditproductions
      @gaditproductions 7 месяцев назад

      law will be impacted heavily. But law has a human aspect - the motivational speaker and projection and questioning a witness with emotional appeal...that's the difference and why its safer.

    • @oldspammer
      @oldspammer 7 месяцев назад

      @@gaditproductionsThere is a difference between a living individual, a machine, and an entity with personhood such as an immoral & immortal corporation who holds the debt of people, and nations that cannot be repaid due to usury compounded semi-annual interest charges.
      What if all money in existence was borrowed as debt into existence? Well, that is what has ended up happening as a trick of financial mathematics--the implications of which simple folk do not appreciate the implications, so vote for more government free stuff with their hands out waiting.
      Patrick Bet David of Valuetainment breaks down the information regarding the hyperinflation seen in Venezuela and what other countries did when they saw this same thing happening to them, namely Israel got rid of practically all its debt and so has one of the lowest rates of inflation.
      Lower standards of living are on the way if one is not careful who one has been representing them in Government.
      I had an epub formatted book. I used the ReadAloud Microsoft store app read it to me. It horribly mispronounced some specific word when reading back the material therein. The book was from 1992.
      Here are some of the epub formatted docs in my downloads folder.
      Lords of Creation - Frederick Lewis Allen
      The Contagion - Thomas S. Cowan
      The Gulag Archipelago, 1918-1956. Abridged (1973-1976), Aleksandr Solzhenitsyn
      Votescam of America (Forbidden Bookshelf) - James M. Collier
      Wall Street and the Russian Revolution, 1905-1925 by Richard B. Spence
      The individual voice types in the Windows TTS system determine how to break into syllables each word, and to pronounce well or badly any given word. The word that came out very badly, I believe, was "elephantine." Sometimes some of these TTS voices use online AI to assist in the pronunciations and smooth transitions between sentences, pitch of voice elevation during questions and so forth. Obviously, if there was a Nuke or EMP, the entire power grid would go down for decades unless the well intending people rebuild everything overnight without the build back better destroyers holding them back from doing so.
      As such, it might be better to have each computer holding a small chunk of civilization and enlightenment, lest it all be lost should a key datacenter be targeted directly.
      What safety precautions have your local officials done? How about your electric grid suppliers--what safeguards are in place to get everything back running after there has been no phones, no power grid, no gas station pumps working, no diesel truck fuel pumps running, no credit card transactions, no banking, and so on?
      I asked an AI about EMP precautions. I suggested wrapping spare electrical transformers and generators in metal wrap--thick aluminum foil layers, then burying them somewhat deep in the ground to reduce pulse damage. It said that the foil had better be thick enough and very well grounded to displace the electrical energy.

  • @alborzjelvani
    @alborzjelvani Год назад +413

    The example with Conways game of life does no justice to the 50 years of programming language research he refers to. Also, Rust was designed to overcome the memory safety problems that plagued C and C++; it is a programming language that emphasizes performance and memory-safety. Programming languages like Fortran and C were designed the way they are for a very specific reason: They target Von Neumann architectures, and fall under the category of "Von Neumann programming languages". The goal of these languages is to provide humans with a language to specify the behavior of a Von Neumann machine, so of course the language itself will have constructs that model the von Neumann architecture. Programming languages like Rust or C do exactly what they were designed to do, they are not "attempts" to improve only code readability for Conways game of life when compared to Fortran.

    • @hanielulises841
      @hanielulises841 Год назад +8

      Totally agree your comment

    • @datoubi
      @datoubi Год назад +12

      well they could become irrelevant though. Because the programming language of the future probably looks like minified JavaScript and will be designed by AI for AI.

    • @true_xander
      @true_xander Год назад

      @@datoubi good luck with that, see you in 10 years. Humans should not loose control over their own life and things that life depends on. As soon as they do, they'll become slaves of their own technology. And despite there still won't be a cent of consciousness in a machine in 50 years, if humans will loose the ability to understand the software on their own without "AI" help, it could quickly become a tragedy because of 1000 other reasons than the comic-book 'machine revolt'.

    • @ruffianeo3418
      @ruffianeo3418 Год назад +14

      If a natural language were such a SUPERIOR specification language, there would not be on going efforts to find working specification languages. What he claims is, that plain english is the best you can ever get :)

    • @wi2rd
      @wi2rd Год назад +12

      True, yet non of that is an argument against his point.

  • @firefiber8760
    @firefiber8760 Год назад +195

    I genuinely cannot understand how humans are just... incapable of thinking of the future. Like, the idea of 'just 'cause you can, doesn't mean you should' is just so much the case, right now. But nope, because we can, we will.
    Okay, so we all slowly forget how to program, and we, generation after generation, depend more on language models writing code for us, and us just instructing the language models. Great, let's just, for a second, take this further shall we? First, the ways we communicate with language models are going to eventually become more like programming languages, because people are lazy, and the entire reason we have ANY symbols in mathematics PROVES this. We don't like to write more than we absolutely have to.
    (EDIT: To expand on this - what I'm trying to say is this: we use specific patterns of sound in our languages to wrap up concepts, or ideas. We do this so that more complex communication can happen, by building on top of the layer below. We create functions in programming to wrap up sets of actions so that we can build on top of that. This is how abstraction works. I've used mathematical symbols as an example, but the same concept applies pretty much anywhere you look. Condense repetition, so that we can build more complexity on top.)
    So we're going to get "AI" based programming dialects, you could say (look at the way image generation prompting has already evolved as an example).
    Then, as we also develop these language models, the models themselves are going to have free rein on the 'coding' part. We will obviously instruct these systems to create newer programming languages that will, after a while, become unreadable to us. And we will ask, well, why do we need to understand it? The machines are there to handle it (this is essentially what this guy is saying). So now we have dialects of humans telling machines what to do, and then we have machines telling other machines what to do in a language we don't understand.
    Does ANYONE see the issue with this? Like, even a little?
    Just because programming is hard does not mean that we have to eliminate it. What absolutely idiotic thinking is this? It must always be a constant pursuit of efficiency. That's the whole point. We always remain in control. We always ultimately KNOW what is happening. By literally INTENTIONALLY taking ourselves out of the equation, we write our own Skynet. I don't mean that in an apocalyptic sense, I mean that in a "we are so fucking dumb as a species, like literally what is the point of programming, or doing anything at all, if not for our own benefit?" kind of way.
    Sure, use these systems and tools to write better code, write better documentation, I mean these are the actual areas where AI systems can help us. Literally to write the documentation and help us write better, more efficient, cleaner code, faster than we ever could. But still code that WE READ, AND WE WRITE, for US.
    This guy literally called Rust and Python "god awful languages" and apparently we need to take the humans out of developing things. Who does he think development is for?
    What's weird is that this is on CS50?

    • @ChrisHarperKC
      @ChrisHarperKC Год назад +38

      This will be lost on most people, especially academics who live in a fantasy world. Your comments are obvious to anyone who does regular old work.

    • @hamslammula6182
      @hamslammula6182 Год назад +26

      I think your thinking is a bit biased and shortsighted. And I’m guessing it’s because like me you’re a programmer. What I think you’re wrong about is that once we move up the abstraction layer, we don’t simply forgot the stuff underneath. People can still understand assembly and write programs using it if they so choose to but it’s ultimately a waste of time.
      I don’t think people will simply forget how to program, instead they’ll focus on more important things like solving problems that people are willing to pay for.
      I’m sure if you wanted to, you could rig up a set of logic gates to do some addition and subtraction operations but is that a business problem people are willing to pay you for?
      Essentially ai will be a layer of abstraction which allows us to focus on more complex problems rather than having to focus on getting all the right packages before even attempting to solve the problems of the users.

    • @noone-ld7pt
      @noone-ld7pt Год назад +18

      Dude, what are you on about? This is what coding has always been, a simplified version for us to convey ideas to computers. We don't write code in binary, we have compilers and interpreters that do that for us. The difference is that now instead of having to learn Python or Rust you can use English or Spanish or whatever to convey your ideas and have them be implemented. You can then ask the LLM directly questions about the implementation of different algorithms and optimize for whatever variable is relevant to your vision. Programming languages have been becoming more and more readable for decades now, this will just be the final step where we can finally interface with computers without having to learn a new language.

    • @gammalgris2497
      @gammalgris2497 Год назад +9

      Language has its own issues. It's context sensitive and highly ambiguous. Our "experimentations" with programming languages was an exercise in formalized and more precise languages. On the lower levels it's just signal processing with circuits. We built different levels of abstractions on top of that. We can only hide the complexity but we cannot make it vanish. Language models are just another layer of abstraction with its own pitfalls. The best thing one can do is heed the scientific method. Maintain a suitable degree of transparency so that things can be verified by others. 'Others' may be other developers, scientists, AI based tools, etc.. Completely removing humans from the equation will violate the scientific method.

    • @draco4717
      @draco4717 Год назад +11

      What if LLM write a buggy code in maybe 50 years from now and that code is only understandable this machine and it again writes another buggy code because it does not understand what it is doing and writes another buggy code till infinity 😅 the we as a human have to dust off those old BASIC books in order to start over and how cool is that 🙂

  • @pjcamp-eq1mj
    @pjcamp-eq1mj Год назад +135

    The talk was a perfect segway for AI startup ad

    • @joseoncrack
      @joseoncrack Год назад +7

      Indeed.

    • @jimbobkentucky
      @jimbobkentucky Год назад +13

      Seems like a lot of the invited speakers are hawking something.

    • @poeticvogon
      @poeticvogon Год назад +1

      I am pretty sure it was all an ad.

    • @gaditproductions
      @gaditproductions 7 месяцев назад

      @@poeticvogon this is cs50...its a class...they wont just do a add and risk loosing credibility...if this is coming from a institution like this...things are very very serious.

    • @poeticvogon
      @poeticvogon 7 месяцев назад

      @@gaditproductions Of course they would. They just did.

  • @ldandco
    @ldandco Год назад +294

    Software Engineering will eventually be the role of just a few, not because of AI replacing jobs, but because of discouragment many people will feel and quitting before even starting the journey

    • @darylallen2485
      @darylallen2485 Год назад +42

      One day, people may look at code the same way we look at the Pyramids. The knowledge of Pyramid making came and went.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад +14

      we need 4 mechanical engineers and 2 electronic engineers for every software engineer, because software is easy.

    • @hungrygator4716
      @hungrygator4716 Год назад

      @@reasonerenlightened2456 software is easy. Good software is hard.

    • @dwight4k
      @dwight4k Год назад +1

      Or will we need coders for the lower levels?

    • @KienHoang-jc6gw
      @KienHoang-jc6gw Год назад +30

      @@reasonerenlightened2456 you dont even know the difference between engineer and developer...

  • @cruzjay
    @cruzjay Год назад +87

    He called CSS "a pile of garbage" and that writing C should be a federal crime. I smell senior engineer burnout, that want's to just cash in on his startup and work on a farm.

    • @-BarathKumarS
      @-BarathKumarS Год назад +14

      his startup flopped horribly btw lol.

    • @anthonyd4703
      @anthonyd4703 11 месяцев назад +1

      Hahhaha even as a newbie, i kinda agree with you

    • @rogerh3306
      @rogerh3306 2 месяца назад

      47:25 Can he be more apparent w/ his motives? Douchebag move.

  • @abnabdullah
    @abnabdullah Год назад +34

    I am amazed that students didn't ask about anything related to "security" because, right now, we are just seeing an innovation but what about the future, when, on a larger scale, if we say we want to build a public program like Facebook or any other platform. This is presuming to be a live programming or language model building whatsoever it is so how can we encrypt all of our data from building to running and so on.

    • @rookie_racer
      @rookie_racer Год назад +6

      While security is something lacking I feel your focus is on the wrong aspect of it. You reference encryption which isn’t necessary for the source code so its ability to assist you to build won’t be impacted. I’m more concerned about the data you’re providing to the LLM. If I’m building a proprietary function and I need some insight from an LLM and I need to upload my source code for them to evaluate I am potentially sharing some seriously protected intellectual property. What happens to that? Can that code snippet show up in someone else’s code when trying to solve the same problem? Maybe your competitor?

    • @Invariel
      @Invariel Год назад

      @@rookie_racer More importantly than that, he's already demonstrated in his talk that these LLMs have -- call it "undocumented" or "emergent" or whatever you want -- behaviour that gives the questioner control over how the answer is given. Recall the "my dear deceased grandmother" "attack" that let people ask about how to make napalm or pipe bombs or whatever. Giving LLMs unfettered access to proprietary data, and having those LLMs all be based on the same nugget/core/kernel vulnerable to the same attack vectors means giving attackers access to all of that proprietary data by "casually" using your interface.

    • @abnabdullah
      @abnabdullah Год назад +3

      @@rookie_racer yes, you are right... actually what I was trying to highlight is "Data" and I mean how can we trust our confidential information to something that is "open source and a third party revolving around and across the internet.

  • @rohan2962
    @rohan2962 Год назад +68

    He starts off with no one will code and he ends with his own programming language for AIs. lol

    • @znubionek
      @znubionek 11 месяцев назад +1

      Lmao

    • @rogerh3306
      @rogerh3306 2 месяца назад

      47:25 bashes the art of programming so he can sell his LLM service. Douchebag move.

  • @alphabee8171
    @alphabee8171 Год назад +59

    It's not that gpt blew up because it was super good overnight. Well sort of but the real reason is it's ease of use. It's just like back when home computers became popular, when you introduce a computer as a marvel of engineering nobody cares about that but if you say "it's a box that lets you play some games and music etc with a bunch of clicks" you have everyone's attention. The idea of making it feasible for the masses that's what kicked it off, poured in billions of dollars and years of research to make computing better and better, same stuff happened with gpt and it's again on the same path but at a much much faster rate.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

    • @brianallossery4628
      @brianallossery4628 Год назад

      Computational power increases made gpt possible from what I understand

    • @LyricalMurderer1
      @LyricalMurderer1 9 месяцев назад

      That and it was super good… understood that a lot has to do with data and compute but it really is very good as a product right now…

  • @epajarjestys9981
    @epajarjestys9981 Год назад +47

    I'm at 6:43 and all I've seen so far is that guy projecting his incompetence onto the rest of humanity.

    • @jzimmer11
      @jzimmer11 10 месяцев назад +7

      Indeed! I mean WTF? Of course, you can always write programs in the least understandable way possible.

    • @Henry_Wilder
      @Henry_Wilder 8 месяцев назад

      You call an Harvard Computer Science prof. incompetent?, you fool😂😂

    • @Henry_Wilder
      @Henry_Wilder 8 месяцев назад

      Why don't you go ahead and answer the questions, since you're the competent one then🤨...ya'll just come on to the comment section talking trash, no sense🤧

    • @epajarjestys9981
      @epajarjestys9981 8 месяцев назад

      @@Henry_Wilder Which questions?

    • @Henry_Wilder
      @Henry_Wilder 8 месяцев назад

      @@epajarjestys9981 the questions posed at him that he couldn't answer. He kept saying "I don't know " remember?

  • @caneridge
    @caneridge 11 месяцев назад +72

    The purpose of computer science in a nutshell was not to translate ideas into programs. The goal was to find higher levels of abstractions to enable describing and solving ever bigger problems. Programming and programming languages were emergent properties of that goal. The question for LLMs is if they will be able to continue the quest for higher and simpler levels of abstraction or forever get stuck in the mundane as most programmers did by their jobs.

    • @katehamilton7240
      @katehamilton7240 11 месяцев назад +3

      Thanks, I'm saving this idea

    • @mriduldeka850
      @mriduldeka850 9 месяцев назад +2

      Thats a deep thought. I feel purpose of comp science is to automate task which humans can do or think of doing. Programming is just one step for it. Instead of create models which can write code, humans should think of bigger ideas which can impact living beings. It may be accomplish by manual or automatic programming, does not matter

    • @switzerland
      @switzerland 9 месяцев назад +2

      Reality is near infinitely complex. As programmers we create a finite abstraction. AI will do it better yet can't solve exponential complexity. AI is not infinite and has not infinite compute. Infinite is usually a warning signal of a lack of knowledge. Infinity means everything starts to behave weird. There is also physics … latency, a set of fundamental problems

    • @aoeu256
      @aoeu256 8 месяцев назад

      We have too many people doing software so software salaries are going to go down, we need to tell Indians & Chinese and Westerners to focus on swarm robotics, mini-robots, having the robot sworms build things etc... Take a robot-hand, make all of its parts like legos that it itself can assemble. Then make it so that it can either print out its parts, sketch out its parts, or mold its ports. Have it replicate itself in smaller and smaller until you hav e a huge swarm of robots, but you also need a lot of redundancy and "sanity checks". Swarm robots can do stuff like look for minerals/fossils/animals, look for crime, map out where everything is so you know where you put your cellphone, build houses/food/stuff/energy collectors/computers. @@mriduldeka850

    • @mriduldeka850
      @mriduldeka850 8 месяцев назад

      @@aoeu256 That's a good point. Japaneese are good at building robots. Indians are good and abundant in software sector but lagging way behind in manufacturing and hardware industry. Chinese have strength in manufacturing sector so perhaps they can adopt to robotics growth more quickly than Indians.

  • @TheOriginalJohnDoe
    @TheOriginalJohnDoe Год назад +229

    Dr. Welsh does make good statements I think we all can agree on, but as an AI student and Software Engineer for 10+ years, regarding what Welsh said: "People still program in C in 2023", well if you study AI you will even learn Assembly, very very low-level programming and since models have been written by programmers, we still need programmers to maintain and improve on these. AI is getting there, but it's still at a very immature level compared to the maturity we seem to desire as a humanity. We still need PhD students with a solid programming and AI background to do extensive research within the field of AI in order to help invent new technologies, specialized chips, improved algorithms etc. We are still far away from letting AI generate code that is as good as a programmer who has mastered it. Sure, it can write code, but there's still ton of scenarios where it fails to make things work.

    • @timsell8751
      @timsell8751 Год назад +28

      2 more years should do the trick!

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад +20

      Before thinking of AI use in the society we must agree who will Profit from it, who will own it and who will pay for the mistakes of the AI? Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @LucidDreamn
      @LucidDreamn Год назад +11

      I give it 5 more years before AI is super-intelligent

    • @headlights-go-up
      @headlights-go-up Год назад +22

      @@LucidDreamnbased on what data?

    • @chuangcaiyan7114
      @chuangcaiyan7114 Год назад +4

      I think the problem is about the purpose or the goal of the program that you are programing, in case of the Conway's Game of Life, the concept it self it is not easy to explain even with human language, we could get some ideas watching it performe but to be able to understand it completly, from logic to meaning or even to purpose and what coorelation it has with other topics such as math, physic or phylosophy, it is just not easy to understand, it won't be easy anyway

  • @snarkyboojum
    @snarkyboojum Год назад +71

    I prefer this take - natural language isn't well suited for describing to computers what they should do, which is why programming languages were developed. LLMs can do some translation from natural languages to programming languages, but not very well and not as accurately as we would like (yet), so they're good for getting you part of the way there, and currently they'll likely generate less than accurate or reliable code, but if you're not trying to write reliable programs, they could be helpful :D

    • @Siroitin
      @Siroitin Год назад +9

      Good to remember that rigorous symbolic notation for math is pretty modern idea in itself. One could argue that math is just "esoteric language" like Matt Welsh is implicating about programming language.

    • @restingsmirkface
      @restingsmirkface Год назад +6

      I agree. AI can do things like computing Pi, finding factors, and other relatively trivial things which could just be bits of static data. It may not even be generating code - just returning the closest match. If it is generating code, it's not very useful yet unless you know exactly how to speak those sweet-nothings. I asked ChatGPT about a week ago to create a website in the style of Wikipedia with 4 page-sections relevant to simulation-theory. It gave me an HTML tag with 4 empty DIV elements - nothing else. No other structure, no content, no styling, no mock-up of interactive elements.

    • @Siroitin
      @Siroitin Год назад

      @@restingsmirkface You might have to do some "prompt engineering".
      When I try ML and statistics related stuff, I often just copy text book formulas. The copied text is obscure for humans but somehow ChatGPT is able to understand it. Also it is really hard to ask python code for neural networks because it forces the use of external packages. C language doesn't have external packages so I often ask ChatGPT to write in C code and I translate the code to Python or Julia

    • @keiichicom7891
      @keiichicom7891 Год назад +4

      Agree. I noticed, although AI chatbots like ChatGPT can write complex Python programs( I asked it to create simpler neural net chatbots in Tensorflow / Keras), it is often buggy, and it has a hard time fixing the bugs if you ask it.

    • @choc3732
      @choc3732 Год назад

      @@Siroitinthis is very interesting, ChatGPT has a better hit rate when it comes to writing in C?
      I’ve only tried Python so far, will have to give this a go

  • @MarkMusu92
    @MarkMusu92 Год назад +21

    I’m legally mandated to pitch my startup… that’s all I needed to know.

  • @Hangglide
    @Hangglide 10 месяцев назад +3

    Great presentation! Thank you!
    One nitpick: 19:23 "average lines of code checked in per day ~= 100" I can tell you that is not the case for average SWEs in the silicon valley do. ~10 lines/day would already be pretty good.

  • @MarceloDezem
    @MarceloDezem Год назад +254

    "If the dev is not using copilot then he's fired". Tell me you never worked in a commercial application without telling me you've never worked in a commercial application.

    • @jak3f
      @jak3f 9 месяцев назад +20

      What do you think hes writing? Personal pet projects? Lmao.

    • @tracyrreed
      @tracyrreed 7 месяцев назад +5

      ​@@jak3fHe's marketing. Not writing.

    • @LarsRyeJeppesen
      @LarsRyeJeppesen 7 месяцев назад

      I wager that Code Assist with Gemini 1.5 is much better than Copilot now.

    • @gaiustacitus4242
      @gaiustacitus4242 6 месяцев назад +2

      @@jak3f Have you ever heard of copyright law? Are you seriously unaware that federal courts have already ruled that AI generated output is ineligible for copyright protection?

    • @jak3f
      @jak3f 6 месяцев назад

      @@gaiustacitus4242 good luck proving that

  • @simonmeier
    @simonmeier 11 месяцев назад +31

    Dr. Matt Welsh points out the crucial point about AI in programming: The better it gets and the more we trust in it, without actively know how to code or without knowing how it does what it's doing, we lose power over our daily automatic routines. Imagine what a risk AI generated code would be in a nuclear power plant. I think this talk is rather a great wake up call for learning how to code and coding inside AI instead of just letting it go.

    • @randotkatsenko5157
      @randotkatsenko5157 11 месяцев назад +1

      Humans are fundamentally lazy and default to the option which takes the least energy and effort. Meaning, most people will try to automate their own work as much as possible. AI learns from this and gets increasingly better, until human-in-the-loop is not needed anymore. Eventually, AI might be even better than humans at programming. As for nuclear power plant, I dont know, depends how reliable the system is.

    • @gordonramsdale
      @gordonramsdale 10 месяцев назад +5

      Except in 5 years, you might be saying the opposite. Humans introduce error inherently. Think how much better AI is now than it was programming 5 years ago, give it 5 more years, and writing human code will seem like the insecure risky option.

    • @Ivcota
      @Ivcota 10 месяцев назад +1

      @@gordonramsdale My take: A good chunk of software bugs exist because requirements were not refined well enough by the engineer breaking down the work. They make assumptions and write code that does something it shouldn't. With good testing no real bugs get into the system and we have modern compilers that remove the issues with syntax errors. AI coding will likely produces the same errors and make these types of assumptions humans make when working with poorly defined requirements.

    • @dblezi
      @dblezi 10 месяцев назад

      Nuclear power plants have a strict design and review process that is fully vetted. So i would not worry about this specialized software aka AI in this application.

    • @simonmeier
      @simonmeier 10 месяцев назад

      @@dblezi Hi, I think I understand what you are saying. But then again what does fully vetted mean in that context? We also have a review process where each Merge Request is fully vetted but still, errors can slip trough. AI MRs might slip through more easily.

  • @restingsmirkface
    @restingsmirkface Год назад +22

    In almost all scenarios, AI represents an "it runs on my machine" approach to problem-solving - a "good enough", probabilistic mechanism.
    But maybe that is sufficient. We get by in the world despite uncertainty at the quantum level... maybe once _everything_ is AI-ified, the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" even if we'll never be sure it's at 100% outside of the training-sets run on it.

    • @bens5859
      @bens5859 11 месяцев назад +3

      > the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough"
      This is a deep insight. Many great minds of the western philosophical tradition have expressed this view in one way or another. In fact it's the school of thought known as American Pragmatism (which is known as the quintessentially "American" school, in philosophy circles) which most closely aligns with this view.
      Some pithy quotes about truth from the most notable figures in Pragmatism:
      - William James (active 1878-1910): “Truth is what works.”
      - Charles Sanders Peirce (1867-1914): “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth.”
      - John Dewey (1884-1951): “Truth is a function of inquiry.”
      - Richard Rorty (1961-2007): “Truth is what your contemporaries let you get away with saying.”

    • @lubeckable
      @lubeckable 11 месяцев назад

      dockerize AI problem solved xd lmao

  • @GigaFro
    @GigaFro Год назад +17

    I believe that in the short term there will be a shift in both time and focus from coding a solution to the architecture design, testing, and security of that solution.

    • @christislight
      @christislight 11 месяцев назад +1

      Architecture is KEY

    • @sourenasahraian2055
      @sourenasahraian2055 10 месяцев назад +2

      Architecture is nothing but the applications of known patterns and reasoning/ tradeoffs . I use chatGPT for my architecture challenges all the time and I say though it’s not perfect, it’s already doing a decent job . It will get even better , exponentially better .

    • @Gauravkumar-jm4ve
      @Gauravkumar-jm4ve 10 месяцев назад

      agreed

  • @annoorange123
    @annoorange123 Год назад +38

    Last week i was working on some rust code that had to deal with linux syscalls, chatgpt gave incorrect data on every single question. There are limits to how well trained it can be based on the amount of data it was trained on. It's good for common problems, not so in a niche environments that real SWE deal with daily. It just makes JS bootcamps obsolete.
    Now imagine if plane control computers were used to generate all the code, as he suggests, without a person in the loop. Good luck flying that. Until AGI is here, we can't talk about any of this

    • @danri9839
      @danri9839 11 месяцев назад

      It's true but for now. What about the evolution of these models over 5, 10, or 15 years. BTW, no model yet receives data directly from the physical world. And sooner or later, it will heppen.

    • @annoorange123
      @annoorange123 11 месяцев назад +2

      @@danri9839 it's a fuzzy black box system. Until we have AGI it's just marketing hype that they are smart, while in reality precision isn't there if there was little training data

    • @not_zafarali
      @not_zafarali 10 месяцев назад +1

      ​@@danri9839 The problem is that large language models get data from the world but can't figure out what's useful and what isn't, what's keep and drop on their own what's useful and what isn't. Right now, humans decide for them. If we want models to make their own choices, they need to understand what's right and wrong, which in itself is already complex even for humans in a lot of cases.

    • @dekooks1543
      @dekooks1543 9 месяцев назад

      you're the 927483927839273 I've seen who wrote this comment. You sound like the crypto bros who promised an unprecedented economic crash and how the blockchain would revolutionise everything... and yet.

    • @josephp.3341
      @josephp.3341 8 месяцев назад

      I tried to generate Rust code for a relatively trivial problem (8puzzle) and its solution was wrong and didn't compile. I fixed the compilation errors and the solution was still terrible because it used Box::new(parent.clone()) every time a child node was generated (very, very inefficient). I had already written the code myself so it was easy to spot these errors but I really can't see how chatgpt is supposed to write code better than humans...

  • @another_dude_online
    @another_dude_online Год назад +9

    "The line, it is drawn, the curse, it is cast
    The slow one now will later be fast
    As the present now will later be past
    The order is rapidly fading
    And the first one now will later be last
    For the times, they are AI-changin'"

  • @Rico.308
    @Rico.308 8 месяцев назад +2

    Learning to code right now and I can definitely say this has not made me give up it only shows me the cool tools I will one day be able to build.

  • @manabukun
    @manabukun Год назад +99

    Back in the real world, you still need to double check the code generated by copilot which often is wrong. I'm not sure if I'm bad at using copilot or the people using it are simply not checking what has been generated.
    Not to mention, none of the large companies are willing to use a version of copilot that allows it to send the learned data from their private repos back home for obvious reasons.

    • @Peter-bg1ku
      @Peter-bg1ku Год назад +28

      that's the problem I find with AI generated code. You have to verify it, which is a task that takes as much, if not more effort that writing the code by hand.

    • @cardiderek
      @cardiderek Год назад +1

      @@Peter-bg1kuwrong

    • @cardiderek
      @cardiderek Год назад +2

      wrong

    • @Peter-bg1ku
      @Peter-bg1ku Год назад +1

      @@cardiderek what do you mean?

    • @cardiderek
      @cardiderek Год назад

      @@Peter-bg1ku that isn't the problem to worry about. We are so close to solving hallucinations.

  • @kenjimiwa3739
    @kenjimiwa3739 11 месяцев назад +18

    There's SO much to SWE jobs aside from just coding, like collaborating with product and design, understanding business needs, convincing management that something is worthwhile. Additionally, someone will need to review the AI code, deal with legacy code, set up services, etc.. I view these AI tools as tools that will make everyone's job more productive but not necessarily replace.

    • @LupusMechanicus
      @LupusMechanicus 11 месяцев назад +4

      The cope is real.

    • @TomThompson
      @TomThompson 11 месяцев назад +10

      ​@LupusMechanicus Anyone who thinks an AI can help anyone write a program to solve problems hasn't worked in the field at all. More often than not a person will bring a problem and their ill conceived solution. Then the experienced software engineer will discuss the original problem, propose alternate solutions, ideas that still solve the problem but better make use of resources (memory, time, etc) and provide a useful and intuitive workflow. That IS part of being a SWE and if you think an AI is going to do that naturally and simply you are out of touch. Say others are "cope" if you want, but perhaps educate yourself more than watching a RUclips video by a guy desperate to sell is product.

    • @LupusMechanicus
      @LupusMechanicus 11 месяцев назад

      @@TomThompson Bruh try to build a house profitably with just your fingers. You need a saw and air hammers, lifts and screw guns. Thusly you can now build a million dollar house with 8 people in 6 months instead of 40 in 1 year. This will eliminate alot of employees, thusly it is cope.

    • @TomThompson
      @TomThompson 11 месяцев назад +10

      @@LupusMechanicus You again miss the point. No one is saying the industry won't be affected, it will. What we are saying is it is uninformed to say the industry is "dead" because of AI. Just look at the history. The job has gone from being primarily hardware based (setting tons of switches) to using a machine level language (assembly). Then gradually to higher level languages (fortran, cobol, c, etc). Then we have gone through adding IDE and lint, and code sharing, and review systems. The introduction of AI will not replace everything and everyone. It will be a tool that will make the job easier. And yes, it could easily mean a company that currently has 100 engineers in staff can gradually cut back to 10. But it also means other jobs will open up in areas such as making these AI and making systems that make using but easier.
      The invention of the hammer didn't kill the home building industry.

    • @2011fallenstar
      @2011fallenstar 10 месяцев назад +1

      There won't be legacy code anymore, having a computer that writes code, so ppl will understand the computer's code sounds pointless. Do you need to know your router's code in order to use the wifi?

  • @KaLaka16
    @KaLaka16 Год назад +117

    If programmers will get replaced, who will not get replaced? Programming is one of the most difficult fields for humans. If most of it can be automated, most of everything else can be automated too. This AI revolution won't affect just programmers, it will affect everyone. Programmers are more aware of it than the average person though.
    It might still take 20 years for us to see AGI. Probably way less, but nobody really knows.

    • @BARONsProductions
      @BARONsProductions Год назад +38

      Manual labour isn't going to be replaced. Nurses, waitress, handyman, plumber... shit like that

    • @KaLaka16
      @KaLaka16 Год назад +17

      @@BARONsProductions Eventually it is, unless we specifically want humans for the roles. Machines will do everything better once we get to artificial superintelligence. We will probably get it before 2040, but who knows, it could take way longer. Also, people need time to adapt to technology. When something is invented, it doesn't get immediately applied on the practical level.

    • @ataleincolor
      @ataleincolor Год назад +16

      @@BARONsProductionsif anything manual labour is going to be replaced faster due to the repetitiveness of their roles.

    • @Nobodylihshdheuhdhd
      @Nobodylihshdheuhdhd Год назад +8

      ​@@BARONsProductionsthose jobs are more likely to be replaced than programmers

    • @dineshbs444
      @dineshbs444 Год назад +22

      The physical labour will take more time. For that, actual physical robots should be built that won't be any good for like 10 years at least (I believe). Yeah the digital ones are ones that will take the hit first.

  • @frankgreco
    @frankgreco Год назад +18

    His startup is completely based on a Javascript framework. You don't have to use an LLM to tell you that was a bad idea.

    • @godismyway7305
      @godismyway7305 6 месяцев назад

      Who said you can't use javascript for ML?

    • @frankgreco
      @frankgreco 6 месяцев назад

      @@godismyway7305 No one did.

  • @HarpaAI
    @HarpaAI Год назад +165

    🎯 Key Takeaways for quick navigation:
    00:00 🍕 Introduction and Background
    - Introduction of Dr. Matt Welsh and his work on sensor networks.
    - Mention of the challenges in writing code for distributed sensor networks.
    01:23 🤖 The Current State of Computer Science
    - Computer science involves translating ideas into programs for Von Neumann machines.
    - Humans struggle with writing, maintaining, and understanding code.
    - Programming languages and tools have not significantly improved this.
    04:04 🖥️ Evolution of Programming Languages
    - Historical examples of programming languages (Fortran, Basic, APL, Rust) with complex code.
    - Emphasis on the continued difficulty of writing understandable code.
    06:54 🧠 Transition to AI-Powered Programming
    - Introduction to AI-generated code and the use of natural language instructions.
    - Example of instructing GPT-4 to summarize a podcast segment using plain English.
    - Emphasis on the shift towards instructing AI models instead of conventional programming.
    11:26 🚀 Impact of AI Tools like CoPilot
    - CoPilot's role in aiding developers, keeping them in the zone, and improving productivity.
    - Mention of ChatGPT's ability to understand and generate code snippets from natural language requests.
    17:32 💰 Cost and Implications
    - Calculation of the cost savings in replacing human developers with AI tools.
    - Discussion of the potential impact on the software development industry.
    20:24 🤖 Future of Software Development
    - Advantages of using AI for coding, including consistency, speed, and adaptability.
    - Consideration of the changing landscape of software development and its implications.
    23:18 🤖 The role of product managers in a future software team with AI code generators,
    - Product managers translating business and user requirements for AI code generation.
    - Evolution of code review processes with AI-generated code.
    - The changing perspective on code maintainability.
    25:10 🚀 The rapid advancement of AI models and their impact on the field of computer science,
    - Comparing the rapid advancement of AI to the evolution of computer graphics.
    - Shift in societal dialogue regarding AI's potential and impact.
    29:04 📜 Evolution of programming from machine instructions to AI-assisted development,
    - Historical overview of programming evolution.
    - The concept of skipping the programming step entirely.
    - Teaching AI models new skills and interfacing with software.
    33:44 🧠 The emergence of the "natural language computer" architecture and its potential,
    - The natural language computer as a new computational architecture.
    - Leveraging language models as a core component.
    - The development of AI.JSX framework for building LLM-based applications.
    35:09 🛠️ The role of Fixie in simplifying AI integration and its focus on chatbots,
    - Fixie's vision of making AI integration easier for developer teams.
    - Building custom chatbots with AI capabilities.
    - The importance of a unified programming abstraction for natural language and code.
    39:14 🎙️ Demonstrating real-time voice interaction with AI in a drive-thru scenario,
    - Showcase of an interactive voice-driven ordering system.
    - Streamlining interactions with AI for real-time performance.
    44:55 🌍 Expanding access to computing through AI empowerment,
    - The potential for AI to empower individuals without formal computer science training.
    - A vision for broader access to computing capabilities.
    - Aspiration for computing power to be more accessible to all.
    46:49 🧠 Discovering the latent ability of language models for computation.
    - Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
    - This discovery was made empirically and wasn't part of the model's initial training.
    48:17 💻 The challenges of testing AI-generated code.
    - Testing AI-generated code that humans can't easily understand poses challenges.
    - Writing test cases is essential, but the process can be easier than crafting complex logic.
    50:40 🌟 Milestones and technical obstacles for AI in the future.
    - The future of AI development requires addressing milestones and technical challenges.
    - Scaling AI models with more transistors and data is a key milestone, but there are limitations.
    54:23 🤖 The possibility of one AI model explaining another.
    - The idea of one AI model explaining or understanding another is intriguing but not explored in depth.
    - The field of explainability for language models is still evolving.
    55:44 🤔 Godel's theorem and its implications for AI.
    - The discussion about Godel's theorem's relevance to AI and its limitations.
    - Theoretical aspects of AI are not extensively covered in the talk.
    56:42 🔄 Diminishing returns and data challenges.
    - Addressing the diminishing returns of data and computation in AI.
    - Exploring the limitations of data availability for AI training.
    58:34 🚀 The future of programming as an abstraction.
    - The discussion on the future of programming where AI serves as an abstraction layer.
    - The potential for future software engineers to be highly productive but still retain their roles.
    01:04:12 📚 The evolving landscape of computer science education.
    - Considering the relevance of traditional computer science education in light of AI advancements.
    - The need for foundational knowledge alongside evolving programming paradigms.
    Made with HARPA AI

    • @ericamelodecarvalho5714
      @ericamelodecarvalho5714 Год назад +1

      000p

    • @sitrakaforler8696
      @sitrakaforler8696 Год назад

      Dam that's niiiice! ! It's like Merlin ?!

    • @xwdarchitect
      @xwdarchitect Год назад

      @@sitrakaforler8696 better :)

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад +4

      Before thinking of AI use in the society we must agree who will Profit from it, who will own it and who will pay for the mistakes of the AI? Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @முரளி-ழ7த
      @முரளி-ழ7த Год назад +5

      @@reasonerenlightened2456 you guys need to stop think AI as some conscious thing, it is just like a knife or gun. It is entirely about who is using it with what intent.

  • @anandiyer_iitm
    @anandiyer_iitm 11 месяцев назад +13

    That he stays away from addressing the "most important" problem as he puts it at the beginning of the talk (that of CS education in the future), makes it sound like just empty talk...Unfortunately, I had to watch the entire thing to realize this...

  • @beMUSICaI
    @beMUSICaI Год назад +18

    The problem with LLM is that they cannot solve independently computationally irreductible problems. So there is interaction between classical computation and LLM in symbiosis. So I do not agree that computer languages should disappear completely. Also right now checking google is much more energy efficient than prompting chatgpt. So there are the energy efficiency issues. When you build apps with AI somebody has to pay the token bill.

    • @Fs3i
      @Fs3i Год назад

      > The problem with LLM is that they cannot solve independently computationally irreductible problems
      It can write programs that do. For example, this is what the current GPT-4 can do on the normal openai chat website (can't post url to conversation because YT spam filter). I've asked "Hey there! Can you give me a word which has an MD5 hash starting with `adca` (in hex)?"
      I've chosen adca, because those were the first four hex letters in your name. This is likely not in its training set.
      The model was "analyzing" for a bit, and then replied
      > A word whose MD5 hash starts with adca (in hexadecimal) is '23456'. The MD5 hash for this word is adcaec3805aa912c0d0b14a81bedb6ff. ​​
      You can see how it answered it, it wrote a python program to solve it. I didn't need to prompt to do it, it knows - like a human! - that it should pass these classically computationally irreducible problems off to a classical computer.
      And yes, there's still programming involved, but like, my 16 years of experience with computer science didn't help me at all, except in terms of coming up with an example.

    • @BattleBrotherCasten
      @BattleBrotherCasten 11 месяцев назад +1

      No code applications getting better and A.I. getting better looks like a programless future is really close or a near programless one at least. Eventually A.I. will be better,faster and cheaper than any human by a large margian.

    • @icenomad99
      @icenomad99 9 месяцев назад

      What you forgot to add is "YET".

  • @ataleincolor
    @ataleincolor Год назад +107

    Professor: Ai will replace all programmers
    Students who took student loans to become programmers: 👁️👄👁️

    • @NicholausC.McGee.
      @NicholausC.McGee. Год назад +7

      Professor: Programing sucks lets let the robots do it!

    • @llothar68
      @llothar68 Год назад +18

      I don't understand why people think Professors know anything about programming. They have not time to get real practice

    • @tomashorych394
      @tomashorych394 Год назад +2

      yep. Pretty harsh reality

    • @lmnts556
      @lmnts556 Год назад +4

      Not the case tho, at least not now lol. Ai is not even close to taking programmers jobs, AI is not very good at programming, just very basic functions and can't put the pieces together.

    • @tomashorych394
      @tomashorych394 Год назад +7

      @@lmnts556 Are you sure? It can do a lot of stuff. Then, you have all the no code solutions. Then, you have all the SaaSs and libraries. In the end. You need 1 engineer to build a platform instead of a 100. "At least not now" can mean in 5 years (which is very realistic)

  • @ai_outline
    @ai_outline Год назад +72

    Something I did not understand was how would Computer Science become obsolete? So okay, you replace programming with prompting. But who will develop all those magical models that you are prompting? Aren’t they built by computer scientists and SWEs?
    What I mean is, if you are bold enough to claim programming will become obsolete, then doesn’t that mean learning mathematics and physics would also become obsolete? Like I could just ask some AI model to develop what I need in the context of physics and mathematics… and won’t need to understand the dynamics of those sciences, I just need to know how to speak English and ask for something.
    Note: I actually can see programming becoming more automated. But Computer Science? I can’t see that happening… aren’t we supposed to understand how do computers and AI work? Should they be seen as black boxes in the future?
    Also, programming would still not be fully automated because it’s weird to believe that an ambiguous sequence of tokens (English language) can be mapped with precision to a deterministic sequence (code) without any proper revision by a human… what if AI starts to hallucinate and not align with human goals? At best we would create a new programming language that is similar to “Prompting”…
    What are your opinions on these?

    • @stefanbuica5502
      @stefanbuica5502 Год назад +9

      My opinion is that before doing a ratinal action, there is an emotional action. So all decisions you can write on the prompt, cannot be accurate.
      My take is that technology will automate further and transform and humans will have the opportunity to use more of their creativity and thus becoming more human!

    • @algro9567
      @algro9567 Год назад +8

      There are two main concepts that you need to wrap your mind around:
      1) Ease of use, 2) Programming as a tool
      When Welsh talks about 'the end' of programming, he means to future mass adoption of LLM models to program for them instead of programming themselves due to ease of use. Essentially, LLM's will be the new user interface for people to use programming languages, so the need for expert programmers will be limited to specialty roles in the future, like how can I write an API for LLMs to interact with or how can I make this LLM that checks that another LLM works properly?
      Obsolete is not the right word here, as you can see Welsh using copilot himself even though he is still technically a programmer. It's just the science of writing code by hand will be displaced by prompting to ask an AI to manipulate code for you. For now, you need to read the code the LLM wrote to use it, but in the future, it might as well be a magical black box that does x for you, testing and implementation included.
      Or in other words:
      LLM's are going to be easier to use than programming by hand, and LLM's will use coding as a tool instead of people. Computer science is then the art of getting better code from LLMs instead of getting humans to write code faster and better.

    • @tomashorych394
      @tomashorych394 Год назад +3

      You are right. These people will still be needed. But AI might reduce the number of such positions down to

    • @jpcfernandes
      @jpcfernandes Год назад +10

      Not only that, who develops all the connections between LLMs and all existing systems. Who will replace existing systems that nobody knows what are doing with systems that can use AI. In the short term at least, I foresee more programmers needed, not less.

    • @metadaat5791
      @metadaat5791 Год назад +14

      I for one will be glad when the people who think that "programming sucks" and "no progress has been made in 50 years" will actually give up and leave the field, they have no idea what CS entails. Computer Science is about computer programming like Astronomy is about looking through telescopes.

  • @artemkotelevych2523
    @artemkotelevych2523 Год назад +26

    The thing with LLMs is that it's just another level of abstraction. If you take a product documentation as a highest level of abstraction to describe how that product should behave, to have it correct you still need to describe all the corner cases and the way some things should be done, you can't just say "this page should show weekly sales report". And all this documentation might not be easy to understand. Code is just a very precise way to describe behavior.

    • @wi2rd
      @wi2rd Год назад +1

      Do you trust close friends who know you well to give you a decent result when you ask them "this page should show weekly sales report"?

    • @artemkotelevych2523
      @artemkotelevych2523 Год назад +3

      @@wi2rd you understand how documentation work right?

    • @MaiThanh-om5nm
      @MaiThanh-om5nm Год назад +1

      From your logic, it's impossible for non-technical project manager to instruct developers on how the application should be programmed.

    • @MaiThanh-om5nm
      @MaiThanh-om5nm Год назад +1

      AI can ask clarification questions to make the requirements clearer. It's can do long-term back-and-forth conversations with the whole context of the project.
      It's not just inputting a single prompt and the project is done

    • @marcelocruz7644
      @marcelocruz7644 Год назад +2

      @@MaiThanh-om5nm Non-technical and people with low abstraction for the field usually will instruct on how something will behave instead of how something is to be programmed.
      Also project managers manage the team time etc, architects, developers and engineers with know-how to translate expected behaviour from clients to technical field are the ones who instruct how it's programmed. Lots of developers are able to understand what a client want without an intermediate, because developers are system users as well and know what could be better on apps and what they'd like to see, expect etc, also you can see freelancers and github projects all around without a project manager etc, confirming they would understand it anyway with or without those helpers.

  • @suryamanian8492
    @suryamanian8492 Год назад +46

    the ‘gotch’ in using AI is we need to know if the code is right or not
    so we need to know basic stuffs

    • @augustnkk2788
      @augustnkk2788 Год назад +6

      For now, eventually it will be able to write perfect code on its own, reducing the need from 100 software engineers to 5-10

    • @Pavel-wj7gy
      @Pavel-wj7gy Год назад +1

      What is the basic stuff in a pyramid of abstractions? Assembly code?

    • @tiagomaia5173
      @tiagomaia5173 Год назад +4

      @@augustnkk2788 I don't think it'll replace all good software engineers so soon. And I really don't think it will get to a point of always generating perfect code.

    • @augustnkk2788
      @augustnkk2788 Год назад

      @@tiagomaia5173 Itll replace maybe 90%, some still need to make sure its safe, but no one will work in wed dev f. ex; all tech work is gonna be about AI, unless the governemnt steps in. I give it 10 years before it can replace every software engineer

    • @dekooks1543
      @dekooks1543 9 месяцев назад

      you have the confidence of someone who doesn't know what they're talking about

  • @thomasr22272
    @thomasr22272 Год назад +55

    My main question is: in which of the LLM ai startups is he an investor?

    • @RoyRope
      @RoyRope Год назад +5

      crossed my mind lol

    • @rollotomasi1832
      @rollotomasi1832 Год назад

      Please listen to the talk with an open mind, and face this was reality.

    • @hisham_hm
      @hisham_hm 5 месяцев назад

      He literally says at the end: he's pitching his own AI startup.

  • @Tetsujinfr
    @Tetsujinfr Год назад +32

    We are not yet to the stage where one can ask chatGPT4 to write chatGPT5, at least as far as I know. Also, if you ask chatGPT4 to produce the model of the physical world unifying general relativity with the standard model, you will notice it struggles quite a bit and does not deliver. Those models cannot just create new knowledge, or not in a scientific proven way. Maybe through randomness they will to some extend though, but let's see.

    • @christislight
      @christislight 11 месяцев назад +5

      You need code to build. God coded humans, we code businesses. Just using language to create code doesnt mean coding is obsolete

    • @RateOfChange
      @RateOfChange 10 месяцев назад +5

      AIs are making some breakthroughs in science and math already. Look up the new matrix multiplication algorithm discovered by an AI.

    • @ingmarxhoftovningsr6144
      @ingmarxhoftovningsr6144 10 месяцев назад

      Well, the code for chatGPT5, at least for the model as such, is likely not very complicated, so chatGPT4 might be able to write it. Someone has to tell it what the program should do, though. At this point, that would be a human.

    • @dblezi
      @dblezi 10 месяцев назад

      That’s because there has to be an overseer. Like someone else stated God created mankind and this ecosystem. Men manipulated and created based on this ecosystem. The creations of Men didn’t invent themselves. The best special software of AI can do is create derivates of digital data that is digital known to said AI model. Look at art for instance many AI models steal and scan what mankind created to make a model. An AI model would never create a Star Wars, blade runner or mass effect story/universe out base coding blocks which dictate how the software runs. AI needs to plagiarize to create. It’s just that these plagiarized derivates with procedural generation full many normies into thinking it’s so great.

    • @ingmarxhoftovningsr6144
      @ingmarxhoftovningsr6144 10 месяцев назад

      @@dblezi could you please clarify "has to be"? Where does that knowledge come from? What's the logic explanation? What does "an overseer" mean? What does "an overseer" do, in practical terms?

  • @ながれる季節
    @ながれる季節 Год назад +9

    I love Prof. Malan for maintaining such a badass RUclips channel!

  • @christislight
    @christislight 11 месяцев назад +3

    I’m an AI Business Owner - It’s great to know how to program even if programming is obsolete due to AI, you can use code as an asset. I created a model that uses Python to solve any math equation. Could’ve used Google, but using Python makes the solution more accurate and near instantaneous.

    • @aqf0786
      @aqf0786 11 месяцев назад

      Can you share a reference to your model?

  • @chenjus
    @chenjus Год назад +31

    12:57 that's exactly right. The way I've been describing using GPT-4 for swe is that whereas I used to have to stop to look up error messages and read documentation, now I can ask GPT-4. GPT-4 smooths out all the road bumps for me so I can keep driving.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their output! Also, GPT-4 is designed by the Wealthy to serve their needs!

    • @miraculixxs
      @miraculixxs Год назад +4

      Except when it doesn't. But sure spending an afternoon with Copilot can often safe 5 minutes of RTFM

    • @fappylp2574
      @fappylp2574 Год назад +1

      @@miraculixxs "Hello Chat GPT, please read this F manual for me"

  • @sandrinjoy
    @sandrinjoy Год назад +3

    That has been the most professional Ad Break I have ever seen in my life. HAHA

  • @casla1960
    @casla1960 Год назад +10

    Thank you CS50 team for sharing this with all of us

  • @kostian8354
    @kostian8354 Год назад +2

    About prompt program
    - Can you reason about it's performance and class of algorithmic complexity ?
    - Can you reason about the resources required to run it, like RAM ?
    - Can it process more data than fits into RAM ?
    One day it will, but not yet...

  • @smanqele
    @smanqele 11 месяцев назад +5

    I agree, the biggest problem with humans in programming is how we mentally map how to solve problems. Code reviews can be a huge waste of time if you don't have it in you to push back. It truly makes me wonder the ROI for companies to host a lot of the software development ceremonies today.

    • @jamesschinner5388
      @jamesschinner5388 9 месяцев назад

      Code review is all about regression to the mean

    • @smanqele
      @smanqele 9 месяцев назад

      @@jamesschinner5388 But we probably haven't got a single methodology to arrive at the mean. Our individual Means are terribly diverse

  • @jonkbox2009
    @jonkbox2009 Год назад +23

    I took a clip of the FORTRAN code and sent it to GPT-4 Vision and asked it what the code did but it could not tell me because the pictured code was incomplete. Understandable. I sent it the BASIC code and it got it right. I asked it if the name CONWAY helped with its answer. It said No. I started a new chat and sent the BASIC program without the program name. It got it right. I sent the APL program and it didn't recognize the language or understand it at all, even that it was a programming language. I told it the language was APL and it got it right. Pretty cool.

    • @reddove17
      @reddove17 Год назад +4

      Because they are somewhere in the training set, the presenter got them from somewhere I would assume.

    • @elawchess
      @elawchess Год назад

      @@reddove17 The best of them are good enough to recognize a program that was not directly in the training set. Of course something about the program is in the training set e.g the idea of Conways game of life (or whatever it was), but that piece of code itself doesn't need to be in the training data for it to be able recognise it.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад

      GPT-4 shows fake intelligence. For example, It struggles with fingers, and with drinking beer. LLM are a dead-end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

  • @ivan88buble
    @ivan88buble Год назад +3

    Great sales presentation!

  • @BanditHighwayMan
    @BanditHighwayMan Год назад +61

    Me: Asks chat gpt to help me with a bug I am facing in my code.
    ChatGPT: Returns my exact same code
    (This was a joke)

    • @luckydevil1601
      @luckydevil1601 Год назад +6

      Ahah yeh, same sh*t happens to me too 😂

    • @invysible
      @invysible Год назад +3

      true broo... happend to me a few days ago

    • @mykyta_so
      @mykyta_so Год назад +13

      In this way ChatGPT hints that the main bug in your code is you :)

    • @IntrospectiveMinds
      @IntrospectiveMinds Год назад +8

      GPT 3.5 I'm guessing? Try 4. People keep coping by saying it doesn't work but are using the outdated model or have poor instructions.

    • @jbo8540
      @jbo8540 Год назад +3

      Try 4, and if that doesnt improve things, you need to work on your prompt engineering.

  • @Kraktoos
    @Kraktoos Год назад +117

    🎯 Key Takeaways for quick navigation:
    01:23 🚀 The field of computer science is undergoing a major transformation where AI models like GPT-3 are being used to write code, marking a significant shift in programming.
    06:54 💻 Natural language is becoming a key tool in programming, allowing developers to instruct AI models to generate code without the need for traditional programming languages.
    14:47 📈 AI technology, like GPT-3, has the potential to significantly reduce the cost of software development, making it more efficient and cost-effective.
    20:52 🤖 The rise of AI in programming will likely change the roles of software engineers, with a shift towards product managers instructing AI models and AI-generated code.
    23:46 👁️ Code review practices will evolve to incorporate AI-generated code, requiring a different kind of review process to ensure code quality and functionality.
    24:41 🤖 Code maintainability may become less essential with AI-generated code, as long as it works as intended.
    25:58 📊 The rapid advancement of AI models like ChatGPT has transformed the computer science field and its societal expectations.
    29:04 🌐 Programming is evolving, with AI assisting humans in generating code, and the future may involve direct interaction with AI models instead of traditional programming.
    33:44 💬 The concept of a "natural language computer" is emerging, where AI models process natural language commands and perform tasks autonomously.
    45:52 💡 The model itself becomes the computer, representing a future where AI empowers people without formal computer science training to harness its capabilities.
    49:15 🤖 AI-generated tests are becoming more prevalent, but there's uncertainty about the role of humans in the testing process.
    51:07 🧩 The future of AI models relies on the increased availability of transistors and data, which may require custom hardware solutions.
    52:06 🤔 Formal reasoning about the capabilities of AI models is a significant challenge, and we may need to shift towards more sociological approaches.
    54:23 🤖 Exploring whether one AI model can understand and explain another model is an intriguing idea, but its feasibility remains uncertain.
    59:30 🧠 While AI may make software engineers more productive, certain human aspects, like ethics, may remain essential in software development.
    Made with HARPA AI

  • @kostian8354
    @kostian8354 Год назад +7

    Even if robots generate code, you would still want it to have less duplication and some abstractions, because it will lower the amount of context tokens required to modify the code.
    You would probably also want to keep interfaces between regenerations, because you would like to keep the tests from the older version...

    • @christislight
      @christislight 11 месяцев назад

      You’ll need to code the robot, or code a solution to code into the robot. It’s deeper than these people understand

    • @sourenasahraian2055
      @sourenasahraian2055 10 месяцев назад

      No you don’t , they can write optimized code , that’s literally the whole point of AI , it’s an optimization problem, adjust my weight to reduce the cost function and code duplication be yet another parameter.

  • @sortof3337
    @sortof3337 Год назад +22

    surprise surprise, guy selling the shovel says gold rush is the best.

    • @ldandco
      @ldandco Год назад

      Yep... noticed the same.

  • @alrasch4829
    @alrasch4829 10 месяцев назад +5

    A great lecture/talk, illuminating and informative. As a practitioner, I find it very true and relevant.

    • @michellehunter8775
      @michellehunter8775 9 месяцев назад

      Agreed!

    • @michellehunter8775
      @michellehunter8775 9 месяцев назад

      Agreed. There's a lot of push-back against his message in the comments, but I'm already seeing it happen within tech companies where, for example, 10% of employees are let go and the ones staying are now doing several of those roles, along with their own, all by using AI.

  • @moonstrobe
    @moonstrobe Год назад +13

    I didn't hear him get into the topic of consistency and feature updates. How about performance based programming for games and ultra efficiency? Or shower thought innovations that create entirely new paradigms and ways of approaching problems? AI might be able to do some of this eventually, but I doubt it will be as rosy as he imagines.

    • @fappylp2574
      @fappylp2574 Год назад +1

      yeah, like 99% of people don't invent new paradigms or ways of approaching problems. The vast majority of people in software will be out of jobs, with maybe a few hyper-PhDs sticking around.

    • @dekooks1543
      @dekooks1543 9 месяцев назад

      stay fappin, fappy. It's not going to happen. Maybe the soydev macbook in startbucks react bros will get replaced but true programming that actually requires deep knowledge ? not happening.

  • @CaptTerrific
    @CaptTerrific Год назад +15

    The biggest red flag was there at the start: the beginning of the video description says that gpt can do general purpose reasoning. It's neither general purpose nor can it reason

    • @MinecraftN3rd
      @MinecraftN3rd Год назад

      Hmmm I think It is both general purpose and can reason

    • @dekooks1543
      @dekooks1543 9 месяцев назад

      then you should go to a mental health professional

  • @ChinchillaBONK
    @ChinchillaBONK Год назад +18

    The problem with LLMs in Generative AI is that in 5 years time, the AI will be learning upon large percentage of data that other AI have generated and then even longer down the road, how do we know what is real or generated data?
    We still need humans to understand what is fake. The creativity from AI must make sense if the goal for that specific data requires such precision like in the medical industry or other industries for lives are at stake.

    • @verigumetin4291
      @verigumetin4291 11 месяцев назад +2

      It's been established already that synthetic data is superior for training LLMs, compared to raw human data.
      I mean, think about it, does the open web not have data that is bad? Well ChatGPT was trained don it and it does pretty well. Synthetic data has been proven already to be superior to that, so simply by training the next iteration of the LLM on synthetic data is going to get us to the next step.

    • @ChinchillaBONK
      @ChinchillaBONK 11 месяцев назад

      @@verigumetin4291 What about fake news or lobbyist outlets? Or books/art generated on someone else's copyright? What if bad actors create fake generated data for their own nefarious purposes? Then these scammers or spammers constantly create these fake data? You can already make a fake Obama dancing "Livin' La Vida Loca". How would the AI know it's real or fake once these generative AI become more skilled? Years down the road, our newer AI LLM may not know the difference and use these data to train. We already got bad science news regarding mask wearing and vaccinations. This will become worse when the less than average intelligent human believes in nonsensical data in a world where such synthetic data will be practically spam.

    • @aligajani
      @aligajani 11 месяцев назад

      GPT 4 is getting dumber according to Stanford Research.@@verigumetin4291

    • @tybaltmercutio
      @tybaltmercutio 11 месяцев назад +3

      ⁠@@verigumetin4291Do you have any source for that? Preferably a peer-reviewed paper rather that some „research“ by Google or OpenAI published by themselves.
      I am asking because what you are saying does not make any sense to me.

    • @luzak1943
      @luzak1943 11 месяцев назад

      ​@tybaltmercutio I think he is talking about the Orca 2 paper

  • @CasualViewer-t4f
    @CasualViewer-t4f Год назад +16

    It’s a lot to expect everyone to know what they want to enter into a query. It will take some time for the query interface to truly be inviting. I’m also mildly concerned that AI will grow impatient with us end users and spit out something we may not want and will simply say “deal with it 😎”

    • @robbrown2
      @robbrown2 Год назад +3

      Seems like an AI that is owned by a company that makes a profit would train it not to do as you describe, since that would drive people away. Chat GPT, in its current state, is incredibly patient, and that is one of its most striking and valuable features. I don't think that's an accident.

    • @robertfletcher8964
      @robertfletcher8964 Год назад +7

      @@robbrown2 GPT isn't patient, and doesn't think. All it does is propose the most statistically likely word that should come next given a user provided context.
      This isn't AGI its a predictive model. I'm not trying to be mean or critical, but you need to understand this if you want to use the tool efficiently

    • @metznoah
      @metznoah Год назад

      @@robbrown2 It will literally return the statistically next most likely token as soon as it is physically able. What is your definition of patient for this to meet it?

    • @sgramstrup
      @sgramstrup Год назад

      They won't write, but just discuss the final product with the AI while it builds it. No writing is needed/wanted for future programming.

    • @elawchess
      @elawchess Год назад +2

      @@robertfletcher8964 The way you've characterised it undersells it quite a bit by saying the stuff about "statistically likely". Don't forget RLHF (Reinforcement Learning with Human Feedback) where many undesirable styles the model might do are weeded out and the model is steered towards answering in a way humans prefer. You say it spits out statistically likely within user context but you seem to not be considering that part of that user context could be "patience", the very thing that you seem to be alleging that it can't do.

  • @simulation5627
    @simulation5627 Год назад +11

    It started interesting but it's just an ad for (another one) gpt wrapper.

  • @bilalarain4632
    @bilalarain4632 Год назад +5

    Welcome to the new erra of debugging.

  • @EnglishGeekWahoo
    @EnglishGeekWahoo 9 месяцев назад +1

    This is a good video for high school students to be careful when they want to go to college, they might not only think not to approach CS but to go to something that wont be replaced by AI soon. Our era is tough, and it has never been any easier.

    • @rogerh3306
      @rogerh3306 2 месяца назад

      This is a good marketing video for selling his own software by bashing programming and calling it annoying (47:25).

  • @jimpresser3438
    @jimpresser3438 Год назад +2

    Over time this professor is absolutely correct. I have been a developer since the late 1980s.

    • @hansyuan5186
      @hansyuan5186 Год назад +1

      Maybe it takes a developer from the late 1970s or the early 2020s to understand how this professor is wrong.

    • @dijoxx
      @dijoxx 10 месяцев назад

      I've been doing it since the 90s and I disagree with him.

  • @MikkoRantalainen
    @MikkoRantalainen 11 месяцев назад +7

    Great lecture! I've been writing code professionally for 20 years and I feel like Copilot is a the level of first year university student learning IT stuff. Not perfect co-worker, obviously, but much better than basic autocomplete in your IDE or some other tools you could use. I'm fully expecting to see Copilot rapidly improve so much that I write all my code with it. Right now, I feel that it can provide some support already and with fast internet connection, having it available is a good thing.
    Most of the time Copilot writes a bit worse code than I could do myself but it's much faster at it. As a result, I can do all the non-important stuff with a bit lower quality code that Copilot generates so I can focus my time on the important parts only. I'd love to see Copilot to improve even at the level that the easy stuff is perfect.

    • @ndic3
      @ndic3 11 месяцев назад

      Copilot is terrible though. Gpt4 is 50x times better. In comparison co-pilot is unusable
      Edit: number is obv made up from what it feels like

    • @MikkoRantalainen
      @MikkoRantalainen 11 месяцев назад

      @@ndic3 Can get GPT-4 integrated in your code editor?

    • @LionKimbro
      @LionKimbro 11 месяцев назад +5

      I’ve been programming for 40 years of my life. Professionally for about 24 years. I absolutely coding with Chat-GPT. But what people don’t get is that architecture still matters. You are still accountable for the code working out. You still need a picture of the system as a whole. You still need to get what’s going on. You still need to understand algorithms, you still need to be able to perform calculations on performance and resources. You still have to know stuff. You have to put the pieces together into a working whole. And the appetite for software is near infinite.
      I don’t think people quite get that.
      Chat-GPT can’t do it all for you, by a long shot. Chat-GPT is a great intern. But you can’t make Excel with even two hundred interns. Not even a thousand interns can make Excel. There are other problems.
      And I am not saying that one day we won’t have AIs that can fully replace competent programmers. We probably will- one day. But that day is not today, and it is not even tomorrow.
      What I tell young people who are afraid, “but will there even be programmers in ten years?” I tell them, “maybe not, but I can tell you this: It has never been easier to learn programming, than it is today. You can ask anything of Chat GPT, and it will answer for you. If you know one programming language, you can now write in any programming language. The cost of learning to program has dropped incredibly. And the money is right- right over there.”

    • @edwardgarson
      @edwardgarson 10 месяцев назад

      ​@@ndic3Copilot is based on GPT-4

  • @1dosstx
    @1dosstx Год назад +5

    38:17 what is considered kid safe? Based on what milestones? Emotional ? Psychological? Etc? You need to know what child development sources are peer reviewed , etc. yes you could ask the AI for those but then you’d need to ensure they were not hallucinations. Etc.

  • @ergestx
    @ergestx Год назад +17

    The speaker here is pushing for a paradigm of “LLM’s as a compute substrate” and English as a programming language” which I definitely see the value of. Certain programs would be easy to express in English but nearly impossible to program using traditional languages. Of course the paradigm does happen to benefit his startup but to claim that this will spell the end of software engineering as we know it is absurd.
    First of all this requires disregarding decades of research into system design principles which call for modularization and separation of concerns, in order to make systems more legible, easier to debug, easier to maintain. I wouldn’t want key operational software that’s an inscrutable black box that requires “magical” phrases to do the right thing.
    Just because an LLM is writing the code this doesn’t invalidate the need for proper design. Software engineers are taught design principles for a reason; not just to make their code easier read, and understand by humans, but also to make it easy to debug, extend and adapt.
    Second, just because it’s easier to program now using just English it doesn’t mean that software engineers are no longer needed. How would you evaluate the correctness of the software generated by the LLM? How would you improve its performance? That requires understanding logic, probability, algorithmic complexity, algorithmic thinking, and a plethora of other software engineering skills taught in college.
    In my opinion it makes the need for highly trained engineers even more important

    • @janekschleicher9661
      @janekschleicher9661 11 месяцев назад

      Indeed, especially as we already have at least 2 completely (very close to plain) english programming languages around for > 50 years that are widely used: SQL and COBOL.
      For small examples, both are great to write, understand and efficient.
      But for real world problems, both are complicated, hard to understand and need a computer science education (for at least to some extend), to get your job done.
      We even deprecated COBOL what is as close as possible to english, especially as it gets very verbose and so harder to understand again compared with more formal languages.
      The problem is not writing the code, but being explicit enough so you really get what you want. And independent of technical constraints, the requirements engineering still is engineering and even if the output is plain english, just read any formal document and you'll find out, it's not simple english. That's true even for non engineering, like law, standardization documents, pharmaceutical documents or to come back to programming RFCs.
      There's probably a reason, why the presenter didn't show the prompt to write Conway's Game of Life via ChatGPT that doesn't involve using external knowledge already. Once you have to define it accurately, it's probably not much shorter than the Fortran or BASIC example and might even be less readable than the Rust version he showed there. The usual text book description are either using images to explain what's going on (what won't work in general), or they just describe it mathematically and would be 1:1 to the APL version he presented. It just sounds so easy as we are used to the concept, but what is a cell, what is a neighbar, how big is the sheet, when does the game end, what does a round mean, what is the initial state, what does it mean to survive, or create new life, or how is it outputted and what do we optimize for? None of it is just trivial to explain unless the concepts are already known (Conway create a game for mathematicians), but in general for most programs, the concepts are not known.

  • @ChetanVashistth
    @ChetanVashistth 10 месяцев назад

    Questions in this lecture are very interesting. Even better than the whole lecture.

  • @alexforget
    @alexforget 7 месяцев назад +1

    More data and transistors will help, but I think that better algorithm will help way more.
    We are continualy rebuilding the same thing and letting them unused.

  • @frankgreco
    @frankgreco Год назад +4

    46:36 "No one understands how large language models work"... back in 2008, no one understood how derivatives worked.

  • @anuradhawick
    @anuradhawick Год назад +32

    It’s very likely that AI startups will get replaced by OpenAI products for a while until the tech saturates.
    I think we could do most of the donut demo with what OpenAI announced a few days ago.

  • @nmg8225
    @nmg8225 11 месяцев назад +5

    Give credit to the people that coded copilot, chat gpt etc; now to seamless to use these LLMs but behind the scene are still the coders, the statisticians, the scientists, the engineers optimizing these models. You have to know both; how to code and how to use the models.

    • @LuisFernandoGaido
      @LuisFernandoGaido 10 месяцев назад

      Exactly what I think. A SWE needs write code explicity and write models to get solutions implicitly. None of these tasks seen to desapear in the future.

    • @jakelake-u1q
      @jakelake-u1q 9 месяцев назад +1

      @@LuisFernandoGaido he never said they will. he just said the way software development happens will change drastically. it already has actually everyone at my job uses copilot

  • @TheGamerDad82
    @TheGamerDad82 Год назад +23

    Well, generative models might eventually replace some software engineering interns at companies but as a lead developer / architect I don't see my job endangered yet.
    Software development and designs is not only about writing code. Writing code is the easy part - understanding the problem, both functional and non-functional requirements, the operating circumstances and making design decisions and compromises when needed is a whole different dimension.
    I can already see a lot of startups failing miserably by trying to develop software with a few low cost developers armed with some generative AI tool. This is like "we don't need database experts, we have SQL generators" all over again... 😂

    • @bdjfw2681
      @bdjfw2681 Год назад +2

      true dude

    • @sgramstrup
      @sgramstrup Год назад +4

      Doctors are also claiming they can do more, but AI have already beaten top doctors in diagnosing certain illnesses. I think you'll wake up very soon. No offence oc..

    • @farzinfrank2553
      @farzinfrank2553 Год назад

      I agree with you. Its making the coding much easier but analysis is still a challenge

    • @martinkomora5525
      @martinkomora5525 Год назад +6

      @@sgramstrup so would you undergo surgery operated fully by AI tomorrow?

    • @Linters-uh1kk
      @Linters-uh1kk 11 месяцев назад +2

      These were my thoughts too... I recently started learning full stack. I don't think Dr. Welsh understood fully the way LLM works and how reliant they are on humans. Any reasonable business should feel worried if a "code monkey" was writing random lines without a way to know specifically what was happening. Problems of the future are likely related to security, not necessarily deploying code that works. We need developers with experience and actual understanding of the code and how it interplays with the system. Other comments above mention programming languages with specific use cases such as memory, NOT necessarily human readability. This reminds me futurists who believed teachers and instruction would be outright replaced by multimedia in the 60's and 70's. The Clark and Kozma debates are a famous example of this. I wonder how many people dreamed of being a teacher and gave it up from fearmongering? The fact is context is everything. Humans are making the context, and we will be doing so for a long time. A threat to this is AGI, not a brain in a jar which is generative AI. If I were in computer science I would take what Dr. Welsh says with a grain of salt. Instead, think about what kind of problems are going to be introduced with AI and understand it as deeply as possible. With every innovation, new problems are born.

  • @matthewrummler
    @matthewrummler 10 месяцев назад +1

    I'm putting this here as a note for myself (I'll see if that works).
    POINTS REGARDING HIS "IMPOSSIBLE" ALGORITHM (no I don't think he literally means impossible):
    1. The AI is not a simple algorithm itself
    - The AI can not be summarized as an algorithm in the way someone would write one... the complexity is fairly expansive... even to setup the ML models
    2. Most of what he is asking would not be difficult for a reasonably simple program
    - Getting the title, etc...
    3. DO NOT "": This would be the default of a program
    - When he says DO NOT use any information about the world... does not mean do not utilize your predictive analysis it just means don't mix information in that is not in the transcript
    4. Summarizing is hard, a targeted predictive learning model IS probably the best algorithm for this
    - The only very difficult piece for a custom built program (including one or more algorithms to make this infinitely repeatable) IS the summarization
    So, my conclusion: Part of writing code well will, in the future, include targeted ML*
    (though my take is not monolithic, gargantuan systems like Open AI & Google produce... though those could be a good way to train a targeted ML model)

  • @me_souljah
    @me_souljah Год назад +24

    This feels like the Theranos equivalent of the future of software, it's all dreamville

    • @jwesley235
      @jwesley235 Год назад +15

      Tell me you don't understand what's going on in AI without saying you don't know what's going on in AI.

    • @me_souljah
      @me_souljah Год назад

      Sure, I know nothing JOn SNow.@@jwesley235

    • @AD-ox4ng
      @AD-ox4ng Год назад +6

      ​@@jwesley235how about you explain it to us then?

    • @calliped-co5mj
      @calliped-co5mj Год назад +3

      ​@@AD-ox4nghow about you do your own research.

  • @ZaidMarouf-q9e
    @ZaidMarouf-q9e Год назад +14

    That's a pretty funny and bold claim when a lot of AI systems can't count the number of words in a paragraph excerpt correctly.

    • @ksoss1
      @ksoss1 Год назад +1

      Can you? All the time? What would it take for you to do it perfectly each time? What would it take for the AI system to do it perfectly every time? Interesting times ahead...

    • @ZaidMarouf-q9e
      @ZaidMarouf-q9e Год назад

      @@ksoss1 As far as I'm aware, there looks to be a problem that chatbots seem to have where in terms of computational speed causes them to skip some instructions of code that's not too dissimlair when setting compiler execution speed to a certain level that results in some unwanted glitches like in assembly language programs via accidental instruction skips.

    • @juleswombat5309
      @juleswombat5309 Год назад

      You are referring to simple LLMs, the proposed architecture is LLMs+ Compute Tools (c.f. Calculators etc) Just as an normal human can answer 3x 9 =27 off the top of their head, they would need pencil and paper, or just use a a calculator, to answer what is 4567 x 2382?

    • @ZaidMarouf-q9e
      @ZaidMarouf-q9e Год назад

      ​@@juleswombat5309So, what does that make my testing of Bing AI's capabilites, built on top of OpenAI tech, in regards to a pretty simple task on a pretty short excerpt of word counting? Because I'm pretty sure Microsofts' proprietary AI app doesn't fall in the category of being powered by a simple LLM.

    • @juleswombat5309
      @juleswombat5309 Год назад

      @@ZaidMarouf-q9e It means you have not tested against an LLM combined with access to relevant tools.

  • @regularnick
    @regularnick Год назад +5

    19:26
    > "I've been coding whole day", but you threw away 90%
    Oh, that's pretty bold claim to say, that with chatGPT you will get correct code snippet first try, without any need to prompt it with like 20 more messages clarifying and making sure id doesn't confuse language, paradigm etc.
    You should not compare "clear code" of SWE with GPT tokens, because you are guaranteed to spend many more than ideal. Considering they are dirty cheap, this may not be the problem though

  • @OswaldoDantas
    @OswaldoDantas 11 месяцев назад +1

    Thought-provoking talk that needs to be taken with a serious amount of critical thinking. I personally have a different view about how programming will evolve and by no means I would ever agree with adding "The End of Programming" in a title or main message unless the objective is in short click baiting to a sales talk.
    Just as photography didn't kill painting and ai generated images won't kill photography, if you have to write your instructions in English or whatever other language and you already expect to be following some specific patterns to get the expected results, with some try and error in between, well, you are basically programming :)
    Dr. Welsh raises valid concerns about the evolution of programming and the nature of being a programmer or software engineer, although I beg to differ in the specificities.

    • @aqf0786
      @aqf0786 11 месяцев назад +2

      All I see is a English to targeted language compiler, where we don't know exactly how the compiler works... it doesn't seem like a good idea

  • @pradeepebey6246
    @pradeepebey6246 Год назад +10

    I think large language models are really cool, buts too much of black box. Sure there’s plenty of of use cases, as far as entirety replacing code, it needs to be customisable enough and consistent with its functionality. Not sure how that would be possible!

    • @codytownsend3259
      @codytownsend3259 Год назад

      I mean we've already almost got there. Won't be long. Context window is huge now

    • @silencedogood7297
      @silencedogood7297 Год назад +1

      You are restricting your options to computers as we know them, operating on limited versions of ones and zeroes. We cannot have true AI until we have bio-chips that operate like real brains.

    • @fappylp2574
      @fappylp2574 Год назад

      Most of tech is currently already a black box. I write mostly C++ and can't even begin to fathom how these modern optimizing compilers work (and I never will). Heck, even the V8-runtime is almost arcane to most people. Only very few exceptional human beings can understand and work on these systems, everyone else can start to look for toilet cleaning jobs.

  • @vinipoars
    @vinipoars Год назад +14

    I'm wondering if Fixie (35:00) hasn't already become obsolete with OpenAI's announcement on November 7th... lol

    • @ltnlabs
      @ltnlabs Год назад +3

      Exactly

    • @ranjancse26
      @ranjancse26 11 месяцев назад

      AI.JSX, who needs to learn in the era of AI lol

  • @andrebatista8501
    @andrebatista8501 Год назад +8

    If AI can write programs, it’d be able to substitute a lot of people, and not just on tech but on many fields, then we gonna have more efficient services but with so many people unemployed, who would pay for those services?

    • @compateur
      @compateur Год назад +4

      This is a very interesting question. Take it to the extreme: LLMs are able to take over any job. What makes live worthwhile? Can ChatGPT enjoy the first sun ray that warms up its AI chip, does it enjoy the tranquility of Nature, can it enjoy the soft sea breeze, can it get excited about new discoveries? What makes the heart of ChatGPT tick? Does it have a heart? Sometimes we forget that we are multidimensional creatures. Maybe we have to come up with a complete new model for society. We have to redefine ourselves.

    • @-BarathKumarS
      @-BarathKumarS Год назад +1

      @@compateur dude seriously,think about it! One of my friends works as a consultant and another one works as an accountant at top firm,i have personally looked at the kind of work they do which at the end of the day is the most brain numbing manual repetitive task that i have ever seen...to put it pluntly an high schooler can do their job well enough.
      What will happen to these people then?

  • @cityofmadrid
    @cityofmadrid Год назад +9

    Why hasn’t the “lecture” started saying “today we are gonna have my buddy which has an AI for programmers startup”, it would have saved me an hour of this info-commercial

  • @JaredArms
    @JaredArms 4 месяца назад

    Love this video, he's thinking ahead of the curve.

  • @lunarjournal
    @lunarjournal 9 месяцев назад +1

    Good presentation. I particularly like when he used Rust as an example of bad language design.

  • @abdulshabazz8597
    @abdulshabazz8597 Год назад +4

    We must move forward with the advanced computational and reasoning capabilities these software models affords us, but we cannot move forward with these black box models which have no formal method of verification or "instruction manual", so to speak. These models should be considered idle malware. I mean imagine: these advanced advanced models and models like these in our appliances, our aircraft, and our ground transportation systems which cannot be verified yet behave properly 99.99 percent of the time yet cannot be actually Verified correct...

  • @troyhackney148
    @troyhackney148 Год назад +10

    Sir... This is a Dr. Donut.

  • @MaxNerius
    @MaxNerius Год назад +3

    > It's 2023, and people are still coding in C -- that should be a federal crime
    Not because it's their language of choice, though. Think embedded systems: Even if you want to use Rust or any other language with training wheels on it (metaphorically speaking), the platform you're developing for may not be a targeted by it. Or worse, maybe your toolchain needs to meet certain criteria to pass a regulatory body of sorts.
    Disclaimer: I'm not writing this because of confirmation bias or me being an offended C programmer (I'm working with Java). Please don't get me wrong: I understand that Dr. Welsh didn't intend to oversimplify things, though he generalizes a bit too much imho. It's putting a whole industry in a really bad light and it's just like saying: "if using C is bad because bad behaving C programs have killed people, then, by this logic, we shouldn't be riding trains or going by car anymore".

  • @thecasualengineer99
    @thecasualengineer99 11 месяцев назад

    I have tried it for a few days and a job that needs 2-3 days became 4 hours for the first pass code. very nice.

  • @mrthanhca
    @mrthanhca Год назад +1

    Thank you for the information; it's very useful.

  • @davidsmind
    @davidsmind Год назад +7

    "react for building llm applications"
    I cackled for about a minute

  • @EyeIn_The_Sky
    @EyeIn_The_Sky Год назад +7

    Guy introducing him: "Hey Kids, this guy is going to make sure that the cripppling debt that you and your parents undertook to send you to college was all for absolutely nothing thanks to his AI"

  • @alfonsobaqueiro
    @alfonsobaqueiro Год назад +7

    Programming is challenging, beautiful, fun, and makes you think like a machine.

    • @juleswombat5309
      @juleswombat5309 Год назад

      That's great as a hobby like fishing. But if your boss cannot afford to employ you, as AI tools means he only needs to hire a few staff, then you will not make a living from coding. Adapt to exploiting these tools if you still want to make a living in the computer industry.

  • @Koyasi78
    @Koyasi78 11 месяцев назад

    This is why i minored in philosophy. Computer science is applied philosophy. The real ability is thinking logically and understanding the human mind and what it is you want to create. Thinking clearly. My personal opinion... When you create something and don't know why it did what it does but does so consistently is because you stumbled upon an equation of nature, that is some fundamental way nature works. In this case human nature. Computer science has always been a funny term. How can there be a science of the computer which is not a natural phenomenon. The science of computation or how to calculate. I find it fascinating that giving chatgpt a personality like you would an actor and shaping a narrative works. But we do this as people everyday going through the different aspects of ourselves depending on the circumstance. So excited for the future of the field.

  • @ianmyers5784
    @ianmyers5784 11 месяцев назад +1

    2.5 years proffesional software dev here currently developing Trichotillomania.

  • @coltennabers634
    @coltennabers634 Год назад +5

    19:00 Lines of code is a vanity metric that does not translate to value... this guy is definitely in management

  • @aungthuhein007
    @aungthuhein007 Год назад +37

    It's nice of David to let the students have a taste of silicon valley's sensationalism and the outlandish "predictions" of where the future is headed. "This is the only way everyone will ever interact with computers in the future." Even if that turns out to be true, it is soooo far away from the real world right now that it doesn't take a real computer scientist to realize this is delusional. That's not even to mention the question of whether or not we *should* be heading in that direction as a society. Not much more than silicon valley's way of raising funds for more products/services, the vast majority of which fade away after some time.

    • @bdjfw2681
      @bdjfw2681 Год назад +2

      feel the same. i just think ai is dump and keep dump in at least 100 years, or longer , not in my life time or even not before human extinct will ai become that smart. maybe only advanced alien can actually build that levels of ai.

    • @hamzamalik9705
      @hamzamalik9705 Год назад +10

      5 years down the line your comment will seem silly !

    • @bdjfw2681
      @bdjfw2681 Год назад

      if 5 years later AI could be so powerful that my comment seem silly , i am actually happy with that. i do hope tech advanced fast but at the same time Very pessimistic about the speed of technological development@@hamzamalik9705

    • @devsquaredTV
      @devsquaredTV Год назад +3

      what floored me was his claim that no one could write an algorithm in a programming languauge that is equivalent to his prompt string.

    • @user-oz4tb
      @user-oz4tb Год назад +1

      For real, I am on my 2nd big tech job since the ChatGPT rise and of all my team members I am the only person who uses it.
      In production i saw some ML models in:
      - adtech for improving ads suggestions. They were there for more than last 6 years, long before the "AI will do everything soon" hypetrain. They were, as i've said, only improvements above the not ML written ad rotation core and didn't generate much money for the company at all.
      - security SIEM systems used for threat detection on users laptops, but in reality it was doing more harm than profit, like banning our git-lfs executables, lol.
      - I saw some LLAMA model, trained for a company internal domain (code, wiki etc), but its usefulness was a joke, to be honest.
      Also I saw an arise of infinite amount of startups with AI solutions for everything after the experts started to promote "Everything as a model" idea. They were trying to solve with ML such problems which never required an ml solution. Looked like every startup, which used to be a crypto startup now is an AI startup or has something from AI word cloud in its name.
      I see all the experts predicting obsoletion of software development as a job in 5-10 years, but I see literally close to none signs of GPT models in production, left alone profit from its usage. Maybe it is used widely in another tech domains? Maybe in 5 years situation will drastically change? Well, maybe, who knows. But now for me it does not look like more than another race for a venture capital.
      P.S.: oh, yeah, ChatGPT-4 is insanely good for catching missing Lisp parenthesis, btw.

  • @kanji_nakamoto
    @kanji_nakamoto Год назад +5

    Best talk in CS so far in 2023!

  • @stephene.robbins6273
    @stephene.robbins6273 Год назад +1

    Two things, speaking from 35 years of banking-software programming: 1) code reviewers are only as good as their expertise (and in years!) in the language (and business functionality). If AI removes all opportunity for experience in the language, where does this expertise come from? 2) The business function knowledge side of the business now subsumes the entire burden of the required specifications to the AI - an enormous effort. How long before we try to automate that? An infinite regress is arising here....

  • @Amanjotsinghg
    @Amanjotsinghg 8 месяцев назад

    Dry audience, really enjoyed the the talk and the gags