This video will change your mind about the AI hype

  • Published: Nov 20, 2024

Comments • 2.4K

  • @NeetCode
    @NeetCode  4 месяца назад +1227

    A few more points I didn't mention in the video:
    1. A day after I uploaded this video I saw tech bros on twitter saying "all you need is Claude" and you can code almost anything... yet they couldn't even recreate a basic component on neetcode.io that I literally coded as a junior engineer. So once again, people are vastly overstating what AI can do. If only this hadn't happened in human history a million times before.
    2. Amazon invested billions in Alexa, only for it to be obsoleted by LLMs. I worked in Alexa (for a brief time) and it's obvious it wasn't well run. Big tech doesn't always know what they're doing.
    3. Amazon Go's "AI" turned out to be Indian workers watching security cameras.
    4. Nearly every advancement goes through hype cycles. Not just the dotcom bubble, even the railroads were overbuilt in the 1800s. "The Panic of 1893 was the largest economic depression in U.S. history at that time. It was the result of railroad overbuilding and shaky railroad financing, which set off a series of bank failures."
    Fwiw I literally use LLMs on a daily basis to automate my own tasks. Yes it helps somewhat, but I'm also very familiar with its limitations.
    If you disagree with me you may very well be right. But at least give me your best argument :)
    Sources:
    - ruclips.net/video/U_cSLPv34xk/видео.htmlsi=Czh2GAG1wfVxjfhD
    - x.com/swyx/status/1815053785548661128
    - arstechnica.com/gadgets/2023/11/amazon-lays-off-alexa-employees-as-2010s-voice-assistant-boom-gives-way-to-ai/
    - www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4
    - en.wikipedia.org/wiki/History_of_rail_transportation_in_the_United_States

    • @ronak4489
      @ronak4489 4 месяца назад +1

      Lol wut. Can’t recreate a basic component from your shit shilling website?

    • @szebike
      @szebike 4 месяца назад

      I fully agree, the hype still surprises me. And don't worry, even when I post a comment and question the capabilities of current AI approaches, some tech bro fanboys emerge and say "this didn't age well" when an LLM gets an update or they see one of those (faked) tech demos. If anyone actually works (or rather tries to work) for, say, 5 hours on something more serious and accurate, they will very quickly learn about the (crippling) limitations. I can't trust an "assistant" if it doesn't know what it doesn't know and "hallucinates" outright false information instead. These programs have their use cases, but they're neither transformative nor money-making at the moment, and probably not in the near future.

    • @bbom9197
      @bbom9197 4 месяца назад +38

      Please do "this will change your mind for quantum computing" plzzzz

    • @ajlee1216
      @ajlee1216 4 месяца назад +20

      Thanks for such a cool headed commentary on this subject!
      Would be lovely to see a video on how you leverage LLMs to automate your tasks.
      Please keep up the great work!

    • @paulocacella
      @paulocacella 4 месяца назад +46

      I think that you, like a lot of devs I know, are underestimating what is going on. I've worked in IT since the '80s and in AI for the past 8 to 10 years. The first mistake is to think that a single LLM is the benchmark of AI's evolution. It is not. By design, LLMs can only give you reasoning; the information will always be imprecise. But that is not the way things are heading. When you start to use agentic and tooling systems, it's a whole new world. An LLM is only one tool in a system that can achieve better-than-human performance and cost effectiveness. The simple LLMs we can run locally today are already capable, if you use the right techniques, of things no single LLM could dream of.
      The next level is specialist systems: the ones designed to replace humans in complex tasks like programming, engineering, medicine, etc. You may ask why you are not seeing this now. The first reason is that you need knowledge. The vast majority of young devs and AI practitioners have no hint of the knowledge an engineer or a physician needs to do the work. That is why people keep checking useless LLM benchmarks: they want somebody (AGI) to solve problems for them that they are not even capable of asking. The second reason is that these systems are difficult to develop. We have plenty of people capable of developing solutions, but only if someone else specifies exactly the problem and the expected solution; the hard part is not the dev work, it is the necessary business knowledge. The third reason is security. Big companies are certainly developing this kind of solution for themselves to gain efficiency, and very likely this will mean white-collar layoffs. I expect a huge impact on the job market within 1.5 to 4.5 years. Security of the company's knowledge is paramount; these systems will not be marketed, they will be the company core.
      I understand that a lot of people do not agree with what I am saying, but I have a lot of experience in all these areas. What we are seeing is a revolution in the making, not because of entertainment things like SORA or voice mimicking, but because these tools will have a huge impact on our lives. I think the IT market will shrink after a short-term bump; a large part will be automated. These systems will lay off a lot of professionals, because the efficiency gain will be absorbed by the companies, not the employees. If you have 10 people, that will become 8, 6, or even fewer. The clock is ticking; there is no way out. No, it will not take 20 or 30 years. We are talking about a span of time that is fatal for young generations. In five to ten years the world will be alien for people with university degrees and professionals who do not work with material things (like surgeons or field engineers). If the work is purely intellectual, it will be replaced.

  • @okaytokay
    @okaytokay 4 месяца назад +3931

    I hate this hype economy

    • @3breze757
      @3breze757 4 месяца назад +206

      you mean capitalism?

    • @juanmacias5922
      @juanmacias5922 4 месяца назад +113

      @@3breze757 capitalism go brrrr, until no one can afford it.

    • @roymarshall_
      @roymarshall_ 4 месяца назад

      @@juanmacias5922 what's great about socialism is that nobody affording anything becomes the default

    • @mistycloud4455
      @mistycloud4455 4 месяца назад

      AGI Will be man's last invention

    • @p3num6ra
      @p3num6ra 4 месяца назад +57

      we've been in an attention economy for a long time now

  • @JiwaChhetri
    @JiwaChhetri 4 месяца назад +5290

    Bro just solved the "should i drop out of college" problem in O(1) time complexity

    • @readdaily5680
      @readdaily5680 4 месяца назад +68

      Computer Engineering

    • @oscarcharliezulu
      @oscarcharliezulu 4 месяца назад +12

      Hilarious

    • @Chronomatrix
      @Chronomatrix 4 месяца назад +13

      Good one.

    • @nasamind
      @nasamind 4 месяца назад +4

      😂

    • @cagedgandalf3472
      @cagedgandalf3472 4 месяца назад +128

      I read in a book somewhere a question to help with decisions: "Imagine you are 90 years old right now, looking back on this decision. Would you regret it?"

  • @richjohararar
    @richjohararar 3 месяца назад +1375

    "If you wish to make an apple pie from scratch, you must first invent the universe" ... pure gold

    • @MagnumCarta
      @MagnumCarta 3 месяца назад +48

      Don't eat from that tree, Adam! I need those apples for my apple pie.

    • @LivingBreathingRedFlag
      @LivingBreathingRedFlag 3 месяца назад +18

      That bit from Carl Sagan was used in the first stanza of Glorious Dawn by Melodysheep :) Lovely

    • @VolodymyrPankov
      @VolodymyrPankov 3 месяца назад +3

      That moment really got to me. Very on point.

    • @takashimurakami3560
      @takashimurakami3560 3 месяца назад +5

      @@VolodymyrPankov nobody gives a fuck

    • @VolodymyrPankov
      @VolodymyrPankov 3 месяца назад

      @@takashimurakami3560 on you and on the fact that you are a senseless biological non-entity.

  • @kietvo96
    @kietvo96 3 месяца назад +389

    "Google has a shit ton of money and they are not giving it to their employees" delivered blankly is peak dystopian humor.

  • @zyfigamer
    @zyfigamer 3 месяца назад +559

    Someone asked me (SE) if I was worried about my job being automated. I told them, no, because if a machine could do my job, then it could also make a better version of itself. That's the singularity.

    • @urhot
      @urhot 2 месяца назад

      That’s simply not true lol. Coming from an engineer

    • @LuckyLucky-pc3tz
      @LuckyLucky-pc3tz 2 месяца назад +12

      You're forgetting it's sentient and improves on its own.

    • @zyfigamer
      @zyfigamer 2 месяца назад +144

      @@LuckyLucky-pc3tz no I get that part. I'm saying there won't be any jobs after that.

    • @nasseq
      @nasseq Месяц назад +7

      @@LuckyLucky-pc3tz whoosh

    • @triplebeam23
      @triplebeam23 Месяц назад +3

      Its still taking your job though 😂

  • @akashverma5756
    @akashverma5756 4 месяца назад +1578

    Celebrity CEO's job is marketing.

    • @fx-studio
      @fx-studio 3 месяца назад +25

      Their prime role is acting.
      Acting the part of a CEO.

    • @Rein______
      @Rein______ 3 месяца назад +8

      Musk

    • @fx-studio
      @fx-studio 3 месяца назад

      @@jamad-y7m " She seems to have gone from an intern to a Senior Product Manager at Tesla...then a couple jumps later and suddenly she works as an AI researcher...or from other sources as the VP of Applied AI and partnerships...with no educational background in AI. She must be extremely brilliant, but I am just dumbfounded on how quickly she went up the ladder. "

    • @manonamission2000
      @manonamission2000 2 месяца назад

      @@jamad-y7m the interview on Bloomberg TV was worth a watch

    • @OfficialCANVAS
      @OfficialCANVAS 19 дней назад

      Steve Jobs pioneered the marketing CEO. Everyone forgets he couldn't even write one line of code.

  • @_mippi_
    @_mippi_ 4 месяца назад +320

    "Fake it till you make" it the philosophy a lot of start up companies follow because its the only way to get financial support to gain the resources they need to propel themselves
    This is commonly seen in the Bay Area at places such as Stanford, Berkeley, SF. .. etc

    • @AL-kb3cb
      @AL-kb3cb 4 месяца назад +17

      It doesn't matter if AI can or cannot replace engineers. They are still going to be fired, and the software engineers remaining will be doing triple the work, working every weekend to compensate for the fired ones. Yes, they will do it because they will be driven by the fear of being fired and replaced by another engineer. If AI can actually replace jobs it's just a bonus, it's actually not necessary.

    • @bmanpura
      @bmanpura 4 месяца назад +3

      Every startup has to take some big risks, and the phrase "fake it till you make it" is usually spoken by the already successful. The whole story behind anybody's success is often far more complicated.
      That being said, taking risk is bad for neither the business nor the customer when handled properly.

    • @jeffsteyn7174
      @jeffsteyn7174 4 месяца назад +3

      And the philosophy of Elon😂

    • @bramdoe3303
      @bramdoe3303 3 месяца назад

      Almost like California is a cancer

    • @GreatTaiwan
      @GreatTaiwan 3 месяца назад

      @@AL-kb3cb This is what matters most, and the most practical explanation of our current situation

  • @hamzarashid7579
    @hamzarashid7579 4 месяца назад +2475

    When Devin first came out, there was a job opening for a developer on their official website.

    • @mrkike7343
      @mrkike7343 4 месяца назад +15

      Bro what is your educational qualification?

    • @hamzarashid7579
      @hamzarashid7579 4 месяца назад +281

      @@mrkike7343 Ain't got time for shit

    • @TianYuanEX
      @TianYuanEX 4 месяца назад +375

      @@mrkike7343 Why does his educational qualification have anything to do with what he said lmao

    • @tanvirahmed7993
      @tanvirahmed7993 4 месяца назад +3

      lol

    • @TehCourier
      @TehCourier 4 месяца назад +6

      lmao that's funny

  • @Mpmb3022
    @Mpmb3022 3 месяца назад +91

    As a network engineer, I remember as far back as 2017 there was hype about automation systems and Software-defined networks taking over and that network engineers would be obsolete.
    7 years later and I've only seen the requirements increase for engineers, just that the technology stack has been changing. So not falling for that hype again.
    Just get good at what you do and explore how to use AI and new tech in your workflow.

  • @MRlreable
    @MRlreable 3 месяца назад +178

    I have a master's in computer science focusing mainly on machine learning, and I was furious back when GPT-3 launched and everyone (even people I respect in the field: Fireship, Nick Chapsas, etc.) was losing their minds about AI taking people's jobs in a year or so. It's good to see that now most of the knowledgeable people are actively settling the discussion and pointing out the false claims. Glad that you do that too!

    • @the3Dandres
      @the3Dandres Месяц назад +8

      Well people are finally realizing that the improvement in LLMs is really stagnating. The jump between gpt2 and 3.5 was huge, the jump between gpt3.5 and 4 was impressive but not of the same magnitude. And right now we don’t see that big of a difference anymore between each new generation so the hype is finally slowing down!

    • @seer775
      @seer775 Месяц назад

      @@the3Dandres I'm pretty sure that comes down to the decelerating growth in the model parameters. GPT-3 already used thousands of GPUs and months of training time.

    • @the3Dandres
      @the3Dandres Месяц назад +1

      @@seer775 Partially, but also consider that a doubling in parameters does not result in a doubling of output quality, so this curve is really flattening out. After all, it's still just matrix operations, not real intelligence.

    • @seer775
      @seer775 Месяц назад

      @@the3Dandres depends on what task you're measuring on. Complex tasks will see more ROI with more compute, but you will eventually hit a ceiling in terms of how much compute you can provide.

    • @yourlogicalnightmare1014
      @yourlogicalnightmare1014 29 дней назад

      AI is a great toy for making memes and writing blogs. Its entirely useless for 99% of things I need it for. I doubt it will ever be more than a novelty. I doubt fsd or humanoid bots will be a thing in the next 5 years

  • @ddosan4108
    @ddosan4108 4 месяца назад +1252

    The 99% applies very hard to the software development part.
    Dev codes, it fails, Dev fixes.
    AI codes, it fails, Dev has to go through an entire codebase of generated code to fix …
    If you push the idea to its paradox: In a world where AI produces most of the code, fewer and fewer Devs would be able to fix errors which would increase the cost of failures => failure rate goes down, failure cost goes up.
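
    One way to make that trade-off concrete, as a toy model (an illustrative assumption, not a measured result): write the expected cost of failures as the product of how often they happen and how much each one costs to fix.

```latex
% Toy expected-cost model (illustrative assumption only)
\mathbb{E}[\text{cost}] = p_{\text{fail}} \cdot c_{\text{fix}}
% AI-generated code may push p_fail down (fewer visible failures),
% while pushing c_fix up (fewer devs who understand the generated codebase),
% so the expected cost can stay flat or even grow.
```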

    • @please7959
      @please7959 4 месяца назад +13

      So true

    • @yalnisinfo
      @yalnisinfo 4 месяца назад +20

      Is this the same for legacy code, now that almost everyone is programming in high-level languages (including C)?

    • @adadaprout
      @adadaprout 4 месяца назад +65

      That's not how it works. I've produced big chunks of code with LLMs and I know the code as if I wrote it myself. The flow is rather: Dev+AI code, it fails, Dev+AI fix.

    • @TragicGFuel
      @TragicGFuel 4 месяца назад +54

      @@adadaprout how much real world experience do you have? Could I look at your github?
      Sorry if this comes across as hostile, but I am genuinely curious.

    • @adadaprout
      @adadaprout 4 месяца назад +14

      @@TragicGFuel Yes I code in real world, in what other world can we code than the real world ?

  • @Chronomatrix
    @Chronomatrix 4 месяца назад +697

    Many now claim AI is overrated and all that, but I'm pretty sure the hype was just a collective misunderstanding of what this 'AI' actually is. I think people expected 'AI' to suddenly change all paradigms; they were misled by the media and youtubers and the explosion of 'AI apps' (which are mostly based on GPT API calls). The Distributed System example is a great one.

    • @TechnoMageCreator
      @TechnoMageCreator 4 месяца назад

      The same happens with every new technology. When blockchain came out, the first waves were the grifters, and buried inside was the 1% of truly useful projects (it's a tracking technology). I grew up in Romania in the early '90s. We went from one single bank (CEC) to a decentralized system, and we had many scams and national pyramid schemes (the main cause: lack of education). The same is happening with AI, just much faster, and I feel that because of the disconnect from information and the amount of distractions, only a small percentage of the world's population really understands what AI brings, while 99% are witnessing the grifting part of the AI hype. The big problem I see is actually a tsunami coming, if you really understand the level of AI compute we currently have.
      For example, the price per token for top AI models is down 99% in just two years. Not to mention there are plenty of open-source models that work relatively well on any machine at this point.
      AI is a tool. Anyone can build and create now at a lower cost than ever. To generate software, the most important part is to actually have the idea and be able to communicate it. Not good at communication? No worries, AI can help you with that too. You want AI to make a plan for you? Done. Anyone can create almost anything at this point... The more context about yourself, your dreams, and your current skills and assets, the more it can help you achieve your goals. As long as you feed in information properly, the results can be phenomenal for any individual from any corner of the earth.

    • @JeffCaplan313
      @JeffCaplan313 4 месяца назад +10

      I feel the same about the 2nd coming of J.C.

    • @worldspam5682
      @worldspam5682 4 месяца назад +56

      It's not a collective misunderstanding, but plain stupidity. People love to overhype things they don't understand. Not only that, but on top of it they love to act as if they actually know what they are hyping.
      I hate it when kids on youtube do it. Like that time some minecrafter made "an AI" from command blocks, when in reality it was just a pathfinding algorithm with an overcomplicated memorization process. But I can't really hate those kids, because I know their fathers are doing exactly the same with Elon Musk's persona.
      Hyping over AI being overhyped will be the next step for sure, so people can jump on the wagon of diminishing AI hype and start something actually useful 😂

    • @drno87
      @drno87 4 месяца назад

      It was a disinformation campaign led by people who stood to profit, actively supported by academics who wanted to get in on the action, and disseminated by a willfully uncritical media. The general public never stood a chance.

    • @flor.7797
      @flor.7797 4 месяца назад

      The end of the world: an AI so powerful it can't be controlled, not even on a remote island

  • @wangchengyan2248
    @wangchengyan2248 4 месяца назад +548

    My dad and I are both software engineers, and our recent conversations have mostly been about AI because my dad's company started to replace part of his team with LLMs. The anxiety people have been having about AI is sometimes soul-crushing when it comes to your close ones, so I have to keep reminding myself that the content creators/companies intentionally hyping LLMs up make money doing that, and most of them never cared about where the tech is taking us. I am glad to see this vid.

    • @TheManinBlack9054
      @TheManinBlack9054 4 месяца назад +50

      It's great that this video helped someone! I do think there is a lot of hype around AI, but I also think the hype is not baseless, as there is real technological innovation underneath, similar to the internet in the 90s. There was a bubble and it was initially overhyped, but looking back now, some promises were certainly overoptimistic on a one-year horizon, yet right on the money over 20 years.

    • @wangchengyan2248
      @wangchengyan2248 4 месяца назад +4

      @@TheManinBlack9054 That's also true, imagination is still the foundation of innovation, keep learning and try staying educated!

    • @cock_sauce8336
      @cock_sauce8336 4 месяца назад

      I hope they will cry and seethe when they find out their automated "employees" don't generate any value and they don't have enough workers lmao.
      3 hours of work vs 1 hour debug
      VS
      1 minute of work, 6 hours of debugging, scrapping the code and doing the above.

    • @DanielFenandes
      @DanielFenandes 4 месяца назад +52

      There is no way any LLM today is replacing any software engineer...

    • @jasonthirded
      @jasonthirded 4 месяца назад +33

      ​@@DanielFenandes It can make software engineers more efficient and axe a lot of support roles

  • @justaname999
    @justaname999 2 месяца назад +47

    Such a good video!
    I did physics in undergrad and comp. neuroscience in grad school and am now working with a mix of researchers from various disciplines, broadly around cognitive science and evolution of human/primate cognition. I was never upset because my job was acutely endangered but it's been quite disheartening to listen to so many really very intelligent people get so on board with this hype to the point where it was ridiculous.
    I am very much on board with automating as much as we can. But the degree to which people were willing to believe it can do *anything* and would call more realistic assessments "unnecessarily negative" was crazy.
    When people who study neuroscience and infant development and (should) know how different human learning is from what LLMs do stand there and tell you that AI can replicate human-level cognition and will "soon" be able to randomly learn and synthesize knowledge with just a few more iterations of the models, I really am at a loss for words.

    • @DD-pm2vh
      @DD-pm2vh Месяц назад

      So what I understood is true? We are miles away from AI models that can learn granularly, "randomly"?

  • @demetrius.w11
    @demetrius.w11 2 месяца назад +24

    4:36 “Google is a monopoly. They have so much money they don’t know what to do with it. They sure as hell aren’t going to give it to their employees...” I subbed after that. 😂

  • @vasanthkannan3398
    @vasanthkannan3398 4 месяца назад +1024

    “They sure as hell aren’t gonna give that to the employees”

    • @mofumofutenngoku
      @mofumofutenngoku 3 месяца назад +27

      Google employees make fucking bank, while I am over here busting my ass making minimum wage. That statement was just inaccurate.

    • @xavandres
      @xavandres 3 месяца назад +86

      @nightshade8958 It's not really inaccurate; I mean, they do make bank, but that doesn't even compare to the numbers Google is supposedly worth.

    • @sladeTek
      @sladeTek 3 месяца назад

      ​@@mofumofutenngoku​except that it was accurate. Those google employees make bank compared to your brokeass but in comparison to what the company makes that's barely a quarter.

    • @BangMaster96
      @BangMaster96 3 месяца назад +9

      The Employees working for Google, Apple, Amazon, and Facebook are already making $200k/Year salaries.

    • @professormancaptain4210
      @professormancaptain4210 3 месяца назад +1

      4:35

  • @epistemicompute
    @epistemicompute 4 месяца назад +539

    As a ML Engineer, I hate the conversations we’re having around AI and ML and all the hype.
    ML is a good tool for a subset of problems, but it’s not the endgame of CS. At work, we do our best to find a deterministic solution first before we use ML.
    People think this tech should be used to think for them instead.

    • @AL-kb3cb
      @AL-kb3cb 4 месяца назад +28

      Being an ML engineer is not enough to make you some kind of authority on the subject; you're basically a data scientist, not a scientist from OpenAI or Anthropic.

    • @RoboticsOdyssey
      @RoboticsOdyssey 3 месяца назад +19

      As another "ml engineer", i would say that all human functions will be done better by machines, except those involving empathy, connection, or responsibility.
      if i have a robot that costs 5,000 and it has super human intelligence and types 200 WPM, why would i hire a human?
      i would basically only hire humans for front desk receptionist

    • @epistemicompute
      @epistemicompute 3 месяца назад +60

      ⁠@@AL-kb3cbI don’t think I’m an “authority,” but given that I understand and develop the algos and systems that utilize the algos, and often implement papers into code, I am educated enough to be able to discern bs from reality in my field.
      But on a side note:
      I have also done research in the field, which makes me think I am capable, but likely not competitive for research roles.

    • @epistemicompute
      @epistemicompute 3 месяца назад

      @@RoboticsOdysseyA good book to read is called “The Myth of Artificial Intelligence”.
      It talks about the fundamental reasons ML algorithms likely can’t completely replace humans even in cognition.

    • @cybervigilante
      @cybervigilante 3 месяца назад +16

      And ML still hallucinates, gaslights, lies, or refuses to cooperate at times. You should know enough about your problem-solution set, so you can see if a "solution" is dead wrong, without wasting time, money, or causing a disaster.

  • @DrPastah
    @DrPastah 3 месяца назад +384

    It's the No Man's Sky marketing strat lmao
    Hype your crap, then actually finish building and delivering it a decade after the promised date.

    • @hundvd_7
      @hundvd_7 3 месяца назад +56

      They already delivered everything after like a year or two.
      They have made up for the overpromises like five times already.
      If AI is anything like that, then we are getting superintelligent AGI in 2030

    • @vitriolicAmaranth
      @vitriolicAmaranth 3 месяца назад +3

      NMS eventually had the bare minimum to technically meet the pre-release claims that could be verified or easily quantified (e.g. now it has multiplayer!! Wow!!), and then paid a popular content creator to make an hour-long video essay hyping up that update as way more than it was. Just like the other guy said, that kind of thing is already happening with AI!

    • @Etcher
      @Etcher 3 месяца назад

      Haha so true! I'm stealing that 🤣

    • @hundvd_7
      @hundvd_7 2 месяца назад +4

      @@vitriolicAmaranth You are crazy if you think they haven't overdelivered on _everything_ they promised by now.
      I still think the game is kinda boring, but they've done everything and more to support it, and to even surpass the overblown initial expectations.
      If that's the kinda game you're looking for, NMS is it. No asterisks. It's just it.

    • @hundvd_7
      @hundvd_7 2 месяца назад +8

      You should much rather call it the Cyberpunk strat.
      It's a good game now, but it is _currently_ in the state that it should have released in. (even slightly behind in certain areas)

  • @msinaanc
    @msinaanc Месяц назад +5

    One of the most sane videos I have seen about AI. I talk about these things with my friends, but your arguments are rigorous and reasonable. And again, the Oscar goes to the hype economy. I think people are still trying to figure out a way to live with all this communication. We are being bombarded with connection, but we mostly use it to fool the others on the line so we come out better off in the equation, which breaks us all. In the end we are all hungry for security and trust.

  • @gerardoricor
    @gerardoricor 4 месяца назад +293

    While AI hype can be misleading, real advancements are undeniable. DeepMind's AlphaFold, for example, revolutionized biology by accurately predicting protein structures. As a software engineer, I use multi-agent systems to automate tasks efficiently. These tools show AI's practical benefits beyond exaggerated claims.

    • @seva4411
      @seva4411 4 месяца назад +31

      Totally agree. AlphaFold is a perfect example of how amazing it is and how about these AI Chatbots that you can talk to that are indistinguishable from a human? That’s “Her” from the 2014 sci fi movie that’s sci fact in 2024 and this rate of improvement is exponential.

    • @LuisManuelLealDias
      @LuisManuelLealDias 4 месяца назад +44

      @@seva4411 These chatbots are really cool and cute and also, extremely useless. I mean, they have their uses, but it's almost decorative. They don't substitute anyone's work. At best, they can serve as useful learning tools.

    • @seva4411
      @seva4411 4 месяца назад +15

      @@LuisManuelLealDias They will soon serve as companions and mentors in many ways and will be far from just decorations.

    • @AL-kb3cb
      @AL-kb3cb 4 месяца назад +18

      It doesn't matter if AI can or cannot replace engineers. They are still going to be fired, and the software engineers remaining will be doing triple the work, working every weekend to compensate for the fired ones. Yes, they will do it because they will be driven by the fear of being fired and replaced by another engineer. If AI can actually replace jobs it's just a bonus, it's actually not necessary.

    • @Tom-jy3in
      @Tom-jy3in 4 месяца назад +13

      @@seva4411 you're right, but AlphaFold has nothing to do with what people nowadays refer to as AI / predecessors of AGI. It's "simple" machine learning, as it has existed for a while. And it is for sure not threatening to replace half the workforce tomorrow

  • @morsumbra9692
    @morsumbra9692 4 месяца назад +114

    To anyone young and curious how to take advantage, my wisdom is this: of the Californians who got rich during the gold rush, a few found gold, but the shops that sold shovels got far more bang for their buck. In the weed industry, it's not the growers pulling in fat stacks, it's the lights and water techs who service the warehouses. In hedge funds, it's the dude who finds the new formula for others to exploit.
    What I'm trying to say is that it's probably less risky to sell to the people taking the risk than it is to incur the risk yourself. Make honest money off their ambition, and as long as it's honest, you'll be good.

    • @richardhall5489
      @richardhall5489 3 месяца назад +1

      Thank you.

    • @PH-0046
      @PH-0046 3 месяца назад +1

      Thank you.

    • @watcheronly71
      @watcheronly71 3 месяца назад +2

      So how does this apply to AI?

    • @richardhall5489
      @richardhall5489 3 месяца назад +9

      @watcheronly71 learn how to draw hands and feet ;)

    • @taragnor
      @taragnor 3 месяца назад +5

      @@watcheronly71 NVIDIA is making a killing off the AI hype.

  • @rotface6969
    @rotface6969 4 месяца назад +30

    With the current economic conditions, I personally believe "AI" is just a unicorn that major tech companies want to ride, and it is in their best interest to entice as many investors as possible to join them for the ride.

  • @Uthael_Kileanea
    @Uthael_Kileanea 3 месяца назад +6

    10:26 - That problem is actively being worked on. It's a software issue. There are several directions, but the one I like the most is:
    Once trained, the model ain't fixed. It can re-learn and overwrite what it learned in the past, allowing it to update tiny chunks of its knowledge instead of having to retrain its whole brain.
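
    A minimal sketch of that direction, assuming a PyTorch-style setup (the layer sizes and module names are made up for illustration): keep the pretrained weights frozen and train only a small adapter, so a tiny chunk of knowledge can be updated without retraining the whole model.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model; in practice this would be a loaded checkpoint.
base_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Freeze every pretrained parameter so the "whole brain" stays untouched.
for p in base_model.parameters():
    p.requires_grad = False

# Small trainable adapter: the only "tiny chunk of knowledge" that gets updated.
adapter = nn.Sequential(nn.Linear(512, 16), nn.ReLU(), nn.Linear(16, 512))

def forward(x):
    h = base_model(x)
    return h + adapter(h)  # residual correction learned from the new data

# Only the adapter's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)
```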

  • @Darth_Insidious
    @Darth_Insidious Месяц назад +27

    If Boeing let AI start coding their controls software, a lot more planes would end up in the ocean.

    • @the-ironclad
      @the-ironclad 11 дней назад

      You don’t think when autopilot was in prototype, planes didn’t malfunction and crash? Of course it did. Same with any new tech. The beginning is always shittier than the new ones, which improve it. If everyone had a perfectionist mindset, we wouldn’t go anywhere past the Stone Age. The point of technology is to innovate. That’s why you have new versions of the same product, that does it better than the last. Sure a few planes need to be sacrificed, but that’s how you improve the AI. The scary thing is that AI is improving at such a rapid pace that now it gives people an opportunity to build anything of their wildest dreams and in 10 years, this technology will be extremely powerful. You clearly don’t see the big picture. But then again I’m sure when the Wright Brothers first invented the airplane that just flew a few feet, people said it will never touch the skies and the next century later, you have thousands of them in the air. You’re only limited to your imagination.

    • @Darth_Insidious
      @Darth_Insidious 10 дней назад

      @@the-ironclad It's because AI creates open-ended functions. You can't possibly know how it will react to every single set of inputs, while for human-designed functions you can know for sure how they will behave with enough analysis. In response to input that the AI wasn't trained on, AI can do some really weird things, because it wasn't trained to behave normally in that domain. If it can distort its response in untrained domains to perform better in trained domains, it will. All that uncertainty means that all an engineer can tell you when asked "will it work?" is "hopefully, since it's worked thus far on our finite amount of training data".

    • @UmNitro
      @UmNitro 8 дней назад +1

      You don't think automated cars are the same? They are at the point where they drive more reliable than most humans, so why would an airplane be any different?

  • @RawrxDev
    @RawrxDev 4 месяца назад +258

    It's been really hard to stay motivated with my school work as a CSE student. My life for the past ten years has been in shambles, and learning programming genuinely gave me a happiness I have not felt since I was a child. I want to program for a living. I want to make software that people use on a daily basis. I don't want AI to do everything for me and/or completely replace me, with programming becoming just a hobby that has no chance of competing against AI systems. (I also hate AI for art; it kind of kills the whole purpose of it, but that's a different story.) I agree with all your points, as I have been following Gary Marcus and Yann LeCun for a while now, but the chance that we're wrong and AI does invalidate all my hard work creeps into my brain while I'm trying to learn. I'm hoping either the bubble bursts or the tech just takes off; this middle area of not knowing is honestly killing me.

    • @armoredchimp
      @armoredchimp 4 месяца назад +35

      You got this. I'm also learning coding and just started in late 2022, but AI has only helped me learn programming quicker; it's not an enemy. It's just another tool available in your arsenal. You will still be competing against other humans for jobs, all of whom will probably use AI to different degrees. But AI being able to do everything itself is absolutely not happening for a very long time. It's really just regurgitating publicly available code; the more you ask for unique instructions that are not published on the internet somewhere, the more your margin for error shoots way up. Try asking it to code in any brand new version of a framework or SDK that just came out this year - it literally can't, because it has only been trained on the previous versions.
      Good luck out there, it's crazy times, but if you work on your craft as much as possible and leverage AI to your advantage you can probably find something. I'm constantly looking at what other people with

    • @digitulized459
      @digitulized459 4 месяца назад +21

      I'm in the camp that current "AI" in no shape will invalidate any meaningful work you will do as a software engineer. Sure, it may be able to help generate some basic boilerplate, and maybe very basic CRUD apps, but that's it. Anything that is remotely complicated AI will NEVER be able to do, or at least this current version. Try doing any project with moderate scale; AI completely and utterly fails. And it will remain this way for the immediate future because I personally believe these LLMs are already near their limits.

    • @JeffCaplan313
      @JeffCaplan313 4 месяца назад +5

      Don't give up. There's always something new around the corner.

    • @Bigredsleep
      @Bigredsleep 4 месяца назад +26

      As an AI dev working for one of the big companies, I can tell you that we will always need more good programmers and engineers. AI is at the top of the hype cycle: if you look at the Gartner hype cycle chart, we're at the peak of inflated expectations, and it's going to crash at some point soon.

    • @ronilevarez901
      @ronilevarez901 4 месяца назад +8

      It's impressive how different one person can be from another, while still being similar.
      I wanted to learn programming since I was a child. I eventually wanted to program for a living, making software for everyone to use, but I DO want AI to replace most humans and do everything for us, and even completely replace me, even if programming becomes just a hobby because of that (for the last 20 years it's been just a hobby anyway, since I haven't gotten any job related to computers so far, lol).
      And I also LOVE AI art. It makes art creation accessible for me and everyone who has always had something to express without the means to do it. I believe that whoever dislikes AI art is just denying the true purpose of art (which is to communicate something) to instead exclusively elevate the technical part of art, because that's the only thing they can do, so they protect it to death.
      AI, if properly integrated, will put an end to all the bad things that humans have brought into the world. It will be the greatest change we will see in centuries.
      I've been waiting for it since I was a child. Developing AI is the reason I wanted to learn to code, actually.
      Hype or not, it is The thing.
      We must keep trying to achieve it.
      At (almost) all costs.
      ___
      BTW, I don't think AI will be properly integrated into the world. In the end, we will just get a partial dystopia thanks to it being misused by corporations and governments, but I'm just one person, so I can't do anything about it except hope.

  • @liam45506
    @liam45506 4 месяца назад +160

    I discussed this with my professor. We also talked about how the change from GPT-3 to GPT-4 involved doubling the number of neurons in the neural network, which raises the question: if you are doubling the number of neurons, are you doubling the performance? It seems like there is not a doubling in performance. This means there are probably very severe diminishing returns as hardware tries to catch up with the exponentially increasing computational demands of iterative neural network improvement.

    • @janek4913
      @janek4913 3 месяца назад +49

      GPT-3.5 had 175B parameters; GPT-4 has 1.5T. That's roughly an 8x increase in parameters, but there is nowhere near an 8x increase in performance.
      Also, just a couple of days ago Meta released Llama 3.1 with 405B parameters, which is comparable to GPT-4. So just throwing infinitely more parameters at a model doesn't really help much.
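
      The diminishing returns described in this thread are usually summarized as a power law; one commonly cited form (a Chinchilla-style fit, with constants that vary from study to study) is:

```latex
% Loss as a function of parameter count N and training tokens D
% (illustrative power-law form; exact constants depend on the study)
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% With \alpha and \beta well below 1, an 8x increase in N shrinks the
% A / N^{\alpha} term by far less than 8x, so benchmark gains look small.
```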

    • @brunospasta
      @brunospasta 3 месяца назад

      @@janek4913 it can even reduce performance (e.g. if you have too little meaningful data).

    • @GoodByeSkyHarborLive
      @GoodByeSkyHarborLive 3 месяца назад

      @@janek4913 so what does it really improve like what tasks

    • @AAjax
      @AAjax 3 месяца назад +7

      Scaling isn't the only avenue AI researchers are pursuing; it's the hack that unlocked somewhat capable language models. Now that we have them, it's given researchers something tangible to study and build on, which has led to chain of thought, tree of thought, mixture of experts, retrieval-augmented generation, multimodal models, data distillation, etc.
      Scaling will be pursued as far as economics and data will allow, but it's not the only game in town. I also expect the recent trend of more capable smaller models to continue.

    • @superheaton
      @superheaton 3 месяца назад +3

      Even 3% is very good, by the way. Along the way it picked up concepts in language, math, and coding that other models spent many years reaching. So yes, it is huge. If you want ChatGPT-4o to double in performance, that's scary, because you and I may not know how many higher-level applications or concepts it already knows. Of course they are building more complicated models with end-to-end functionality which, just like ChatGPT-4o, pick up language, math, and coding along the way. It will keep rising, and we still haven't seen the plateau of transformer-based models: although 100 trillion parameters seems like overfitting, the architecture can still be improved for better end-to-end functionality. You shouldn't care too much about the diminishing returns, because it's also about dataset, architecture complexity, and functionality, not just parameter count. These are hyperparameters, and most are tuned statistically to find optimal values.

  • @OnePlanetOneTribe
    @OnePlanetOneTribe 4 месяца назад +175

    Ya, I agree, it's happened before, e.g. with the VR hype. I haven't touched my VR set in months

    • @RoboticsOdyssey
      @RoboticsOdyssey 3 месяца назад +15

      yeah the internet is just hype. so is indoor plumbing. and electricity.

    • @OnePlanetOneTribe
      @OnePlanetOneTribe 3 месяца назад +19

      @@RoboticsOdyssey I think it's not that it's 'just hype', but rather that it's a technological Gartner hype cycle with specific stages, and that we could be heading for the trough of disillusionment soon, but after about 5 years it will reach the plateau of productivity. 👍

    • @RoboticsOdyssey
      @RoboticsOdyssey 3 месяца назад +4

      @@OnePlanetOneTribe that's true, but AI has been through 60 years of those cycles since McCarthy formalized common sense in 1958.
      AI is a lot bigger than LLMs.
      Things like AlphaFold can create industries.
      No one really knows what's about to happen.

    • @chesshooligan1282
      @chesshooligan1282 3 месяца назад +31

      @@RoboticsOdyssey You sound like you're 15 years old and you missed the internet bubble pop of 2000. The internet WAS hype at one point. Many people who saw the hype and foresaw the pop made a pretty penny out of it. A few of them didn't need to work a whole day for the rest of their lives.
      I remember what telly sounded like in the late 90s. It was something like this: "Blah blah blah the internet this, blah blah the internet that, blah blah blah blah the internet patatee, blah blah blah blah the internet patatah." Replace "the internet" with "AI" and that's where we are today.

    • @alpheusmadsen8485
      @alpheusmadsen8485 3 месяца назад

      @@chesshooligan1282 I cannot help but observe that the internet was the *one* success where the hype feeds into the notion that there's something to these other fads, whether they be AI, quantum computing, fusion, cryptocurrencies ... I'm pretty sure I'm missing others. What's more, while the internet itself ended up finding its place in the world, there were nonetheless a *lot* of companies that rode the hype bubble, and ended up collapsing rather than growing.

  • @johncaemmerer7094
    @johncaemmerer7094 3 месяца назад +1

    Finally some clear thinking! Well done!
    I think you're being generous when you say that there have been many times when people (i.e. for-profit corporations) have blurred the lines between hype and fraud. If the manufacturer of a machine tool claims its new product is the first to achieve milling tolerances below some value x and customers buy it on that basis, only to discover that the actual tolerances it can achieve are nowhere close to the claims, we would not say the manufacturer "blurred the lines between hype and fraud". What allows software companies to get away with this?

  • @femiairboy94
    @femiairboy94 14 дней назад +6

    AI cannot replace software engineers…for now. “if it bleeds, it can die”. If it recognizes error in code, it will eventually develop its own fully functional system. Give it time.

  • @johnappleseed2578
    @johnappleseed2578 4 месяца назад +251

    Unrelated but I’ve just gotten my first SWE job, looking at apartments to move into, and you’ve inspired me to find something more humble haha. You must have some serious bags but still living simple, good stuff man

    • @raydjyoti
      @raydjyoti 4 месяца назад +10

      Congrats man!

    • @georgethomas9068
      @georgethomas9068 4 месяца назад +5

      happy for you man! good luck!

    • @unusuariopromedio4229
      @unusuariopromedio4229 4 месяца назад +2

      :D

    • @abhaybisht5280
      @abhaybisht5280 3 месяца назад +6

      Lmaoo I’m in the same boat rn I just spent 2k furnishing my new apartment 😭

    • @Iquey
      @Iquey 3 месяца назад +1

      Always be prepared to get laid off, or work on something people actually want to use.

  • @adityachakravarty1054
    @adityachakravarty1054 4 месяца назад +368

    The ending with Carl Sagan punchline is 🤌🏾

    • @kSergio471
      @kSergio471 4 месяца назад +5

      How’s it exactly related to the topic?

    • @1337erBoards
      @1337erBoards 4 месяца назад +47

      @@kSergio471 AI (and AGI) is fundamentally based upon inputs to create something. It isn't from "scratch"/nothing. AGI is essentially trying to create human intelligence. It comes from a source, that being humans providing the algorithm and inputs. Always remember that something coming from nothing can seem a bit odd, since that something is probably based on something else (not nothing). This ignorance to that something that came from something (but is perceived to come from nothing), can lead to hype.
      This is what I took away from the ending. As with anything, you take from it whatever you want. Even if it's nothing.

    • @kSergio471
      @kSergio471 4 месяца назад

      @@1337erBoards thanks 👍 However, it seems a bit odd to me: even if ai is capped by what’s possible for human brain, this cap is still something unbelievable

    • @kSergio471
      @kSergio471 4 месяца назад

      @@LT-dn7mt this amount of power is required to _train_ a model simulating human brain?

    • @samuelodan2376
      @samuelodan2376 4 месяца назад

      @@1337erBoards Thanks for the breakdown. It wasn't immediately obvious to me.

  • @PiyushBhagchandani1
    @PiyushBhagchandani1 4 месяца назад +86

    You are talking on point. Glad that someone is talking about this AI hype

  • @brianh9358
    @brianh9358 3 месяца назад +106

    There is an elephant in the room that they just don't want to talk about. If AI tools become broadly used, the amount of electrical power needed would be beyond the capability of our current electric infrastructure. I sure don't see fusion being available around the corner either.

    • @iubankz7020
      @iubankz7020 3 месяца назад +19

      If anyone needs more of anything and has the money to pay for it, then the supply will expand to meet the demand. The current electricity supply we have right now matches the current electrical demand. I don't know if we are running out of resources to build infrastructure, and if so you're right, but the notion that AI is unviable because of society's current electrical capacity goes against the laws of supply and demand

    • @vaolin1703
      @vaolin1703 3 месяца назад +10

      ​@@iubankz7020 But in the case of the power grid this process spans across several decades. Also with new environmental regulations and anti-nuclear sentiments it's unclear whether such an expansion is feasible at all.

    • @cortster12
      @cortster12 3 месяца назад

      What? The electricity used isn't as much as you think. Weird how this rumor circulated.

    • @brianh9358
      @brianh9358 3 месяца назад +8

      @@cortster12 This is not a rumor. I think you should do a search related to AI energy use.

    • @cortster12
      @cortster12 3 месяца назад

      @@brianh9358
      I did, and it's overblown. It's basically as energy intensive per output as playing a particularly gpu intensive game.

  • @shane1067
    @shane1067 3 месяца назад +4

    Completely get your point, but I'm still blown away by the leading edge models, and how fast better ones are coming out. GPT-4 is definitely smarter than all of us in a wide range of topics, but not specific ones. But the idea of it being the dumbest version definitely has me "hyped" as a young person given the room for improvement. Great video though.

  • @hongyihuang6856
    @hongyihuang6856 4 месяца назад +42

    NeetCode is not only good in coding, he is also good in seeing the truth~

  • @congeedaily
    @congeedaily 4 месяца назад +53

    I like how he allocates all his cycles to content. His room is still the same as when he started neetcode.

    • @stephpain
      @stephpain 4 месяца назад +9

      he probably has millions of dollars and is sleeping in what looks like a college room dorm

    • @thydevdom
      @thydevdom 3 месяца назад +5

      @@stephpain I heard this is just a studio to keep the aesthetic consistent.

    • @OP-lk4tw
      @OP-lk4tw 3 месяца назад +13

      In one video his camera moved about 2 degrees to the left and you could see some gold bars stacked up to the ceiling

    • @info781
      @info781 3 месяца назад +1

      lol this is like saying Zuckerberg is still wearing Gap sweaters, how modest he is, while he is building a 1,400-acre bunker in Hawaii.

    • @nehushtant
      @nehushtant 3 месяца назад

      All his cycles? Lol

  • @CrucialFlowResearch
    @CrucialFlowResearch 4 месяца назад +69

    AI is drinking its own kool aid, since their training data contains AI output

    • @sillymesilly
      @sillymesilly 4 месяца назад +33

      Yeah never thought about that. AI output will outnumber human output. Therefore 80% of input to AI will be by AI. A true garbage in garbage out garbage in.
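
      A minimal, self-contained sketch of that feedback loop, using a Gaussian as a toy stand-in for a model (an assumption made purely to show the mechanism): each "generation" is fit only to samples drawn from the previous fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human data" drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=25)

for gen in range(101):
    mu, sigma = data.mean(), data.std()     # "train" a model on the current data
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=25)   # next generation sees only model output

# In expectation the fitted std shrinks each generation (the sample std
# underestimates the true spread), and in the long run it collapses toward zero:
# rare "tail" content is the first thing lost when models eat their own output.
```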

    • @kingsleyoji649
      @kingsleyoji649 3 месяца назад +14

      Ai effectiveness decreases sharply as it cannibalizes itself.

    • @joshua50101
      @joshua50101 2 месяца назад +6

      it's like a thirsty guy in a desert drinking his own pee

    • @ENEN-tz6eg
      @ENEN-tz6eg Месяц назад

      Im sure AI would be able to detect AI generated content and ignore it. Or maybe it’s something only humans can do so far.

    • @flubnub266
      @flubnub266 Месяц назад +1

      @@sillymesilly It's usually GIGO, but we've finally managed to invent GOGI...

  • @melaronvalkorith1301
    @melaronvalkorith1301 3 месяца назад

    I love your approach:
    - facts driven
    - friendly/funny, but frank
    - clearly stated opinion
    - open to respectful disagreement
    I’ve gotten really I to AI/LLMs lately, but we need more people with your perspective - reasonable expectations for this tech, not hype.

  • @AbdulDerh
    @AbdulDerh 3 месяца назад +2

    I don't comment on YouTube videos much, but I have to give it to you: you are very articulate and you have excellent critical thinking skills. We need more of this!
    Personally, my takeaway over the past few years has been that, despite having a technical background, I (and my peers) could all benefit from more macro understanding (e.g., politics, economics, ...). The world doesn't make sense right now and these "blurred lines" are a sign of the times. We will inherit the mess though, so we'd better wise up and get ahead of it.

  • @leeris19
    @leeris19 4 месяца назад +260

    If you don't know anything about AI (It's not really AI though), it will look like magic. But as you unwrap its intricacies, you'll realize that AGI can still be classified as "impossible".

    • @yugioh8810
      @yugioh8810 4 месяца назад +3

      "intricacies"

    • @ozymandias_yt
      @ozymandias_yt 4 месяца назад +60

      It could be possible that making AGI out of the transformer architecture is impossible (at the moment I would say it is even very likely), but I think it is not really possible that AGI is impossible as a whole. General intelligence is possible within the laws of nature and it is achievable in a quite efficient way. The human brain represents a system with many functions that are not wanted for AGI (so it is more complex) and still absolutely possible. Even in the worst case where scientists need to mimic the functionality of the brain very closely, which would take us at least many decades and huge amounts of resources, AGI would technically still be possible.
      On the other hand, for the case of AGI being impossible, there needs to be something so inherently unique to biological brains that is categorically impossible to mimic or replicate. What process should that be? The formation of brains is complex but no wizardry.
      From my perspective the more important question is, how much of the brain’s complexity is needed for solid general intelligence. Considering how much capability is already achieved by rather simplistic mathematical models, the amount of groundbreaking discoveries to reach this level is seemingly much lower than expected, but still very high.

    • @JohnSmith-op7ls
      @JohnSmith-op7ls 4 месяца назад +52

      Yeah, LLMs are advanced autocomplete. They won't magically become sapient no matter how much training, memory, and processing you throw at them.
      It's just fundamentally the wrong architecture.
      It's like how people used to take these vague, nonsense estimates of the raw processing power of the human brain and point out that we'd soon have supercomputers with more power.
      Well, we do, and yet none of them are sapient.
      The internet as a whole has orders of magnitude more processing power; why hasn't it magically become self-aware?
      People who don't understand this stuff pretend it's just a matter of more data and faster processing; that's not how biological neural networks operate at all.

    • @leeris19
      @leeris19 4 месяца назад +5

      @@ozymandias_yt I would love to be enlightened more about how it can be possible without using "general" representations. Give me some specifics, like the technicalities of how "GI" is "possible within the laws of nature and achievable in a quite efficient way". I am not a hater of AI in any way (I specialize in ML). But as far as my knowledge goes, "AI" is nothing but ML with lines on steroids. No hate for the tech, but I'm ready to be proven wrong and will stand by my claim that AGI is still impossible, at least currently.

    • @ozymandias_yt
      @ozymandias_yt 4 месяца назад

      @@leeris19 Maybe our definitions of general intelligence aren't the same. For me AGI is the point of human-level intelligence (reasoning, consistency, competence…). The proof of the existence of human-level intelligence is trivial, and its synthesis to some extent is therefore always theoretically achievable. The concept of "general representations" isn't really present in human cognition without limitations. Example: what is a game? AGI as the ultimate clean intelligence of eternal truth is indeed impossible, because it is logically implausible. Language isn't well defined in many aspects, so no amount of data can train an AI to always give "perfect answers".
      To fulfill the visions of the AI revolutionaries, AGI in the form of human-like intelligence is needed, so complex tasks can be understood and executed. We can train humans to do these tasks, and an AGI should be capable of learning them with at least the same success as humans.
      Side note: regarding the hype, I see a typical pattern of over-correction. In the beginning of the computer revolution, AI was described as something of the near future, which was of course way too optimistic. Throughout the decades, the prognoses for AGI extended into the range of 2080-2200, which is rather pessimistic. AI companies bragging about AGI in the next few years are quite likely over-correcting their predictions again.

  • @code-master
    @code-master 4 месяца назад +23

    I wasn't pro leetcode, but leetcode is like mental gymming which improves problem solving, step by step. Kudos to you, your voice is like music to my ears.

    • @EternalKernel
      @EternalKernel 3 месяца назад

      Leetcode et al are neurotypical gatekeeping and poverty enforcing machines.

  • @pc3340
    @pc3340 4 месяца назад +7

    Always looked for a way to put this into words. NEVER buy into hype; engage with it as you would with anything else. Fundamentals tend to trump all.

  • @jamesdanielelliott
    @jamesdanielelliott 3 месяца назад +15

    The problem with these LLMs is the bell-curve / probability distributions they use to determine their answers. They draw their output from the most common information; that is clearly the basis for the learning they do. The problem with this is threefold. First, if you want excellent answers, it's simply not capable of producing them. Second, as content generated from these responses spreads, it further dilutes the pool of exceptional content. Third, people will naturally rely on it as a crutch and get worse at producing the content on their own. And as the LLM then learns from this double-diluted content, further diluting the better content, points 1 and 2 only speed the process up.
    Unless they find effective ways to drastically combat this, I'm fairly sure it's a doomed technology.
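    As a rough illustration of the dilution argument above, here is a minimal sketch: a toy Gaussian stands in for the pool of content, and a truncation rule stands in for a model that over-produces common content. Both are assumptions for illustration, not how real LLMs are trained, but they show how refitting each generation to its own output shrinks the spread and makes the exceptional tails disappear.

# Toy sketch of the dilution argument above (not a real LLM): a "model" that
# learns a Gaussian from the current content pool but, like a likelihood-
# maximising generator, over-produces the common middle and under-produces the
# rare tails. Refitting each generation on its own output shrinks the spread.
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(0.0, 1.0, size=10_000)          # generation 0: human-written content

for gen in range(1, 6):
    mu, sigma = pool.mean(), pool.std()           # "train" on the current pool
    samples = rng.normal(mu, sigma, size=20_000)  # generate new content
    # the model favours typical content: keep only the most probable samples
    pool = samples[np.abs(samples - mu) < 2.0 * sigma][:10_000]
    print(f"gen {gen}: sigma = {pool.std():.3f}")  # spread (and the tails) shrink every round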

    • @yuesujin8390
      @yuesujin8390 3 месяца назад

      Really. I found an experiment where an AI forgot what it learned from a math video after it watched several TikTok shorts. Diluted information harms the cognitive ability of AI just as it does our brains.

    • @zoeherriot
      @zoeherriot 28 дней назад +1

      There is a fourth issue: if the most common answer is incorrect, then you will get an incorrect answer. The LLM does not know the correct answer; it gives you the most likely answer, which is not the same thing. And a fifth issue is that it has to give you an answer, even if the likelihood of it being correct is low.

    • @aeroslythe6881
      @aeroslythe6881 27 дней назад

      @@zoeherriot that's just an extreme case of the issue of non-excellent sources

    • @zoeherriot
      @zoeherriot 27 дней назад

      @@aeroslythe6881 which... is still an issue.

    • @aeroslythe6881
      @aeroslythe6881 27 дней назад +1

      @@zoeherriot You’re right. In fact there’s a sixth issue…

  • @hectormejia499
    @hectormejia499 2 месяца назад +3

    NeetCode is slowly becoming my favorite tech person on YouTube

  • @ZenonLite
    @ZenonLite 4 месяца назад +78

    16:24 Samir, you’re breaking the car!

    • @0deltasierra
      @0deltasierra 4 месяца назад +21

      please samir, listen to me samir please

    • @NeetCode
      @NeetCode  4 месяца назад +52

      listen to my calls 😡

  • @Nxck2440
    @Nxck2440 4 месяца назад +15

    I've had these thoughts for a while but it's great to hear it from you, glad not everyone is salivating for AI.

  • @vdanger7669
    @vdanger7669 4 месяца назад +20

    There was a lot of hoarded cash that needed to be spent. Stock buybacks weren't going to cut it.

    • @plaidchuck
      @plaidchuck 4 месяца назад +4

      Basically the Trump-era tax cuts. Do you think companies took those cuts and put the money back into their businesses?

    • @ticketforlife2103
      @ticketforlife2103 4 месяца назад +7

      Even worse: the stock system itself is a hoarding system... there are trillions locked up in stocks. And we wonder why we're poor. Where is all the money?

  • @andrewlee9286
    @andrewlee9286 3 месяца назад +3

    The 99 percent thing is interesting. When you do something like linear regression, it's really easy to get to, say, 80 percent, but improving that by even 1 percent takes a crazy amount of fine-tuning.
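    A rough sketch of that diminishing-returns point, on made-up data (the data-generating process and noise level below are invented purely for illustration): a plain linear fit already explains most of the variance, and each extra unit of model complexity buys a smaller and smaller R^2 gain, bounded by the noise floor.

# Diminishing returns when fitting noisy data: each extra polynomial degree
# buys a smaller R^2 improvement. The toy target below is made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 500)
y = 2.0 * x + 0.3 * np.sin(6 * x) + rng.normal(0, 0.15, size=x.size)  # toy target

prev = None
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)            # fit a polynomial of this degree
    resid = y - np.polyval(coeffs, x)
    r2 = 1 - resid.var() / y.var()               # fraction of variance explained
    gain = "" if prev is None else f"  (+{r2 - prev:.4f})"
    print(f"degree {degree}: R^2 = {r2:.4f}{gain}")
    prev = r2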

  • @mikehynz
    @mikehynz 3 месяца назад +1

    I worked on the early internet from the 90s until the 2000s. This hype all looks and feels SO familiar, and almost no technology ever gets used in the way it was originally intended. So, great video. However, as a tech teacher with 500 unique teen students a year, I would argue one point: human nature is changing.

  • @DLDS14
    @DLDS14 4 месяца назад +18

    Part two of the hype wave is when the YouTubers come out and call it a hype wave.

    • @info781
      @info781 3 месяца назад +1

      Love the meta.

  • @endianAphones
    @endianAphones 4 месяца назад +37

    You didn't even mention practical limits, like power usage.

    • @mikeharrington5593
      @mikeharrington5593 3 месяца назад +3

      The energy demands of data factories are a potential bottleneck

  • @DeMaLiTiOnKiNg
    @DeMaLiTiOnKiNg 4 месяца назад +26

    One point I do have a problem with is the rate of improvement: there isn't any actual data behind your rate-of-improvement claim. Anybody who has used both, especially for programming, knows very well that the jump from 3.5 to 4 was a far more substantial improvement than you're giving it credit for.

    • @hanikanaan4121
      @hanikanaan4121 4 месяца назад +3

      While that's true, it's asymptotic. Eventually, the output difference between being trained on 99% of the data on the web and 100% is next to nothing. Pretty sure anything past 90% is largely the same. Even though the progression from ChatGPT 3.5 to 4o (not 4.0) was large, those gaps will eventually get smaller and smaller until we have a "perfect" GPT that gives the most correct answer available on the entire internet. Now, is that anything more than a glorified search engine? It's up to you to decide.

    • @TheManinBlack9054
      @TheManinBlack9054 4 месяца назад +8

      @@hanikanaan4121 What makes you think that AI has already been trained on 99% of the internet? Maybe it learned on 10%, and that's not even speaking of how the hardware is advancing too, and the software.

    • @pottedrosepetal6906
      @pottedrosepetal6906 4 месяца назад +10

      Another problem is the assumption that AI started in 2022. We have been developing AI since the 70s. We have more data than one line between two points.

    • @hanikanaan4121
      @hanikanaan4121 4 месяца назад

      @@TheManinBlack9054 Notice how I said eventually. Also, a significantly huge part of the internet is unusable, outdated, or ToS-violating information. The data they've used so far is the vast majority of the data that is usable and beneficial. Is there more to be used? Absolutely. Will it change the entire game and result in AGI or something? Pretty much a guaranteed no.
      Additionally, hardware doesn't actually improve the results or accuracy of the model; it just speeds up the process of training. More accurately, it requires less data to reach a "definitive" point where answers can/will be given with certainty, but the accuracy on the entire dataset will be unchanged regardless of whether you're training on an Intel Celeron processor or the strongest TPU on the market.
      GPT is not the way forward in the advancement of AI; it's simply a replacement for search engines. Reaching the next tier of "autonomous" AI will come through something different from the current progression of text-based training. I'm fairly certain that NN chess engines have shown higher levels of "creativity" and "thinking" than any currently available GPT system, be it from Anthropic, OpenAI, Google, etc.

    • @AL-kb3cb
      @AL-kb3cb 4 месяца назад +1

      It doesn't matter whether AI can replace engineers or not. They are still going to be fired, and the remaining software engineers will be doing triple the work, working every weekend to compensate for the fired ones. Yes, they will do it, because they will be driven by the fear of being fired and replaced by another engineer. If AI can actually replace jobs, that's just a bonus; it isn't actually necessary.

  • @Prod.Dizzy0nz
    @Prod.Dizzy0nz 26 дней назад

    Weirdly, this is the most comforting video about AI I've seen in the last few months...

  • @BloodReaper8005.
    @BloodReaper8005. 7 дней назад

    I just want to thank you for lessening my anxiety about these topics.

  • @CharlesVanNoland
    @CharlesVanNoland 3 месяца назад +6

    As someone who has been on the cutting edge of AI and neuroscience research for 20 years now: massive backpropagation-trained networks will become a thing of the past within 5-10 years. They will be seen as the compute-hungry brute-force approach to making a computer learn after all is said and done. What's coming down the pipe are sparse predictive hierarchical behavior learning algorithms that can be put into a machine to have it learn from scratch how to perceive the world and itself in it, and be rewarded and motivated to explore unknowns in its internal world model - which will yield curiosity and playful behavior. These will be difficult to wrangle at first, with humans controlling the reward/punish signals manually, but once they're trained to behave they will be the most resilient, robust, adaptive, and versatile machines in the history of mankind. Judging by how compute-efficient the existing realtime learning algorithms that people have been experimenting with are, it won't be very expensive to have a pet robot that behaves like a pet, runs around and fiddles with stuff like a pet, and is self-aware and clever like a pet, and the whole thing will run on commonly available consumer hardware - like that you have in your laptops and phones. This same learning algorithm will be limited in its abstraction capability by the hardware it is running on. As such, it won't be difficult to scale it up to human and super-human levels of abstraction capability, as long as the hardware that it is running on has the capacity to run the algorithm in realtime (i.e. 20-30hz) so that it can realistically handle the dynamics of its physical self and the world around it. Mark my words.
    Nobody building a massive backprop network right now is going to be glad they did in another 2-3 years. They're going to look like the dotcom bubble hype bros of the 90s, and become disgraced for being so naive in their blind faith that backprop-training was the end-all be-all of machine intelligence, like there couldn't possibly be something better, more efficient and useful. They just took someone else's backprop work and ran with it like it was going out of style, and it's cringey, at least to someone like me who has been watching all of this unfold from my uncommon perspective. Some people learn the hard way, I guess.

    • @pungentzeus
      @pungentzeus 3 месяца назад +1

      Awesome comment, brother

    • @DynoDeso
      @DynoDeso 3 месяца назад +1

      That's a good point

    • @zvxcvxcz
      @zvxcvxcz 3 месяца назад +1

      But these more sparse approaches... already exist and just aren't so shiny or hype-filled. We're essentially talking about interpolation with better sampling: Chebyshev polynomials, fast Kriging, polyharmonic splines, or the more Bayesian approaches and some other things along those lines, with some sort of gradient-based performance metric or Bayesian sampling in the Bayesian cases. It's mostly stuff that exists... but it's not cool or sexy and doesn't get people excited thinking it might be a sort of real "intelligence." There's no hype for it. But these don't have quite the capabilities you aim for... those require a significant breakthrough that might happen... or might not. Maybe next year, maybe not for a hundred or a thousand.

  • @jwoya
    @jwoya 4 месяца назад +22

    When I began a major in computer science in 2007 the "everybody knows" prediction at the time was that all programming would move to remote workers in third world countries and wages would trend toward $20k / year or less. Outsourcing was all over publications like Software Developer magazine. Kids were being told not to go to school for CS. But outsourcing died because of communication and quality issues. AI is nowhere near surpassing third-world developers for these 2 shortcomings.

    • @kyokushinfighter78
      @kyokushinfighter78 3 месяца назад +1

      You'll see.

    • @jwoya
      @jwoya 3 месяца назад +8

      @@kyokushinfighter78 I guess here's a way to look at it: when a hospital administrator can say "write me a system that manages my surgery staff and patient records" and the AI fully masters that use case, then it will have full real-world intelligence and we won't need hospital administrators, lawyers, Congress, or anyone else. Until then, there will still be humans designing and speccing these systems.

    • @MusicPlayingPeon
      @MusicPlayingPeon Месяц назад

      @@kyokushinfighter78 I agree, and very soon.

    • @MusicPlayingPeon
      @MusicPlayingPeon Месяц назад

      @jwoya I also believe that AI plus a human may well always be better than AI alone, regardless of how smart it gets.

  • @yassinebenazouz4529
    @yassinebenazouz4529 4 месяца назад +11

    Finally somebody talked about this. Thank you!!

  • @SugeXBT
    @SugeXBT 3 месяца назад

    From 10:23 to 14:13, this was probably a mind-changing experience for people who didn't major in engineering. Clean video and explanation, bro. Thanks.

  • @Neprow3000
    @Neprow3000 3 месяца назад +1

    Hi Brett,
    I wanted to share some thoughts on the recent interview. Honestly, I didn’t find it as valuable as I had hoped. I’ve watched a lot of Ray Dalio and Warren Buffett, and one thing they consistently emphasize is the necessity of deep market knowledge to transition from gambling to truly understanding what you’re doing.
    I would have really appreciated it if this interview had addressed that fundamental perspective-how deep understanding of market movers and the ability to predict market behavior at current price points can lead to informed decision-making. This approach feels crucial to me and was something I missed in the discussion.
    That said, I do enjoy the tone and energy of your interviews in general, and I might check out some of your other interviews, perhaps those focused more on entrepreneurship. Unfortunately, this one just didn’t resonate with me the way I hoped it would.
    Best regards,

  • @christian-schubert
    @christian-schubert 4 месяца назад +6

    You know, it is SO refreshing seeing the hype cycle finally wearing off.
    Especially since being a [self-proclaimed] "AI expert" has pretty much translated to being an unreflected OpenAI / Elon Musk fanboy over the last couple of years.
    Reminds me of how all the "digital natives" were once heralded as exceptional Internet prodigies, when in fact all most of them really mastered were Snapchat, Instagram and TikTok (tech that was largely conceived and created by the previous generation)
    There NEEDS to be a paradigm shift, LLMs simply won't cut it in the long run

    • @__D10S__
      @__D10S__ 4 месяца назад +2

      Thank you Mr. Christian Schubert. I have a direct line to Sam Altman if you'd like to enlighten him with your insights. Why the hell are you not heading a top AI research lab?!! How did you slip through the cracks?! Whoever said armchair quarterbacks can't throw? You've got a solid arm dude. Don't ever let anyone tell you that you don't know better than the coach. After all, you've got quite the view from the TV.
      I also am not sure if you are aware, but they are already moving beyond LLMs. The paradigm switch is already happening, but you're too blinded by your compulsive need to be a wet blanket, projecting a cynicism that implies an intelligence. It reeks of parochial insecurity. Wear it like a blanket. Use it as your pacifier.
      Use whatever heuristics you feel you need to use to make it through this period. 'Unreflected (the actual word would be unreflective) OpenAI / Elon Musk fanboys' certainly works. That's definitely a way you can choose to understand what's happening before your eyes.

    • @christian-schubert
      @christian-schubert 4 месяца назад

      @@__D10S__ Well, in your defense, you've got one thing right. That should've been "unreflective". My phone apparently thought otherwise.

    • @diadetediotedio6918
      @diadetediotedio6918 3 месяца назад +4

      @@__D10S__
      Do you? Ask him how he defines consciousness, how he responds to the Chinese room argument, how he proves computationalism, and how he proves that all he is doing is not just a poor mimicry of humans. And also what he thinks about SNNs.

  • @MrSurvival2
    @MrSurvival2 4 месяца назад +4

    This is an incredible explanation. Thank you for staying true to your word and not caving to the haters!

  • @sandipanroy3106
    @sandipanroy3106 2 месяца назад +3

    Something common among all the tech creators on YT that I follow is that they keep saying AI isn't taking any jobs. Could it be that the shift to other professions and interests among students, driven by concerns about CS's future profitability, leads to reduced engagement with their videos, so they want to make sure people continue watching them?

  • @douwemusic
    @douwemusic 18 дней назад

    Hey! I just found your channels and immediately became a fan!
    Just the smallest bit of feedback though: be careful not to sound too exasperated for too long, as it can get just a bit strenuous to listen to

  • @iamblack54
    @iamblack54 День назад

    The biggest thing is that, contrary to what most would believe, a lot of CEOs have terrible business acumen. And fewer and fewer companies are strategic about the long term versus the short term. For them, the bottom line is everything, and salaries are a huge part of the bottom line. So they'll ride the hype train as long as they can. But the trip back will be expensive.

  • @StarDust_2077
    @StarDust_2077 4 месяца назад +27

    They sure as hell aren’t going to give it to their employees 😂😂😂😂

    • @SixTough
      @SixTough 19 дней назад

      That would not generate revenue, so it's not surprising.

  • @neilmcd123
    @neilmcd123 4 месяца назад +13

    It would be helpful to include any time-frame assumptions at all in the video. Of course current models suck. But what about 5 or 10 years from now? That's really not far away at all.

    • @GreatTaiwan
      @GreatTaiwan 3 месяца назад

      I think he’s talking about people being fired NOW I guess

    • @zvxcvxcz
      @zvxcvxcz 3 месяца назад

      But that was the same in the 60s... you have no idea how much hype the Perceptron had.

  • @57d
    @57d 4 месяца назад +5

    My gut tells me this hype is in part attributable to public misunderstanding; I'm merely a hobbyist programmer, so really I'm a part of said public. I think there is a conflation of statistical data mashing (relevant xkcd: 1838) with what has been popularized in Hollywood and other mainstream media, which has sparked people's imagination in the wrong direction.

  • @StashiaMass
    @StashiaMass 3 месяца назад

    This was such an amazing watch, thank you! My takeaway is hype is still required to an extent. Selling hope and dreams can still produce positive results - it makes us progress somehow.

  • @sararobin9452
    @sararobin9452 3 месяца назад +1

    Thank you for this video, I agree entirely with your "career change" point. I think you hit the nail on the head.
    I'm a future physician and my family has been bombarding me with "AI will steal your job, you have to find a safe career" and it's just crazy.
    It might, it might not, nobody knows, if it gets to the point no doctor is needed anymore I would assume no other job is safe and I'll join the revolution with billions of other people. I won't make any decision now based on absolutely zero knowledge and zero certainty.

  • @hrsbg
    @hrsbg 2 месяца назад +4

    The tech behind ChatGPT did not first appear in 2022. GPT models had already been around for a couple of years at that point.

  • @huyhoangnguyenhuu2136
    @huyhoangnguyenhuu2136 4 месяца назад +19

    high quality content! bro is telling the hidden truth

    • @karlos1008
      @karlos1008 3 месяца назад +2

      It's not hidden. Most people just don't bother looking and take things at face value.

    • @GreatTaiwan
      @GreatTaiwan 3 месяца назад

      @@karlos1008 So it's hidden from most eyes

  • @andresfernandez6437
    @andresfernandez6437 3 месяца назад +27

    This kinda sounds like the perspective of someone who is threatened (or feels as if he is) by the advances he is criticizing.
    For instance, quoting the image near the end of the video:
    2015 - Self driving in 2 years: The technology has existed since pretty much 2017, it can't be adequately deployed because most people can't afford it yet; and since few people use it, society as a whole hasn't changed fast enough to really adopt it.
    2016 - Radiologists obsolete in 5y: Hospitals can barely afford to function - they can't invest in deploying such sophisticated systems. But the capability exists and it's possible to make it work just as imagined.
    The whole video feels like cherry-picking from the lowest branches possible. It lacks depth, it doesn't seem to consider second- or third-order consequences, and the arguments that are valid are actually very shallow and therefore ill-considered.
    "Remember something: this is the worst this technology will ever be."

    • @titaniumwolf2757
      @titaniumwolf2757 3 месяца назад +6

      This is the comment I was looking for!

    • @zvxcvxcz
      @zvxcvxcz 3 месяца назад +1

      @@minhuang8848 You're super confused. AI was never necessary to replace those office jobs, and the AI implementations being used are no better than the infamous phone mazes that replaced customer-service call centers. Customers didn't like them then and won't now, and they'll never be helpful for anything but the most trivial things, which should never have required a call in the first place, while preventing real problems and information from reaching the company. Those companies will sink or figure things out in time. As usual, these trends come and go with the hype. You're clearly lacking the historical perspective. I am actually an expert, by the way; most of my colleagues work almost exclusively in AI (the sub-team I work in does bioinformatics, in particular statistical genetics, because frankly the AI stuff can't be trusted in the context of real medical data, where our conclusions may affect the real treatment people receive).

  • @Finalshark23
    @Finalshark23 Месяц назад

    One of the best takes I've seen on the topic, awesomely articulated.

  • @KristineSchachinger
    @KristineSchachinger Месяц назад

    Thank you for taking a lot of things I've said and thought over the last 2 years and putting them together so well.

  • @Mrflowerproductions
    @Mrflowerproductions Месяц назад +6

    "Human nature doesn't change", debatable.

    • @dukeofvoid6483
      @dukeofvoid6483 Месяц назад

      It evolves due to environment and culture.

    • @laelsnail5787
      @laelsnail5787 Месяц назад

      It "changes". It really just repeats itself. It is cyclical, but that's just me.

    • @Danuxsy
      @Danuxsy 20 дней назад

      Everything is in constant change, including homo sapiens.

  • @InwardRTMP
    @InwardRTMP Месяц назад +12

    Saying humans can learn how to drive in 30 minutes is just a blatant misunderstanding of reality. You should easily be able to see that this is false for 1-year-old children, so obviously there are years of development, at a minimum, before people can even begin to learn how to drive. And even then, we have evolved over billions of years to interact with the world. Not taking this into account is being intentionally ignorant.

  • @captainteodor2252
    @captainteodor2252 3 месяца назад +7

    After watching this vid I'm still not sure what AI overpromised and underdelivered on.

  • @man-ham-city
    @man-ham-city 3 месяца назад +1

    I don't think you changed my mind, but you reinforced my previous thoughts, and I am happy about that.

    • @ofarag
      @ofarag Месяц назад

      forsen

  • @stomana1
    @stomana1 3 месяца назад +1

    I just wanted to comment because I work in this field. I see the limits of large language models on a daily basis, and you are correct in many ways.
    The last 10% is 50% of the work, and that still applies today.
    I just wanted to let anyone reading this far into the comments know that LLMs are not a solve-it-all, and we still don't have a solution for ever-expanding, self-learning compute, or AGI as it's called. I don't know if or when that may come. However, for now we are still within reasonable limits. With all that said, LLMs are extremely useful for a specific set of cases (not all, but a lot). Cheers to the future 🍻

  • @bwhit7919
    @bwhit7919 4 месяца назад +18

    10:56 this is actually false. OpenAI published a paper several years ago that explains exactly how fast AI will improve. And to summarize, we need to exponentially increase the data and compute to keep making AI better. Which means progress will slow down and OpenAI knows that it will slow down! All the hype is just marketing, designed so that investors keep giving them money.
    AI is almost guaranteed to get better, but it’s also almost guaranteed to slow down.
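    For what it's worth, the scaling-law picture that comment is presumably referring to is usually written as a power law in compute; a minimal sketch with made-up coefficients (not the paper's fitted values) shows the shape: each fixed drop in loss requires compute to be multiplied, not added to.

# Minimal sketch of a compute scaling law of the form L(C) = a * C^(-alpha) + L_inf.
# The constants are invented for illustration only; the point is the shape:
# every further improvement costs orders of magnitude more compute.
a, alpha, L_inf = 10.0, 0.1, 1.0   # made-up coefficients

def loss(compute: float) -> float:
    return a * compute ** (-alpha) + L_inf

for exponent in range(0, 13, 3):   # compute from 1 to 10^12 (arbitrary units)
    c = 10.0 ** exponent
    print(f"compute 1e{exponent:02d}: loss = {loss(c):.3f}")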

    • @nihilisticprophet6985
      @nihilisticprophet6985 2 месяца назад

      Does that mean AI improvement will slow down? OpenAI can just generate new data to train the next model. They are already doing that with synthetic data

    • @bwhit7919
      @bwhit7919 2 месяца назад

      @@nihilisticprophet6985 The models won't necessarily slow down, but to maintain the current rate of progress, each model will have to be 10-100x more expensive than the one before.
      Synthetic data isn't a silver bullet. There are many small techniques you can use to generate synthetic data, e.g., translating computer code from a common language (like Python) to a less common language (like PHP). But I don't know how well that can scale.

  • @DanielSeacrest
    @DanielSeacrest 3 месяца назад +5

    This did not start in 2022 lol. At least go back to GPT-3; hype was really starting to build then. ChatGPT was more available to the general population, and hype within tech circles definitely got bigger, but it did not start with ChatGPT.
    "Computers are just incompatible with the level of intelligence that many people are expecting them to have." If you are saying computers are just fundamentally incompatible, then I strongly disagree. If you are referring to current-gen models, then yeah.
    ALSO, do not just compare timelines of release lol; compare compute over timelines. GPT-4o, from what I know, is a smaller model than GPT-4 (obviously: it is much cheaper and faster with lower latency), so OAI has made some sort of algorithmic improvement or trained on more data to get more performance out of smaller models. BUT, since GPT-4, every model that has been released has been in a similar domain of GPT-4-level compute and cost to train. We know the main factor in the intelligence of these models is effective compute, which is highly dependent on raw compute. The ONLY model I know of trained with a decent amount of compute over GPT-4 is Claude 3.5 Opus, which is yet to be released; however, Anthropic said it was trained with 4x the compute of Claude 3 Opus (which is GPT-4-class and trained with approximately GPT-4-level compute). For context, GPT-4 was trained with 6x the compute of GPT-3.5, and GPT-3.5 was trained with 12x the compute of GPT-3. This is the story of raw compute with GPT-series models, but it gives us a window into the scales of compute needed for any form of improvement.
    To the people who do not have access to the training runs and current stages of models, bigger intelligence gains are not incremental over a time period, they are on a per model release basis. The last real intelligence gain was GPT-4, every other model released since then is some optimisation to that class of models or just straight up meant to be in this class of model. As I said the only model I know of to have compute scale up over GPT-4 is Claude 3.5 Opus, 4x the compute over current GPT-4 class models like Claude 3 Opus.
    And also Claude 3.5 Sonnet is 6x the compute over Claude 3 Sonnet. Claude 3 Sonnet was a high end GPT-3.5 class model, the compute jump put it as a high end GPT-4 class model, but not enough to go really beyond GPT-4 class models. That is what Claude 3.5 Opus is going to do. But, again, it will be a smaller gap than between GPT-3.5 and GPT-4.
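    Taking the multipliers quoted in this comment at face value (they are the comment's own figures, not official numbers), the cumulative compute requirement stacks up multiplicatively rather than additively:

# Cumulative training-compute multipliers implied by the figures quoted above
# (taken at face value from the comment, not from any official source).
steps = [
    ("GPT-3 -> GPT-3.5", 12),
    ("GPT-3.5 -> GPT-4", 6),
    ("GPT-4 class -> Claude 3.5 Opus (claimed)", 4),
]

total = 1
for name, mult in steps:
    total *= mult
    print(f"{name}: x{mult}  (cumulative: x{total} over GPT-3)")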

  • @rustamgonezhukov8133
    @rustamgonezhukov8133 3 месяца назад +3

    At my current job we had the GitHub Copilot Business(?) version for a month, to give it a try. Guess what: 90% of the generated code was calling nonexistent class methods in Java, 5% didn't work or looked incorrect, and 5% was code that worked and looked correct but had a bug in it that was really hard to detect. After that month I have no anxiety anymore about AI replacing us (btw, I turned this shit off in the end and threw it away). This was in May 2024.

  • @snorttroll4379
    @snorttroll4379 3 месяца назад +2

    You can just add a "prefrontal cortex" to the AI. It will override any command to crash the car: some hard-coded limits on acceleration/deceleration/crashing and stuff.
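    For what it's worth, that "hard-coded limits" idea is essentially a non-learned safety envelope around the policy's output; a minimal sketch (the limits and the interface below are invented for illustration, not any real autopilot API) would just clamp whatever the learned policy requests.

# Minimal sketch of a hard-coded override layer that clamps whatever a learned
# driving policy requests. The numbers and the interface are invented for
# illustration; no real autopilot works exactly like this.
from dataclasses import dataclass

@dataclass
class Command:
    accel_mps2: float      # requested acceleration (negative = braking)
    steer_deg: float       # requested steering angle

MAX_ACCEL = 3.0            # hard limits, chosen arbitrarily here
MAX_BRAKE = -8.0
MAX_STEER = 35.0

def safety_override(cmd: Command) -> Command:
    """Clamp the policy's command into a fixed, non-learned safe envelope."""
    accel = min(max(cmd.accel_mps2, MAX_BRAKE), MAX_ACCEL)
    steer = min(max(cmd.steer_deg, -MAX_STEER), MAX_STEER)
    return Command(accel, steer)

print(safety_override(Command(accel_mps2=12.0, steer_deg=-90.0)))
# -> Command(accel_mps2=3.0, steer_deg=-35.0)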

  • @PoojaDutt
    @PoojaDutt 3 месяца назад

    Fantastic breakdown! Loved watching this video 😀

  • @kemita
    @kemita 3 месяца назад +91

    80 papers in 2 years, isn't that like a paper every 11 days? For sure, what kind of science is that? That man deserves ALL the Nobel Prizes for making humanity reach a technological breakthrough every 11 days.

    • @maloxi1472
      @maloxi1472 3 месяца назад +26

      Modern-day "research", especially in the field of AI, is another Pandora's box that would deserve its own video.
      He might as well have given the number of podcasts he had gone on, and it would've still been a better vanity metric. That said, he probably expected most people who read that tweet to be either fools or deeply unfamiliar with how academia works... and that assumption would be correct.

    • @FluhanFauci
      @FluhanFauci 3 месяца назад +28

      He's likely slapping his name as a contributor on every paper worked on at Meta, which can entail the work of hundreds if not thousands of researchers

    • @lucnotenboom8370
      @lucnotenboom8370 3 месяца назад +4

      I mean, he probably wasn't sole author considering his function. Most likely he got to put his name on there for guiding the team doing the actual research, which, don't get me wrong, can be a valuable task on its own

    • @DaveEtchells
      @DaveEtchells 3 месяца назад +7

      @@FluhanFauci DingDingDing - this is how most any kind of research works: (Please mentally change the pronouns to your own preference ;-) The senior researcher guides the work of the entire group, and his name appears somewhere in the list of authors of every paper the group puts out. If he contributed in some critical way, he’d be lead author, if he was fairly hands on but wasn’t directly involved in the work, he might be somewhere in the middle. If he just told someone “hey, you should check this out” he’d be toward the bottom, and if he had nothing much to do with it but it came out of his lab, he’d be the last author. So 80 papers or whatever is how many the entire team, possibly hundreds of people, put out.

    • @dasaauploads1143
      @dasaauploads1143 3 месяца назад +1

      I come from a university, and most researchers I know just remix papers in order to get a bonus lol

  • @juanmacias5922
    @juanmacias5922 4 месяца назад +30

    14:06 Yann LeCun with the receipts LMFAO. Once I learned "A.I." was probability, statistics, and linear algebra in a trench coat, I realized it was a bubble.

    • @lordseidon9
      @lordseidon9 4 месяца назад +24

      You will be surprised to know that your brain runs on probability and statistics too

    • @ozymandias_yt
      @ozymandias_yt 4 месяца назад

      @@lordseidon9 "Planes are bullshit, they are just applied thermodynamics."
      The real argument should be about the complexity of the models that use these disciplines, so we can distinguish between what is solidly persisted competence and what is just a useful artefact of the data. Better AI models have a structural integrity beyond their NNs (like hard and soft beliefs and policies), so they can't go from logical reasoning to total nonsense in just one unfortunate transition.

    • @sssurreal
      @sssurreal 4 месяца назад +6

      Life is probability and statistics

    • @bunny_rabbit5753
      @bunny_rabbit5753 4 месяца назад

      How u know 😅, even the greatest neurosurgeon cannot answer that question completely 😅 @@lordseidon9

    • @__D10S__
      @__D10S__ 4 месяца назад +11

      Once I learned human brains are just neurons firing and neurotransmitters shuttling between synapses, I realized we are moronic.

  • @skucherov
    @skucherov 3 месяца назад +4

    In other words, you cannot think for yourself, you do not want to make decisions, you just trust "smart people"! Very original!

  • @michalkrsik2702
    @michalkrsik2702 3 месяца назад

    This title is doing a disservice to the video. It felt to me like an "AI garbo" type of video, but this is legit brilliant work. I watch everything there is on this topic, and you, sir, know how to present a nuanced perspective with solid evidence and historical trend analysis.

  • @anthonykent00
    @anthonykent00 Месяц назад

    This was the best comedy set I've seen this week. Thank you! 😂

  • @justMRV
    @justMRV Месяц назад +6

    If you still think AI is hype, you don't know much about AI. And ChatGPT is very tiny compared to the whole umbrella of use cases. AI will push business into a really new landscape in the next 5-10 years. I've been talking to so many of my friends who have AI-enabled workplaces... Yes, AI is taking away some jobs and changing the way of working...

  • @mrECisME
    @mrECisME Месяц назад +3

    8:00 But Tesla has been a complete failure? It hasn't made any profit. What are you talking about?

  • @ThatGuy-Official
    @ThatGuy-Official 3 месяца назад +14

    We definitely know already that the rate of improvement is not linear, it's not exponential; it's for sure logarithmic. One major issue with AI is that there's no framework for fine-detail alterations to the finished product. If you generate an image and you want the person in the photo to have a green hat rather than a red one, then you need to regenerate the whole image. That is very computationally expensive. The alternative would be to hire someone proficient in Photoshop to finish it. I think people are starting to learn that AI has some use cases but will not actually be replacing people en masse. Also, the companies that have replaced people with AI are starting to see drops in the quality of their products. I have a hunch that the whole CrowdStrike debacle was the result of pushing code written by AI that went unchecked.

    • @daniilnexus
      @daniilnexus 2 месяца назад +3

      "Regenerate the whole image" - that's not true; we have selected-area regeneration (inpainting) tools right now.
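      One way to do that kind of selected-area regeneration ("inpainting") is sketched below with the diffusers library; the model ID and file paths are only examples, so swap in whatever inpainting model and images you actually have (and note it assumes a CUDA GPU).

# Sketch of selected-area regeneration (inpainting) with diffusers: only the
# masked region is redrawn, so the rest of the image is left untouched.
# Model ID and file paths are placeholders for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",   # example model ID
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("person_red_hat.png").convert("RGB")  # original image
mask_image = Image.open("hat_mask.png").convert("RGB")        # white = region to redo

result = pipe(
    prompt="a person wearing a green hat",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("person_green_hat.png")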

  • @dlsmz
    @dlsmz 15 дней назад +1

    Whether hyping small things is worthwhile varies from person to person; you're judging a person for hyping themselves up for their own happiness!

  • @SeriouslyWeirdDream
    @SeriouslyWeirdDream 2 дня назад

    1:03 Damn, with that high a quality background, I thought for sure you were the chosen one that would tell us all what is next

  • @MrDanMaster
    @MrDanMaster 4 месяца назад +37

    “There’s a new virus running around” “It’s as old as human history”

  • @memegazer
    @memegazer 3 месяца назад +11

    "Human nature does not change"
    Whatever you think this means, it does not mean that human society does not change or is never informed by new concepts.

    • @ethanr0x
      @ethanr0x 3 месяца назад +2

      nature != culture omg

  • @bob_pm
    @bob_pm 3 месяца назад +19

    Bro is persuading us to not leave a tech career. What a legend

  • @pochopsp
    @pochopsp 7 дней назад

    Thanks for this. I am a junior/mid developer and this AI hype scares the shit out of me. Thanks to your video I feel calmer and more convinced that my job ain't gonna go away anytime soon.

  • @miguelabreumacedo
    @miguelabreumacedo 9 дней назад +1

    He is right; this "too big to fail" mentality was the downfall of many companies. Ford, GM, and Chrysler were once among the biggest companies in the world, but they weren't able to keep up with the times, so they are nothing compared to what they were. Kodak is an even better example, because they were ahead of the trend when it came to digital photography, but they were already too invested in brick-and-mortar stores and those stupid kiosk things that people used to print their photos, so they failed. IBM was fucking huge; they also failed.
    What do all of these companies have in common? They were enormous in terms of their structure and hierarchies, and a given with those characteristics is having a really hard time adapting, being flexible, and innovating. The next big thing comes around, they eventually fail at keeping up, and some newcomer takes their place. They're trying to stay afloat with this AI hype, but let's be honest: is there anything meaningful that AI can do that consumers at large are willing to spend their money on? No, there isn't.
    In my work I see so many businesses wanting to adopt AI into their business and the most adamant of people about it are always clueless c-level executives that have no clue about how AI works or what it can do, for them it is some kind of Black Magic. We are at a time where the next big step in technological advancement is nowhere to be seen. Elon with Space X is going after something that was already accomplished in the 60’s, just with an innovation with rockets that can land themselves… If the investment in that area was constant since the inception of space exploration we would be way pass that. Taking into account all the technological advancement since, the moon landing Space X’s accomplishments are meek in comparison…
    They are all going crazy trying to predict the next big thing and the only thing they can do is hype because the next really meaningful advancement for humanity is nowhere to be seen. Funniest thing is these companies are really young when compared to the giants I mentioned in the beginning of my comment. Can’t wait for this shit to be over, as long as companies are chasing the hype we will be wasting the smartest people in an entire generation doing something that in decades will be irrelevant. I’m not saying all this AI investment will be useless in a some decades, but it isn’t going to change the way we live as a human species directly.