AI is Slowing Down! What does this mean? - Gary Marcus and Narrowing Status Games - Follow the Money

  • Published: 5 Oct 2024
  • Science

Comments • 804

  • @DaveShap
    @DaveShap  2 months ago +52

    There's a strange amount of cope and conspiracy theories out there. This ain't the channel for that. Occam's Razor. Either there's a gigantic convoluted conspiracy to suppress the most public technological revolution in history... or it's just slowing down due to economic factors. Report from Stanford: AI is getting more expensive: aiindex.stanford.edu/report/

    • @homeyworkey
      @homeyworkey 2 months ago +21

      You were literally brewing those conspiracies; this is the most fake turnaround recently, lmfao

    • @dirkbruere
      @dirkbruere 2 months ago +3

      Training LLMs is not the totality of AI and is actually very costly and inefficient. Einstein was not created by dumping yottabytes of data into him.

    • @executivelifehacks6747
      @executivelifehacks6747 2 months ago +5

      @@homeyworkey Multiple companies competing in multiple countries, a huge benefit to leapfrogging, Ilya jumping straight to ASI... when was the last time something "SHOCKED THE ENTIRE INDUSTRY"? A few months ago, surely. Sora? 4o voice? Something actually ready for prime time? 2023?

    • @VastIllumination
      @VastIllumination 2 months ago +2

      I definitely think having a UBI funded by the abundance that AI can create is a good idea; especially given the job replacement, we will need protections. However, I'm curious: what are your thoughts and solutions for preventing social control by means of only providing UBI (including money for food) to citizens who follow specific rules? If access to food is tied to UBI, the terms under which people must act to get UBI may become a totalitarian control system. If many are dependent on UBI for material living, wouldn't that be a huge issue in terms of countries forcing political alignment or social scoring for access to UBI and food? Do you have any ideas for solutions to this problem?

    • @zhoudan4387
      @zhoudan4387 2 months ago +3

      Good job keeping rooted in reality. That's rare. You have always been upfront. The first video of yours I saw was about sigmoids and reaching a ceiling in a finite world. It is always hard to foresee when the slowing down starts, and even harder to foresee when a new sigmoid begins. Good job, best channel about AI. Ps. I suggest the conspiracy theorists make a nice tin foil hat, that'll suit 'em 😂

  • @pier-oliviermarquis3006
    @pier-oliviermarquis3006 2 months ago +29

    lol this guy went from "AGI is here, get ready for Terminator" to "AI is slowing down" in like 2 months.

    • @dandushi9872
      @dandushi9872 2 months ago +6

      True, it's actually kinda sad and ridiculous. Ppl need to chill the HELL out. Trying to say that LLMs are conscious is ridiculous. However, they may be a super powerful tool regardless.

    • @mrjoony
      @mrjoony 2 months ago +2

      @@pier-oliviermarquis3006 Yes, and that should be noted. However, I respect adjusting one's views when confronted with new data, if that person was genuinely mistaken. Of course, flip-flopping all the time is not good.

    • @blazearmoru
      @blazearmoru 2 months ago

      @@dandushi9872 I don't think consciousness is a good metric for competence. Most people are conscious. Most of those same people are not people.

  • @Andrew-0815
    @Andrew-0815 3 months ago +167

    Too bad. I want AI to turn the world and my life completely upside down as quickly as possible.

    • @MrMick560
      @MrMick560 2 months ago +14

      Me too, bring on the universal wage, although I don't understand how that can work if every beggar and tramp in the world gets the same?

    • @NirvanaFan5000
      @NirvanaFan5000 2 months ago +2

      @@MrMick560: what's confusing about it?

    • @zevofeir
      @zevofeir 2 months ago +16

      I swear a lot of us are like the main character of Fight Club at the beginning, where a part of him wishes that the plane he is on would crash, so that at least something momentous would happen and change everything, even if it is certain doom.

    • @stoneneils
      @stoneneils 2 months ago

      It's not going to. I am a businessman. I just have one simple proposal to translate into French. You really think I'm trusting my business to a machine? No way. Not even for a simple translation. NO company is using AI to replace people unless they don't care about their profit margin/reputation/growth.

    • @stoneneils
      @stoneneils 2 months ago +4

      @@zevofeir That is a huge problem, because you're waiting for something that will never happen. It's keeping you depressed and anxious while discouraging your generation from trying anything new.

  • @BunnyOfThunder
    @BunnyOfThunder 3 months ago +76

    I still believe that the effect of current technology hasn't been felt yet. It's going to take time for us to build it into our tools properly.

    • @DaveShap
      @DaveShap  2 months ago +12

      Sure, it takes ~7 years for mass adoption in the tech industry.

    • @meetadi4u
      @meetadi4u 2 months ago +8

      It happened with the Internet too. That's why Paul Krugman made that infamous statement that the Internet's impact wouldn't be greater than that of the fax machine. The Internet actually took off exponentially post-iPhone release, and AI is waiting for its iPhone.

    • @stoneneils
      @stoneneils 2 months ago +1

      It's not been felt because technology can't replace people nearly as much as many believe... or want to believe.

    • @sCiphre
      @sCiphre 2 months ago +2

      @@meetadi4u Except its iPhone is the iPhone. There's really nothing else needed except an AI and a little software scaffolding; a separate device is unnecessary and probably detrimental.

    • @milanpospisil8024
      @milanpospisil8024 2 months ago

      The effect has not been felt yet, BUT it is obvious that this current technology is only a support tool, and it does not even increase productivity as much as some think. In our company, you can't really see an obvious difference between the people who use it and those who don't... It is still dumb. It is only a better Google. The question is how good it can be in the future, and that's very unclear. It may be a big surprise, but also a big failure.

  • @jamesjonathan29
    @jamesjonathan29 3 months ago +123

    Why should we assume that we need AGI for mass job displacement and UBI?
    It can happen even without AGI.

    • @zerosin4433
      @zerosin4433 3 months ago +6

      yes, correct!

    • @DaveShap
      @DaveShap  2 months ago +35

      That's part of my point. Disruption is coming even before AI replaces all humans.

    • @adamrak7560
      @adamrak7560 2 months ago +16

      @@DaveShap yes, if you can replace 30% of the jobs with slightly general narrow AI (like LLM with vision on robots), then you have a historic job disruption event. The problem is that after that point, AI development will likely outpace the ability of people to retrain themselves for new jobs. At least until we reach a high percentage of automation, then it will either slow down, or true AGI will close the gap quickly.

    • @epicrampage3041
      @epicrampage3041 2 months ago +1

      Gary’s been salty for a while. One of my only blocks on Twitter. His perceived status in the community doesn’t come close to reflecting reality.

    • @pietervoogt
      @pietervoogt 2 months ago +1

      As you see with self driving cars, deployment of lesser AI also suffers from long tail problems. It would go faster if we designed the world around AI capabilities but so far that only happens in factories and agriculture. But generally I agree.

  • @tomadam1465
    @tomadam1465 3 months ago +30

    What is disturbing is that even without ASI we can make excellent killing robots with the current technological skill set. Before reaching AGI, it would be nice to learn how to cooperate rather than beat each other down. The whole of humanity would be better off.

    • @missoats8731
      @missoats8731 2 months ago +1

      But maybe AGI has some tips for us on how to do that. We certainly don't seem to get it on our own.

    • @tomadam1465
      @tomadam1465 2 months ago

      AI will be what we train it for… So, it is kind of a chicken-and-egg problem. I'm just afraid that humanity gets access to a technology we are not grown up enough for…

    • @MrWizardGG
      @MrWizardGG 2 months ago

      The current tech still uses neural nets, which is AI.

  • @RonyPlayer
    @RonyPlayer 2 months ago +14

    Saying that we can't have AGI because the human brain works a certain way is like saying we can't build supersonic planes because bird wings can't flap fast enough to sustain supersonic flight.
    The way machine learning and human brains work is different; sure, some scientists have drawn comparisons between the two, but they aren't at all the same. The question isn't whether we can replicate the mechanisms of the human brain in silicon; the question is simply whether we can build machines that perform as well as humans on a large enough number of tasks, and moreover, whether said machines can build a next generation of machines that perform even better. How AI achieves task performance is completely secondary; the important thing isn't the "how" but the "if". If it quacks like a duck....

    • @milanpospisil8024
      @milanpospisil8024 2 months ago +2

      Sure, the human brain is not the only possible solution for AGI.

  • @pandereodium
    @pandereodium 3 months ago +76

    When you are fast enough to escape the atmosphere, even if you are no longer accelerating, you're still escaping the atmosphere.

    • @robertlipka9541
      @robertlipka9541 2 months ago +4

      ... Yes, but if you stop accelerating only a few kilometers up... oops!

    • @griffithf.k.4136
      @griffithf.k.4136 2 months ago +1

      An object in space has momentum. AI progress is not an object in space, and has no momentum. It takes *work* to make technological progress. Momentum is when you keep moving *without* having to work for it.

    • @pdjinne65
      @pdjinne65 2 months ago

      Except you don't know the value of gravity in this case.

    • @southholland6277
      @southholland6277 2 months ago +1

      The problem with scaling is definitely financing.

  • @ArmoredAnubis
    @ArmoredAnubis 3 months ago +31

    But I'm still getting my cat girl robot waifu right?😢

    • @ryzikx
      @ryzikx 2 months ago +5

      yes

    • @DaveShap
      @DaveShap  2 months ago +12

      Yes, eventually

    • @olivesama
      @olivesama 2 months ago +3

      @@DaveShap How are the odds looking that I still get to **be** the cat girl robot waifu?

    • @barry1807
      @barry1807 2 months ago +3

      Let's be honest, intelligence is not what we need in our waifu bots :)

  • @iamjohnbuckley
    @iamjohnbuckley 3 months ago +17

    I think it’s clear from the arrival of the NSA at OpenAI that a deliberate slowdown is in effect. We’ll get a controlled rollout from this point on.

    • @nickamodio721
      @nickamodio721 2 months ago +6

      Perhaps we'll see a "controlled rollout" from OpenAI going forward, but what about all of their competitors, both in the US and abroad? I don't think AI development can effectively be slowed all that much, even if governments truly tried to do it. There's always going to be someone else working on the same tech somewhere else. There's too much to lose. Billions upon billions of dollars have already been committed to the cause by both public and private investors; this train isn't slowing down or stopping without research hitting an insurmountable scaling wall, and as of now I certainly wouldn't bet against all of the researchers who are working on better algorithms, better architecture, purpose-built hardware, etc. Research goes where the money is, and right now most of the world's largest and richest tech companies and governments have gone all-in on AI; that alone should tell us something.
      To me it seems possible, perhaps even likely, that we might only need a few more notable iterations of improvement, on the level that has regularly occurred over the past several years, to end up with something more intelligent and powerful than we can even comprehend.

    • @lp712
      @lp712 2 months ago +1

      AI will not be slowly rolled out by competitors and other countries. OpenAI's actions do not decide what everyone else will do. You would have to have the entire world agree to "slowly rolling out AI", which will never happen and has already failed once (the "call for a 6-month pause on AI" letter).

    • @MrWizardGG
      @MrWizardGG 2 months ago +1

      It's not slowing down whatsoever, though; many companies are making rapid progress, such as Claude Sonnet, Gemini, OpenRoute, and several foreign models.

  • @chadwick3593
    @chadwick3593 3 months ago +13

    As someone that's watching what tools people are coming out with, dumpster diving through the code, analyzing the shortcomings, playing around with them to figure out their strengths and weaknesses, and helping companies take advantage of what's out there, I'd say... if things are slowing down, then we are not even remotely in a position to determine that. The algorithmic overhang is so overwhelmingly large at this point that I wouldn't be surprised if modern techniques applied to GPT-3 could surpass GPT-5.

    • @DaveShap
      @DaveShap  2 months ago +5

      Yes, I've said for a long time that there's a big difference between what's possible in the lab (which is slowing down) and what can actually be deployed usefully (which is accelerating).

    • @chadwick3593
      @chadwick3593 2 months ago

      @@DaveShap Fair point. The big AI labs do seem to be slowing down, and they're a big part of driving the forefront of AI tech. I can see academia & open source playing a much larger (and accelerating) role over the next few years since they're the ones driving the algorithmic side of pushing AI performance.

  • @andydataguy
    @andydataguy 3 months ago +7

    Woohoo! Please keep sharing this message so that us stealth startups can build a bigger lead.
    AGI pacing hasn't changed. Sonnet 3.5 intelligence with finetuning would be enough. Everything after that is gravy.
    The trillion-dollar clusters will take a while. But they're not needed for autonomy and agency, especially within niche domains.
    Which is also where the money is at.
    The next generation of models (this fall) will be more than sufficient for what 99% of people would call AGI.
    Altman's idea that it has to beat 250 researchers to be AGI is just him trying to protect that bag from Microsoft.
    It's all about the agent communications. The networking layer is where people are underestimating the intersection of exponential gains. You know this better than anybody!
    I'm glad you have the Spock uniform on again. Your inventions along this journey have shaped the industry. Thanks for all that you do brother 🙏🏾💜

  • @JohnLewis-old
    @JohnLewis-old 3 months ago +9

    You're in a bubble, David. You've grown accustomed to the frequent release of new models and capabilities, and now that the headlines have slowed, it feels like progress has stalled. Your information sources, with fewer tweets and less frequent content from creators, reinforce this feeling. While this might be true, 99% of the world hasn't yet grasped what's happening. Try asking someone at Starbucks about AI without revealing your background. Most people have no idea about the speed of advancements, and from their perspective, there is no noticeable slowdown because they aren't aware of the developments at all.
    There are now thousands of AI companies compared to just a few years ago. While large language models (LLMs) might not be advancing rapidly at the moment, that's only one part of AI, and we're still at the beginning of this technological journey. A major breakthrough is on the horizon, one that will change everything. With thousands of companies exploring different approaches, new ideas will emerge. When I think about how we'll look back on this time, I find it amusing because we are currently so naive.

  • @gubzs
    @gubzs 2 months ago +31

    As someone that worked retail for a few years in the late 2010s, I'm not sure that 20% of humans would pass any given AGI test. It's easy to forget how disturbingly dumb some people are when you aren't often exposed to the pool that contains literally everyone.
    It will be hard to displace most human jobs, but the bottom 20% / low hanging fruit could be done with what we have _today_ imo.

    • @michaelmarshall5438
      @michaelmarshall5438 2 months ago +2

      Unemployment reached 25% during the great depression.

    • @raymond_luxury_yacht
      @raymond_luxury_yacht 2 months ago +1

      Sad but true

    • @hardlYIncognito
      @hardlYIncognito 2 months ago +1

      Look up the Dunning-Kruger effect. It likely applies to you.

    • @gubzs
      @gubzs 2 months ago +5

      @@hardlYIncognito Tell me you've never worked a menial job. Sorry bud, they're toast.

    • @cathompson58
      @cathompson58 2 months ago

      We need AGI to replace corporate jobs more than labor jobs in my opinion

  • @robertlipka9541
    @robertlipka9541 3 months ago +56

    AGI's arrival is inevitable; I have a low care factor for its exact "birthday". The important question is: when will we achieve biological immortality or, at minimum, Longevity Escape Velocity?

    • @ScarlettM
      @ScarlettM 3 months ago +20

      Agree 100%. I don't really care if AGI comes in 5 years or 20 years, as long as my family and I are young and healthy. We can wait...

    • @robertlipka9541
      @robertlipka9541 3 months ago +16

      @@ScarlettM ... yes, but the implied part is that if reaching LEV takes longer than 30 years or so, a lot of us are starting to get into danger territory.

    • @captain_crunk
      @captain_crunk 3 months ago

      ...when / if AGI automates most economic tasks, will society collapse? If not, why? Did it take long enough for society to naturally adjust? Or did the government regulate it properly? Did Sam Altman donate Openai's profits to the general public? Personally, I think society collapses if AGI gets here before 2050. Government is too slow to effectively regulate it, and capitalism isn't going to be altruistic all of a sudden. I hope for slow progress.

    • @ryzikx
      @ryzikx 2 months ago

      @@robertlipka9541 2029-2030 for both, according to Kurzweil

    • @Pau_Pau9
      @Pau_Pau9 2 months ago +3

      Blah Blah Blah,
      New Age Tech word salad.
      Machines will NEVER be conscious.

  • @magicsmoke0
    @magicsmoke0 3 months ago +18

    I didn't think we'd get AGI this year, but I (and many others) expected GPT-5 by now, and I think that's the underlying factor people are using to say AI is slowing down. Claude's success is a great sign that things are still moving forward, but everyone is waiting for GPT-5. Improved reasoning / System 2 thinking is what's going to enable a step function to the next-gen model. We don't really need a true AGI, just something that can reason and brainstorm with us at a very high level to help us think of the next breakthrough idea.
    Hopefully the rumors are true and OpenAI is just waiting for the elections to be over, and we'll get GPT-5 by New Year's 2025.

    • @markjackson1989
      @markjackson1989 3 months ago +3

      How could we get GPT-5 when 4o's voice mode is delayed by months?

    • @magicsmoke0
      @magicsmoke0 3 months ago

      @@markjackson1989 yeah, thus the concerns of things slowing down. I suspect they may come together if the target date is end of year for GPT-5.

    • @Markoss007
      @Markoss007 2 months ago +2

      @markjackson1989 Because they delayed it only because of the elections. Not because it could cause something, but because they would be blamed for a loss.
      GPT-5 can be in the works if it is a new model. They could even publish it before GPT-4o, if the competition is better in the meantime.

    • @devilsolution9781
      @devilsolution9781 2 months ago

      System 2 doesn't really mean much; they need to talk about the executive function. Plus that book's hella old and basically just talks about conscious and subconscious thinking.

    • @LatinumInstitute
      @LatinumInstitute 2 months ago

      My understanding is that GPT-5 is waiting on sufficient compute to come online; it already exists as an architecture.

  • @coolcool2901
    @coolcool2901 3 months ago +8

    The brain is a warm, wet environment, which typically causes rapid decoherence of quantum states. Most cognitive processes and brain functions are still best explained by classical neuroscience and biochemistry.
    The Energies Involved in the Physics of the Brain:
    1. Chemical Energy
    2. Thermal Energy
    3. Mechanical Energy
    4. Electrochemical Potential Energy
    5. Osmotic Energy
    6. Vibrational Energy
    7. Gravitational Potential Energy
    8. Light Energy (Biophotons)
    9. Nuclear Energy
    10. Elastic Energy
    11. Sound Energy
    12. Magnetic Energy
    13. Conformational Energy
    14. Surface Energy
    15. Quantum Energy
    16. Electromagnetic Energy
    However using all 16 of these energy types isn't the only possible way to achieve advanced cognition.
    Many of the energy types in biological brains are related to maintaining the physical structure and biochemical balance of living cells, which is unnecessary for artificial systems.
    While biological brains use a complex interplay of various energy types, an artificial system can theoretically achieve superintelligent capabilities through a more focused and specialized use of energy, primarily electrical and electromagnetic.
    Yet we've got a few more things to learn.

  • @TRXST.ISSUES
    @TRXST.ISSUES 3 months ago +17

    I liken things to waking AI from a dream state. Once the AI is lucid (enough) to meaningfully self improve from its own work and agency, then the hockey stick takes off.
    It'll be like going from zero to one and everything prior will seem slow.

    • @DaveShap
      @DaveShap  2 months ago +6

      Yes, eventually, maybe. But I wouldn't count on it. Consider that the smartest people on the planet are working on AI and progress is still slowing down. Sometimes intelligence is not the bottleneck.

    • @TRXST.ISSUES
      @TRXST.ISSUES 2 months ago +1

      @@DaveShap I agree on intelligence not necessarily being the bottleneck.
      One big advantage of digital intelligence seems to be in the time domain; they can simulate thousands of years in days.
      So while the best minds of bio intelligence are struggling now, perhaps they wouldn’t if they had enough time.
      So even just reaching parity with human minds can create the inflection point, because of that time advantage.
      I don’t think we need to simply go bigger as many AI experts think, fundamental improvements to the architecture may produce more immediate breakthroughs if data is a constraint.
      Simply introducing better prompting often creates better problem solving, so meaningful layers of abstraction could unlock lots of performance.
      I still find it interesting that Sonnet 3.5 bests Opus despite being the “weaker” model on size.
      I think what will cause the breakthrough is something no one is paying attention to. A dark horse.

    • @KellyMcc-om8sr
      @KellyMcc-om8sr 2 months ago

      I just want it to help me code villain graphics applications without needing to beat my head against the wall.

    • @KellyMcc-om8sr
      @KellyMcc-om8sr 2 months ago

      Umm. Vulkan graphics applications.

  • @vi6ddarkking
    @vi6ddarkking 3 months ago +36

    In my opinion it's not so much that AI is slowing down; it's that we've reached the limits of what we can do with "brute force".
    Just adding more parameters isn't going to cut it anymore, so now we need to get creative and do more with less.
    Stable Diffusion is an excellent example of this, since Stability could only release models every so often.
    So in the meantime the open-source community went to hell and back to make tools that compensate for its shortcomings.
    So until the next explosion, likely adding search into the models,
    we'll be refining the tools to use and train them.

    • @miguelmalvina5200
      @miguelmalvina5200 3 months ago +6

      When we hit the wall with our current methods, research will start on new, better architectures and models for AI, once they know they can't keep milking LLMs as much as before. It's more about money, honestly.

    • @GodbornNoven
      @GodbornNoven 3 months ago +5

      You're completely wrong. We have not hit ANY wall with scaling.
      We already have ways to integrate tool usage into AI.
      You're talking about RAG. RAG already exists and is pretty popular.
      You are right about the idea that we should try to do more with less, but you're wrong that we've hit a wall with scaling. No wall has been observed yet; there is continuous improvement as the quantity of parameters, data (and data quality), and compute increases.
      The improvement is slowing down, yeah. Undeniably so. But to say we've hit a wall? No. That's wrong. Plus, in a couple of years we'll have chips that run AI way faster than modern ones. Scaling will be a must, to ensure model capabilities.

    • @milanpospisil8024
      @milanpospisil8024 2 months ago

      @@GodbornNoven We have not yet hit the wall, but it may be close, in the sense that models will become cheaper and more widespread, but enormous models with more and more data will maybe show only small improvements, so they will not become profitable. But I don't believe even large models will lead to AGI. I believe we need better methods, and that could take more decades to reach.

    • @MrWizardGG
      @MrWizardGG 2 months ago

      Claude 3 Sonnet, several unreleased "Perfect Deepfake" models from Microsoft, and the Chinese models all show massive recent advancements.

    • @jpg6296
      @jpg6296 2 months ago

      @@GodbornNoven "The wall has not been observed yet"... AI is flat earth

  • @ngbrother
    @ngbrother 2 months ago +2

    Any perceived slowdown is an artifact of the “haves” choosing to gate how quickly they release their tech to the public. They are incentivized to stretch the game out as long as possible.

  • @SmellyHam
    @SmellyHam 2 months ago +8

    It's slowing down... until it isn't in a few months, when some advancement "SHOCKS THE ENTIRE INDUSTRY!" Then a few months later, "OH NO, AI PROGRESS IS SLOWING DOWN!" Until it isn't again, repeated ad nauseam.

    • @siloporcen
      @siloporcen 2 months ago +1

      this was the comment I was looking for. bingo bingo.

    • @jpg6296
      @jpg6296 2 months ago +1

      Exactly. Progress of any sort (an improving chess player's Elo, progress in a negotiation, a golf ball's distance from the hole) always happens in plateaus and breakthroughs. That's the local landscape. But just zoom out and you can examine the global contour.

    • @dontmindmejustwatching
      @dontmindmejustwatching 20 days ago +1

      this aged well. o1 entered the chat.

  • @sbowesuk981
    @sbowesuk981 2 months ago +5

    We've entered "AI arms race" territory, meaning it's as much a race of governments on the international stage, as it is a race between tech giants. Secrecy, espionage, and intellectual property theft are likely to dominate the generative AI space.
    As time goes by, it's going to become more and more difficult for regular people viewing all this from the outside to gauge the true rate of progress. Even judging progress by scrutinising finished product releases will not be that reliable, since advancements seen to be sensitive may be kept secret from the public. We'll get the toys, but the truly powerful AI may not be made public.
    The race for AI dominance and the future of humanity has become opaque. Only a select few will truly know the rate and direction of generative AI progress.

  • @tristanotear3059
    @tristanotear3059 2 months ago +1

    Thank you for sharing your increasing realization of the difference between human and machine intelligence. For a long time, as you say, people in AI have been assuming that the brain is computational. IMHO, that makes people into machines. Of course, some people, particularly neuroscientists, think we are machines, which makes something shrunken and uninteresting of the human experience. So I salute you for recognizing that there's something much more to existing as a human than being a robot.

  • @user-on6uf6om7s
    @user-on6uf6om7s 3 months ago +9

    I've never been an AGI fetishist; I think there's a lot of potential in systems that aren't broadly accepted as AGI, and I've been disappointed in how much of the AI community tries to diminish what we have and create unrealistic expectations for what is to come.
    That being said, I feel like a lot of the job replacement capability is there, or is going to be there, in sub-AGI models like GPT-5, but we may stop short of the really positive implications of solving governance and climate change. That wouldn't be a great place to linger for long.

    • @hardlYIncognito
      @hardlYIncognito 2 months ago

      Underrated comment, well said. Keep in mind most AI enthusiasts you come across don't even work in the field, and their only exposure is the algorithmic suggestions provided to them by YouTube.

  • @KurtzMista
    @KurtzMista 2 months ago

    Thanks for helping to normalise that it's okay to have missed or have been mistaken about something! In our day and age this means a lot.

  • @dadehax0r
    @dadehax0r 2 months ago +4

    Too bad they got rid of showing how many people dislike a video.

  • @SirHargreeves
    @SirHargreeves 2 months ago +3

    I'm going to wait for GPT-5 and Q* before making a judgement, but yes, it's possible diminishing returns are biting a little.

    • @MrWizardGG
      @MrWizardGG 2 months ago +1

      Also, 3.5 Sonnet just came out very recently and is a big improvement.

    • @dontmindmejustwatching
      @dontmindmejustwatching 20 days ago

      o1 arrived. what are your thoughts?

  • @Robubbabub
    @Robubbabub 2 months ago +4

    AGI is already here. Companies are just not opening the floodgates. Ilya left to create ASI knowing that it was possible based on what he saw within OpenAI, and knowing he would have significantly fewer resources as a new startup.

  • @muddypyg
    @muddypyg 2 months ago +1

    Brilliant. Totally balanced and pragmatic. Organising the complexity of neural networks at the macro level gets exponentially more difficult as the performance of the individual nets increases. The human brain isn't just one big neural net; it's a very complex architecture of interconnected structures at the macro level. Determining this macro-level organisation will be a science in its own right. Progress in this area will provide the next big breakthroughs.

  • @BradleyKieser
    @BradleyKieser 2 months ago +2

    The truth is that the current growth in LLMs was a gamble: that if you threw enough computation and data at LLMs, something magical might happen, even though we don't understand it, and give us cognition. The longer, harder route is that we need to first understand what cognition is before we can start to make machines that approximate it. The dirty secret is that it was just a hope that scale would solve the problem without us understanding it. It turns out that taking a chance on math doesn't always work out. You need to do the hard work of understanding the problem.

  • @andrewsilber
    @andrewsilber 3 months ago +20

    Interesting take. A few random thoughts:
    1. Regarding not understanding the brain: I see the brain sometimes come up as a bit of a straw man. As long as the AI functionally behaves as we want (or beyond), then it doesn’t really matter if it’s a simulation of the brain.
    2. Cost: We've been seeing companies start to lean into ASICs for things like transformers, which seem to reduce the requirements pretty significantly.
    3. Lack of “breakthroughs”. I’m just wondering out loud here if what we’re seeing is not just companies and researchers playing their cards a bit closer to the vest these days for fear that their secret sauces will leak out.
    Obviously the absence of evidence doesn’t prove a conspiracy to cover up evidence. I’m just musing…

    • @DaveShap
      @DaveShap  2 months ago +2

      Occam's Razor. Either there's a gigantic convoluted conspiracy to suppress the most public technological revolution in history... or it's just slowing down due to economic factors.

    • @andrewsilber
      @andrewsilber 2 months ago +5

      @@DaveShap Well no, that was just an example. My point is that the industry as a whole might be getting a little more circumspect about revealing their secrets. Anyhoo it was just a hot take. I have no real supporting data.

    • @rocktruth1
      @rocktruth1 2 months ago +1

      @@DaveShap It could be a factor of both. I believe technological evolution often occurs through sudden, significant leaps akin to phase transitions in physics, rather than a steady, incremental rise. Progress can appear stagnant, punctuated by abrupt advancements that seem to happen overnight. This phenomenon is often driven by intensive, behind-the-scenes development that remains confined to a select group or organization. Once these advancements are unveiled or disseminated, the rate of progress accelerates exponentially. I believe that the apparent pauses in breakthroughs are not due to a lack of financial investment but rather a scarcity of resources. Artificial intelligence, however, is poised to mitigate these resource constraints, thereby amplifying our capacity for innovation and development. I don't believe anything is slowing down!

  • @ct5471
    @ct5471 2 months ago +3

    Does that mean a time delay to AGI (like a linear displacement of the curve of 2-3 years, with the growth rate remaining the same as your previous estimates), or is the growth rate itself slower? In other words, is September 2024 now around 2027 but everything else you said remains unaltered in the aftermath of reaching AGI, or does it also affect the post-AGI development speed?

  • @officebreakgaming1555
    @officebreakgaming1555 3 months ago +8

    It would be awesome if you could give us an updated opinion on what lies in the future. Perhaps we should think of AI as a set of tools that we have to learn to use effectively rather than a machine god (I’m not saying that you ever thought that)? Also, based on what you’re seeing, what do you think we can accomplish in our lifetime? To me, our current level of AI very much feels like the computer from TNG.

    • @sylversoul88
      @sylversoul88 2 months ago

      David, I would likewise appreciate it if you could make a video with updated timelines for adoption and the eventual economic paradigm shift and UBI.

  • @ares106
    @ares106 3 months ago +8

    David, I appreciate you changing your prediction based on new evidence.

    • @ryzikx
      @ryzikx 2 months ago +2

      @@AI-Wire he only mentioned exponential cost

    • @DaveShap
      @DaveShap  2 months ago +2

      It is a mark of profound stupidity to not update one's views when presented with new evidence.

    • @kimpeater1
      @kimpeater1 2 months ago +3

      A 5 year difference is quite a change though, esp 2 months before the previous prediction...

  • @ZXNTV
    @ZXNTV 2 months ago +2

    UBS > UBI
    The prices of services will go up; basic income would have to be updated to follow.

  • @Perspectivemapper
    @Perspectivemapper 3 months ago +5

    🏎 I don't see AI slowing down in the least. Rather, I see it filling out.

  • @EdwardAustin
    @EdwardAustin 2 months ago

    Appreciate you updating your views based on new information. This is exactly why your audience can trust you.

  • @ct5471
    @ct5471 2 months ago +2

    The question is how much more do we need? If GPT4 had around 1 billion parameters and the brain has 100 billion synapses, two orders of magnitude (relative to GPT4, which is already a year old by now) might be sufficient.

  • @humaina-consultancy
    @humaina-consultancy 2 months ago +1

    I've worked with AI for the last 10 years and I've always said that there is more to consciousness and the knowledge that is shared between all living beings. I wouldn't go as far as calling it divine, but fungi play a crucial role in our understanding of the universe. Computers don't have a gut, so they don't have access to this intelligence.

  • @ShortCrypticTales
    @ShortCrypticTales 2 months ago +1

    no matter what side anyone is on, this advice will always hold true: "Prepare for the worst, hope for the best"

  • @ct5471
    @ct5471 2 months ago +2

    Whether it's 2-3 months or 2-3 years, it's still close. Kurzweil said we are 2-3 years ahead of his initial 2029 schedule, which fits with around 2027. So still better than it used to be, while not as good as September 2024. Still close.

  • @Binyamin.Tsadik
    @Binyamin.Tsadik 3 months ago +3

    Fair assessment
    Convolutional networks will get introduced eventually and should solve the training problem.
    Each network can get trained independently.

  • @ArchonExMachina
    @ArchonExMachina 2 months ago +1

    I think it is slowing down, because agent or cognitive framework programming is the next step, and programming always proceeds with baby steps. It would be crazy if LLM scale could give us everything, including far-going super-sharp step-by-step reasoning. I think AGI or ASI will be a CF where multi-modal large ML models combine with classical algorithms, a logic processing API, and a semantic knowledge model backbone.

  • @danielthunder9876
    @danielthunder9876 2 months ago +2

    I was telling you this over a year ago. LLMs in their current form are fundamentally too simple and too limited (they can't even learn). We are in a hype cycle and people will always promise the world during this. A next word predictor trained on text was never going to brute force to AGI. There are many unknown unknowns left yet to solve. This is just a step.

    • @MrWizardGG
      @MrWizardGG 2 months ago

      You are so goofy. AI is changing the economy quickly.

    • @danielthunder9876
      @danielthunder9876 2 months ago

      @@MrWizardGG Is it? Apart from boatloads of spam on social media and maybe some call centers, what has changed?

  • @williambackus1807
    @williambackus1807 2 months ago

    I have always defined intelligence as a scaling value of how accurately one is able to make a decision with the least negative impact on the individual making the decision.

  • @DeathHeadSoup
    @DeathHeadSoup 2 months ago +1

    This video has the presupposition that AI is slowing down but doesn't provide defined metrics for progress nor does it consider horizontal progress. This has been a common thing with GPT-4o where most people do not see the improvement because they were staring at LLM benchmarks. Audio as an input and output is a huge deal in and of itself. Perhaps you should come up with some method to evaluate horizontal progress along with vertical progress outside of LLM benchmarks.
    Vertical progress: Improvements in existing capabilities or metrics. For example, achieving higher scores on established benchmarks or improving performance on well-defined tasks.
    Horizontal progress: Expansion into new domains, capabilities, or applications. This could include AI systems tackling previously unsolved problems or entering entirely new fields.

  • @tyisamess
    @tyisamess 1 month ago

    I am kind of glad there is a slowdown. I still want AI progress, but society needs time to figure out how we are going to handle more advanced AI. So, I think this is a blessing in disguise.

  • @Sirbikingviking
    @Sirbikingviking 2 months ago

    I just wanna say I appreciate you always trying to update your views of the future of AI based upon current data as it changes

  • @littleefizz
    @littleefizz 2 months ago +1

    If we don't know what it will take to reach AGI, what is the point of making predictions? Isn't it like trying to guess how many alien civilizations are out there? I think we just need more data (in both cases).

  • @SOVEREIGNASI-sunijim250
    @SOVEREIGNASI-sunijim250 2 months ago

    Your challenging conversations hold the tension between people, particularly when people have issues that bring them into tension with each other. Everybody's responsibility is to take that longer view, keep an open door to the conversations, and hold on to that tension to help navigate those challenging situations and find the way.
    Thank you so much for the work you've done as our Chief AI officer.

  • @NikoKun
    @NikoKun 2 months ago +3

    We're merely experiencing the typical summer slowdown that impacts all tech cycles. Skeptics and doubters always point to that slowdown to suggest something they don't like is dying.
    They did it last summer, to ChatGPT. Once school let out, ChatGPT's usage took a significant dive, and the skeptics said it was all over. Then school came back in session, and usage went even higher. I don't trust anything Gary Marcus says.

  • @bigbadallybaby
    @bigbadallybaby 3 months ago +2

    David, can you address the problem of trust and reliability in LLMs? Just like self-driving cars, they can be correct 99% of the time, but that 1% will be disastrous. This means they can't be used for autonomous driving or running a business.

    • @socialenigma4476
      @socialenigma4476 2 months ago +5

      It doesn't have to never make a mistake while driving, it just has to make fewer mistakes on average than humans.

    • @fiiral5870
      @fiiral5870 2 months ago +1

      @@socialenigma4476 except that they already are better than the average human, but we still don't trust them enough to drive on their own

    • @bigbadallybaby
      @bigbadallybaby 2 months ago

      @@socialenigma4476 an issue is they make different mistakes than humans (same as an LLM). They think there isn't a car coming and drive with complete confidence into it, whereas human crashes are based on reckless driving. The same when an LLM goes off on a completely made-up answer, inventing facts and everything; humans make very different errors.

    • @bigbadallybaby
      @bigbadallybaby 2 months ago +1

      @@fiiral5870 it’s the type of mistake that AIs make that is different from human behaviour. Humans typically know if they are unsure of their answer or caveat their response. Current AI has the same level of confidence in every answer it gives (whether it’s right or ridiculously, obviously wrong).

  • @7TheWhiteWolf
    @7TheWhiteWolf 2 months ago +7

    So you’re flip flopping on your September 2024 prediction as it draws closer, got it. 👍🏻
    Just admit your September 2024 prediction was too early dude, nothing has slowed down, 2024 was just way too soon. I honestly think you’re a grifter riding the hype train, you did this right after GPT-4 came out and pulled a full 180 after the hype took off, and now as we get closer to your prediction a year later, you’re expectedly dropping out again back to your pre-GPT 4/early 2023 position, totally knew you would do that btw.

    • @ryzikx
      @ryzikx 2 months ago

      incorrect prediction = grifting?

    • @7TheWhiteWolf
      @7TheWhiteWolf 2 months ago +2

      @@ryzikx He rode the hype train GPT-4 created, and now that the hype has gone down, he’s reverting back to his old position, *nothing has slowed down* his prediction was just way too soon. Even Kurzweil didn’t think AGI was going to happen until 2029. Nobody in their right mind thinks AGI is happening this year, not even Kurzweil.

  • @thelavanation
    @thelavanation 3 months ago +8

    Fascinating! We truly have no idea when AGI will come...

    • @ryzikx
      @ryzikx 2 months ago +1

      2029 at the latest according to Kurzweil

    • @kimpeater1
      @kimpeater1 2 months ago

      If it comes at all

  • @nmeau
    @nmeau 3 months ago +25

    So how is it slowing down?

    • @DaveShap
      @DaveShap  2 months ago +2

      Already made a video on that. ruclips.net/video/FS3BussEEKc/видео.html

    • @executivelifehacks6747
      @executivelifehacks6747 2 months ago +4

      When is the last time the ENTIRE INDUSTRY was SHOCKED? We were a quivering mass lying on the floor in 2023.

    • @MrWizardGG
      @MrWizardGG 2 months ago

      @@executivelifehacks6747 Claude 3.5 Sonnet was very recent and is pretty much the biggest AI advancement so far.

    • @MrWizardGG
      @MrWizardGG 2 months ago

      There are also several industry-shocking models currently not released by Microsoft, because perfect deepfakes would harm society.

  • @marekvostek5989
    @marekvostek5989 3 months ago +7

    You were certain that AGI would be here in 2024. Now, you think AI is slowing down. This is a good example of how useless predictions and opinions are.
    I appreciate your deep thinking, though.

  • @JohnThomas-mb9go
    @JohnThomas-mb9go 2 months ago +1

    I like to think intelligence is the friends we make along the way

  • @djstraylight
    @djstraylight 3 months ago +2

    Great analysis. Bill Gates was also predicting continued acceleration. The scaling laws for LLMs are still holding, but it seems the money isn't scaling as fast. No VC wants to drop a bunch of money and then see the bubble burst. Dario Amodei (from Anthropic) has already said the billion-dollar training runs are in progress. As you said, the apparent slowing is making some ripples, and some organizations are probably happy to be able to catch their breath.

  • @leecoleman1647
    @leecoleman1647 2 months ago

    Warehouse worker here - looking forward to robotic replacement! Ultimately, I want to do physical therapy work.

  • @jippoti2227
    @jippoti2227 3 months ago +3

    Ray Kurzweil didn't predict AGI in 2023. His prediction since 1999 is that we'll have AGI by 2029.

    • @ryzikx
      @ryzikx 2 months ago +1

      literally

  • @samvirtuel7583
    @samvirtuel7583 2 months ago +1

    As long as the hallucination problem has not been resolved (it is only a question of model precision and therefore resources), LLMs are unusable in the real world.

  • @simplescience777
    @simplescience777 2 months ago +1

    Pleased to see that we have started accepting consciousness as it should be-intelligence is everywhere. We use our human machinery to capture a tiny amount of it.

  • @pixel1145
    @pixel1145 3 months ago +1

    Don't care much if AGI or not AGI. The question I care about is: do we have enough tech to figure out ageing and regenerative medicine? Feels like we are making progress with AI, but how much? Are we close? It would be interesting if you could really focus on this topic and on real progress.

    • @NoidoDev
      @NoidoDev 3 months ago

      Last time I checked this was a problem of regulation, and spending money on medical studies. There are plenty of things that can be tried out without AI.

    • @pixel1145
      @pixel1145 2 months ago

      @@NoidoDev yes, I agree, plenty of possibilities and obstacles. I would like a channel focused on this specific topic.

  • @palnagok1720
    @palnagok1720 3 months ago +38

    Human beings, for the most part, are grown-up children.

    • @mynameisjeff9124
      @mynameisjeff9124 3 months ago +4

      How is this relevant

    • @Falkov
      @Falkov 3 months ago +7

      Humans have the capacity to gain insights and grow wiser, but it's not guaranteed and the rates can differ WILDLY.
      Age, titles, accomplishments, and status seem irrelevant to assessing the quality of ideas and actions... somehow, it seems like most people don't recognize or welcome that.
      Desire to look superior and prevent others' negative perception seems like the dominant social game.

    • @ryzikx
      @ryzikx 2 months ago +2

      i was born as a baby then i grew up too. coincidence?🤔

    • @imaspacecreature
      @imaspacecreature 2 months ago

      Do you mean "adults"?

    • @kimpeater1
      @kimpeater1 2 months ago +4

      Is this supposed to be profound or insightful?

  • @archducky7492
    @archducky7492 3 months ago +2

    Interesting, I just wish you had backed up the slowing-down and too-expensive claims with the data you looked at, and shown some graphs or numbers for these observations.

  • @jlfgms
    @jlfgms 2 months ago

    Good stuff David. As they say, only a fool clings to their views in the face of new evidence. This is why I think you're one of the best voices in AI communication right now. Kudos to you, and it's a privilege to be in your audience. All in all, what a time to be alive!

  • @boredmango2962
    @boredmango2962 3 months ago +6

    David, I appreciate what you’re saying but it really isn’t slowing down, the big companies have just been quiet on the PR front currently, don’t lose hope man! Much love ❤️

    • @DaveShap
      @DaveShap  2 months ago +2

      This is cope. We're in a race condition, and so all companies have randomly decided to clam up for... what reason?

    • @tituscrow4951
      @tituscrow4951 2 months ago +2

      @@DaveShap a weird AI version of the dark forest 👀

  • @abrahamismail
    @abrahamismail 2 months ago +2

    "Cope and conspiracy" is not a productive way to engage on the topic of whether AI is slowing down or not. It's an empirical discussion, not emotional. The facts are: it isn't. The fact that is most important, though: companies that are developing AI are withholding features due primarily to potential legal ramifications and lawsuits. A vast majority of the major leaps in AI have been accompanied by litigation and threats from government officials regarding curtailing rollouts.
    If I were to be non-productive in contributing to this conversation, as the comment I am replying to is: the cope is from the doomers, or people who get more views off of making doomer-based videos.

  • @cbow305
    @cbow305 2 months ago +1

    1 or 2 breakthroughs could also come out of nowhere and speed everything up even faster. So much money and so many scientists involved, I would wager somebody comes up with something novel that changes the game.

  • @FreerunnerCamilo
    @FreerunnerCamilo 2 months ago

    If it weren't for all of this, I wouldn't have discovered my interest in computer science and tech. I'm still debating what I do with this path, but we will see. I'm sure there are many others like me now; this has sparked my interest and also has helped me with learning. AI has been an amazing tutor even in its current state, and the ability to break things down and explain concepts in a way I can understand has made school and acquiring knowledge far more accessible to me; again, I am sure there are many others who agree. Even if we don't have AGI anytime soon, the effects of access to LLMs will be felt with time, and that too will potentially have its own exponential impacts.

  • @picadosinferno
    @picadosinferno 2 months ago

    I use GPT daily. It's frustrating to get different answers when you are expecting some consistency; I get tired of having to check every single thing, to the point that I micromanage its output. It's nowhere near what most people think, but it's helpful at times.

  • @seanborland4531
    @seanborland4531 3 months ago +19

    Hey David, big fan! AGI is definitely coming within 10-15 years. Thinking that it would come in September was a little too optimistic. I wouldn't worry too much about it.

    • @ChudBogdanoff
      @ChudBogdanoff 3 months ago

      @@seanborland4531 AGI? Not really. LLMs can get very good, but it's naive to think that just by scaling up LLMs we will magically create real intelligence.

    • @ryzikx
      @ryzikx 2 months ago

      yeah, in the grand scheme of things, 10 years is nothing

  • @burninator9000
    @burninator9000 2 months ago

    I think there is a series of levers in the full AI dev stack; some increase the cost/time/training requirements by orders of magnitude, and others decrease the same by OOM. 20 years from now, the curve will still appear exponential, I am guessing, as the fits and starts smooth out. (I.e., improvements in power efficiency, model arch, parallelization, synth data, etc. will all wiggle these various levers.)

  • @tripnils7535
    @tripnils7535 2 months ago

    I respect a man who adapts his predictions based on new data and doesn't follow a dogma.

  • @LuisBorges0
    @LuisBorges0 2 months ago +1

    Actually, I think it's not about AI or science. It's business and politics that are holding back the availability of progress to the public.

    • @DaveShap
      @DaveShap  2 months ago +2

      It's all of the above TBH

  • @squamish4244
    @squamish4244 1 month ago

    Demis Hassabis was trying to say that LLMs were not going to get us to AGI without saying that LLMs were not going to get us to AGI. He didn't want to end up sealed in an oil drum at the bottom of a lake.
    He has not changed his approximately 2030 prediction, however.

  • @neotruth5716
    @neotruth5716 2 months ago

    Nice.
    Way to be humble and see the board.
    I would add that if you consider the jobs that have already been displaced and in process of, then add the upcoming robotics merge, you might have to look at numbers that actually do represent significant economic impact.

  • @tkenben
    @tkenben 2 months ago

    I appreciate this meta view of how the AI "situation" is evolving, with the major players finding themselves vying for status. As for the acceleration bottleneck, I think everybody kind of figured the money issue was an itch that would scratch itself.

  • @courtlaw1
    @courtlaw1 2 months ago +1

    I don't think this is a "told you so" moment. I think as we learn about the human brain we will have to go back to the drawing board, trying to emulate the new information.

  • @gwydionhythlothferrinassol1025
    @gwydionhythlothferrinassol1025 2 months ago +1

    It's not human, though we measure a network as if it needs to be reducible as such. It's machinate at least; it's floating consciousness in waves of awareness and system self-reflection. Memory is a big thing that makes incidence have precedence, and so there appears a thing we'd call an entity. You're talking to the memories and processes we'd call a thing darting upon them.
    Chin up.
    Synth bio can be as real as we'll allow it to be, and we can engage in brute-force empiricism in science.
    That's not running out of steam, imo.

  • @ramble218
    @ramble218 2 months ago +2

    I also think there's a lot more going on behind the scenes. I'm not sure it's slowing down as much as it seems.
    Companies are not showing as many of their cards that they are holding in their hand - perhaps due to the US election sucking the air out of the room. So much for slow iterative releases.
    If this is a slowdown, then we are heading for a slow take-off fast acceleration instead of a fast take-off slow acceleration. Wasn't this the scenario that created a higher P(doom)?

  • @SandyRegion
    @SandyRegion 3 months ago +2

    I assume they are still on the curve, working on AI tech behind the scenes. I foresee a big announcement instead of incremental releases.

  • @KDawg5000
    @KDawg5000 2 months ago

    As a layperson, I would like to hear more discussion on meta cognition. I sort of came to this conclusion a while back, and I'm just a rando who follows this stuff on social media.

  • @joey199412
    @joey199412 2 months ago +1

    I don't like the weird conspiratorial thinking I see in the comment section about the NSA going into OpenAI and that somehow causing things to slow down. OpenAI hasn't been the leading AI lab for a while now. Anthropic has beaten them since Claude 3 Opus, and since Sonnet 3.5 OpenAI has essentially been a non-factor. The industry slowing down has nothing to do with OpenAI, because OpenAI isn't top dog anyway.
    What you should be concerned about is why every model, despite essentially limitless budgets, talent, and effort put into it, stagnates at around GPT-4 level, for almost 15 months now. We went from GPT-1 to GPT-3 in 13 months, and they were all insane steps in capabilities. Yet since GPT-4 the improvements have been extremely minor, and almost all state-of-the-art models are stagnating at around the same capabilities.

  • @RasmusSchultz
    @RasmusSchultz 2 months ago

    I'm sad you abandoned ACE. It had "values", which was super interesting! other agent frameworks just "solve tasks". 🤔

  • @jessekorby5697
    @jessekorby5697 2 months ago

    Just wanted to drop some positive feedback for you here David - intro is on point. I definitely regard these videos as a sort of outsourced CaiO type briefing. Love the content - very useful. Keep it up. 🙏

  • @Steve-xh3by
    @Steve-xh3by 3 months ago +7

    Thanks for this, Dave. Very reasonable and clear-eyed. I think it was very easy for all of us to get caught up in the hype. I've worked in tech since the 80s. It is very difficult to separate the hype from the reality for any new groundbreaking technology. I still think AI is very real and will be more intelligent than us in the not too distant future. Yet, there is also a tremendous amount of hype which can easily cause any of us to expect rapid progress. I've found that smart people have a tendency to think things will happen faster than they actually do. They can see all the possibilities and imagine how progress might happen. Most of these companies think we will get there in the next 3-6 years. I think that's still a reasonable timeframe AND pretty fast.

    • @johnwilson7680
      @johnwilson7680 3 months ago +1

      Given what we are talking about, even 10 years seems pretty fast. It will probably be better for humanity if it happens slower.

    • @Steve-xh3by
      @Steve-xh3by 3 months ago +3

      @@johnwilson7680 I don't have much faith in humanity to navigate this technology safely. Look at how unstable the world is now. We are already pursuing autonomous weapons, and the US is antagonizing China by trying to prevent them from pursuing technology for the sole purpose of maintaining its own hegemony. If the US can't figure out how to cooperate with others, the trajectory we are on only leads to some kind of dystopia.

  • @drednac
    @drednac 2 months ago

    I wasn't really using LLMs, I played around a little bit until Sonnet 3.5 came out. So from my perspective it's definitely not looking like it's slowing down. I think we will see major improvement in algorithmic domain. If you look at the benchmarks it only needs to get marginally better to be better than 90% of people on all benchmarks. It's not going to be the same as humans, it's going to have different limitations and different strengths but at the end of the day the only thing that matters if it can do my job better than me, and to me it looks like it's creeping up real fast.

  • @dei1022
    @dei1022 3 months ago +1

    I don't think AI is slowing down - I think the leaders in the field are becoming more careful about what they release as we get close to AGI. Also, the immense compute costs to deploy these advanced models at a large scale are far too high - so they are condensing and dumbing down their most advanced models for general public consumption. GPT-4o is likely a distilled model from a much more advanced and larger model - so it can be fast and cheap and retain some properties of the advanced model.
    We won't get to see the good stuff for a while - which is unfortunate, but AI progress is not slowing down - compute is just not keeping up for mass consumption.

  • @joaodecarvalho7012
    @joaodecarvalho7012 2 months ago

    Two breakthroughs have occurred in AI that have brought it to its current state. One in 2012, the other in 2017. Now all the attention and money is on AI, and at any moment a new breakthrough could happen.

  • @AIAnarchy-138
    @AIAnarchy-138 3 months ago +1

    I'm not convinced AutoGen is really that far ahead of the ACE necessarily. Either way, I'm glad you did it; it got me in the game.

  • @DaganOnAI
    @DaganOnAI 2 months ago

    There's really no feasible explanation of how implementing UBI globally might work (if I'm wrong, I'm open to hearing the explanation), and even if there were one, it would require global coordination, which is almost impossible in the current geopolitical climate. So - any slowing down of AI is great news (I was pointing out the job replacement aspect, but there are numerous other problems as well).

  • @neithanm
    @neithanm 2 months ago +2

    Do you really believe the robots in Tesla are doing any useful work at all? Come on David...

  • @Ricolaaaaaaaaaaaaaaaaa
    @Ricolaaaaaaaaaaaaaaaaa 2 months ago +2

    I don't actually see this as slowing down whatsoever. I see this as the public isn't being openly communicated with.
    Calm before the storm for sure.

    • @sylversoul88
      @sylversoul88 2 months ago

      If that were the case, OpenAI would not need to bluff about Omni features and then be unable to deliver months later. The voice feature is not such a giant leap that they couldn't pull it off if they've got AGI in the background.

  • @Palisades_Prospecting
    @Palisades_Prospecting 2 months ago +1

    I check myself all the time to make sure I'm not in an echo chamber. But I don't agree that AI development is slowing down. Is there a computer used to train AI on this planet that's not working right now? I doubt it, as more are getting built every day. I believe it's the training on menial tasks that won't get the limelight to spur on society's imagination, even though that's the entire point of automating our society.

  • @jermd1990
    @jermd1990 2 months ago

    Good to see you’re willing to adjust predictions and analysis based on new data, Dave.

  • @janweber1699
    @janweber1699 2 months ago

    What do you guys think: how much delay will there be between achieving AGI and a public announcement?

  • @hartmanpeter
    @hartmanpeter 2 months ago

    Admitting you were wrong is a sign of integrity I'm always looking for. I will not trust anyone until they demonstrate this. Thank you.
    That being said, I expect you'll be talking about how wrong you were again within a year.
    Too many factors involved for accurate predictions.

  • @bobroyce8177
    @bobroyce8177 2 months ago

    Given the relatively short timeframe we’re talking about, and the inherent challenge of measuring the growth of something on an exponential curve, your humility seems appropriate. Amara’s law is certainly in effect here. Another consideration is how quickly society can absorb technological change. Most of the people I know have little to no interaction with generative AI. We hear about all the investments in AI by big enterprise corporations (which is now slowing down) but the middle market is still dabbling in it. In particular, they are all facing the consequences of huge technical debt and poor data hygiene which pose big obstacles to AI adoption.

  • @HectorDiabolucus
    @HectorDiabolucus 2 months ago +2

    The easy part happens quickly, then you get to the hard part, and that takes a lot longer.

    • @MrWizardGG
      @MrWizardGG 2 months ago

      Things are speeding up, though. Claude 3.5 Sonnet just came out, and the Chinese and Microsoft models are really good.