How Did Dario & Ilya Know LLMs Could Lead to AGI?

  • Published: 6 Mar 2024
  • Dario Amodei, CEO of Anthropic. (August 2023)
    Full Episode: • Dario Amodei (Anthropi...
    Transcript: www.dwarkeshpatel.com/dario-a...
    Apple Podcasts: apple.co/3rZOzPA
    Spotify: spoti.fi/3QwMXXU
    Follow me on Twitter: / dwarkesh_sp
  • Science

Comments • 83

  • @farhadkarimi · 4 months ago +40

    You have officially the best podcast by far

  • @peter-rhodes · 4 months ago +26

    I always come back to that line from this: "they just want to learn". It all clicks together when you realize what it's saying.

  • @Neomadra · 4 months ago +59

    Wait, we have AGI already?? 😅

    • @VictorMartinez-zf6dt · 4 months ago +15

      No, we don't.

    • @F255123 · 4 months ago +31

      "AGI has been achieved internally"
      - 🍎

    • @randombubby1 · 4 months ago +3

      It's usually the claim that our "goal posts" have shifted and we have achieved what we would have previously considered AGI. I mean to be fair I think it's not super far off from the truth. If you talked to someone in 2014 about what AGI would first look like, it probably wouldn't be too dissimilar from the chatbots of today. Although I suppose there is an implicit "reasoning" that AGI is meant to have that has yet to be achieved so...

    • @vectoralphaSec · 4 months ago +2

      Yes, but not known to the public. Shhhh....

    • @vjzb3 · 4 months ago +4

      We have baby AGI. ChatGPT, Claude, Gemini, and these kinds of generally capable multi-modal models are the seeds of AGI. They just need to get much smarter, and it's looking like that may simply be a matter of scaling up. The more we scale, the more emergent properties we'll find.

  • @jimmybaker4821 · 4 months ago +18

    Interview him again

  • @caparcher2074 · 4 months ago +9

    I'm not a Yann LeCun fan, but what he said on Lex's podcast the other day gave me pause about whether LLMs could truly achieve AGI. Language is a very lossy, bloated, slow medium for representing knowledge and intelligence. Most animals that we would consider intelligent don't even use language as we know it. We probably need a different abstraction than language, one closer to reality, to have an AGI with an actual understanding of the world. In my mind, that abstraction would probably involve embodiment (with senses like sight, hearing, touch, etc.).

    • @ZenTheMC · 4 months ago +2

      Embodiment is likely happening this year with the huge interest and investments in humanoid robots, so even if that’s true, it’s still not far away.

    • @ilevakam316 · 4 months ago

      I agree. We are in a hype cycle. Remember when self-driving was going to eliminate all truck drivers? Eight years later, the progress, while impressive overall, has stagnated.

    • @T_Time_ · 4 months ago

      @ZenTheMC All the humanoid robot videos have been kind of trash, especially when compared to a human. The only robots that can do a job better than a human are ones specifically built for that function. And if these slow humanoid robots have to learn by trying every task, the way machine learning does, it will take 1000x longer than training a language model.

    • @japneetsingh5015 · 3 months ago +1

      @ilevakam316 Self-driving was being developed by only one or two companies, and there was barely any research going on in the universities, but for AI it's a very different ball game.

    • @ilevakam316 · 3 months ago

      @japneetsingh5015 Self-driving is an AI problem, and a much easier one than, say, being a software engineer.

  • @andybaldman · 4 months ago +3

    All of these CEOs you're hearing overhyping things are just playing the CEO game. Their goal is to drive funding, not to tell the truth.

  • @kawingchan · 4 months ago

    I remember Hinton also said something like that in a talk: that transformers really want to work, unlike what he was trying at the time (either capsule nets or GLOM?).

  • @shawnvandever3917 · 4 months ago +2

    Scale matters because the better the model understands, the better the chance it can figure out the problem in a forward pass. Right now, if you restricted the brain to one forward pass, the smallest LLM would outperform the smartest brain hands down. The brain does continuous prediction updates, hundreds per second, to generalize and stay on the rails. As these models move in that direction, they will become extremely good at reasoning.

  • @kbizzy111 · 4 months ago +1

    Bro has the most insane guests!!!

  • @chenlim2165 · 2 months ago

    LOL, and what were the last 2 factors?

  • @arc8dia · 4 months ago +7

    5:51 He's mic'd up! He's being fed what to say by a GAI

    • @dg-ov4cf · 4 months ago +4

      General Amodei Intelligence

    • @thinkingcitizen · 4 months ago

      Wait he pulled something black out

  • @SmirkInvestigator · 4 months ago

    There was a moment when people touted that it didn't matter how clean the data was. I imagined they were just exalting how surprisingly effective and robust deep, wide perceptron nets are. I think we need to focus on creating architectures where partitions of the model can be updated, replaced, proxied, and removed in some sense. At the moment it seems this should be layers. That way, when we notice failures, we can synthetically generate data to fix them at that layer instead of consuming suns to retrain a model. We could specialize models like Legos: just enough training to get the layers to anneal. Deep reasoning is interesting. Does it need to be sprinkled throughout, or can we have a universal reasoning module? Can it be done without it being an agent itself? What are the pros/cons of that?

  • @JustinHalford · 4 months ago +8

    Thank you for capturing the insights of those creating the most powerful technology in human history. You're giving those trying to keep up with the bleeding edge a critical window into where we're headed.

    • @keyser021 · 4 months ago

      The socially inept Frankensteins creating visual vomit in hopes of controlling the chaos long enough to syphon trillions from the US economy while sending millions to the firing line of unemployment leading to a world of podcast infinity where everyone is forced to make a living online using AI tools to regurgitate endless waves of emotionless miasma funneled into a moron Möbius strip where expertise and depth of experience are replaced by Potemkin Village idiots with the skills of car salesmen? Heaven on earth.

  • @gabirican4813 · 4 months ago

    Thanks!

  • @Gerlaffy · 4 months ago

    Know it would lead to something we don't know it leads to? Genius title...

  • @scrutch666 · 4 months ago +1

    In 10 years… do you remember when they made us believe we'd get AGI, we bought all the AI-related stocks and went bankrupt, and these guys got rich?

  • @blakebaird119 · 4 months ago

    TL;DR: we can't tell at all how it works, but we keep plugging the Legos together and hoping.

  • @hypercube717 · 4 months ago

    Very interesting.

  • @DanaOredson · 2 months ago

    Hmm, well it seems as though we still don't have AGI, so is this premature/hopium?

  • @fine93 · 4 months ago

    what did ilya see

  • @DannyBoy443 · 3 months ago

    I'm sorry, but robots have been used to run everything from candy manufacturing to medicine and even retail. Why (besides crappy sensors, maybe? lol) would it be difficult to get good data from/for robots?

  • @InspiredScience · 4 months ago +1

    Dario Amodei is amazing. It's unfortunate his engineering talents have to take a backseat to running the business as CEO.

    • @1fattyfatman · 4 months ago +2

      Engineers make the best CEOs. Tougher for lackey types to BS them.

    • @InspiredScience · 4 months ago

      @1fattyfatman I'm a software engineer who eventually moved to VC and ran a number of startups in interim stages. I can agree there are certainly many benefits to engineers in upper management; however, it's not a cut-and-dried universal rule.
      More importantly, it's not whether they are bad or good, only that Dario is so brilliant and humble that it's a shame all of his energy can't go into engineering.
      It's a compliment to Dario Amodei.

  • @yosup125 · 4 months ago +1

    for the algo

  • @Scybes · 4 months ago +1

    jumping the gun a bit eh

  • @Alex-fh4my · 4 months ago +4

    is the AGI in the room with us?

  • @someguy_namingly · 4 months ago

    I love these interviews, but that's a very clickbait-y and misleading title. lol

  • @Thomas_jeba · 4 months ago

    Davin

  • @Osama_Abbas · 4 months ago

    Can we all agree now to stop abusing the phrase "all you need"?

  • @hunghuynh1946 · 4 months ago +3

    They didn't say LLMs would lead to AGI. Hype. You primed them to say that, and they said maybe. If people want to believe the hype and push up their stock, why would they say no? Demis Hassabis has said outright that LLMs can't do planning, a key feature of AGI.

    • @therainman7777 · 4 months ago +2

      The quote says that LLMs will *lead* to AGI, not that LLMs *are* AGI. I think you overreacted to the claim.

  • @Michsel77 · 4 months ago +2

    your podcasts are nice but stop with the damn clickbait

    • @dg-ov4cf · 4 months ago +1

      i wanna see dwarkesh make it big so at least for now i find it forgivable. god knows i've done worse things for a buck

  • @James-mk8jp · 4 months ago +1

    They didn’t because they won’t

  • @LuigiSimoncini · 4 months ago +3

    Simple: they didn’t. Stop the hype!

    • @andybaldman · 4 months ago +1

      Yep. All of these CEO’s are desperate.

    • @sapito169 · 4 months ago

      exactly

  • @fikretesfay9656 · 4 months ago +3

    first one here

    • @dg-ov4cf · 4 months ago

      😳😳he's just too fast🤯🤯

  • @nonindividual · 4 months ago

    What a bad video: it's bad not because everything being said is garbage, but because so many truths are inextricably mixed in with such awful garbage.

  • @andersfant4997 · 4 months ago +3

    According to Gary Marcus, LLMs will not reach AGI.

    • @genegray9895 · 4 months ago +15

      Given that Gary Marcus has been wrong about every single prediction he's made about AI, maybe we shouldn't believe him on this one.

    • @n-hm · 4 months ago +7

      Gary Marcus is a joke though

    • @DanielSanchez-jl2vf · 4 months ago +5

      Gary Marcus is to AI what a film critic is to movies: always offering opinions about something he doesn't do himself. I'd take the skepticism Yann LeCun has about deep learning a thousand times over; at least he makes stuff.

    • @andersfant4997 · 4 months ago +2

      @genegray9895 I didn't say I believe Marcus. It's way above my pay grade to make a prediction. If I understand Max Tegmark correctly, he values the arguments from doomers, but he also listens to the minority who say there is no existential risk whatsoever. So even he is unsure?

    • @vectoralphaSec · 4 months ago +2

      Not trustworthy.

  • @christian-schubert · 4 months ago +1

    It actually comforts me to see that many people in the comment section recognize this scam for what it is. LLMs alone cannot and will not lead to AGI. It's now up to us to stop clicking on nonsense like this.