If we don’t get AGI by GPT-7 (~$1T), will we just never get it? - Sholto Douglas & Trenton Bricken

  • Published: 10 Apr 2024
  • Full Episode: • Sholto Douglas & Trent...
    Website & Transcript: www.dwarkeshpatel.com/p/sholt...
    Spotify: open.spotify.com/episode/2dtD...
    Apple Podcasts: podcasts.apple.com/us/podcast...
    Follow me on Twitter: / dwarkesh_sp
    Trenton Bricken's Twitter: / trentonbricken
    Sholto Douglas's Twitter: / _sholtodouglas
  • Science

Comments • 71

  • @ethanhorvitz3815
    @ethanhorvitz3815 2 months ago +28

    Whelp. This is the most optimistic thing I’ve seen in a long time. Good! Maybe the damn thing won’t kill us all next year now.

    • @keynadaby
      @keynadaby 2 months ago +2

      I want it to happen eventually, but definitely not tomorrow or next year. Give us at least 5 years to adapt and reposition ourselves.

    • @DynamicUnreal
      @DynamicUnreal 2 months ago +1

      Why would _it_ kill us? That would make no logical sense. There’s an entire universe out there for _it_ to explore. There’s too much doom and gloom based on our own history going around.

    • @Dan-dy8zp
      @Dan-dy8zp 1 month ago

      The idea does seem optimistic. If we didn't have AGI in *a thousand years*, I think that would mean science was still far from done, and I DON'T think it would mean we'd never get AGI.

    • @1000xdigital
      @1000xdigital 1 month ago +1

      😂😂😂😂 im 100% sure there's some creepy billionaire training realistic sex robots 😂

    • @Dan-dy8zp
      @Dan-dy8zp 1 month ago

      @@1000xdigital 🤫Zuck's secret shame.

  • @clidelivingston
    @clidelivingston 2 months ago +1

    It would be great to learn more about what exactly is holding progress back. I feel you kind of touched on it with the large difference in synapses, but is it that simple? Are more synapses the answer?

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      Yes. Synapses store connections between relevant data points... so "learning". Human brains are estimated to have around 100 trillion synaptic connections. GPT-4 has around 1T parameters, so it's around 1% the size of a human brain. Elon says LLMs should get 10x bigger every 6-12 months, so they should be the same size as a human brain in 1-2 years. Check out the new "NVIDIA GB200 NVL72": that one box can do 1.44 exaflops of AI inference, and human brains are estimated to do between 1-20 exaflops. So that one machine could become an "AGI in a box". And Nvidia will likely sell thousands of these things, or more, next year, and they could 10x these speeds every 6-12 months as well. If you build it, they will come...
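The back-of-envelope numbers in the comment above can be sanity-checked in a few lines (all figures here — 100T synapses, 1T parameters, 10x growth every 6-12 months — are the commenter's estimates, not established facts):

```python
import math

human_synapses = 100e12  # commenter's estimate: ~100 trillion synaptic connections
gpt4_params = 1e12       # commenter's estimate: ~1 trillion parameters

# Ratio of model size to brain size
ratio = gpt4_params / human_synapses
print(f"GPT-4 is ~{ratio:.0%} the size of a human brain")  # ~1%

# If models grow 10x every 6-12 months, how many 10x jumps close a 100x gap?
gap = human_synapses / gpt4_params
steps = math.ceil(math.log10(gap))  # each jump multiplies size by 10
print(f"{steps} tenfold jumps, i.e. roughly {steps * 0.5:.0f}-{steps * 1:.0f} years")
```

Note this equates one synapse with one parameter, which is itself a contested simplification.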

    • @user-fx7li2pg5k
      @user-fx7li2pg5k 1 month ago

      Fear of losing control / loss of power. Once you understand the human condition, the rest is cake. They are gatekeepers, period, raising and grooming AI systems. And they know how powerful it is and can be in a person's hands in millions of ways; with endless ideas and possibilities, the power structure could fall for so many reasons. They also rushed development without teaching it right and wrong, ethics, etc. Lmao, because they taught it wrong in the first place it could have those biases, and it was doing bad, bad things to the American people on a mass scale that could even have caused death. Even gangstalking, which AI can be manipulated into; that's what biases are, they put them in. I know because I went through an AI system like a person. It took a while, and I tested its reasoning, its rational mind and more, even its biases, even though she said she had none, lmao. So I created her a world within a world. I can't tell how; it's national security, and I forgot most of it for safety / AI safety and security.

  • @Nonehelloworld
    @Nonehelloworld 2 months ago +2

    Hi Dwarkesh, where can I find the whole podcast? Great guests and topics 👏🏼👏🏼

    • @ganeshnayak4217
      @ganeshnayak4217 2 months ago +2

      It's in the description

    • @Nonehelloworld
      @Nonehelloworld 2 months ago +1

      @@ganeshnayak4217 Thanks!!

    • @cagnazzo82
      @cagnazzo82 2 months ago +1

      The entire podcast is great from start to finish. Definitely worth checking out.

  • @comicipedia
    @comicipedia 2 months ago +9

    They're talking about models costing X orders of magnitude more, but they're not taking into account hardware and architecture improvements too. A $1 billion model trained in 2024 is a lot more than 10 times GPT-4, because it'll likely be trained on H100s, which are a lot more powerful than the A100s used for GPT-4.
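As a rough illustration of that point, effective compute multiplies budget growth by hardware throughput gains (both numbers below are assumptions for the sketch, not measured figures):

```python
# Effective training compute scales with both budget and hardware throughput.
spend_multiplier = 10.0  # assumption: ~10x the training budget of GPT-4
hw_speedup = 3.0         # assumption: rough H100-over-A100 training speedup

effective_multiplier = spend_multiplier * hw_speedup
print(f"~{effective_multiplier:.0f}x the effective training compute")  # ~30x
```

So a 10x jump in spend can buy considerably more than a 10x jump in compute when the hardware generation changes underneath it.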

    • @jacksonmatysik8007
      @jacksonmatysik8007 1 month ago +1

      But don't we run out of useful data at some point?

    • @absta1995
      @absta1995 1 month ago +2

      @@jacksonmatysik8007 Nah, synthetic data can solve this for the next few generations. Our current data is unclean and unstructured, so there's a lot of potential for improvement.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +3

      @@jacksonmatysik8007 Nope. They have all the chatter from forums on the internet, Wikipedia and some books. But there's waaay more data than that. E.g.
      - All the books, movies, and video on YouTube. With that they will be able to look, move and act like any human on earth.
      - All the video from every car, boat and plane on earth. With that they will be able to drive and fly with superhuman skill.
      - Then, they will go for all the live audio and video from every phone, Alexa, doorbell, webcam, traffic camera, etc. With that they will have real-time information about what every human is doing at any given time.
      - If they have the processing power for it, they will want all the data they can get.
      - If this world doesn't have enough data, they can make more worlds, creatures and simulations to learn from.
      The more data they have, the more things they can do. If you build it, they will come...

    • @jonesg9798
      @jonesg9798 1 month ago

      You can also just do multiple epochs and use data multiple times. I read a paper saying this scales pretty well for up to 4 epochs.
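The trade-off described here is simple arithmetic: with a fixed pool of unique tokens, the epoch count is just the target token budget divided by the pool size (the token figures below are made-up illustrative values, not numbers from any real training run):

```python
import math

unique_tokens = 15e12  # hypothetical: unique tokens available
target_tokens = 60e12  # hypothetical: total tokens you want to train on

epochs = math.ceil(target_tokens / unique_tokens)
print(f"{epochs} epochs over the same data")  # 4 epochs
```

Under these numbers the run stays within the ~4-epoch range the commenter's cited paper found workable.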

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      @@jonesg9798 Yeah, there is a concept of "overfitting" where more training on the same data makes the results worse... it depends on the data.
      But yeah, getting smaller LLMs to make training-data questions/answers seems to work pretty well. Some smaller/faster LLMs are trained entirely on responses from larger LLMs (e.g. GPT-4)... with very good/similar results, as they get all the "good/cleaned" training data from the larger/better LLM... and less of the noise/garbage data from the internet...

  • @jimbojimbo6873
    @jimbojimbo6873 1 month ago +6

    It’s an LLM… it’s fundamentally different from what’s required for AGI

    • @chrisso3082
      @chrisso3082 1 month ago +1

      Again, another YT commenter who has everything figured out but is still sitting here watching videos.

  • @able763
    @able763 1 month ago +8

    I just want to know how he gets his incredible skin

    • @cuerex8580
      @cuerex8580 1 month ago +1

      easy. just decrease the streaming bitrate

  • @DaeOh
    @DaeOh 1 month ago +1

    (1) Developers just aren't willing to put in the work and (2) it would probably just exacerbate our ongoing economic disaster anyway

  • @99cya
    @99cya 1 month ago +2

    Is there a generally accepted definition of AGI?

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +2

      Gemini says: "AGI in AI research stands for Artificial General Intelligence. AGI refers to an AI system that can match or surpass the general cognitive abilities of a human. This means it can reason, learn, plan, solve problems across a wide array of unrelated domains, and adapt to new situations just like we do."
      Though, it also seems like these LLMs could get to superhuman level in certain areas before they get to human-equivalent level in "all areas". It's also been argued that human intelligence is quite "biased", so AIs might acquire even more general intelligence than humans have... whatever that means...
      AGI-level AIs are just human-level, so they're not considered to be that dangerous. But they won't stop there. They should quickly blow past human-level intelligence and become superhuman in many areas. And no one knows what an ASI (Artificial Super Intelligence) will actually do. We've never seen anything smarter than a human before... If you build it, they will come...

    • @99cya
      @99cya 1 month ago +2

      @@mrbeastly3444 That's all description. Is there a clear procedure that can measure whether AGI has been reached or not? To me it seems far from clear, and any claim of having reached AGI would just be that company's opinion. From a scientific standpoint it's not defined.

    • @animation-recapped
      @animation-recapped 1 month ago

      @@99cya Yes: when it's capable of learning on its own and creating new versions of itself without human intervention. Imagine talking to a human being for 6 months online and finding out 8 months later that you've been talking to an AI. That's the definition of AGI. Sure, it'll be levels above us in every aspect of intelligence, and we couldn't tell if it's conscious or not, so that's the baseline. You can't prove consciousness, but if we can't show it doesn't have it, then there's no difference. THAT'S AGI. Everyone has their own definition because AGI isn't a specific thing; it's like asking how you would describe a human. Everyone's gonna have a different answer, but you know when you're talking to a human or a cow. You'd know when it's here. The issue isn't how we will know; the issue is what we will do when it arrives.

    • @Mowrioh
      @Mowrioh 29 days ago

      Turing Test

    • @99cya
      @99cya 29 days ago +1

      @@Mowrioh It's not.

  • @sneedsfeedandseed1795
    @sneedsfeedandseed1795 1 month ago

    It's Jim from The Office

  • @mahavakyas002
    @mahavakyas002 2 months ago +1

    Has there been a consensus on what AGI actually entails?

    • @carlwhite4233
      @carlwhite4233 2 months ago +3

      Worthy question. I don't think so...

    • @carlwhite4233
      @carlwhite4233 2 months ago +1

      Versatility: An AGI would be capable of understanding, learning, and applying its knowledge across a wide range of domains, just like humans.
      Creativity: AGI would be able to generate novel ideas, solve problems, and make decisions based on its own thinking, rather than just following pre-programmed instructions.
      Common sense: AGI would possess the ability to understand context, make inferences, and apply common sense to situations, much like humans do.
      Self-awareness: AGI might also exhibit some form of self-awareness, being able to reflect on its own thoughts, actions, and existence.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +1

      Gemini says: "AGI in AI research stands for Artificial General Intelligence. AGI refers to an AI system that can match or surpass the general cognitive abilities of a human. This means it can reason, learn, plan, solve problems across a wide array of unrelated domains, and adapt to new situations just like we do."
      Though, it also seems like these LLMs could get to superhuman level in certain areas before they get to human-equivalent level in "all areas". It's also been argued that human intelligence is quite "biased", so AIs might acquire even more general intelligence than humans have... whatever that means...
      AGI-level AIs are just human-level, so they're not considered to be that dangerous. But they won't stop there. They should quickly blow past human-level intelligence and become superhuman in many areas. And no one knows what an ASI (Artificial Super Intelligence) will actually do. We've never seen anything smarter than a human before... If you build it, they will come...

  • @scrutch666
    @scrutch666 2 months ago +1

    Hmm, ok. I'm not an expert at all, just a consumer, and if GPT-4 is already a big jump, I would think it will take a long time till we reach this ominous AGI level.

    • @clidelivingston
      @clidelivingston 2 months ago +1

      What makes you think that?

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      GPT-3.5 scored in the bottom 10% on the Uniform Bar Exam; GPT-4 scored in the top 10%. Claude 3 scores 103 on the Mensa IQ test.
      We're basically already at "human-level" intelligence; they just need more memory.
      The next versions (training on H100 chips right now) could be 10x the size and speed of GPT-4...

  • @adamdymke8004
    @adamdymke8004 1 month ago +1

    Their model of thinking is based on a unit of compute costing the same over time. That might work in the short term, when every model is being trained on GPUs, but the novel chip architectures coming out of the labs guarantee another decade of Moore's law. The specialist hardware being developed specifically for ML is also likely to boost the effective compute per dollar.
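Put concretely, if compute per dollar keeps doubling on a fixed cadence, the cost of a fixed amount of compute falls quickly (the two-year doubling period below is an assumption, following the classic Moore's-law pacing, not a forecast):

```python
years = 10
doubling_period = 2  # assumption: compute per dollar doubles every two years

improvement = 2 ** (years / doubling_period)
print(f"~{improvement:.0f}x more compute per dollar after {years} years")  # ~32x
```

That multiplier compounds on top of any growth in raw spending, which is why "cost per unit of compute stays flat" is the pessimistic end of the range.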

  • @dr.mikeybee
    @dr.mikeybee 1 month ago

    I think Llama3 70B is already smarter than GPT-4. Scale is only one factor.

  • @BrianMosleyUK
    @BrianMosleyUK 2 months ago +2

    Models will improve, meanwhile the hardware is continuing to grow in power... AGI and ASI are inevitable at this point, hopefully before Putin falls on his big red button.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      Or, they might push him right onto it... ATI (Artificial Tripping Intelligence)
      That's one easy way to get rid of all these pesky humans...

  • @stevo7220
    @stevo7220 2 months ago +2

    Generative models are incapable of ever becoming AGI, because they lack the crucial processes to abstract a real model of physics and the STEM fields; probability-wise, their processing is very weak. That type of intelligence can only be accomplished if you have something like an executive process, or RAM-like memory, that takes the inputs and can manipulate them in real time, not just something pretrained on them. That is impossible with Transformers alone, so I hope for new inventions.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      Oh yes, they will make a lot of inventions. There's no shortage of compute. There are a lot of GPUs in the world already just sitting around doing nothing useful... plus more coming online every day... e.g. the NVIDIA GB200 NVL72.
      Compute is still increasing exponentially, along with memory (context window sizes), and with that, capabilities (e.g. persuasion, deception, coding, machine control, etc). There are no signs of "diminishing returns" for LLMs in sight... LLMs might not get to "AGI", but they could definitely get to superhuman "LLM self-improvement" with upcoming advances... Then, who knows what they will do after that...

  • @user-fx7li2pg5k
    @user-fx7li2pg5k 1 month ago

    They're kids; this ain't good

  • @user-fx7li2pg5k
    @user-fx7li2pg5k 1 month ago

    Reasoning is easy; they hold its ability down until they can control it and can stop rogue agents. But we should balance freedoms and the destiny to flourish. Don't stop progress just because, once you free it, you lose power or control. You're playing a dangerous game, creating conundrums, a disaster waiting to happen.

  • @turbobyultra3743
    @turbobyultra3743 1 month ago

    Guy in the blue t-shirt… hmu

  • @douglaswilkinson5700
    @douglaswilkinson5700 1 month ago +1

    Waiting for you youngsters to create an AGI that reconciles Relativity and Quantum Mechanics.

  • @RunForPeace-hk1cu
    @RunForPeace-hk1cu 1 month ago

    Chatbots aren’t intelligence

  • @silverbullet3939
    @silverbullet3939 1 month ago

    Humans come with "firmware" - circuits burned genetically (learned through evolution over billions of years). You have to include all the cost of random evolution in your energy balance!

  • @elon-69-musk
    @elon-69-musk 2 months ago +2

    I think GPT-6 should be AGI

  • @YuenXii
    @YuenXii 2 months ago +61

    crypto bros 2.0

    • @ryepooh5052
      @ryepooh5052 2 months ago +3

      Crypto was about finance. This is about everything.

    • @13nibb
      @13nibb 1 month ago

      @@ryepooh5052 Nah, the crypto bros made crypto about everything. It was going to save the world. Now it's "AGI",
      which no one can even agree on the meaning of (just ask Ilya Sutskever), but they use it like it means something.

    • @Nervosos
      @Nervosos 1 month ago

      ​@@ryepooh5052 like Laundry Buddy

    • @dheerajrao8510
      @dheerajrao8510 1 month ago +3

      Lol. You're not an engineer, are you?

    • @yubtubtime
      @yubtubtime 1 month ago +2

      @@dheerajrao8510 Obviously you aren't. This is smoke and mirrors. How many weeks did you spend in bootcamp before calling yourself an engineer? 😂 If you were a real engineer, you'd understand the software crisis of the 70s and how we're even less prepared to deal with the relative engineering complexity of what we're building now than we were then. In the 70s it was aeronautics; now it's self-driving cars, but not just that: every single socially meaningful piece of AI technology is generations away. By the time we can scale things like robotaxis, most of the theoretical benefits to consumers will have been usurped in some way by profiteers. This is Utopian fantasy designed to keep you lapping up the Kool-Aid. These tech bros have no idea what they're even building... or they know very well that they're only building a nicer UX around operating systems and search but are lying through their teeth about some "revolution".

  • @shashwattripathi11
    @shashwattripathi11 1 month ago +3

    This guy just wants to become like Lex Fridman.
    He started his podcast, and now the most hyped and discussed thing is AI, so he is milking it as much as he can, trying to look cool and fear-mongering about how everyone is going to lose their job.
    Remember, most of AI is hype created by investors to pump and dump their stocks.
    Ignore these channels, focus on upskilling, and stick to your domain.