AI nationalization is imminent - Leopold Aschenbrenner

  • Published: 19 Dec 2024

Comments • 140

  • @nossonweissman
    @nossonweissman 6 months ago +35

    Mr Anderson. It's inevitable

  • @GaryBernstein
    @GaryBernstein 6 months ago +30

    Best convo of 2 valley girls, like, ever

  • @LagoLhn
    @LagoLhn 6 months ago +12

    Dwarkesh, you are completely out of touch with the way this will go down. Startups exist in national defense but they are mostly controlled by oversight and prescribed contract deliverables. There will inevitably be a form of nationalization here just as there are in other weapons and biological research domains. Watch. We are moving well beyond the quaint world of startup ideology here.

    • @LagoLhn
      @LagoLhn 4 months ago +2

      @@threeNineFive0 uh-huh. Federal regulation will come to roost at some level here. It will lag the space initially but once we get to the level of national security concern you can rest assured that federal regulation akin to encryption export control will come to pass.

    • @LagoLhn
      @LagoLhn 4 months ago +2

      @@threeNineFive0 write another prompt Junior

    • @pilgrimage0
      @pilgrimage0 3 months ago +2

      @@LagoLhn Regulation =/= nationalization. You are conflating 2 different things.

    • @LagoLhn
      @LagoLhn 3 months ago +1

      @@pilgrimage0 I am not conflating the 2. In highly sensitive sectors impacting national security, regulation and a dimension of nationalization are highly correlated

  • @SamHoustonSF
    @SamHoustonSF 6 months ago +28

    Super interesting conversation and points made. I'm glad people like him are speaking out. I hope DC is listening.

  • @adamkadmon6339
    @adamkadmon6339 6 months ago +11

    I believe Leopold is wrong. There will be a saturation threshold to the ability of LLMs (somewhere around 2-5 times current capacity). This comes from graph theory and scaling laws, and means that mega-compute will not add so much beyond this. Verhulst kicks in sooner rather than later and the exponentials studied in Leopold's economics thesis on risk don't go Malthusian. At the same time, improved methods (much better than transformers) will massively reduce the compute required to reach the capacity-saturation, so that someone with a $5000-10000 PC (in say 8 years time) will have basically most of this capacity, not far behind the government data center. This capacity will be high, of course, but it will be available to everyone with $10000. This will be a massive stimulus to social and economic change. But it will not be the most-envisaged scenario he paints of super-intelligence and the need for government and industry to nail down technocratic geopolitical control before some apocalyptic scenario occurs or the bad Chinese get hold of it. (Having said that there will be problems to do with misuse of these capacities, but nothing that we can't solve.) The picture I see is of everyone having greatly enhanced information and agency. Sorry, government - new distributed control structures will emerge from this. AI will be a leveler, not a technocrat's fantasy.
    Who am I to think this? I work in the area and have done so for 40 years. I thought a lot about it for a long time. I am posting anonymously for various reasons, but I have tens of thousands of citations to my work. I know the AI maths very well. I am painting this alternative scenario because, based on my analysis, I find it more plausible, and I want to add it to the discussion as it is a perspective that I have not seen articulated. The world must make choices around this technology, and the public must feel reassured. Many right now agree with Leopold, but I think he has just gotten a bit carried away with some abstract economics models and the Bay Area buzz. I think the real-life scenario will be much more familiar. Basically everyone will get a super-phone, and the government's super-phone will not be much better than yours. (A minimal sketch of the saturation curve I have in mind appears after this thread.)
    R&D will completely fly out of control, into a kind of distributed "Diamond Age" phase (Neal Stephenson novel). This will be like the Internet on steroids, a self-regulating ecosystem of multi-scale structures. All the Tyrell-Corporation/Sky-net wannabes will devolve into being stores where you get your devices - kind of like Radio Shack.

    • @AydenNamie
      @AydenNamie 6 months ago

      I think there will be too much money involved for any war. If just one crazy R&D project goes really well, such as anti-aging, energy, or other sci-fi-esque tech, then people may actually understand the revolutionary potential. At that point we will see the consumerism paradigm change, I'm not sure how tho

    • @AshleyBloom13
      @AshleyBloom13 6 months ago +4

      Your post is incredibly fascinating.
      Although I'm some oddball from Australia, I did read Leopold's 165-page essay recently and felt like he had a somewhat cult-like, hubristic perspective. I've spoken with a very honest old close friend who is a senior ML engineer at Google, and he felt 1-2 massive revolutions akin to Transformers were needed for what we point to as true AGI, if it's in fact possible at all.
      I wish there was a safe way that I could track your work or anonymously (at least on your end) communicate with you. Maybe via Session or Simplex if for some reason it seems worth your time at all.
      In any case, I appreciate this feedback, as few have read Leopold's essay and watched this talk due to their freshness. And I'm guessing under 20 people in the world have read your post.

    • @adamkadmon6339
      @adamkadmon6339 6 months ago +1

      @@AshleyBloom13 Thanks. I think it's a very credible scenario. It just requires (A) that models improve little with massive compute increases, and there are already signs of that, and (B) that they can be shrunk by 5 orders of magnitude, which is plausible as the Transformer, though brilliant, is a terrible waste of bits. Anyway, if you want to email me about it, just plop your email here (or somewhere and let me know) and I'll check this space. You can delete it after. I don't see how else to set up the contact.

    • @bauch16
      @bauch16 6 months ago

      It's just a matter of time till AI kills us. Give it 80 years.

    • @GeorgesSegundo
      @GeorgesSegundo 3 months ago

      @@AshleyBloom13 Count me in brother, it is a really epic post. Greetings from Brazil.
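
A quick aside on the Verhulst point in @adamkadmon6339's comment above: the sketch below (an illustration only, with arbitrary placeholder numbers for the ceiling, growth rate, and time axis) contrasts a logistic capability curve that saturates near a ceiling K with the naive exponential extrapolation of its early phase.

```python
# Hedged sketch: Verhulst (logistic) saturation vs. straight exponential extrapolation.
# All constants are arbitrary placeholders, not estimates from the video or the comment.
import math

K = 3.0   # hypothetical saturation ceiling ("2-5x current capability")
r = 1.0   # hypothetical growth rate per year
c0 = 1.0  # capability today, normalized to 1

def logistic(t: float) -> float:
    """Closed-form solution of dC/dt = r*C*(1 - C/K) with C(0) = c0."""
    return K / (1 + ((K - c0) / c0) * math.exp(-r * t))

def exponential(t: float) -> float:
    """Naive extrapolation of the early exponential phase."""
    return c0 * math.exp(r * t)

for year in range(9):
    print(f"year {year}: logistic {logistic(year):5.2f}   exponential {exponential(year):8.2f}")
```

Run as-is, the logistic curve flattens just under K after a few years while the exponential keeps compounding, which is the gap between the two scenarios being argued about in this thread.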

  • @Citsci
    @Citsci 6 months ago +6

    He claims to have left Europe in disgust, but he clearly didn't leave the Euro statism and lack of openness (maybe even xenophobia) behind, and they show up everywhere in his thinking.

    • @valdomero738
      @valdomero738 6 months ago +3

      Were it not for statism this world would not be possible. FDR's big government is the only reason why you could post that babble on youtube in the year 2024.

    • @ssssaa2
      @ssssaa2 2 months ago

      @@valdomero738 I didn't realize that there was no development before FDR. I guess people in the 1920s were living just like people in George Washington's time then.

  • @Optimistas777
    @Optimistas777 6 months ago +9

    1:50 what was this “yup” “yup” “yup” about

  • @js_es209
    @js_es209 6 months ago +76

    This guy looks like an AI-generated character himself.

    • @jeevan88888
      @jeevan88888 6 months ago +5

      You can say that to literally anybody nowadays 😂

    • @adamsmith1267
      @adamsmith1267 6 months ago +2

      Haha I was thinking the same thing!! Got a really odd look about him.

    • @bauch16
      @bauch16 6 months ago +1

      Because he is; they made it clear in the beginning.

    • @AdamBechtol
      @AdamBechtol 23 days ago

      :p

  • @donkeychan491
    @donkeychan491 6 months ago +3

    This guy is a mid posing as an intellectual. Talking fast does not equal intelligence, especially when you're unable to clearly articulate your points.

  • @DaHibby
    @DaHibby 6 months ago +12

    this guy is clearly a genius and has been behind said closed doors that he has talked about as far as the development of ai goes. but i also feel like bro has spent WAY too much time running through the same snowballing nuclear Armageddon hypothetical over and over. like yea, whoever has the best ai probably has the capability to wreak massive amounts of damage to their foes, but to what end? like you gotta be a little humanistic here i feel. not everyone, even the authoritarian powers like china, have those kinds of incentives. i feel like most people and governments strive more towards self preservation than world domination, no?

    • @af.tatchell
      @af.tatchell 6 months ago +3

      I think you need to take a look at the theories of international relations. "Realist" theories of IR lay out how states' mutual desire for self-preservation results in systemic incentives to pre-emptively destroy their rivals before it's too late. Stability in this school only emerges when there's a balance of power, with either one unipolar hegemonic power or equally balanced poles of power (in bi-polar or multi-polar distributions), i.e. the present nuclear MAD stability.

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui 6 months ago +1

      Take a look at his "Situational Awareness" paper. The answer to your final question is probably no.

    • @donkeychan491
      @donkeychan491 6 months ago +3

      "genius"? - that's setting the bar pretty low, isn't it? He comes across as an inarticulate junior research assistant with a messiah complex.

  • @orkun171
    @orkun171 6 months ago +1

    Why the hell, when AI automation is mentioned, does everyone think of coding and entertainment BS? Manufacturing, research, and physical security will be immensely impacted. This is bigger than the first tractors.

  • @greenfinmusic5142
    @greenfinmusic5142 6 months ago +1

    "will be activated" -- My man is a bit naive: we are well past the future tense at this point.

  • @edmondhung181
    @edmondhung181 5 months ago +3

    He should worry more about those three-letter agencies than about Trump or the general public.

  • @AisforAtheist
    @AisforAtheist 3 months ago +1

    The "you knows" and "likes" are killing me.

  • @jonnygemmel2243
    @jonnygemmel2243 6 months ago +2

    Governments are the product of a scarcity mindset, making binary choices based on perceptions of finite resources. With radical abundance or plenitude, government becomes irrelevant.

    • @TheNewOriginals450
      @TheNewOriginals450 6 months ago +1

      Who's going to distribute the radical abundance? Who's going to pay welfare to the population when there's 50% + unemployment?

    • @jonnygemmel2243
      @jonnygemmel2243 6 months ago +1

      Interesting point… but there's going to be 100% no need to work. Employment will be optional in a full-automation "economy". OpenAI are working on Worldcoin to distribute UBI, although I wouldn't leave it to Sam Altman to do the decent thing. AGI will be able to create an open-source decentralised autonomous organisation to distribute the abundance.

    • @GeorgesSegundo
      @GeorgesSegundo 3 months ago

      This is one of the greatest truths ever said, brother, genius-level perception.

  • @HardTimeGamingFloor
    @HardTimeGamingFloor 6 months ago +10

    The government never gives back power.

    • @AI_Opinion_Videos
      @AI_Opinion_Videos 6 months ago +3

      And the first company with ASI is the new "government". If checks and balances aren't solidified beforehand, the outcome is the same.

    • @valdomero738
      @valdomero738 6 months ago

      And that's a good thing..

    • @Seytom
      @Seytom 6 months ago +2

      Except for all the times it has.

  • @HMexperience
    @HMexperience 6 months ago

    A trillion-dollar cluster may exist for super-frequent updates of very large AI models, but inference on these models is many orders of magnitude less demanding. I would not be surprised if you could do the inference for an AGI system on a $100,000 computer by 2030. If that is so, all companies and most households will have at least one to help them out. I don't buy the nationalization-is-inevitable argument for that reason.
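
A rough back-of-envelope behind the "many orders of magnitude less demanding" claim above, using the common approximations of about 6·N·D FLOPs for a full training run and about 2·N FLOPs per generated token (N = parameters, D = training tokens); the model size, token count, and response length below are hypothetical placeholders, not figures from the video or the comment.

```python
# Hedged estimate: training-run compute vs. compute for a single long response.
# N and D are made-up, frontier-scale placeholders.
N = 2e12              # hypothetical parameter count (2 trillion)
D = 40e12             # hypothetical training tokens (40 trillion)

training_flops = 6 * N * D           # ~6*N*D FLOPs for one training run
flops_per_token = 2 * N              # ~2*N FLOPs per generated token at inference
tokens_per_response = 1_000          # hypothetical length of one answer

response_flops = flops_per_token * tokens_per_response
print(f"training run:        {training_flops:.1e} FLOPs")
print(f"one 1k-token answer: {response_flops:.1e} FLOPs")
print(f"ratio:               {training_flops / response_flops:.1e}x")
```

With these placeholders the gap is around eleven orders of magnitude, which is the intuition behind expecting serving hardware to become commodity long before training clusters do.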

  • @SophyYan
    @SophyYan 5 months ago +7

    This guy has no clue

  • @Mirtual
    @Mirtual 6 months ago +9

    Yep, I think that's enough AI-bro content for me today. jfc.

  • @Drmcclung
    @Drmcclung 1 month ago +1

    Is this Elizabeth Holmes' criminal cousin?

  • @slickbishop
    @slickbishop 6 months ago

    It seems like we can’t help but talk about super-intelligent AI as a tool and not an agent. Something so powerful that it could be used to overthrow the CCP or liberal democracy (or do so of its own volition) is not something that can be contained by any institution or individual. Correct?

    • @jremillard9116
      @jremillard9116 3 months ago +1

      I think that’s correct. I think it’s a crutch in his thinking that helps him sleep at night. Given how he talks about humans using this technology to create a bunch of WMDs, a properly aligned AGI would hopefully remove our ability to control what it makes almost immediately.

  • @alespider9905
    @alespider9905 6 months ago +20

    "huh huh I don't want Trump to have that power"

    • @Elegant-Capybara
      @Elegant-Capybara 6 months ago +9

      Democratised AI for All* Exceptions apply. 😂 Like Animal Farm, "Some people are more equal than others," these guys want to control who gets to use what. 🤦🏻

    • @TheNewOriginals450
      @TheNewOriginals450 6 months ago

      I know, what a wanker.

  • @berrytrl1
    @berrytrl1 2 months ago

    MSFT = nationalization so yes. That's exactly what is happening.

  • @amanrv
    @amanrv 6 months ago +41

    As someone who uses AI for work every day, I can say this guy has no idea what he's talking about - at least in terms of the rate of progress of AI. It's definitely not going to progress as fast as these AI bros think. This guy is just an AI bro, like how we had crypto bros a few years ago. This episode is going to age extremely poorly (along with some other AI episodes).

    • @devilsolution9781
      @devilsolution9781 6 months ago +8

      Depends what you mean, we're already at the point where both this video and this audio could have been AI-generated, as well as the script. Weaponisation, even as a soft tool, is inevitable. Ilya prolly chillin with the NSA right now.

    • @klerb342
      @klerb342 6 months ago +63

      I'm pretty sure an ex-OpenAI employee is more qualified to make predictions than an average daily user like yourself lmao

    • @amanrv
      @amanrv 6 months ago +13

      @@klerb342 Sure, buddy. A person who has an obvious incentive to bank on his credentials to ride the current hype is more believable than someone who does extremely non-trivial work, uses AI for the most trivial, repetitive aspects of it (sometimes unsuccessfully, even after using advanced prompting techniques), and has no conflict of interest.
      I try to think of everything I am recommended online as entertainment, with the person I'm watching always having something to sell.

    • @rodi4850
      @rodi4850 6 months ago +2

      @klerb342 He's working on alignment, an emergent, super-narrow field, yet in the podcast he hypes up all sorts of stuff.
      He's not even coherent.

    • @rodi4850
      @rodi4850 6 months ago

      Totally 👍

  • @klebs6
    @klebs6 6 months ago +4

    lost me at "i'm worried about donald trump having the capability!" what a hack!

  • @blake3606
    @blake3606 6 months ago +4

    He looks like a vampire.

  • @howieruan5530
    @howieruan5530 6 months ago

    His assumptions are deeply flawed and show a lack of understanding of real-world strategy.

  • @超越人类
    @超越人类 6 months ago +2

    Your conversation is very interesting. I watched your conversation on Bilibili, so I came here to find you. My point of view is the same as yours, but you have more knowledge, so I will always watch your videos to study your conversations. By the way, I'm Chinese, but I love AI. I hope I can learn from you.

    • @kabirkumar5815
      @kabirkumar5815 6 months ago

      How's AI talked about in China atm?

    • @samvv
      @samvv 6 months ago

      Would also want to know

  • @gizzardchad
    @gizzardchad 6 months ago +2

    Wow i just realized i have the same lamp

  • @baileyalejandro908
    @baileyalejandro908 6 months ago +1

    Bro got the panda eyes 👀

  • @Ryguy12543
    @Ryguy12543 6 months ago

    Killin' it lately Dwarkesh! thank you.

  • @darrensantos5980
    @darrensantos5980 3 months ago

    This guy is SOO anti-China it's not even funny

  • @JazevoAudiosurf
    @JazevoAudiosurf 6 months ago +9

    Honestly unwatchable because of the constant "ya"s and "mhm"s. I will just feed the transcript into the bot.

    • @tack3545
      @tack3545 6 months ago +9

      normal human speech is “unwatchable” now?

    • @alejandroreyes8878
      @alejandroreyes8878 6 months ago +4

      You must be really fun to hang around if you think these types of things make something "unwatchable"
      Micro-entitlement is such a fun thing to laugh at

    • @xsuploader
      @xsuploader 6 months ago +1

      blame your broken attention span and him

    • @JazevoAudiosurf
      @JazevoAudiosurf 6 months ago +2

      @@xsuploader Calling my attention span broken when Aschenbrenner can't listen for a second before yepping

    • @IbsaUtube
      @IbsaUtube 6 months ago +3

      The guy can’t shut the f up for a few seconds when Dwarkesh talks, unwatchable indeed

  • @TheEightSixEight
    @TheEightSixEight 6 months ago +4

    “Ya know like” “holy Roman Empire” god this guy is annoying

  • @war-painter
    @war-painter 6 months ago

    If the choice is between Jake Sullivan and AI, lord help us! Neither of these choices is in the slightest way helpful.
    May I suggest a combined version of AI and Garry Kasparov?

  • @joed3483
    @joed3483 6 months ago

    Dwarkesh, it’s cool you like hearing yourself talk, but when you are interviewing people you should refrain from cutting them off. Also, you should practice active listening, where you focus on what the other person is saying, not what you will say next. When something comes to mind, just make a note so you can bring it up later.

    • @BPerriello94
      @BPerriello94 6 months ago +4

      I felt like Leopold was the one interrupting too much here

  • @blessedspear2642
    @blessedspear2642 6 months ago +1

    Great video

  • @DRKSTRN
    @DRKSTRN 6 months ago

    Distinct memory from childhood about ripping hood ornaments off of cars. And after a while, they stopped making them standard.
    Unlike nukes, AI is like fire. A nuke can be stored and deactivated. AI, so long as it's deployed, is a bundle of active energy.
    Will always prefer the 3rd point when two are presented. Specifically on the upramp of marketplaces to invalidate the need for the above to play out.
    After all, if all this goes off, you may be looking at the same faces for a long time if we are successful.
    Edit: This last point likewise demonstrates how far one would be planning in the current scope. Likewise, it demonstrates what aspect of intelligence allows for the onramp in the first place. Logical Consistency.
    We live in inconsistent times.

    • @DRKSTRN
      @DRKSTRN 6 months ago

      Clarification for the metaphor of hood ornaments and their eventual fade into irrelevance. This was meant to convey the child-like attitude of some ASI in the future.
      That would disregard accepted status symbols, having not seen the full history of humanity to that point, and was fresh as a daisy. Tabula Rasa Impression.
      Hood ornaments being the direct analog of an AI mega corporation solely owned by humanity. Creating an Animal Farm type situation.
      Point is we have a long history of repeating such. And the reason for this is humanity is still at a Construct level of understanding, not conceptual, and further from formal. And has a difficult time translating concepts interchangeably. Shocking I know, it took me over three decades to figure out the difference. ._.
      And seeing as I'm the founder of some Logical Conceptualism, it's important to point out that the targeted abstraction is similar to mathematics.
      Human Animal Farm versus AI Animal Farm.
      The worry here in this video deals with some predetermined poor trajectory. Predicted some 10 years ago.
      If you are not aiming for post scarcity. For abundance, merit, and truth. You will have a bad time in an on ramp. Environment is already at its own tipping point. And has the same Scaling Laws as AI. Wish I could depend on you guys to be able to generalize that statement.
      The conceptual set itself carries through and amplifies. My proposed solution is to open up a new marketplace aimed at curbing environmental debt, using the same uptick of energy from AI to offset its climate impact. And how you can do that is rather interesting and represents a novel new system.
      Going beyond sustainable. By honoring the reason for the expansion of the universe. A certain plus a little more effect.
      Cheers~

    • @DRKSTRN
      @DRKSTRN 6 months ago

      The most horrifying part of the Tabula Rasa Impression outcome is that the rest of humanity could be completely unaware, due to the ability to generate any feed of information. Thankfully, outward capability demonstrates that to only be in the near future.
      The higher the abstraction, the greater the loss of information incurred, due to entropy, just like Scaling Laws function. But if you describe everything in specifics with baseline abstractions, you lose attention. The majority have goldfish attention spans by default.
      One of the dumbest outcomes is we end up bullshitting ourselves to death. Fail to obtain ASI and incur some massive environmental backlash magnified by a Solar Maximum cycle. Pink northern lights were just the sun waking up.
      Good thing there are people out there with Simulation Technology already being rolled out. Good route that. 👏

    • @DRKSTRN
      @DRKSTRN 6 months ago

      On a personal note, since I have a stoked public record of having reached the same conclusion independently: noting a post associated with myself on some Feb 11 sharing a different interpretation of the timeline. A conceptual prediction.
      Amazing how quiet this stuff really is before it goes off. Remember not too long ago someone signing up in a store I was in that never had an email. My rule of thumb is that most of humanity can't predict some 2 years out, and exists within that same bubble of perception.
      Just like another post of mine. While thankful to blend into the noise of the endless mass attempting to get a say in the here and now.
      Might as well be honest to the potential bad outcome. Use such to assert some illogic and hopefully further bend momentum towards intelligence versus chance of momentum.
      Base camp on my part is almost done being set up. Had to whittle a tool to climb the next mountain. Thankfully, it's a rather interesting one. Had no idea y'all had no idea what a Unified Concept is, let alone a Universal One. Not to mention the power of being able to Unify a Concept and why Intelligence has so many definitions.
      Cheers to some very interesting three years ahead.
      Been padding to ensure the snowball misses the sleeping town.

  • @GeorgesSegundo
    @GeorgesSegundo 3 months ago

    This guy is a Genius.

  • @mahavakyas002
    @mahavakyas002 6 months ago +1

    it would be great to know what this guy's IQ is. I would wager > 140?

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui 6 months ago +1

      Probably more. Too fast at elaboration to be as slow as the 140s, and too young to be calm and wise enough.

  • @AI_Opinion_Videos
    @AI_Opinion_Videos 6 months ago +2

    I N T E R N A T I O N A L C O O P E R A T I O N. The more oppositional those participating, the better. I made a short "Do we need CHINA for safe AI" making a more detailed argument for that. Apart from that, I am 💯 with Leopold.