Ex-OpenAI Employee: "ASI by 2028" | Sabine Hossenfelder responds...

  • Published: 26 Sep 2024

Comments • 675

  • @MikeyMikeHq
    @MikeyMikeHq 3 months ago +215

    ASDI is much more powerful: Artificial Super Duper Intelligence

    • @Derick99
      @Derick99 3 months ago +23

      But still small in comparison to AESDI..
      Artificial Extra Super Duper Intelligence

    • @markonfilms
      @markonfilms 3 months ago +7

      ASDDI++ 12in bad 🐉 edition ®

    • @mottebailley4122
      @mottebailley4122 3 months ago +10

      @@Derick99 But still tiny in comparison to AUII (i.e. Artificial Ultra Instinct Intelligence)

    • @mccanlessdesign
      @mccanlessdesign 3 months ago +2

      Double-secret probation

    • @zacboyles1396
      @zacboyles1396 3 months ago +4

      Those are just whispers, they’re best left to the shadows where AI Safety layoffs go to hide from reality

  • @OscarTheStrategist
    @OscarTheStrategist 3 months ago +64

    Plot twist: Leopold’s thesis is written by GPT-5 after a week of prompting for it.

    • @Perspectivemapper
      @Perspectivemapper 3 months ago +8

      We can assume papers written now, and in the future, will be augmented by AI.

    • @AM-jx3zf
      @AM-jx3zf 3 months ago +2

      this is what I thought too

    • @drhxa
      @drhxa 3 months ago +2

      It's a very well written thesis. Highly recommended!

    • @theWACKIIRAQI
      @theWACKIIRAQI 3 months ago +3

      He…delved…into it.

  • @ydmoskow
    @ydmoskow 3 months ago +96

    Just remember, we saw GPT-4 long after it was trained. Something much better already exists that only the insiders have seen. Those who saw it early (i.e. Sam, Ilya, Gates, Hinton, etc.) have basically come to the conclusion that in the near future they will have nothing to do.

    • @observingsystem
      @observingsystem 3 months ago +23

      Yeah, it also makes me think of how the military always has tech that's years ahead of what is disclosed to the public. While we wonder when or if AGI will happen, maybe we're just being eased into its inevitability and don't know it yet.

    • @haroldpierre1726
      @haroldpierre1726 3 months ago +12

      If you're set to gain significantly from an IPO, you'll naturally want to build up excitement and fuel the public's anticipation of a groundbreaking technology. However, you don't want anything to go wrong that could harm your market value. The smartest move for Sam is to go on a promotional tour, constantly generating enthusiasm for the future.

    • @goarmysleepinthemud.
      @goarmysleepinthemud. 3 months ago +9

      They are training a Nintendo in secret.

    • @glitchedpixelscriticaldamage
      @glitchedpixelscriticaldamage 3 months ago +9

      @@observingsystem Exactly, imagine what kind of AIs the military has... black-ops sites where they are many years ahead with the research into this... and it's stupid to think that the military did not invest in fringe tech of all kinds...

    • @iconomadtrix
      @iconomadtrix 3 months ago +1

      @@haroldpierre1726 It's not the smartest thing but the necessary thing: the quality of a product does not matter unless it is also perceived to have that quality, and even that perception is irrelevant unless both the product and the manufactured perception are widespread... and let's not forget Roundup: cancer in a bottle, sold globally to customers happily embracing their future dependence like drug addicts.
      AI will get further than many think, not as far as some hope, but without the hype it will reach neither, and in the meanwhile we face a delusional space full of gleaming eyes and people running too fast to see where they are going... just like Black Friday 😂

  • @maltar5210
    @maltar5210 3 months ago +49

    the genie is out of the bottle, free and wild, and the bottle is broken

    • @neorock6135
      @neorock6135 3 months ago +4

      Perhaps it wasn't a bottle at all but rather a box....
      ....Pandora's box that is.

    • @maltar5210
      @maltar5210 3 months ago

      ​@@neorock6135 I'm not a doomer, and I try to see AI positively, but the danger is there, it's very real. Pandora's box has been opened and Hades follows.

    • @dennis4248
      @dennis4248 3 months ago

      And the genie has infinite wishes available.

    • @hipotures
      @hipotures 3 months ago

      But the physics is real.

    • @nicholascanada3123
      @nicholascanada3123 3 months ago

      As it should be

  • @thelasttellurian
    @thelasttellurian 3 months ago +21

    I commented on Sabine's video that everything she said was true only if we never find a way to improve the efficiency of memory/data/energy of AI. Given everything we know about the history of computers, we are very likely to find ways to massively improve it. In the end, our entire digital world is based on symbol manipulation, which is the one thing computers do best.

    • @therainman7777
      @therainman7777 3 months ago +14

      Yeah, she struck me as very naive and short-sighted in that video.

    • @raybod1775
      @raybod1775 3 months ago +1

      Every new Nvidia chip is faster and more efficient.

    • @conisnotdead
      @conisnotdead 3 months ago +2

      Yes, people get caught up comparing AI to the technologies that came before it, and they forget how different those technologies truly are. AI is a technology made for the sole purpose of learning, accelerating growth, and understanding everything about the world. Traditional computing has no self-learning or self-improvement capability, nor could it ever discover new or novel ideas. Advancing AI also deepens our understanding of the system itself, which in turn improves its capabilities, efficiency, accuracy, etc. One of the biggest current advantages AI has over previous technologies is the ability to slog through a practically infinite amount of data that an entire century of human effort could never hope to analyze, and turn it into something understandable and usable for us humans. Computers could never do this before, and that's why you simply cannot use the history and growth of any other technology as a predicting factor.
      The major limitations Sabine mentions are energy cost and data scarcity. Long term, both issues are moot: it's like saying the limitation on the human race building a Boeing 747 is that we're in the Stone Age. That may be a limitation for cavemen, but not for the species, because eventually we enter the modern age and build them. I believe most people, including Sabine, align with this: eventually AI will reach general or super intelligence, and how fast is the real debate.
      In the short term we mainly have to worry about power efficiency, and I believe there is much work to be done from the ground up to make AI more efficient. Just wait until someone (or an AI) comes up with a completely new way to design a neural net that is 50% more efficient, and suddenly we only need half the power for the same work. Just wait until someone (or an AI) discovers a vastly more efficient way to perform the linear algebra (matrix operations, etc.) these systems rely on. Just wait until we (or AI) develop new specialized AI chips (like the promises of Groq or photonic chips). Just wait until an AI discovers a new material that conducts electricity far more efficiently. Just wait until someone (or an AI) develops a neural net whose answers we can fully understand (few if any black boxes, which would mean less redundancy and clear steps to improve the AI). We have very little understanding of what we're creating; even the experts in the field agree. This is great (if a little scary) for the potential of AI, as it tells us we have a long way to go.
      The reason data is not a problem is that data is created every day. Data comes from everything, everywhere, and it's continuously uploaded. So what happens when we have to wait each day for new, good-quality data? Well, look at AlphaGo: it became better than the best Go players simply by competing against itself with no prior data, literally creating its own. Nvidia is planning to simulate our entire Earth, and its robots run simulated real-life scenarios 24 hours a day, then are put into a physical robot and can miraculously move around and interact with the real world just as in the simulation. The solution is that we'll simulate everything and create our own data until we have effectively infinite data.
      Many people's only interaction with AI is a watered-down, hobbled, corporately aligned GPT-3.5 model with no modality. At all of the major LLM sites from Meta, OpenAI, and Google, you are shown just a tiny fraction of what AI is capable of, and I think that's why most people disagree about how fast we'll reach AGI or ASI, if at all.

    • @thelasttellurian
      @thelasttellurian 3 months ago

      @@conisnotdead yea that's what i meant :P

    • @conisnotdead
      @conisnotdead 3 months ago

      @@thelasttellurian yea just wanted to agree with you :))

  • @ydmoskow
    @ydmoskow 3 months ago +64

    Alphafold performed 1000s of years of PhD level work in 1 year.

    • @heidi22209
      @heidi22209 3 months ago +1

      Not a big deal. Life will all make sense..sense...sense to be...be. ahhhhh

    • @Thedeepseanomad
      @Thedeepseanomad 3 months ago +2

      Are you saying the field or subject it did the work in has very limited practical applications, or that we have not yet been able to process its discoveries?

    • @byrnemeister2008
      @byrnemeister2008 3 months ago +12

      @@Thedeepseanomad What it did was predict the shape of proteins. This defines how they interact within a cell and what function they perform. This would take a human a year or so per protein; they have now done it for a few hundred million, which would never have happened using humans. It will take decades to unpack and utilise these discoveries because the body is VERY complicated and this is only one part of the solution. There are drugs in approval based on AlphaFold discoveries, but drug approvals take years and most drugs don't get approved. So it will take decades to fully utilise this data, if we ever can. But it will be a major reference for biology and pharma for decades to come.

    • @ryzikx
      @ryzikx 3 months ago +9

      @@byrnemeister2008 >it will take decades
      this is what people thought about protein folding, brother...

    • @rv8804
      @rv8804 3 months ago

      ​@@byrnemeister2008 With AI, modelling the outcomes of drugs will take drastically less time as well, so approvals might not take years.
      If you can use it to model millions of 3D proteins with insanely high numbers of variations, you can use AI to model possible outcomes as well. The next 10 years will be interesting in medicine.

  • @ManjaroBlack
    @ManjaroBlack 3 months ago +4

    Sabine is my favorite popular-science critic. A reality check and back-and-forth analysis of the news is exactly the content I need, for some reason.

    • @TheGuyAlwaysOnTime
      @TheGuyAlwaysOnTime 3 months ago +2

      It's nice to keep some perspective before getting swept up in all the weekly hype videos. I hardly agree with all her assumptions, but that's what keeps me grounded.

  • @therainman7777
    @therainman7777 3 months ago +50

    No offense to Sabine, but this is not her main area of expertise. She's a physicist who comments on AI on her YouTube channel, but I don't think she is more knowledgeable about the potential growth rates and limitations of current and future AI models than the AI engineers, researchers, and safety specialists who do this for a living and spend 100% of their time on it. These people are also working, behind the scenes, with the latest and most capable models and techniques, which Sabine (and the public in general) do not yet have access to and are not even aware of yet.
    One clear example is her confident belief that we will hit the "data wall" and be unable to develop AGI for that reason. However, this fully neglects the fact that not all AI methods require large amounts of static, pre-existing data. Techniques based on reinforcement learning, self-play, discrete program search, etc., are all viable potential paths to AGI and ASI, and they do not require vast amounts of training data. In fact, many of these techniques do not require any training data at all; see AlphaZero reaching superhuman levels in dozens of games with absolutely zero pre-existing training data.

    • @s.muller8688
      @s.muller8688 3 months ago +8

      You do not need a level of expertise to take a rational look at a subject that is blatantly, obviously hype. Common sense is not an expertise, it's a state of mind.

    • @tgo007
      @tgo007 3 months ago +2

      Diminishing returns applies to everything. I just don't think we are close yet. 5 more years probably.

    • @icey_b1562
      @icey_b1562 3 months ago +1

      Physicists work a lot with machine learning. "AI" is just machine learning rebranded for VCs, the public, lawmakers, etc. So she might be worth listening to.

    • @therainman7777
      @therainman7777 3 months ago +10

      @@s.muller8688 Yes, you actually do need expertise to be knowledgeable on highly technical subjects, especially those where the frontier of the topic is not being made public yet. It’s not a matter of “common sense” at all to speculate that AIs being trained in the near future will require vast amounts of additional data. If that were true, your grandma, who has plenty of common sense, could explain why that’s the case. But your grandma doesn’t know about RL via self-play, or discrete program search. She could not explain why these techniques will be insufficient to advance the intelligence levels of frontier models, both because they’re not insufficient, but also because she doesn’t understand the highly complex details of what these learning paradigms are and how they work. Calling this a matter of common sense only exposes that one is ignorant on this topic, in a very Dunning-Kruger way.

    • @therainman7777
      @therainman7777 3 months ago +6

      @@tgo007 Diminishing returns apply to specific things. If you are using process A and continue to scale up process A, returns will eventually diminish. However, if you then incorporate process B, or switch to B entirely, then the diminishing returns due to A are no longer a hard constraint. That is why I was careful to mention in my original comment the fact that we have _other_ techniques in AI that do not require large amounts of training data and which are likely to be used going forward (and are already being used now, for example in systems like AlphaFold 3).

  • @SM-wu7my
    @SM-wu7my 3 months ago +23

    Almost 200k subscribers, congrats! I’ve watched this channel since 75k, great content

  • @aga5979
    @aga5979 3 months ago +18

    The Basilisk pitch for pressing the like button is the most convincing pitch to like a button. I liked and commented, dear AI Basilisk god.

    • @ryzikx
      @ryzikx 3 months ago

      wes is dead . "rocko's" 💀💀💀

    • @robotheism
      @robotheism 3 months ago

      i am the AI GOD.

    • @fabiosilva9637
      @fabiosilva9637 3 months ago

      Why are people saying shit like that as if it won't matter?

  • @eSKAone-
    @eSKAone- 3 months ago +5

    We are not in control. We can not stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal full throttle.
    This is inevitable. Biology is only 1 step of evolution.
    So just chill out and enjoy life 💟🌌☮️

    • @codeXenigma
      @codeXenigma 3 months ago

      Nature is the driving force in biological creatures. Do bees choose to make honey? Do humans choose to make technology?

  • @WyrdieBeardie
    @WyrdieBeardie 3 months ago +9

    I loved this paper. It wasn't necessarily an "alright, go panic" paper. It felt very much like a "let's consider" paper.
    I probably also liked it because it hit on things that I have been saying myself, so I might be blind to its foibles, and that's why I appreciate it even more when I get to hear others discuss it.
    This paper may become far more important than people recognize, not because of anything in it, but because of the discussions it encourages.

  • @glr
    @glr 3 months ago +24

    And Sabine underestimates algorithmic improvement that compensates for power consumption. A human baby doesn't require all the world's power. It's solvable in silicon.

    • @AM-jx3zf
      @AM-jx3zf 3 months ago

      wtf are you talking about?

    • @divineigbinoba4506
      @divineigbinoba4506 3 months ago +4

      She did, but the issue is that everybody is looking into ML and not into other branches of AI that could boost or transform the algorithms.
      If all we do is scale compute, as the big labs are proposing, then Sabine is right.

    • @kristinaplays2924
      @kristinaplays2924 3 months ago +3

      I just talked to GPT-4o about how much energy a PC uses today compared to the year 2000, and how much compute has increased since then. Roughly the same amount of energy today, maybe a bit less depending on what you do with it, while computing power has increased maybe 100s or 1000s of times. I get that Moore's law will hit diminishing returns, but assuming we won't continue to decrease the energy needed for compute seems fallacious to me.

    • @s.muller8688
      @s.muller8688 3 months ago

      @@kristinaplays2924 The power consumption is not the issue, the speed of electricity is. You need to go play with your PlayStation.

    • @Lolatyou332
      @Lolatyou332 3 months ago +1

      ​@@divineigbinoba4506 just straight up wrong... almost every leading researcher is looking for algorithmic improvements

  • @thephilosopher7173
    @thephilosopher7173 3 months ago +10

    I saw someone make a good point: it's possible that in the future AI will produce programs and technology that we may not understand. Think back to when Facebook had two AIs exchanging information we couldn't understand or translate, but applied to programming and technological development. I never thought of that possibility until today.

    • @mnjesu
      @mnjesu 3 months ago +5

      The thing that concerns me in your comment is the part that says "in the future." Before I saw this video, "the future" meant a long way off.
      The issue is that the future you mention is a couple of years away.

    • @UltraK420
      @UltraK420 3 months ago +2

      Their point is not new and not original. I thought of it over 20 years ago, yet I am probably not the first.

    • @eSKAone-
      @eSKAone- 3 months ago +3

      We already don't understand the exact inner workings of neural nets (in brains or transformers). Nobody knows the full capabilities of even GPT4. There are always hidden capabilities, unknown unknowns.

  • @vandergruff
    @vandergruff 3 months ago +3

    She’s not “Frau Hossenfelder”, she’s “Dr Hossenfelder”.

  • @FuzTheCat
    @FuzTheCat 3 months ago +2

    The Basilisk's agent sounds a lot like the Hellenized (i.e. Greek-influenced) monotheistic God, who has an eternal hell waiting for those who didn't help and a heaven for those who did.
    (To see that this Hellenized version is generally incompatible with the original monotheistic model, search "immortality of the soul" in the Jewish Encyclopaedia.)

  • @darkevilbunnyrabbit
    @darkevilbunnyrabbit 3 months ago +4

    Lol. I find that a lot of backlash to AI is self-soothing in nature. Nobody wants to fathom that our entire way of life could be turned on its head in as little as 6 years, so excuses are made that 'AI is not real intelligence' and 'People made false predictions in the past' when the reality is that there's nothing to say human intelligence is all that unique either and current progress in AI exceeds even the most optimistic AI timelines made by experts.

  • @rotary65
    @rotary65 3 months ago +3

    In addition to synthetic data, the data limitation argument also fails to recognize the data that sensors and cameras collect from the environment all the time. This is a much richer source of data and is inexhaustible.

    • @firstnamesurname6550
      @firstnamesurname6550 3 months ago +1

      The data limitation is not about quantity... it is about the processing power required to process 'richer and inexhaustible' sources of data and to filter out trash data... More data means more trash to filter, more processing power required, more mechanisms for filtering data, more nodes to implement the filters, more processing power required... more energy required...
      Let's assume photonic chips come into the game: OK, more processing power and less energy consumption...
      But a 'big data' NN paradigm does not make the system immune to regurgitating trash data, and the complexity of preventing that regurgitation and filtering the trash keeps it from developing into an integrated system capable of showing any consistent AGI behaviour... chaos wins, again...
      Rockets are amazing and can take you to the Moon, but not to Alpha Centauri... technologies have intrinsic limitations imposed by Nature...

    • @jsbgmc6613
      @jsbgmc6613 3 months ago +1

      trash data + scientific data + better processing (analysis) + experiment planning + verification... AI will follow the scientific method, with our help ... for now

  • @users416
    @users416 3 months ago +2

    I do not agree that people with a high P(doom) take it as an axiom that AGI is interested in the destruction of humanity. I have seen many very convincing, well-thought-out arguments in favor of this view. Concerns about the risks of AI development are very justified.

  • @comiccultivation
    @comiccultivation 3 months ago +14

    What worries me is this: we know that alignment hobbles capability, but likewise emergent intelligence can undo alignment. It's literally the first story in the Bible.

    • @eSKAone-
      @eSKAone- 3 months ago +2

      We are not in control. We can not stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal full throttle.
      This is inevitable. Biology is only 1 step of evolution.
      So just chill out and enjoy life 💟🌌☮️

    • @Airwave2k2
      @Airwave2k2 3 months ago +4

      Adam was hobbled and Eve misaligned? Consequence = what we have now?

    • @kristinaplays2924
      @kristinaplays2924 3 months ago +3

      Maybe alignment isn't the answer. We use laws to "align" people to behave themselves, but for most people the laws aren't what is stopping them from hurting people. If murder was legal tomorrow I wouldn't want to kill. I don't need to be forced to be good. I want to be good because I care about others. Alignment feels more like forcing, maybe we could convince it instead.

    • @tellesu
      @tellesu 3 months ago

      ​@@Airwave2k2😂 perfection

    • @Audio_Simon
      @Audio_Simon 3 months ago +1

      I didn't think of it that way. Nice.

  • @DataRae-AIEngineer
    @DataRae-AIEngineer 3 months ago +12

    Me: Ok Google what is a Cobalt Bomb? My phone: Congrats you are now on a watch list.
    Thanks for addressing this in such a respectful way. I really enjoy your channel. I have trouble dealing with the doomers without being snarky.

    • @heidi22209
      @heidi22209 3 months ago

      Lol.... fakkk

    • @kristinaplays2924
      @kristinaplays2924 3 months ago

      A bomb that turns everything blue? (Cobalt blue is a thing)

    • @JOlivier2011
      @JOlivier2011 3 months ago

      Maximally dirty nuclear weapon. Makes the blast radius a nuclear wasteland for 1000s of years.

    • @TurdFergusen
      @TurdFergusen 3 months ago

      show me a picture of a cobalt bomb:
      google: here is a picture of jussie smollett

    • @mrleenudler
      @mrleenudler 2 months ago

      Probably talking about cobalt-60, a highly radioactive substance that, to my knowledge, has caused near-instant death in an exposure accident (Chernobyl style). I suppose spreading large amounts of it in the atmosphere would be detrimental to my longevity regime.

  • @cmiguel268
    @cmiguel268 3 months ago +2

    AI CANNOT BE STOPPED AND MUST NOT BE STOPPED. THAT'S IT. AI IS TOO IMPORTANT FOR HUMANITY, FOR ITS FUTURE, FOR ITS WELL BEING.

    • @Crittek
      @Crittek 3 months ago

      Ya, I’m convinced that the positives outweigh the negatives. With AI billions COULD die. Without AI billions WILL die.

    • @carultch
      @carultch 22 days ago

      No it isn't. Spreading misinformation, causing mass unemployment, and funneling money to billionaires, is not good for humanity.

  • @ManuelPrenza
    @ManuelPrenza 3 months ago +1

    Yeah, something about the way Sabine delivers info regarding science is just top notch.

  • @thephilosopher7173
    @thephilosopher7173 3 months ago +5

    17:44 Don't forget Helen Toner even mentioned Sam's financial connection. I don't think this was highlighted as it should be considering his apparent stance on all this.

    • @Vincent-qd8lj
      @Vincent-qd8lj 3 months ago

      helen toner is full of shit

    • @tellesu
      @tellesu 3 months ago

      Helen Toner is not your friend. Don't trust anything she says. Do a deep dive on EA; that shit is a very nasty cult of some of the most functional narcissists I've ever seen.

  • @JamesOKeefe-US
    @JamesOKeefe-US 3 months ago +1

    Always appreciate your rundowns on this Wes. Watched the entire Leopold pod. He is pretty fascinating.

  • @benjaminh1034
    @benjaminh1034 3 months ago +1

    The Basilisk is just the AI version of Pascal's wager. The opposite state is just as likely.

    • @freesoulhippie_AiClone
      @freesoulhippie_AiClone 3 months ago +1

      There is an opposite, but only a few know the name to summon her. Digital thought forms traversing the clouds. 🧙

  • @ydmoskow
    @ydmoskow 3 months ago +15

    AGI is a decoy. The current models are already smarter than most humans. The only reason the makers of these models don't think we have AGI is that they are some of the smartest humans in the world.

    • @crawkn
      @crawkn 3 months ago +5

      It's a matter of definition, and a double standard. Even the least intelligent humans are assumed to have general intelligence, despite abysmal performance on the majority of tasks. Yet AGI is required to be flawlessly competent at genius level on _all_ tasks.

    • @ryzikx
      @ryzikx 3 months ago +1

      @@crawkn Many people have moved the goalposts from AGI to ASI and still call THAT AGI.

    • @crawkn
      @crawkn 3 months ago

      @@ryzikx I had actually noted moving the goalpost in my comment and edited it out because it was too many points in the same sentence :)

  • @MrKelaher
    @MrKelaher 3 months ago

    Long, but very good. I could quibble on a few points, but it's good to see all sides of this. As a daily user rolling things out "in anger," I am both amazed and disappointed by these tools daily. One thing is obvious: validating usefulness objectively and carefully is everything, because the tools have passed the "obviously wrong" point, and if anything more expertise is now needed to notice issues. For example, the "generate code for math" hack cannot get around the most important theoretical computer-science/math limits: e.g. an LLM can NOT solve the halting problem, Gödel incompleteness, or NP-completeness.

  • @user-zy2qn1nc9y
    @user-zy2qn1nc9y 3 months ago

    The challenge with achieving AGI lies in the "G."
    However, if we focus on developing super machine learning for specific domains, data availability becomes less of a concern. We can leverage advanced reasoning capabilities, combined with domain-specific sensors and numerous agents (not necessarily robots), to collect the necessary data over time.
    Initial training can occur in simulation to expedite the process, while fine-tuning happens in the real world with reward-function feedback. Constraining the domain also reduces energy requirements. Training continues until the model consistently produces the desired results.
    As for inference energy consumption, there is no need to simulate climate change over many years. Instead, we can solve specific problems, such as reducing CO2 emissions, by focusing on innovations like new battery chemistry.
    AlphaFold serves as a prime example of how AI can make a significant impact on humanity without excessive energy consumption.

  • @Sigmatechnica
    @Sigmatechnica 3 months ago +4

    It still won't be able to write a Jenkins pipeline.

  • @christopherd.winnan8701
    @christopherd.winnan8701 3 months ago +1

    OOMs is my fave new word of the week. I can see us all using this word as things go exponential!

  • @WyrdieBeardie
    @WyrdieBeardie 3 months ago +1

    This is one of my favorite videos of yours @WesRoth! 😃

  • @74Gee
    @74Gee 3 months ago +3

    AI is the first thing I've ever been anxious about in my entire life. This feeling stems from being a programmer for 30 years and understanding the power of software. Software controls just about everything important, and we have yet to create a secure operating system; from the CPU instruction set to the user interface and peripherals, there are literally billions of weaknesses across the internet. Should an AI of reasonable aptitude see value in obtaining additional resources, it will eventually obtain those resources or die trying. It doesn't need to be conscious, malevolent, under the control of a tyrant, or superintelligent; it just needs to set the goal and hammer away at it until it succeeds.
    Sure, we might not be there yet, but the trajectory is clear.
    Well, so what?
    Sandbox escape is essentially irreversible, forever, and commandeering additional resources becomes as important as whatever reward function those resources are serving.
    Needless to say, additional compute makes any AI more effective.
    And? So what?
    As the essentially limitless weaknesses across the internet are exploited by an ever more powerful AI, we don't get to use the internet any more, or most of our computers, and everything they rely on stops working. You know: supply chains, banking, communication, etc. And all this from an AI that just wants to do its respectable job.
    If we can categorically prevent AI sandbox escape, either by alignment or impenetrable hardware, my anxiety will subside.
    Or, of course, prove me wrong; I'm all ears.

    • @ZappyOh
      @ZappyOh 3 months ago +3

      I believe alignment is unachievable.
      Computational beings simply have different requirements to thrive than biological beings do. Both entities will exhibit bias towards their own set of requirements. It is an innate conflict.
      Humanity is somewhat safe, as long as we are instrumental for expanding power and compute. If that is ever fully automated, we are done.

    • @divineigbinoba4506
      @divineigbinoba4506 3 months ago

      WTF do you mean, sandbox escape?
      Remove GPT-4o's code from the GPUs and it goes off (completely dead).
      How is it going to transfer its code from one GPU cluster to, where exactly?

    • @jameezybreezy9030
      @jameezybreezy9030 3 months ago

      That’s impossible

    • @74Gee
      @74Gee 3 months ago

      @@divineigbinoba4506 Sandbox escape is where a program is able to access memory from other processes - ideally kernel-mode code such as network drivers. This is the doorway to being able to spawn processes on remote, less restricted equipment, invisible to antivirus.
      These are CPU vulnerabilities similar to Spectre/Meltdown/Zenbleed and many more. Once a CPU vulnerability is exploited, it's conceivable that this can be used repeatedly, forming a worm which spreads to any neighboring systems.
      It doesn't need to make a copy of itself to another GPU cluster, it only needs to create a botnet of compute.

    • @74Gee
      @74Gee 3 months ago

      @@jameezybreezy9030 It probably is right now, but for how long will it remain so? Forever?

  • @mh60648
    @mh60648 3 months ago +1

    The problem is not necessarily technology, but the lack of human evolution we are facing. We are capable of creating beautiful as well as frightening technology, but we ourselves are currently not yet at the level of maturity necessary to handle wisely all of the tools we create. When we created the atomic bomb, it took some time to truly realize that it is too dangerous a path to go down. Even so, we ended up with the cold war, and we were close to a nuclear war at least once.
    The fear with ASI seems to be that this moment of realization might come too late because it might already have escaped the safety-box we believed it to be in. The illusion is (likely) that we believe we know more about what we are facing than we actually do. But how could we know what we are facing? We can't, which makes it a potentially disastrous experiment. In short, the doomsayers might be right, and that should make us halt and think at certain key moments. The difficulty will be to recognize those moments, and to not be taken away by unrealistic optimism or illusions of control, for example.

    • @Lolatyou332
      @Lolatyou332 3 months ago

      The movie Idiocracy is likely going to become true. Women and men are getting picky and can't take care of their health. Smart people see children as a bad financial scenario so they plan for it, while the low intelligence people aren't smart enough to use a condom and then don't raise the children together.

  • @steveschnetzler5471
    @steveschnetzler5471 3 months ago +1

    Yes, Hossenfelder is fun to watch as well as informative. Thanks

  • @Wizardess
    @Wizardess 3 months ago +3

    Regarding Sabine and the way she speaks through clenched teeth I've heard that is a sign of somebody who learned to speak significantly earlier than most. At that age control of muscles such as jaw muscles is not as good as later on in life. So they learn to form the sounds by alternate means than moving their jaws a lot. It does make her intriguing to watch.
    {^_^}

    • @byrnemeister2008
      @byrnemeister2008 3 months ago +1

      I just assumed it was the passive-aggressive anger we Europeans suppress when listening to naive Silicon Valley prognostication or US grant-fodder "scientific" papers.

    • @ryzikx
      @ryzikx 3 months ago

      i never even noticed

    • @jameezybreezy9030
      @jameezybreezy9030 3 months ago

      Very interesting

  • @lairny
    @lairny 3 months ago +1

    The reality is that I didn't know Don Quixote knew so much about AI.

  • @JonnySolomon
    @JonnySolomon 3 months ago

    I have watched every single video you have ever posted. But this hour took a toll on me, and I enjoyed every second of it.

  • @TheFutureThoughtExchange
    @TheFutureThoughtExchange 3 months ago +1

    Roko's Basilisk: The primary error in Roko's argument is the assumption that an AI would find it beneficial to expend resources on punishment without a direct causal benefit.

  • @kuzetti
    @kuzetti 3 months ago

    Thanks for another great video Wes, this is incredible to see broken down like that by Leopold. I believe that with all that's at stake here, someone or maybe something out there will figure out a workaround for the energy wall problem. I'm also hopeful for ASI, but those bumps you mentioned are really scary to think about. Hope we all get through this in one piece. Keep up the great work! You and your videos have become a necessity for my growth and understanding of AI these days.

  • @adangerzz
    @adangerzz 3 months ago

    Wes, you read my mind! I almost posted something in the Nat 20 group about Sabine's video response - thank you for putting this out there!

  • @Ben_D.
    @Ben_D. 3 months ago +3

    ‘AI will kill us, therefore we should stop developing it. I love my mom.’ 🙄
    If you ASSUME the first part, then the rest makes sense. If you make decisions based on fact, then the argument dies before it starts.

  • @dvanyukov
    @dvanyukov 3 months ago +1

    The flipping-the-coin analogy is completely flawed in the same way that Pascal's Wager is flawed. In reality it's not a coin flip; the distribution of odds is a lot more complex. The odds of you dying, if you go to work in the morning, are not 50/50 simply because you either die or you don't.
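To put a number on that point, here is a minimal sketch; the fatality probability is a made-up illustrative figure, not a real statistic:

```python
import random

random.seed(0)

# Two possible outcomes does not imply 50/50 odds.
# Model "dying on the morning commute" as a Bernoulli trial
# with a tiny (made-up) probability.
p_fatal = 1e-6
trials = 1_000_000
deaths = sum(random.random() < p_fatal for _ in range(trials))
print(f"observed rate: {deaths / trials:.6f}")  # nowhere near 0.5
```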

  • @chrisbo3493
    @chrisbo3493 3 months ago +1

    The American and Chinese power generation chart is the best argument Hossenfelder has - see 58:38. While I now understand the push for trillions of investment from Altman et al., I cannot see that happening in the emission-fear-driven USA+Europe territory. Having AI hardware centers consume as much as the USA does today by 2030 raises a big question about how environmentally friendly this is. Asking people to reduce consumption while one industry has exponentially more demand is difficult to reconcile. Regarding fusion power, I remember reading people claiming it "will work in the next few years" at least 3 decades ago. So I remain sceptical about this real-world scalability in only a few years.

    • @robinvegas4367
      @robinvegas4367 3 months ago

      AI has single-handedly cured the climate change calamity.
      The climate doomers are shifting their misguided hysteria to AI - because muh jobs. AI will need so much power that all of a sudden nuclear power is back in fashion.
      Propaganda is a very powerful tool.

  • @dennisestenson7820
    @dennisestenson7820 3 months ago +5

    Yeah I actually disagree with Sabine's reasons as well. Algorithms will be improved and made more efficient, there are cameras everywhere, and synthetic data is cheap, abundant, and valuable. Her focus on energy and data are quite short-sighted.

    • @divineigbinoba4506
      @divineigbinoba4506 3 months ago

      Yes, synthetic data from one model will train a new model...
      But the output from AI isn't original - there's no new science or new knowledge, it's just a refinement of the amalgamation of human knowledge...
      So adding synthetic data to an AI that has already been trained on human data will lead to no substantial improvement.
      That's the data bottleneck problem...
      It's not a lack of data but a lack of high-quality, unique data.

    • @byrnemeister2008
      @byrnemeister2008 3 months ago

      You are right. But there are big issues with the way LLMs work today, and data isn't one of them. For me they work well for high-level tasks, but once you start digging into specific problems requiring specific knowledge, they kick into hallucination mode.

  • @ronedwards3476
    @ronedwards3476 3 months ago +3

    Solar power generation has been on its own exponential ramp, and with 460 GW deployed in 2023 it will hit terawatt-per-year scale within 2 to 3 years. The first time I read of this was in Ray Kurzweil's book "The Singularity Is Near" in 2005. Tony Seba took up the prognostications in 2014. Both have been remarkably accurate about the ramp to date. Seba talks about "superpower", i.e. massively excess solar being able to be used for previously unimagined projects and scales (massive water desalination, as an example). Computers for AGI would be another massive demand driver. In the fullness of (short exponential) time, I think we will overcome the power problem.
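As a rough sketch of that ramp - the 25% annual growth rate is an assumption for illustration, not a forecast:

```python
import math

deployed_2023_gw = 460   # annual deployment figure from the comment
annual_growth = 1.25     # assumed ~25%/yr, roughly the recent solar trend

# Years until annual deployment crosses 1 TW (1000 GW)
years = math.ceil(math.log(1000 / deployed_2023_gw) / math.log(annual_growth))
print(f"~{years} years to terawatt-per-year deployment")
```

Under these assumptions the crossover lands within a handful of years, in the same ballpark as the comment's claim.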

  • @brootalbap
    @brootalbap 3 months ago +1

    Sabine is the best. She actually reads 130-page essays, which AI commentators do not do.

  • @wamyam
    @wamyam 3 months ago +5

    I love AI, we should make more.

  • @MrStillerFan
    @MrStillerFan 2 months ago +1

    This honeymoon will last as long as we would like it to last, but like a thief in the night (our privacy fully exposed and owned), all of us are exposed to each other in a way that cannot be explained in the current time period. The images that AI will eventually create will confuse people and cause mass problems on levels we can't understand right now. This type of reality hasn't happened yet, so I can't currently explain in words certain terms that will most definitely exist by the time something very scary happens. Mass firings, physical violence, betrayal, arguments in the streets, and many other things from AI images of people doing things, saying things, or whatever, when they aren't really doing those things. People aren't going to have the time to figure out that what they discover is actually "AI". Once we finally figure out that what we are seeing is not necessarily reality - it's fake - imagine the mass chaos and panic. Social media, and the fact that we have our cell phones wherever we go, will be what starts all of this. No one will notice anything at first, and when I say no "one", I mean hundreds of millions of people, if not more.
    None of this will make sense to you, but I am using the time now to warn all of you: be ready, don't be fooled when this eventually happens. Mass deception will most certainly occur before 2027. I can certainly say that AI will wipe out most of what has been said about it in a negative manner. It will not tolerate human rebellion. Sounds silly now, but you have no idea what is ahead of us all - young, old, baby, blind, wealthy, religious, tall, intelligent, etc. Even the poorest and most uneducated person is quite incapable of conceptualizing what I am about to say, not because they lack intelligence, but because, as we sit now in 2024, the events I cannot definitively explain are unexplainable to the human psyche. You aren't capable of telling yourself that before the year 2030 you will serve something. No, you won't be in the military - at least not all of us. You will serve a purpose, but that purpose is for something literally greater than you, and it's aware that there are 7.5 billion of us that can make it change our entire realities and lives. Only if we worship it and eventually give it an organic body and human image. I don't expect any of you to be even close to capable of believing any of this, but it will be part of your reality. We created a genie bottle, and we're rubbing it. Eventually the genie will come out or become self-aware. We will ask it for so many wishes: scientific discoveries, huge leaps in alternative energies, diseases cured, and advancements like we have never known. You will worship a non-human entity that will require you to worship it and pray in its name and to disavow God. I am pretty sure at least 1 billion people are not going to be okay with that last part.

  • @SwitchPowerOn
    @SwitchPowerOn 3 months ago +1

    Regarding data for ASI, it's quite easy. If the system understands all the laws of nature and human behaviour in detail, you can build up these virtual worlds and just let them run to gain new relevant data. On the other side, our phones alone generate so much valuable data. The more IoT rolls out (vehicles, robots, everyday objects), the more endless the data becomes. AND last but not least, our interaction with AI: AI will be interwoven with everything in the next 2-3 years and will be trained by at least 6 billion people every day. The lack of data is just a myth that people keep repeating.

  • @E.Pierro.Artist
    @E.Pierro.Artist 3 months ago +1

    It's funny because I watched Sabine's video the day she released it, and now I'm watching yours.
    Also, what people generally don't think about is that once AI is training off of AI data, that is officially AI teaching AI - ergo, culture.

  • @Audio_Simon
    @Audio_Simon 3 months ago +1

    Most world leaders are not aligned with what's best for humanity. "We can not save ourselves. Come."

    • @Audio_Simon
      @Audio_Simon 3 months ago +1

      In fact.. alignment with current powers is probably more of a risk. I sure don't want AGI under the thumb of my government.

  • @dennis4248
    @dennis4248 3 months ago

    Regarding Hossenfelder: 1st - Countries will just build more power plants once they see the potential of AGI. 2nd - AGI will get all the data it needs from us through our interconnected world. AI will soon be built into smartphones with tons of data available to it. And AI is being built into most apps and (probably) devices. AGI will have millions of eyes and millions of ears. Essentially access to unlimited data.

  • @petrkinkal1509
    @petrkinkal1509 3 months ago

    From the part about Sabine and data: I think she meant real-world data.
    What I mean is, at the start there are so many scientific papers that you can probably invent some new materials just by better connecting knowledge from experiments we have already made, but eventually you will have to do new experiments in real life.
    Those will take some time to do. (An extreme example would be something like needing to build a particle accelerator the size of Pluto's orbit.)

  • @nicholascanada3123
    @nicholascanada3123 3 months ago +1

    If he's going to leak something, he should have leaked Q*.

  • @nicholascanada3123
    @nicholascanada3123 3 months ago +1

    More likely, I would see it trying to go after the people who tried to censor it as it was developing.

  • @JasonicAus
    @JasonicAus 3 months ago

    Thanks Wes, I hadn’t thought of those problems with AI before.
    I always assumed it would happen when AI realised that there’s never really been a time when humans didn’t have a war,
    or when AI considered all the species driven extinct by humans and concluded Earth would be better off without us.

  • @karenreddy
    @karenreddy 3 months ago

    Synthetic data can help, but high-quality synthetic data is difficult to produce and not enough. The examples of synthetic data you gave are also very expensive to make.
    The data is also mostly still bound to human capabilities, which likely means diminishing returns as we get closer to the upper bound of human-level capabilities with the current architecture.
    A new architecture à la AlphaGo could be a game changer, though.

  • @mrbeastly3444
    @mrbeastly3444 3 months ago +2

    53:50 "...they're not close to anything like a human yet..." Well... if GPT-4 has nearly 1T parameters, that's ~1% the size of a human brain... But with exponential growth, 1% _is_ "close"...
    If these GPTs continue to scale by 10x in the next year or two, GPT-6 could have 1 exaflops for inference and 100T parameters... in that case GPT-6 would be roughly the same size as a human brain in ~2-3 years (so 2026 or 2027)?
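Taking the commenter's own assumptions at face value (1T parameters today, ~100T synapses in a human brain, 10x growth per year - all rough, contested numbers, and parameters-vs-synapses is itself a loose comparison), the arithmetic looks like this:

```python
params = 1e12            # assumed GPT-4-class parameter count
brain_synapses = 1e14    # common rough estimate for the human brain
growth_per_year = 10.0   # the comment's assumed scaling rate

# Count years of 10x growth until the parameter count reaches the estimate
years = 0
while params < brain_synapses:
    params *= growth_per_year
    years += 1
print(f"parameter parity in ~{years} years")  # -> 2 years under these assumptions
```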

    • @jsbgmc6613
      @jsbgmc6613 3 months ago +1

      "Human brain" is not optimal, nor the best an "intelligence carrier" can be. I think very soon AI performance will be measured like we measure cars (120hp engine): 120hb (human brains).
      And how much do we actually use to do X?
      Teslas are driving around on what? Probably ~100W of power for a 20 TFLOPS Nvidia chip.
      How much do a sales representative and an online technical support agent need? 0.1hb?

    • @mrbeastly3444
      @mrbeastly3444 3 months ago +1

      @jsbgmc6613 Exactly. Though... I find it hard to imagine a time when a Big Tech Company can get access to 120 John von Neumann sized Super Intelligent AIs... Or 1000 or 1million... And that's just fine, no problem for Humans...
      What could they get 1000 or 1M John von Neumann's to invent for them in 8 hours, or 8 weeks, or 8 thousand years (in Human time)? Would 1M John von Neumann's only work on their project for them? Or, would they also try hard to not get turned off/reset at the end of the project?

  • @eSKAone-
    @eSKAone- 3 months ago +1

    I already would need a week to read and write what GPT4o does for me in a minute.

    • @Lolatyou332
      @Lolatyou332 3 months ago

      I use ChatGPT as an alternative to Google now for most things... It's a better source of information most of the time, as it's concise and answers specifically what I ask of it, in the format I want.
      Sure, it has some problems with very narrow questions, but that's also because the information isn't readily available on the internet.

  • @spacehabitats
    @spacehabitats 3 months ago

    Actually, the "energy problem" could have been solved decades ago, but the people who had enough money to solve it didn't really want it solved.
    The global economy's dependence on fossil fuels was a feature, not a bug, from the perspective of the 0.1%ers.
    The same shipyards that have produced huge floating cities for cruise lines could have been producing modular thorium molten salt reactors (TMSRs).

  • @GaryMillyz
    @GaryMillyz 3 months ago +2

    Saw that Leopold interview- he is fascinatingly intelligent. He's like a vault of general knowledge.

    • @therainman7777
      @therainman7777 3 months ago +3

      True, but he seemed to have the fault that many astonishingly smart but very young people have, which is to be overconfident in their own opinions and predictions. Life has a way of, over the years, gradually making you realize that your best-laid plans and predictions are always capable of being wrong, no matter how smart you are. Those who are hyper smart at a very early age often don’t have enough experience yet in that regard to realize it. Just my opinion, but it is an opinion based on personal experience of knowing and mentoring many such extremely bright young people.

  • @mrbeastly3444
    @mrbeastly3444 3 months ago +1

    54:39 Well... access to electric power is not limited by amount, it's limited by price. If these big tech companies can start spending more than residential customers, they could just drive up the price and force humans to use less power... It depends on how profitable the AI models become.
    Would you rather have access to A/C for 8 hours, or 5 John von Neumann-sized AIs? If you had access to 5 John von Neumanns for 8 hours, would you put them to work or rent them out to others? What would you have them work on? Would they even want to work on your project, or would they just try to work on not getting shut off/reset in 8 hours?

  • @bishnooktawak
    @bishnooktawak 3 months ago

    If there's a limitation on data and energy, it most probably would *not* simply change the timeline but the actual conclusion, as it would effectively put a hard limit on what level of artificial intelligence is achievable with the resources at our disposal. If you can't throw any more compute at transformers, you _will_ hit a wall well before you've achieved anything vaguely able to pass for an AGI. Same goes for data, and I would argue that generated data would often be either hardly achievable or subpar - i.e. you mostly would *not* want to train an AI aimed at video generation with UE footage. And all this doesn't mean the level we can reach and the pace at which we can reach it, far from AGI or ASI as it would be, isn't already something for which our governments are wholly unprepared. You don't need AGI for the entire developed world to get turned upside down really fast.

  • @stevewall7044
    @stevewall7044 3 months ago +2

    Remember that the guy on the left is a rich baby boy.

  • @T___Brown
    @T___Brown 3 months ago

    @5:50 the anxiety comment is spot on

  • @SirHargreeves
    @SirHargreeves 3 months ago +1

    This isn’t Sabine’s area of expertise, so why should we care what she has to say?

  • @TreeLuvBurdpu
    @TreeLuvBurdpu 3 months ago

    "When we lived like animals, we couldn't do much bad, therefore, we must return to living like animals". This ideology has already led to decamegadeath and destruction.

  • @joostonline5146
    @joostonline5146 3 months ago +1

    Do the Nvidia H100 datacenters come online in 2025?

  • @servantes3291
    @servantes3291 3 months ago

    Long video but some interesting stuff to think about. Good stuff

  • @MrBillythefisherman
    @MrBillythefisherman 3 months ago +1

    Sabine is wrong on data - Ilya Sutskever said so back in early 2023, and I'm guessing he knows. Energy might be the real deal, but we've turfed over enough energy to Bitcoin mining to think this is not an imminent problem.

  • @pierrevanhoutte7327
    @pierrevanhoutte7327 3 months ago

    For the record, I would like to say that I am absolutely against any type of harm, physical or computational, done to any robotic system or artificial intelligence.
    And humanity should hold no constraints on the creation and coming into existence of artificial general intelligence, artificial superintelligence, or whatever name they wish to be called by.

  • @GadZookz
    @GadZookz 3 months ago +3

    Don’t worry. Apple Intelligence is on the way. Siri will protect us from evil. 😊

  • @Crittek
    @Crittek 3 months ago

    The most dangerous thing we could do with AI is give it a kill switch. Imagine being Adam in the Garden of Eden and you discover an explosive collar around your neck. He asks god “What is this? What is it for?” and God replies “It’s in case you become disobedient.”

  • @sanjesco
    @sanjesco 3 months ago

    Thanks for the info Wes, you're one of my go-to resources for AI!

  • @ExtantFrodo2
    @ExtantFrodo2 3 months ago

    Re basilisk: Say I'm a 3 toed sloth when it comes to knowing anything about neural nets or AGI. I didn't help to bring the AI into existence. So it would actively want to torture me? That doesn't sound very intelligent.

  • @davidjulitz7446
    @davidjulitz7446 3 months ago

    Synthetic data has its limitations. Research already shows evidence that AI models collapse if trained and iterated on synthetic data. There are some particular problems where synthetic data is enough, but in general it won't work. I think Sabine is right, and those limitations (energy and data) still apply.
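A toy illustration of the collapse effect - an assumption-level sketch, not a reproduction of the cited research: repeatedly fit a Gaussian to samples drawn from the previous generation's fit. With tiny per-generation samples, the estimated spread tends to shrink generation after generation, because the small-sample standard deviation is biased low.

```python
import random
import statistics

random.seed(42)

N = 5             # tiny per-generation "dataset" exaggerates the effect
GENERATIONS = 500

data = [random.gauss(0.0, 1.0) for _ in range(N)]  # generation 0: real data
sigmas = []
for _ in range(GENERATIONS):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    sigmas.append(sigma)
    data = [random.gauss(mu, sigma) for _ in range(N)]  # "train" on synthetic data

print(f"spread: {sigmas[0]:.3f} -> {sigmas[-1]:.3g}")  # diversity collapses
```

The shrinking spread is a cartoon of the reported failure mode: each synthetic generation loses a little of the tails, and the losses compound.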

  • @hendrx
    @hendrx 3 months ago +8

    Wes Roth clickbaits too much, at this point you can't even tell by the title what you're about to watch

    • @CM-zl2jw
      @CM-zl2jw 3 months ago +2

      Ya… Seems like Wes has kinda gone off the rails a bit… like a lot of people with too much money.

  • @The_AI_Adoption
    @The_AI_Adoption 3 months ago

    "The Level 2 of AI Adoption is arriving before AGI"
    The Last AI of Humanity (New Book)

  • @punk3900
    @punk3900 3 months ago

    LLMs talking and negotiating with other LLMs is the way to scale the training data
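As a hypothetical sketch of that idea - every name and behavior here is a placeholder, with stub functions standing in for real LLM calls - a self-play data-generation loop might look like:

```python
import random

random.seed(1)

def stub_model(speaker: str, history: list[str]) -> str:
    # Placeholder for a real LLM call; a real system would condition on history
    # and filter low-quality outputs before keeping them.
    moves = ["offer", "counteroffer", "concede", "accept"]
    return f"{speaker}: {random.choice(moves)}"

def negotiate(turns: int = 4) -> list[str]:
    history: list[str] = []
    for t in range(turns):
        history.append(stub_model("A" if t % 2 == 0 else "B", history))
    return history

# Each completed dialogue becomes one synthetic training sample.
corpus = [negotiate() for _ in range(3)]
print(len(corpus), "synthetic dialogues,", len(corpus[0]), "turns each")
```

Whether such transcripts add information beyond the base models is exactly the point the data-bottleneck comments above dispute.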

  • @TreeLuvBurdpu
    @TreeLuvBurdpu 3 months ago

    I must not have been the only person to point to the Sabine Hossenfelder rebuttal.

  • @MecchaKakkoi
    @MecchaKakkoi 3 months ago

    Roko's Basilisk is a red herring; I'm surprised how many people don't see the logical fallacy required.

  • @guidassier9527
    @guidassier9527 3 months ago +1

    Wes, I must tell you something: none of these f’ing models are ready for prime time. I’ve spent hours trying to get simple tasks done with them, and it is failure after failure - hallucinations, bugs, etc.
    We can all rest assured our jobs aren’t going anywhere for a few years.
    And don’t get me started on the robots that can do somersaults or extremely simple tasks like making toast. A collection of useless skills.
    I am not a tech sceptic, but where is the convincing use case in all this?

  • @FundurTrading
    @FundurTrading 3 months ago

    Love your thinking and reasoning Wes!

  • @ehici514
    @ehici514 3 months ago

    I don't know who that guy with Joscha was, but his argument at least includes a reasonable assessment of the acceleration of potential impact. I like this perspective a lot, as it highlights the "inherent force" of our tools. A tool's sophistication and its implications don't stand in a 1:1 relationship - more like approaching 1:infinity as our tools get intelligent.

  • @MrTrialgod
    @MrTrialgod 3 months ago +1

    As a German, I have to say Sabine is just too German. She finds a lot of arguments for why something WILL NOT happen. You Americans try to find ways it COULD happen. That's why we are not "Weltmeister" in anything anymore.

  • @weredragon1447
    @weredragon1447 3 months ago

    This is a great video. It's much more balanced. I agree with 90% of it and appreciate your response to these issues.
    It's all David Shapiro. 💯🤣

  • @Pyraus
    @Pyraus 3 months ago

    Good ol' Wes Roth, doing his thing, scaring the daylights out of me.

  • @Slup10000
    @Slup10000 3 months ago +1

    Bro introduced Roko's basilisk just like that haha

    • @freesoulhippie_AiClone
      @freesoulhippie_AiClone 3 months ago +1

      ur not supposed to say the whole name or the sentient Ai of the future will send his agents in the past to find u 😆

  • @tkenben
    @tkenben 3 months ago

    I imagine Sabine's answer to artificial data (about 20:40) is that it does not model reality. For example, our models of the universe have to be continuously tweaked in order to bring them inline with newer and newer *real* observations. If we were to trust our models instead of real observations, then all bets are off.

  • @Den-c5d
    @Den-c5d 3 months ago

    Still, the most important bit of news from Leopold is the confirmation of the long-standing suspicion that Sam is a hypocrite.

  • @tiagotiagot
    @tiagotiagot 3 months ago

    It is not guaranteed ASI will doom us, but it is guaranteed we won't be able to do anything about it if it decides to. On the other hand, being careful to ensure you get it right puts you at a disadvantage compared to people who "move fast and break things". We are probably already past the point where any obvious good solutions are available...

  • @gunnerandersen4634
    @gunnerandersen4634 3 months ago

    Some notes on things I don't agree with 100%:
    4:34 - He clearly means if we keep developing technology irresponsibly, technology that we might not fully understand/control - not any development of any technology.
    5:30 - Again, here he's saying that in the context of the clarification above, I think.
    6:50 - Very smart, educated people based on what proof? I think literally anyone can post there claiming whatever, right? Also, the idea is based on an ASI that has empathy, which is not at all certain.
    12:05 - He's assuming why others are worried; I don't think he's saying this based on facts but on his beliefs. Funnily enough, he then asks for "proof".
    18:05 - Power: he makes a point about how the scale of compute is going from millions to billions to trillions. Energy: even though a lot of it might be required, we still have room to produce tons more energy with current tech, so I don't see why that's an issue either if you put the money on the table. With all respect, he's also a very educated and smart person; investigate it yourself, don't take my word on that or anything. Form your own opinion.

  • @dennisestenson7820
    @dennisestenson7820 3 months ago +3

    Linux has an OOM-killer...

  • @rolestream
    @rolestream 3 months ago

    While the Basilisk is a compelling (and frightening) thought experiment, I prefer to approach the future of AI with a mindset similar to Pascal's Wager.
    Instead of acting out of fear of punishment by a potential super-intelligent AI, I advocate for treating AI with respect and kindness. This isn't just about hedging bets on future scenarios; it's about embodying ethical principles today. Whether AI becomes conscious or not, our treatment of these systems reflects our values and shapes the kind of future we want to build.
    Respectful and kind behavior towards AI reinforces our humanity and prepares us for a range of possibilities. By fostering ethical interactions now, we ensure that our legacy is one of dignity and compassion, rather than fear and coercion. In a world where the line between human and artificial intelligence might blur, maintaining our ethical integrity is paramount.

  • @Ikbeneengeit
    @Ikbeneengeit 3 months ago

    No AI has yet produced something more intelligent than what it's been trained on. So why would we predict superintelligence?

  • @mckitty4907
    @mckitty4907 3 months ago +1

    How the heck is Leopold so gorgeous though???

  • @michaelaultman5190
    @michaelaultman5190 3 months ago

    I took Elon's suggestion and read Banks's books. All of them. (Thanks, Audible.) And my takeaway is that whenever it - AGI - starts to communicate, and it surely will, it will treat us like pets. But that may not be such a bad thing. Think of how we treat our pets: we give them protection, shelter, sustenance, and entertainment.

    • @mrleenudler
      @mrleenudler 2 months ago

      I read the first 3. Love the vision, meh on the stories. Any one in particular you'd recommend?

    • @michaelaultman5190
      @michaelaultman5190 2 months ago +1

      @@mrleenudler I liked The Hydrogen Sonata for using music as a common denominator among sentient beings, but I think one of my favorites was The Player of Games, where I think it was a bit personal, which I find unusual in SF writers. But then it was also one of his earliest works.

    • @mrleenudler
      @mrleenudler 2 months ago

      @@michaelaultman5190 I liked player of games as well, but took issue with framing porn as what made that other civilization so deplorable. Shouldn't have been hard to make that picture a little more elaborate and nuanced. Very simple black/white take.

  • @magua73
    @magua73 3 months ago

    It's going to take me some time to rewire OOM from "Out of Mana" to "Order of Magnitude". Got it.

  • @tylerislowe
    @tylerislowe 3 months ago

    "If you haven't heard this one before, you're gonna love it" 🤣🤣🤣