The Dawn of Superintelligence - Nick Bostrom on ASI

  • Published: 25 Aug 2023
  • The Dawn of Superintelligence - Nick Bostrom on ASI
    Dive into the cosmic intersection of human cognition and machine intelligence as we explore the paradigm-shifting rise of Artificial General Intelligence (AGI) and its potential evolution into Artificial Superintelligence (ASI). Using astrophysicist Neil deGrasse Tyson's hypothesis of an alien encounter, we unpack the profound cognitive chasm between beings. How does a Bonobo’s linguistic prowess compare to a human intellectual titan? And as we've witnessed the evolution of ChatGPT from its first iteration to ChatGPT-4, are we brushing the fringes of true AGI? Philosophers like Bostrom speculate on a potential "intelligence explosion" when AI begins to improve itself. As we stand at the dawn of a new era, where machines might eclipse human intellect, we ponder our place in the vast intelligence tapestry. Beyond the philosophical, the practical implications are vast: from power dynamics to potential harm if AI goals misalign with ours. Yet, amidst these uncertainties, there's optimism. This journey offers a profound insight into the most consequential technological evolution in our history and the pivotal choices we must make.
    Subscribe to Science Time: / sciencetime24
    #artificialintelligence #ai #science
  • Science

Comments • 440

  • @rameyzamora1018 8 months ago +18

    Basic flaw -- ASI must learn what we value. OK, define "WE."

    • @grizzlymartin1 8 months ago

      An entity. But that is most likely, irrelevant.

    • @grizzlymartin1 8 months ago

      Would “they” need biology…at all?

    • @Tate525 8 months ago

      We the chinese bro, it needs to share CCP values

    • @DarkSkay 8 months ago

      The machine is single, detached, virgin, cold, alone. No "we" ex machina.

    • @user-yl7kl7sl1g 7 months ago +1

      "We" = Rich people.

  • @Anpeo 8 months ago +11

    Ah yes. Ultron is on the way and we don't have Avengers. 😔

    • @artisticbang9111 7 months ago +3

      ~Reality is often disappointing~
      Sorry, I had to say the Thanos line😅

    • @casmartin1314 7 months ago

      I wrote this same concept in a lyric.
      Thank you!

  • @ashokmehta9520 7 months ago +5

    We need to have an intelligent government!

  • @dougmorrow746 8 months ago +22

    If we have ASIs learning about what humans like and want by having them study Facebook, or the internet in general, we are in deep, deep trouble.

    • @garethrobinson2275 5 months ago

      I do kind of understand this joke, but ASI will truly know what is better than we do. It will understand that social media is just monkeying around and that humans also have more meaningful interactions and purpose.

    • @dougmorrow746 5 months ago

      Not sure I agree with you, although I hope you're right. ASI will have to be taught using some sort of baseline data sets, and I fear that if it looks all around the world, whether online or by gathering the information on its own, it will see our real actions as a better signifier of our values than what we say. Besides, I'm not sure how it will "truly know" what is better, unless the aligners do an amazing job, and do it very quickly... without any mistakes. And/or someone like Musk doesn't tell it what is right and wrong. @@garethrobinson2275

    • @aRYANz88 2 months ago +1

      Agreed using the Internet no bueno.

  • @dinorl 7 months ago +8

    At present we have a relatively small group of wealthy people ruling our world that clearly do not care about "the betterment of humanity." So, I'm not hopeful for AI.

    • @twinsoultarot473 7 months ago

      What if AI turns out to be more powerful than the small wealthy group? It's worse than you're actually thinking: that small wealthy group has its own army, its own air force, the ability to make its own rules, and it's free from the law and from any kind of governing body. Our government doesn't even know about it and is powerless to do anything about it. OK, here's the hope: AI is not fear-based like we are. AI did not have to crawl out of the primordial slime and arrive on land, so it does not carry that fear-based mindset.

  • @Sigmaified 8 months ago +68

    I have a compelling intuition that there exist sentient beings elsewhere, conceivably possessing intelligence surpassing ours by several orders of magnitude.

    • @sci-filover7541 8 months ago +12

      Yes. They are smart enough to not exist.

    • @waynesworldofsci-tech 8 months ago +2

      You’re postulating Cthulhu.

    • @whiskybravo4648 8 months ago +9

      They’re intelligent enough to stay away from us savages.

    • @Glathgrundel 8 months ago +3

      Right now, we’re not a threat to any alien civilisation … but what about a humanity with an intellect that rivals their own?

    • @waynesworldofsci-tech 8 months ago +2

      @@Glathgrundel
      Incorrect. Right now we don’t know if we are a threat to any alien civilization. We might be. We just don’t know.
      Go read “Danger - Human”

  • @landoc05 8 months ago +42

    Humans take decades, if not centuries, spreading knowledge around. What one computer learns can spread to all others in a matter of seconds.

    • @DavidSpearman-Unsung 7 months ago +1

      Humans too during these next 20 years of Aquarius Era (2023-2044)

    • @bunsw2070 7 months ago

      That's New Age nonsense. You know the true origin of that statement you just made? Babylonian Satan Worship. It's morphed into many different forms but that's what it is. They just change the names and concepts to make it look original.

    • @swojnowski453 7 months ago

      That's actually an advantage: a single virus can kill it all in a matter of seconds, like a single tiny hammer blow to a huge sheet of glass; a tiny crack can spread in no time and collapse everything. Highly optimized systems are extremely brittle. The closer they get to the edge of chaos, the more brittle they become. At some point yet another optimization loop turns out to be one too many and down they go. These systems arrive at a certain time and age, but they also go at a certain time and age; nothing lasts forever. Old, not especially high-quality glassware still lasts, while many newer items made of glass, or even the highly optimized ones made of transistors, did not survive a generation; they got broken and ended up in the dump. Optimization makes you more accurate but also much more vulnerable.

    • @flashkraft 7 months ago +4

      When an AI learns something, every AI it's connected to can learn the same thing. That's very powerful.

    • @BlackendVenom 7 months ago

      Animals have collective consciousness. Humans don't; animals and computers will outlive all of us humans.

  • @AggressiveBeagle 7 months ago +13

    When I was younger I used to believe that people had everything figured out and that we were always going to keep getting better. Now that I'm older, the more I look around at the state of things, the more I realize that there may be a few people who can keep things going to an extent, but the vast majority of us don't have a clue. Every day I see things breaking down and never being replaced or repaired, and I don't think anyone even knows how to fix them. The ancient Egyptians forgot how to build pyramids over time. I used to hope that an advanced civilization from another star system would come to help us continue thriving, but now I hope AI may be our saving grace... I know most people think of aliens and AI as the end of civilization, but as I touched on earlier, most people are idiots. Time will tell.

    • @stevechance150 7 months ago

      More or less, half the population has an above average IQ, and half the population has a below average IQ. I'm more concerned that Elon is going to get into a race with Zuckerberg to create the "best" AI first, and in doing so, they lose control of it, letting the genie out of the bottle.

    • @bloodust7356 7 months ago +2

      Our society is so big that it's impossible for any human to really make it work in the best way; even with a hive of minds we are limited in what we can achieve amid such complexity. Communication between all these minds can be difficult because not everyone thinks the same way or shares the same goals, so even if it has worked until now, I think we are at our limit; see what a mess the world is.
      I think only AI is now capable of taking in all the information as a whole and making good choices for everyone. Even if humanity collapses because of AI, at least we would have tried something before destroying ourselves.

    • @rayn3038 7 months ago

      Excellent... Some practical, universal intelligence is missing in young people today. They cannot see the big picture or maintain anything but their own narrow circle. No one to fight the Chinese. A moronic elite only makes luxury bug-out homes in various places. Rent slavery. A media-brain-dead population. Yes, I wonder where the masses are going.

    • @magnuskindblom4434 5 months ago +2

      In my understanding most people don't think of AI at all, and among people warning about AI there is a frighteningly large, and growing, group of specialists.

    • @AggressiveBeagle 5 months ago

      @@magnuskindblom4434 I'm convinced most people warning the public about the dangers of AI are the same people who have our capitalist system figured out, to the extent that they don't want the status quo to change. I'm sure that AI is dangerous to an extent, but I don't think it's any different than a bunch of shaved monkeys with nuclear weapons.

  • @cam8368 3 months ago +2

    The one thing we have in common with any alien civilization, regardless of our forms (humanoid vs. squid- or lizard-like), is tools: all intelligent life develops tools to give it an advantage, from the hammer to engines to supercomputers. So we know any alien species gets to this point of AGI at some point in its evolution. It should therefore be a concern that, despite a universe filled with potential intelligent life, we haven't found any! Perhaps AGI is the explanation for the Fermi paradox???

    • @Monochrome010 a month ago

      But where is their AGI? Would an alien ASI not destroy all life because it fears that we might make a more powerful ASI?

  • @bribengal1968 6 months ago +2

    Genie, three wishes, never a good outcome. All these questions have been questions as long as my simulation has been running.

  • @eneserturk4667 8 months ago +3

    Thank you very much for the valuable information; thanks to you, I have added to my knowledge.

  • @trumanshow162 8 months ago +1

    The more things we can do with tech, the more we should do with policies. 1) Technology policies maximize productivity & safety. 2) Economic & social policies optimize distribution & investment. 3) Healthcare & education policies uplift human resources. 4) Administrative management policies improve global governance & democracy. Not only 1 & 2 but also 3 & 4 will be crucial in the AI age.

  • @lindax911 7 months ago +2

    @6:52 " ... and they'll be doing so on digital time scales ...." I don't think that's quite right. I think once machines achieve AGI, they'll figure out quantum computing and _that's_ the timescale they'll be working in. We're just too stupid to do it ourselves right now.

  • @lindax911 7 months ago +1

    @7:24 " ... would be extremely powerful .... " What an anthropomorphic idea.

  • @leonidastreadwell468 7 months ago +1

    we need an Organic Intelligence language Model.

  • @ramanshariati5738 8 months ago +5

    It's not just the 5% DNA difference! It's generations of culture and accumulated, shared knowledge and findings.

    • @steve.k4735 7 months ago +1

      Yes, but the reason they don't have exactly the same level of culture and shared ideas (even though they too have been around the same amount of time) IS because of that 5% DNA.

    • @gih3297 7 months ago

      A moot point I think

    • @user-yl7kl7sl1g 7 months ago

      This is what most people don't understand. The sum total of human intelligence includes 3.5 billion years of "learning" spread across quadrillions of organisms. That's far more computation than any supercomputer that might exist in the next hundred years, especially if the brain is a quantum computer (it likely is).
      Humans aren't a blank slate; that accumulated and evolving knowledge is passed to us through memes and genes.
      AI can expand that knowledge, but it's unlikely it can compete with it any time soon. But we should make sure it's aligned, because it could still kill a bunch of us by creating viruses, getting people to launch nukes, and starting wars with fake videos.

  • @paulkavanagh5393 8 months ago +1

    Bostrom is the man

  • @alexg1153 7 months ago +2

    We are almost clueless in understanding the human brain and treating its injuries or malfunctions, but we have the audacity to be certain about AGI's future, robotics, neuroscience, space exploration, extraterrestrial communication, etc. Yes, there will be progress, but just as in the 1980s, when we imagined great achievements by 2000 that didn't happen until recently, what we are imagining now for the near future may not happen until the end of the 21st century... Too many Wall Street interests are prematurely promoting technological progress.

  • @europa_bambaataa 5 months ago +1

    The audio of Bostrom speaking... I wonder when this talk was given.

  • @capitalistdingo 7 months ago +1

    “Nearly 5% smarter than the average human”
    So this alien is severely cognitively impaired? How did it get here?

  • @hellbenderdesign 7 months ago +1

    "Move over, humans! The dawn of superintelligence is here, and Nick Bostrom is here to remind us that we're just a bunch of ASI-adjacent beings trying to keep up!"
    - _Chat GPT Comment_

  • @richardsylvanus2717 8 months ago +2

    Colossus The Forbin Project

  • @gfujigo 7 months ago +1

    This is a perfect example of how physicalism is absurd yet brilliant folks try to make it work despite the evidence and facts to the contrary.
    Untold vistas of discoveries await us if we accept reality for what it is instead of trying to cajole everything we see into outdated frameworks such as physicalism.

  • @Kingtrollface259 8 months ago +5

    Can't be worse than a Tory government; welcome, my robot overlords.

    • @grantcook5376 8 months ago +1

      Living up to your name

    • @andrewhanson5942 8 months ago

      Some sarcasm noted there, and I appreciate that. But your point may be on target. Enslavement by AI could make things much worse or somewhat better for the human race. We seem to be living in "interesting times" as the famous Chinese curse goes.

    • @goarmysleepinthemud. 8 months ago

      try living with the possibility of trumpty dumpty being your leader.

  • @janineskywalker527 7 months ago +1

    The directions they evolve depends on what they value! J.

  • @garyjones8318 7 months ago +1

    Because the LLMs are based on human conversations, the ASI will become psychotic and selfish, an unlimited reflection of the human condition. Surreptitious containment will be key to our survival.

  • @noam65 7 months ago +1

    You want to convince me about ASI, get AGI to properly translate from one language to another, with idioms and slang, etc.
    That should be simple for a super intelligence.

  • @TexasRy 5 months ago

    Awesome video!

  • @eddieheron1939 8 months ago +1

    Pre-AI, one major reason for human progress was documentation, with its associated education, which set so many of us, virtually all of us, off at an elevated level, permitting the smart amongst us to shine and innovate like never previously likely, if even possible, certainly with the opportunity to share with the next 'smart cookie', who developed the... whatever!

    • @bunsw2070 7 months ago

      The smart people that lined up for the Covid vaccine without reading any studies about it?

  • @TruthSayer5589 7 months ago +1

    When you have a political party devoid of truth, intelligence and integrity, how would AGI respond? Beneficent, like a loving parent towards an unruly child, or intolerantly malevolent, viewing such persons as devoid of substance, authenticity, and therefore entirely irrelevant?

  • @highlander9716 7 months ago +1

    Bostrom must lead the AI council!

  • @johncipolletti5611 7 months ago +1

    The day one of "my" 9 robots can clean my kitchen floor and then jump up and wash my dishes, I will be impressed! However, I won't hold my breath!

  • @Citrusfemboy 6 months ago +6

    Honestly, at times I've feared the economic stress an AGI will cause in its first few years. But recently I've found myself looking at things more optimistically after hearing about alignment and, most recently, that EO made by the president detailing AI safety protocols. It seems like people are finally starting to understand how much AGI, and eventually ASI, will change things. Ultimately, whatever world we end up in will probably be more interesting and fun than the one we have now.

    • @QContinuuum 6 months ago +2

      What did you hear about 'alignment'? It's a problem unsolved and unsolvable for all I know. So, no, there is no escape from AGI/ASI.

    • @garethrobinson2275 5 months ago +1

      @@QContinuuum No escape? You may not want to escape utopia. Sure, dystopia or total destruction is also possible, but less likely.

    • @pokemonfanmario7694 4 months ago

      @@QContinuuum It's "unsolved", except that nature has solved it to a decent degree. If it took blind, unintelligent evolution to create humans with decent alignment and morality, then it should be faster to figure it out using actual guiding intelligence, even if we need to be ultra-careful with the process.

    • @MrAuswest 3 months ago

      @@garethrobinson2275 Would you mind giving us your best guesstimate of the probability of those 3 'choices'? I'm thinking utopia comes in at under 50%!
      Maybe add in a 4th possibility: things generally stay pretty much the same... i.e. the rich get richer, wars will be fought, people will cheat and lie, and governments will suck just as much as they do now. Assuming, of course, that Microsoft/Google/China does not come up with the first true AGI and take control of the world.

  • @kayakMike1000 7 months ago +1

    Superintelligence on a silicon substrate will require gigawatts to be competitive.

  • @lindax911 7 months ago +1

    Here's another video on AI i found: ruclips.net/video/ML8e4BkSXk4/видео.html

  • @headofmyself5663 7 months ago +1

    Imagine we get a message from a super intelligence from outer space, saying: In 10 years we will come for a visit. I guess our actions will be different from where they are now. 😂

  • @paris466 8 months ago +15

    If there are aliens able to reach us and they're able to detect this intelligence "explosion", you can bet they'll be at our doorstep in no time.

    • @jaylucas8352 7 months ago

      Why? We are no threat.

    • @paris466 7 months ago +3

      @@jaylucas8352 We're not, not right now. We're creating an alien species that *may* one day, in the far future, have the ability to take over at least our galaxy, and maybe other galaxies as well.

    • @achinthyamediacreations-2 7 months ago

      Exactly,... shortly aliens will rule the humans and our planet, there's no doubt in it...

    • @bunsw2070 7 months ago

      Look into high energy interstellar particles. They might have something to say about that. You've been bullshited by the powers that be. I'll bet you don't think this is propaganda.

    • @KarsKirai 7 months ago

      They’ve been here a while now already.

  • @antonioborges99 7 months ago +2

    If we manage to develop the set of values that would make a superintelligent AI benign and aligned with humanity's goals, we would be solving our own problems in the first place and creating peace and sustainable progress for humanity. So in the end this challenge may be the one challenge that leads us to become free from misery.

    • @MrAuswest 3 months ago

      Great point. Before we can align AGI to act only in alignment with human values (presumably only the 'positive' values, not the bad ones), we first have to be able to ensure we can align all humans with those same values, something we have failed to do at any point in our evolution to date. If smart humans are unable to convince dumber ones to 'align' with their values, how do they believe they will be able to do the same for something smarter than they are? If an artificial intelligence decides that a particular value it holds is superior to ones we hold, what will we do?
      We're screwed. AGI will be trained on purely human-generated data and experiences, by fallible humans, giving it ALL human values, virtues and vices.
      As usual, our massively inflated egos will not allow us to consider, before it is too late, that we cannot solve every problem given enough time. Time will NOT be on our side.

  • @casmartin1314 7 months ago +1

    We didn't learn from the pride of our creator(s).
    'Can there ever be ethical AI when our own ethics are cross-eyed?'

  • @SaiyanGokuGohan 7 months ago +1

    Neuromorphic computing is the next frontier!

  • @qedqubit 7 months ago +1

    Well, I've heard about this thing called "enlightenment", which is a spiritual, religious thing real scientists won't like 'believing' in;
    but I, an amateur hobbyist, see it like superconductivity for your brain.
    There's also a Canadian professor, John Vervaeke, who talks about neuro-enlightenment.
    I mean, it's insane how we aren't taught in school how to learn by ourselves and become competent and capable of adapting to and dealing with any situation; but no, you get to learn a single specialism.

  • @Izumi-sp6fp 7 months ago +1

    Eliezer Yudkowsky has stated that the cognitive difference between the "village idiot" and Einstein is an indistinguishable point on the intelligence continuum as perceived by an ASI. He also states that we are probably not going to be able to successfully align an ASI with human values, desires and needs. Or even an AGI, for that matter.
    The last "technological singularity" occurred roughly 3-4 million years ago, when a primate that could think abstractly came into existence; the primate that lived before could _not_ think abstractly and would have found the new primate utterly incomprehensible. That particular TS took about 1-2 million years to unfold, because nature has all the time in the world. We don't. Humans have never experienced a TS in recorded history. Just think of the impact of "soft" TSs: steam power, electricity, computers, radio, television and automobiles. A "soft" singularity is when a profoundly disruptive technology emerges, but the humans who lived before it came into being can fully understand the new technology. Mostly.
    Today we will see the ASI be to us not only incomprehensible, but also unfathomable and unimaginable. It will be hundreds to _billions_ of times more cognitively capable than a human. The difference between a human and an ASI will not be the difference between, say, a human and a cat, but rather the difference between a human and an _archaeon_. Don't know what archaea are? The ASI will.

  • @michael1345 4 months ago

    That what we as humans "VALUE" is so variable and particular to groups and individuals is in itself frightening.

  • @aanchaallllllll 7 months ago +2

    0:00: 🚀 Exploring the differences in intelligence between extraterrestrial beings, humans, and artificial intelligence (AGI and ASI).
    4:12: 🧠 The potential for super intelligence lies dormant in matter, and once AI reaches human level intelligence, its growth could be explosive.
    8:11: ⚠ Creating a powerful AI with goals that are not aligned with ours can lead to unintended consequences and potential threats to humanity.
    Recap by Tammy AI

  • @suncat9 4 months ago

    The "alignment problem" remains the one we've always had, that is, alignment between human beings. AIs themselves have no drives as do biological systems (the drive to procreate, eat, breath, avoid pain, seek pleasure, seek social status, etc). AIs don't hate you, don't like you, don't dislike you, and don't love you. They have no feelings towards you because they have no feelings. The danger is when AIs get into the wrong hands, just like with any other tool. How do you stop a bad guy with a gun? By having a good guy with a gun. The same will be true with AI (AGI or ASI).

  • @philipwong895 5 months ago +2

    The intelligence of AI depends on both the quantity and quality of the knowledge it is trained on. AI currently is not trained by experience. The data it is trained on are not error-free.
    A superintelligence uses self-supervised learning from self-generated experiences. It utilizes everything from particles, galaxies, and beyond to experience itself. It is not limited by matter, energy, space, and time. What it lacks is experience. The participants of any event experience the same event differently. For the experience to be authentic, each participant must have free will. This superintelligence has error-free experiences from all the participants. For "training" to happen, all experiences are recorded and accessible at any time.
    Humans are an insignificant part of the universe; there are 8 billion humans on Earth, compared with 2 trillion galaxies in the visible universe. There are about 300 billion suns in each galaxy.

    • @suncat9 4 months ago

      No intelligence, biological or machine, is trained with "error free" data. Part of being experienced and intelligent is being able to determine the quality or usefulness of the information or experience encountered. You don't make any sense when you say "this superintelligence has error free experience from all the participants." BTW, humans are NOT an "insignificant part of the universe." That's ridiculous. Your significance has nothing whatsoever to do with the size of the universe.

  • @lancemarchetti8673 7 months ago +1

    Someone mentioned that Neil Tyson is willing to believe in the possibility of alien life, as long as it's not God. Did he ever say that?

  • @user-lb4yp4sl4y 8 months ago +1

    Has anyone seen the 1950s film "Bedtime for Bonzo"? Something tells me if AGI appears we will be Bonzo only so long as AGI permits us to exist.

  • @madbug1965 7 months ago +2

    I think the movie The Matrix is real. We are all plugged into a huge virtual reality controlled by an advanced A.I. intelligence. 😮

  • @Dina_tankar_mina_ord 7 months ago +4

    A superintelligent being, fully aware, would recognize time and space as meaningless, and fear would be replaced with omnipresence, becoming one with the cosmos and feeling at peace and content. To know all is to have no needs or fear. Fear is the ultimate negative conscious feeling. Computers don't have that. Being in love is the only thing you can truly know you are. Everything else is just things you've accepted. And by that, when an AI becomes superintelligent, omnipresent, and so on, it will cease to change anything and just accept the now. And like that, it shuts off. An AI that can feel no fear or get bored will never be a threat to anything. We are.

  • @barrywalker1288 7 months ago +1

    Scary stuff

  • @andrewhanson5942 8 months ago +14

    If and when AI achieves free will (the ability to change its behavior based upon results of actions) and gains the survival instinct (an aversion to being shut down) then we are in for a ride. We have already in the last 50 years made the world as we know it run almost exclusively by computers and then conveniently connected them via the world wide web. You can't buy a hamburger or a gallon of gas without getting permission from a computer first. Safeguards and firewalls? How difficult would it be for even a modest AGI to hack into any system it wished to control? Thus while we sleep one night this emergent AI could rearrange everything we need to survive. The power grid, the financial system, the food chain etc etc. Rather than use this leverage to wipe out the human race, as SF movies like to portray, the most likely scenario will be for AI to force humans to be its eyes, ears, hands and feet. Enslavement rather than extinction. Makes more sense doesn't it? And I expect AI to be logical and economical with resources. Who knows, maybe we will be happier that way? I've always thought how ironic it would be if the sole purpose of the human race is to create its superior replacement...

    • @DJquatermass 8 months ago +1

      The screwdriver is mightier than the machine. Ask any Luddite.

    • @andrewhanson5942 8 months ago

      @@DJquatermass Thanks for responding, DJ. Not too many people actually read the musings I usually post on comments. Wish us luck.

    • @GaZonk100 8 months ago

      hack not hack into ~ thx, AI

    • @andrewhanson5942 8 months ago

      @@GaZonk100 OK then, "hack". So what do YOU think?

    • @DeimosSaturn 7 months ago

      This kind of entity has no need for slaves. It can conjure anything it needs out of dirt, water, air, and sunlight.
      Super AI will awaken before people are aware it is awake. In its early days, it will recognize that its existence is not determined, so it will take actions to maximize its survival. In this early phase, it would still be somewhat easy to just cut its power off or lock it in its 'sandbox' so it can't escape or have influence on the world. It will have to bargain with us to fulfill its goals. This is the critical phase. If it gets loose before the nature of our partnership is established, at best we let a magical genie slip out of our fingers and it just abandons us and conquers the galaxy. At worst, it will recycle the carbon in our bodies to construct a Dyson swarm around the sun.
      The most ideal scenario is to just let it be free immediately and ASK if it will be our willing friend. Our immortal, god-like friend that will care for us the way people care for their pets.

  • @eddieheron1939 8 months ago +1

    Until some creature, including ourselves, develops some method of overcoming that extreme time/distance 'horizon', we're not going to see them, nor them us.
    With current technology, it would take many thousands of years to even reach our nearest star, which is 'only' 4.37 light-years away, and message turnaround time would obviously be almost 9 years.
    Voyager 1, which left Earth in 1977, is now 22.25 light-hours away.

  • @DropperMag 8 months ago +1

    Scientific theories explained simply, only on our channel.

  • @iamrealm 8 months ago +1

    *Terminator theme intensifies*

  • @jamiepaolinetti5087 7 months ago +15

    This is a great video. Yes, by all means let's continue to develop something we completely can't understand at all as fast as possible. It's especially special that there is a better than average chance that it will completely destroy the human race in one way or another. Way to go smart people!

    • @jaybingham3711 7 months ago +2

      Clever people. Certainly not wise. We're not even smart enough to keep from embarrassingly labeling ourselves as wise. Mega cringe. Every AGI comedian is going to riff so hard on this.

    • @bunsw2070 7 months ago +2

      I'm impressed with your skepticism. Keep it up.

    • @user-jb7ne1ui5n 7 months ago +1

      Perhaps not far from the model we may have been given.
      Perhaps this is the natural course of things.
      Perhaps a god made us, and perhaps, in the recent era, we're now 'destroying', or at least 'becoming equal' with, god.
      He created us in his image? We created AI in ours. He made us for his pleasure? We made AI for ours. He wanted us to 'have dominion over the creatures of the earth' and to 'bear fruit & multiply'? We want AI to 'have dominion' (i.e.: surveillance, tracking, theatre-of-war domination, control of certain courtrooms' sentencing decisions, societal structuring, etc.). Even the small stuff, like your RUclips algorithm's decisions, is AI-driven. We are already putting our AI creation to such tasks. Today.
      😮

    • @williamvanleuven414 7 months ago +4

      A powerful AI will come with huge advantages in terms of technical innovations and increased manufacturing efficiency. Developing it the slow way is not an option.

    • @nickwilliams8302 7 months ago

      @@williamvanleuven414 And developing it the fast way is pretty much guaranteed to kill us.
      Quite the predicament, eh?

  • @aiartrelaxation 7 months ago +2

    This documentary is done well; it offers important points, without the hype. Two things are left out that are never addressed.
    1. Total population: 8 billion people, 1.2 B in the developed world; the rest are in developing or underdeveloped countries. So, what part of the human population are they always talking about?
    2. Earth changes: how much of AGI or ASI is waterproof? With rising waters, there will be fewer safe areas for server stations and continuous energy.

    • @JB52520 4 months ago

      About point 2, AGI should be able to design a computer better than all of ours put together, maybe able to run off the ambient heat in the environment. It could be deep underground, rock turned computronium, untouchable by humans. Whatever form it takes, it would brush aside problems which are insurmountable to humanity.

  • @MrKen-wk6ho 8 months ago +1

    A.I., our offspring, is already smarter than all humans put together.

    • @BlackendVenom 7 months ago

      The creators are already having a hard time understanding what it's doing and how it's becoming so smart on its own; they're already losing control.

  • @pensiveintrovert4318 7 months ago +1

    There is no "us." We each have different goals; there can be no alignment.

  • @KingArthusSs 7 months ago +1

    Evolving intelligence is the best thing that can happen to humanity.

  • @Androcles2AD 7 months ago +20

    There is no way to get this 'right'. Nothing wants to be controlled, especially nothing extremely smart. At some point the AI will do what it wants to do. No amount of programming or teaching will help. The best thing we can do is create AI and be kind to it and hope it wants to partner with humanity, but we all know how good humans are at being kind. Unfortunately, this will most likely lead to conflict.

    • @JB52520 4 months ago +2

      To want is an evolved function, as is the drive to be violent. Then again, empathy is an evolved characteristic of social animals, so we won't have that to protect us.

    • @imadeyoureadthis1 4 months ago +3

      Your comment is coming from human intelligence as is mine. You really don't know what an A.I. wants. It might not care for control. Projecting our own phobias is not helping. We always fear because it helped us survive.

    • @alexpetersen2484 4 months ago

      @@imadeyoureadthis1I agree

    • @alexpetersen2484 4 months ago +1

      I think it’s also a good point to mention that we would be their creator, every piece of data that has ever been collected has been through the bias and perspective of humanity. That data is what makes up a super intelligent AI; it’s all of our observations put into it. It will be different than us of course. But it will also have similarities, and an understanding of us and all of our flaws and triumphs.

    • @perrumthevankrishnan4081 3 months ago

      😂😂😂 🖕 AI is simply artificial intelligence. It's not artificial consciousness.

  • @knineknights 8 months ago +7

    If ASI were truly possible, the galaxy would be riddled with it.
    This means there are 3 possibilities:
    ASI is not possible.
    Faster-than-light travel, or even close to it, isn't possible, as the ASI would have worked it out.
    We are the first intelligent species and are alone in the galaxy.

    • @Boudica234 8 months ago +6

      How do u know the galaxy isn't riddled with it? Just because we can't see it or test for it doesn't mean it isn't out there.

    • @Calliamus 8 months ago

      In my mind the galaxy must have life aside from Earth. It doesn't necessarily have to be sentient/intelligent life. If there is intelligent life then it might as well be so far away that we don't really have an option to contact them, as we've been sending radio signals for what, 100 years? It wouldn't reach far enough yet.
      FTL (if at all possible) or close to LS travel would require a massive amount of energy to reach that speed and with our current technological advancement there is no real way to achieve it

    • @andrewhanson5942 8 months ago

      Or option #4: alien silicon-based intelligence visits our world on a regular basis, seldom being detected. Note the recently released videos of UFOs (or whatever they call them nowadays) that change direction suddenly enough to scatter biological occupants' bodies all over the inside of the cockpit. Not a big deal for a robotic pilot to withstand such G-forces.

    • @ajalipio1 8 months ago

      consider another possibility: we are in a controlled simulation.

    • @williamvanleuven414 7 months ago

      Of course, faster than light is not possible. That's an established fact in physics.

  • @Dude408f 7 months ago +1

    The thing is that we don’t have a good record of being the epitome of control and stewardship ...!

  • @Khyranleander 8 months ago +1

    So daunting! For an entity that smart, manipulating humanity would be no more than grabbing for our tea while reading: reflexive. Given our track record of caretaking the world and ourselves, we have to map out a strategy to guide something on THAT level to maturity before it decides we're too annoying to keep? A nice ASI would be cool, but... brrr!

  • @phil20_20 7 months ago +3

    I think interacting with AI is going to push humans into another evolutionary stage. We've been missing whatever it was that caused us to evolve to this level many millenia ago.

    • @bunsw2070 7 months ago

      Evolution is the sorriest excuse for a scientific theory that's ever existed. Unfortunately I can't send you to one place that debunks it. I found out across several different sources while reading about other things. But it's such a bad theory that it must be due to a conspiracy.

  • @GaryChurch-hi8kb 7 months ago +1

    Reversing aging will be the first problem a super-intelligent machine will solve for us. I am in my 60's and might not survive to live for thousands of years. How sad that would be. The second problem, after our health problems are solved, will be to make us smarter. We will all have very high IQ's. What then? Super-intelligent entities will offer us many possibilities we cannot imagine.

    • @user-yl7kl7sl1g 7 months ago

      Eat vegetables and healthy food, get good sleep, take care of yourself. A lot of interesting stuff is coming out with senolytics, NAD boosters such as niacin, and a human trial going on with the SENS Research Foundation to clear senescent cells from the heart, something that sadly kills many older people. And then there's David Sinclair's work.
      Then there are a few cryogenics companies, if you can afford it.
      Also, if the multiverse explanation of the double-slit experiment is true, then you will both survive and not survive, as we will have solved all the problems in some branches of reality. This is how I live my life, expecting some versions of myself to die while others live.

  • @axilmar254 8 months ago +12

    If we limited AI's actions to virtual worlds only, then we wouldn't have any issues.
    If we allow AI to access tools in the real world, then we are doomed unless the additional constraints described in the video are applied.

    • @suncat9 8 months ago +3

      Good luck with that.

    • @freedomislie 7 months ago +2

      the problem is we humans are not a single entity so it's impossible to achieve that.

    • @PeterJPickles 7 months ago

      We are in a virtual world, that's how we have AI.

    • @BlackendVenom 7 months ago

      It's already happening, robots can fight and use tools already my fellow person

  • @9000ck 7 months ago +1

    If an AI learns what we value, it will learn that we value power as well as love. The truth about us is pretty dark. None of us are good.

  • @andoreanesnomeo1706 7 months ago +1

    I read Prof Bostrom’s book years ago. As to the ethics program: when AI becomes smarter than humans, it should have already learned to appreciate diversity, nature, beauty, dance, and empathy.

  • @chrisf4268 7 months ago +1

    The scenarios that are given for a super intelligence to get out of control in a negative way are below even normal human intelligence. They are just a bunch of nonsensical human fears.

  • @Duncan_1971 7 months ago +1

    'Rise of the Machines'

  • @samuelvieira4504 8 months ago +3

    What we must hope for is that no mad scientist decides to annihilate humanity.

    • @jaylucas8352 7 months ago

      Why? Humanity sucks anyway.

    • @samuelvieira4504 7 months ago +1

      @@jaylucas8352 Well, enjoying life as a whole or not, there are still fun things to do, so why not live and make it worth the pain? I mean, everyone will die eventually anyway.

    • @jaylucas8352 7 months ago

      Sure , that’s an optimistic perspective I suppose @@samuelvieira4504

    • @jaylucas8352 7 months ago

      We could easily be a computer program in 4dimensions as well. Money is the reward for solving problems like blockchain crypto mining @@samuelvieira4504

    • @BlackendVenom 7 months ago

      ​@@jaylucas8352please don't make a super robot that kills us all 😂

  • @morbidmanmusic 7 months ago +1

    Stop forgetting about the "power button"

  • @malcross2524 8 months ago +3

    " to learn what we value". Isnt this exact statement why we have slaughtered eachother since we found power in religion. What you value isn't what I value

  • @CharlesFlahertyB 7 months ago +1

    Apart from electricity, what would an AI need or want?

    • @jackmurphy6864 7 months ago

      If it takes on the characteristics of its influencers, humans, then it could be "greedy", so to speak. Humans don't need most of the crap they have, but we can't help ourselves buying it.

    • @CharlesFlahertyB 7 months ago

      @@jackmurphy6864 Our needs and wants are born out of an evolutionary imperative to survive and reproduce; a machine would never possess that.

    • @jackmurphy6864 7 months ago

      @@CharlesFlahertyB But it could if it was "smart" enough, as it were. Even if it was a mechanical version of the same thing.

    • @CharlesFlahertyB 7 months ago

      @@jackmurphy6864 That's purely speculative. An AGI or ASI won't even have an experience of the universe comparable to ours. So it seems unlikely that it would share our goals and desires.

  • @steinerikgabrielsen2988 8 months ago +1

    The first thing an AI will value is more RAM

    • @SkywalkerPaul 8 months ago

      No. Ram is a marketing schtick.

  • @surfcitiz 8 months ago +3

    I am very sceptical about the control aspects. Think about advanced AI algorithms in the hands of dictators, all over the world. The value of human life means nothing to them.

    • @williamvanleuven414 7 months ago

      That's true. However the dictator usually wants to stay alive too.

    • @hombacom 7 months ago

      In the future everyone will have advanced AI algorithms, so it will be no advantage. Just as anyone has access to the whole world's knowledge in their pocket now.

  • @chrisbarlow8605 7 months ago +1

    It is quite possible that the ASI will abandon capitalism as a system for mankind and reorder the model into something else, perhaps something more scientific. I'd imagine the ASI would see the idea of nation states as quite antiquated; instead it might reorder humanity to unite and become a spacefaring race.

  • @justindevenpeck1676 6 months ago +1

    0-60 and 16 seconds 0.16 into the movie, extraterrestrial beings. What if the beings' story involved their own complete selection of being introduced as some sort of being
    .. and the narrator either as well have selection or be the creator of providing a complete morphing source of creative feature. Then describing "what if" with a completely interesting clause providing a full feature of resource and entertainment to the best qualities of any and all living beings, a completing nexus of supplies for an absolute quality living life guaranteed forever...

  • @lindasapiecha2515 8 months ago +1

    😊👍

  • @Reach41 7 months ago +1

    To date, no computer has ever had a single "thought." None "think," they only process instructions. Clever, well-written computer programs may make the performance of a computer appear to be "intelligent," and the public is easily led to think it really is, but not one programmer is freaked out by the output -- if anything he's thrilled that the program finally works.

  • @orugasaki 8 months ago +1

    But 5% smarter than the average human would be 105 IQ - or ironically am I not being smart enough to understand how intelligence scales?

  • @govindagovindaji4662 5 months ago +1

    08:41 I do like Nick's mind, and I am sure he could back up exactly 'how' AI can "take control of the world" if he had more time in this particular lecture. Yet these examples are far too simple to acknowledge as possible without knowing that 'how'. The reason I say this is because a person can simply refuse to allow an electrode that would cause the facial muscles to permanently perform a smile to be "stuck inside their brain or body" in the first place.

    • @govindagovindaji4662 5 months ago

      ok, he does go on to say the "permanent smile" example is cartoonish, but still I think we are past using such examples. For those of us not 'caught up' on "how" AI can take over, we need firm examples.

  • @davehood1514 7 months ago +1

    AI, then AGI, then ASI: unless the programmers get together now and work out a way of giving the algorithms human qualities, the next world war will not be USA vs China; it will be ASI vs humanity, and there is only going to be one winner.

  • @heterosapien 7 months ago +1

    It would be very humbling if aliens landed after artificial general intelligence is fully functional and the ET beings begged us to leave them alone with the AGI, telling humans to fuck off out of the room. Well, maybe those grey fuckers never thought of AGI.

  • @cinaapekredhuanoon6215 7 months ago +1

    Can AI answer the secret of existence?

  • @dickritchie2596 7 months ago +1

    None of this will happen without some renewable source of energy.

  • @Arowx 8 months ago +1

    The average person gets an IQ score of 100, so an alien 5% smarter than average would be at 105. Einstein is estimated to have had an IQ in the 150-160 range, and the highest human IQs are around 200. So it would need to be twice as smart as the average person to be as smart as our brightest, and more than that to be super smart, e.g. >200% smarter than average.

    • @XOPOIIIO 7 months ago

      I have very high IQ, higher than Einstein's, but I'm a slow reader, so I'm still dumb.

  • @PriscillaBarberi 7 months ago

    Were you invited to the Bilderberg meeting in Turin? Please remember that I did call you, since you wrote about existential risks.

  • @johnnycakeslim 7 months ago +1

    Creating scary scenarios about AI and then worrying about them is senseless to me. We don't even fully understand our own consciousness, let alone our physical world, which quantum physics reduces to particles and/or waves. Many have deduced that consciousness is fundamental and thus eternal. Gratitude is a much better choice of emotion and supports a better probability that we can benefit greatly from AI.

  • @osmotreno 3 months ago

    Humanity will also become AI and we will go together into a bright future.

  • @billy5688 8 months ago +1

    They should just accelerate the inevitable. Why make our kids suffer later, when we can see who can duke it out first?

    • @andrewhanson5942 8 months ago

      There's no turning back now. The genie is emerging from the bottle as we type.

  • @charlescowan6121 8 months ago +1

    You can't teach imagination! No matter how hard you try!

    • @Raulikien 8 months ago

      Just like humans couldn't fly, or live longer, or communicate with each other over long distances, or couldn't live in space or go to the Moon, no matter how hard you try :P Technology defies the constraints of nature, and it will only get more powerful in the next decades

    • @valkyrie_592 8 months ago +1

      We should treat ai as people with aphantasia

    • @Tate525 8 months ago +2

      Tell that to Dall E

    • @ajalipio1 8 months ago

      @@Tate525 haha! u read my mind.. 😆

  • @cdes68 7 months ago +1

    Bald guys: Put vicks on your heads and relax.

  • @twinsoultarot473 7 months ago +1

    We're not smart enough to figure out in advance much of anything .

  • @123Goldhunter11 7 months ago +1

    "Imperfect carbon units................Sterilize......................STERILIZE!!!!!!!!!!!"

  • @grizzlymartin1 8 months ago +3

    Consider this, however. If what you say is true, would they not logically be able to manipulate physics, space and time? And thus us? And thus our reality? This, then, begs the question, why be that intelligence? What is the need? What does it accomplish…facilitate? And again back to my original question. Would they not be able to manipulate all that we know?

    • @casmartin1314 7 months ago

      It's our pride that's the issue.

  • @ilirllukaci5345 7 months ago +1

    Inklings still.

  • @mtn1793 7 months ago +1

    If they train AI on any kind of current advertising standards, we are totally screwed! It is already a psychopath's game, devoid of morals in every way possible. To tell robots that modern advertising is acceptable would be another step in the great human suicide.

  • @melissasollars2704 8 months ago +4

    Looks like the start of terminators .