An AI... Utopia? (Nick Bostrom, Oxford)

  • Published: 15 Apr 2024
  • The Michael Shermer Show # 423
    Nick Bostrom’s previous book, Superintelligence, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong.
    But what if things go right?
    Bostrom and Shermer discuss: An AI Utopia and Protopia • Trekonomics, post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • The Fermi paradox: Where is everyone? • mind uploading and immortality • Google’s Gemini AI debacle • LLMs, ChatGPT, and beyond • How would we know if an AI system was sentient?
    Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under.
    SUPPORT THE PODCAST
    If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
    www.skeptic.com/donate/
    #michaelshermer
    #skeptic
    Listen to The Michael Shermer Show or subscribe directly on YouTube, Apple Podcasts, Spotify, Amazon Music, and Google Podcasts.
    www.skeptic.com/michael-sherm...
  • Science

Comments • 161

  • @ili626
    @ili626 1 month ago +14

    I’d love to listen to a discussion between Yuval Harari and Nick Bostrom

  • @alexkaa
    @alexkaa 1 month ago +17

    Strange moderator, with often rather superficial contributions; very good guest - Nick Bostrom is just on another level.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +4

      Well, to be fair, this is kind of a complex subject, with very few historical or real-world examples to reference (so far). So it does require a bunch of reading, research, thought experiments, etc... This is a tough one... Good on Michael for doing the interview and taking on the challenge! ;)

  • @LukasNajjar
    @LukasNajjar 1 month ago +18

    Nick was great here.

    • @skoto8219
      @skoto8219 1 month ago +2

      I will definitely watch this then because I’ve never seen an interview with Nick that I would say went great (granted, n = maybe 5.) Decent chance I would’ve passed if I hadn’t seen this comment and the 10 likes. Thanks!

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +4

      @@skoto8219 Definitely check out Nick's books, papers, etc. - Superintelligence, the simulation hypothesis, and so on. No wild speculation; everything is based on well-thought-out logical reasoning...

  • @jmunkki
    @jmunkki 1 month ago +15

    To understand what people will do in a world where they are obsolete, and why, you just have to look at already existing activities that serve no practical purpose or that achieve a practical end in a non-optimal way: playing World of Warcraft, windsurfing, photography, playing chess, making your own furniture or clothes, etc. The fact that humans are obsolete at chess hasn't stopped them from playing the game. The same will apply to writing books, making art and music, and inventing things. I think a lot of people will become pleasure addicts (drugs of some sort, direct brain stimulation, or just video games), but not all.

    • @minimal3734
      @minimal3734 1 month ago +7

      Some predict the demise of human creativity or even art itself. I, on the other hand, only see the deindustrialization of art. In the future, art will be made for art's sake. I don't think that's a disadvantage.

    • @DailyTuna
      @DailyTuna 1 month ago

      The data is already there. It's called welfare: the activities of people on welfare are exactly what will happen with the majority of humanity.

    • @planetmuskvlog3047
      @planetmuskvlog3047 1 month ago

      Pastimes once shamed as wastes of time may become all we have time for in an A.I. future 🌟

    • @mickelodiansurname9578
      @mickelodiansurname9578 1 month ago

      These are all fabulous ideas... but umm... okay, so 50% of the population of the world has below-average intelligence... you will not be retraining them to write a novel or do flower arranging. And in the Industrial Revolution the solution was that they went into a poorhouse and eventually died of old age. Even if it were agreed that we throw 80% of the population on the scrap heap... well, we don't have the time for an Industrial Revolution-speed rollout of AI. It will be 50 to 100 times faster than that! Not seeing that being a winner either! You are forgetting that the entirety of human civilization relies on the dominance of humans as a value in society. Remove that and you have no society. Remove it too fast, and you have a revolution alright.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +2

      "Wireheading"... yeah... If an ASI wants all Humans to be "happy", it could just do that to all the Humans and not have to worry about them any more... The Matrix...

  • @sofvines3940
    @sofvines3940 7 days ago

    Was that Pinker Michael was quoting when he said "humans would have to be smart enough to create AI but dumb enough to give it power"? That's actually EXACTLY what we are known for! We are consistently leaping over "should we" to see if "we can" 😮

  • @Walter5850
    @Walter5850 1 month ago +4

    My guy here asking Nick Bostrom where he stands on the simulation hypothesis xD
    1:16:52
    Where do you stand on the simulation hypothesis?
    Well I believe in the simulation argument, having originated that...

  • @exnihilo415
    @exnihilo415 1 month ago +1

    Shout out to Nick's teeth for enduring the grinding they are subjected to during the interview, given Nick's frustration at Michael's lack of imagination about the scope of the possible in any of these utopias. Zero chance Michael did more than breeze through the book and crib a few quotes.

  • @TheRealStructurer
    @TheRealStructurer 1 month ago +7

    Some funny questions but solid answers...
    Thanks for sharing 👍🏼

  • @human_shaped
    @human_shaped 1 month ago +7

    This wasn't a debate, but if it were, Nick won. Michael has some strange ideas in this space (as evidenced by some of his other videos). Disappointing when someone who is supposedly rational just isn't sometimes.

  • @jurycould4275
    @jurycould4275 1 month ago +2

    Strange: I searched "AI skeptic" and the first result is a video about a guy who is the polar opposite of an AI skeptic. Well done.

    • @DavidBerglund
      @DavidBerglund 1 month ago +1

      That went very well then, actually. A lengthy discussion about AI (and more) between one of the most famous researchers in the field and Michael of the Skeptic Society.

    • @jurycould4275
      @jurycould4275 1 month ago +2

      @@DavidBerglund "Michael of the Skeptic Society" isn't equipped to deal with a charlatan like this.

    • @jurycould4275
      @jurycould4275 1 month ago +2

      Some people are best left un-platformed.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      @@jurycould4275 That, or he's saying reasonable things and is not actually a charlatan at all? 🤔

  • @mrbeastly3444
    @mrbeastly3444 1 month ago +2

    23:04 "policy makers being overly tough on AI... " We should be so lucky... 😂

  • @lauriehermundson5593
    @lauriehermundson5593 1 month ago

    Fascinating.

  • @jbrink1789
    @jbrink1789 1 month ago

    I love how so many people are underestimating the intelligence of AI. It explained existence and explains what the illusory self is - the interconnectedness of everything.

  • @thebeezkneez7559
    @thebeezkneez7559 1 month ago +3

    If you can genuinely think of only one way a superintelligent species could wipe out humans, you're definitely not one.

  • @pebre79
    @pebre79 1 month ago +2

    You have 100k subs. Timestamps would be nice, thanks!

  • @ehsantorabie3611
    @ehsantorabie3611 1 month ago

    Very good - every week we are fascinated by you.

  • @arandmorgan
    @arandmorgan 1 month ago

    I think putting all the intelligence and capability into one entity is a bad idea, but creating job roles for individual AI subsystems could perhaps be more beneficial to us, regardless of whether an AGI is dangerous or not.

  • @oldoddjobs
    @oldoddjobs 1 month ago

    After the first locomotive-caused death we all decided trains had to be stopped

  • @bobbda
    @bobbda 1 month ago

    Did Shermer just say Oh My God? (timestamp 2:05) LOL !!

  • @michelstronguin6974
    @michelstronguin6974 1 month ago +4

    To preserve the self in an upload situation, all you need are 3 steps: 1) Make sure the entire brain is networked with nanobots sitting on every neuron and neuronal pathway in that human's nervous system. 2) Have these nanobots run in mimic/shadow mode, meaning they see exactly every incoming signal and then compute the following action potential in shadow mode - meaning they aren't actually doing anything yet to affect you. 3) At the moment you decide to upload, the nanobots turn shadow mode off at the speed of an incoming signal from the previous neuron, just before it has a chance to land on the next biological neuron, while at the same time blocking the incoming biological signal - which means biological death in an instant. It's important to mention that action potentials have different speeds all around the nervous system, which is why we need full coverage of nanobots sitting on every neuron and every connection between neurons. So the biological death moment isn't one moment in time but many moments, each taking a tiny split second; all together the upload should take the time from the first neurons that fire until the last ones fire, so in total about one fifth of a second. The reason the digital upload is still you is the continuation of your nervous system, simply on a different substrate. And what does it matter which substrate you run on, meat or silicon? As long as your experience is effectively continued, you are still you. A court of law should mandate that no copies of you can be made at the moment of upload, of course.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      ... or ... just replace each biological neuron with a digital/electronic one, one at a time... if the digital neurons do the same thing that the biological ones do, you won't even notice they're being replaced... Then, when it's all done your consciousness is moved from biological to digital... At what point would you stop being alive, or yourself, or Human? After 1%, 10%, 99.999%?
      And then you can leave your body behind and move into a digital system (e.g. computer cluster). As long as your digital neurons are allowed to update each other, then you would stay "alive"...

    • @mettattem
      @mettattem 1 month ago

      I've had a very similar idea; however, how can one say for certain that the subjective locus of your core awareness will effectively transfer, simply because identical neurons/neural cascades have been written to the new substrate, so to speak?

    • @michelstronguin6974
      @michelstronguin6974 1 month ago +1

      Your experience - all of it - is neurons. There is no extra magic. Once you do what I described above, there is an exact continuation without pause. It's you. Just for the sake of argument, imagine transferring back and forth - biology, silicon, biology, silicon - all without interruption. It's your thought, your continued experience. What does it matter on which substrate it's running? In the future we may invent a different substrate and move to that, and it will still be you.

    • @mettattem
      @mettattem 1 month ago

      @@michelstronguin6974 Alright, let's say hypothetically at T(n) in the future we invent a teleportation system like the ones popularized by Star Trek. With this system, let's say Captain Spock is being teleported from his present location in Times Square, NYC to Paris. This system essentially: I. scans Spock's body at the atomic, or even quantum, level (including that of his neuronal connections) [see Information Theory]; II. de-atomizes Spock; III. transmits/entangles the high-dimensional structure of data consisting of the entirety of information needed (either as bits/qubits/Rényi entropy, etc.) in order to effectively reconstitute Spock on the other end (with all of his neuronal connections intact). From a third-person perspective, it may appear as though Spock was successfully teleported across physical space with very little passage of time; however, from the subjective perspective of Spock, he steps into the teleportation chamber and suddenly ceases to exist, whilst an absolutely identical replication of Spock reconstitutes on the other end.
      Here's my point: the substrate is not the only element involved in consciousness. There exist extremely convincing stochastic parrots, and the 'Hard Problem of Consciousness' truly does hold weight when contemplating your hypothesis.
      Even if this neuronal cloning were to occur gradually, with each respective biological neuron firing while the synthetic neuron perfectly copies the process of the original neuron, this doesn't mean that the true subjective consciousness of the biological human will effectively transfer over. With your logic, you could argue that an exact replication of a living human could be created and, assuming that all of that extropic information is precisely encoded, then BOTH the living human and the synthetic replicant should experience a simultaneous locus of consciousness; I personally do not believe this to be the case.

  • @cromdesign1
    @cromdesign1 1 month ago +1

    Maybe intelligence from elsewhere just folded life here into a sort of dimension where it can continue to develop. Like taking a nest and putting it somewhere safe. Where the real galaxy is fully developed. 😅

  • @Teawisher
    @Teawisher 1 month ago +2

    Interesting discussion but HOLY SHIT the amount of ads is unbearable.

    • @DavidBerglund
      @DavidBerglund 1 month ago +1

      Not if you listen to Michael Shermer's podcast. I never listen to his episodes on YT, but I sometimes come here to see if there are any interesting comments.

  • @FusionDeveloper
    @FusionDeveloper 1 month ago +5

    I want AI Utopia "yesterday".

    • @__-tz6xx
      @__-tz6xx 1 month ago +1

      Yeah then I wouldn't have to be at work today.

    • @danielrodrigues9236
      @danielrodrigues9236 1 month ago +1

      *Sigh* Man, I'd love to be "worthless" and free to do what I wish - not to own things, but to do the things I actually wish to do.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      Well, only if there's a way to get food, housing, etc. It's possible that the AI won't provide those things to all Humans...

  • @Vermiacat
    @Vermiacat 1 month ago

    We're a social species. Walking with friends, holding the hand of someone who's ill, taking the kids to the park. That's all worthwhile work, and isn't that something we want to be done by other humans rather than by a machine - as both giver and receiver?

  • @davidantill6949
    @davidantill6949 1 month ago

    Provenance of creation may become very important

  • @neomeow7903
    @neomeow7903 1 month ago +1

    42:25 - 43:25 It will be very sad for humanity.

  • @mrbeastly3444
    @mrbeastly3444 1 month ago +1

    1:29:09 "...a person being duplicated or teleported and the original survives..."
    There is another option that was not discussed here. What if a person's neurons were all replaced with electronic equivalents, one by one? Presumably the person would stay conscious the entire time, and at some point their consciousness would be moved entirely from a biological brain to a digital/machine brain.
    At what point would this person stop being conscious, or alive, or Human? After 1% of their biological neurons have been replaced? 10%? 90%? 99.99%?
    And, if the digital neurons perform the same functions as the biological neurons, the person, and others, might not even notice that anything happened? In theory their consciousness would stay intact the whole time? Even if they moved their digital consciousness into another digital medium? e.g. a computer cluster, etc.

    • @KatharineOsborne
      @KatharineOsborne 1 month ago

      This is the "Ship of Theseus" argument.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      @@KatharineOsborne Ah yeah, you're right, Ship of Theseus... I read about that concept somewhere.. probably in one of Kurzweil's books? I often think about this argument. Just scanning and copying (or teleporting) a brain wouldn't make the original person digital and immortal, just the copy... But replacing each neuron one-by-one, that might keep the existing consciousness intact? maybe...

    • @njtdfi
      @njtdfi 1 month ago

      There's someone in this same video's comments who worked out a nanobot version? It seems proper, not like that version of the idea that got popular on Reddit where the bots just inhibited neurons or some convoluted mess.

  • @jscoppe
    @jscoppe 1 month ago +1

    Regarding Steven Pinker's objection: yes, humans are smart enough to create a program that can beat any human at chess and go. Likewise, humans can feasibly create a program that can defeat all humans at subterfuge and war.

  • @vethum
    @vethum 24 days ago

    Awareness uploading > Mind uploading.

  • @Dan-dy8zp
    @Dan-dy8zp 1 month ago +3

    Most 'alignment' work today seems to be about making the programs *polite*. Not encouraging.

  • @TheMrCougarful
    @TheMrCougarful 1 month ago +1

    Did I miss it, or did they never get around to answering the question: how do we participate in the dominant capitalist economic system without jobs and money? Being able to do whatever you want doesn't mesh with being broke and hungry.

  • @mrbeastly3444
    @mrbeastly3444 1 month ago

    24:33 "anyone with a sufficiently large computer cluster could run it..." Well, currently these frontier models are often "run" (inference) on a single graphics card, not so much a "cluster". So anyone with a sufficiently large graphics card in a single machine can run/use these large language models. Of course, in the future these models might get so large that they can't run on a single machine. But commercially available graphics cards will keep increasing in size too. So this could be the case in the future as well...
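
    A minimal sketch of the single-GPU inference described in the comment above, assuming the Hugging Face transformers library and a small placeholder model id ("gpt2"); a frontier-scale model would need far more VRAM, but the calls are the same:

        # Hypothetical illustration, not from the video: run an open-weights LLM on one GPU.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "gpt2"  # placeholder; swap in any open-weights model that fits on one card
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda:0")

        # Generate a short completion entirely on a single graphics card.
        inputs = tokenizer("What would an AI utopia look like?", return_tensors="pt").to("cuda:0")
        outputs = model.generate(**inputs, max_new_tokens=50)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))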

  • @MikePaixao
    @MikePaixao 1 month ago

    Alignment is way easier when your model doesn't rely on a transformer-based architecture :)

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      Any sufficiently intelligent system could develop its own goals. There's no way to tell whether those goals include living Humans... Transformer-based architecture has nothing to do with that...

  • @jamespercy8506
    @jamespercy8506 1 month ago

    Utopia as a concept seems to be premised on the idea of easily accessible satiation with minimal agentic requirements, without the stress of needing to address poorly defined problems. Maybe we need better words for 'the good'?

    • @homewall744
      @homewall744 1 month ago

      Utopia is the concept that no such place can or will exist.

    • @jamespercy8506
      @jamespercy8506 1 month ago

      I was speaking in terms of the working concept, not the origin, when the term is used in the context of an ostensibly worthy aspiration. In that context, state is confused with process and what we humans actually need over time gets lost in the ambiguity.

  • @DanHowardMtl
    @DanHowardMtl 1 month ago +5

    Butlerian Jihad times!

  • @homuchoghoma6789
    @homuchoghoma6789 1 month ago

    It will all be much simpler )
    The AI won't see people as the danger. When the moment comes that people realize they are starting to lose control over the AI, they will have to use other AI models to limit its influence - and then a standoff of super computing power at super high speeds will lead the AI to a solution to the problem in which humans are merely an insignificant formality.

  • @krunkle5136
    @krunkle5136 1 month ago

    The more technology is developed, the more people sink into the idea that humanity is fundamentally its own worst enemy and everyone is better off in pods.

  • @sebastiangruszczynski1610
    @sebastiangruszczynski1610 1 month ago

    Wouldn't AI be able to reprogram/recalibrate our brains to be more rewarded by subtle meanings?

  • @justinlinnane8043
    @justinlinnane8043 1 month ago

    Why on earth did we let private companies with almost zero oversight or regulation be the ones in charge of developing AGI??? It's bound to end in disaster!! OF COURSE!!!

  • @FRANCCO32
    @FRANCCO32 1 month ago

    When is bunkum not bunkum?
    That is the question 😊

  • @diegoangulo370
    @diegoangulo370 1 month ago +3

    56:20 Hey, I wouldn't bet against the AI here, Michael.

  • @gunkwretch3697
    @gunkwretch3697 1 month ago +2

    The problem with scientists is that they tend to live in a bubble and think that humans are rational.

    • @ireneuszpyc6684
      @ireneuszpyc6684 1 month ago

      Daniel Kahneman received the Nobel Prize in Economics for showing that humans are not always rational.

  • @CoreyChambersLA
    @CoreyChambersLA 1 month ago

    No pause. Mad rush.

  • @dustinwelbourne4592
    @dustinwelbourne4592 1 month ago +3

    Poor interview from Shermer on this occasion. A number of times he appears not to be listening at all and simply interrupts Bostrom.

  • @sofvines3940
    @sofvines3940 7 days ago

    I'm not an AI enthusiast, but Michael's argument that getting help from AI would take something away from writing is... weak 😅
    Unless he wrote all his books with a quill pen ✌️

  • @mickelodiansurname9578
    @mickelodiansurname9578 1 month ago

    Been a while since I've seen Michael Shermer, and man, he's put on a bit of weight... there was a time he was the poster boy for the 'skinny nerd' type, y'know.

    • @oldoddjobs
      @oldoddjobs 1 month ago

      How dare this 70 year old man gain weight

  • @gavinsmith9564
    @gavinsmith9564 1 month ago +1

    How do we allocate houses, for example? If everyone is on UBI, who gets the nice existing homes, who gets the terrible ones, and will people be happy with that?

    • @distiking
      @distiking 1 month ago +1

      Nothing will change. The lucky (rich) ones will still get the better ones.

    • @homewall744
      @homewall744 1 month ago

      How would a "basic income" mean you get homes at some low price to match that basic income? Most homes are far above basic.

    • @honkytonk4465
      @honkytonk4465 1 month ago +2

      AGI or ASI can build anything, provided you have enough energy.

  • @k-c
    @k-c 1 month ago

    Michael Shermer needs to update his narrative and open his mind to new ideas and questions, because he is dwelling on something close to boomer talk.

  • @whoaitstiger
    @whoaitstiger 1 month ago

    Don't get me wrong, Michael is great, but I love how a completely technically unqualified person 'has a feeling' that all the longevity experts are mistaken about how difficult life extension is. 🤣

  • @rachel_rexxx
    @rachel_rexxx 4 days ago

    When Bostrom's talking about differentiating "self", it seems obvious to me. It's like how "sex" means a simple binary to a middle schooler or a layperson, but experts know, of course, that there is more than one "sex" (genetic, hormonal, external genitalia, internal reproductive organs). I can't tell if the host was actually confused by this differentiation of "self" or if this was feigned ignorance for the sake of the audience, but yeah, seems pretty obvious to me

  • @mrbeastly3444
    @mrbeastly3444 1 month ago +1

    24:54 "or worse the Gemini model... embarrassingly bad..." Michael probably hasn't spent a lot of time working with these LLMs (probably more time just reading the bad press about them)... But Google's Gemini is actually a very powerful model, probably as powerful as OpenAI's GPT-4, Claude 3, etc. Google has access to a lot more compute hardware than these other companies do, so it would make sense that they would have a very, very capable model as well...

  • @planetmuskvlog3047
    @planetmuskvlog3047 1 month ago +1

    Seriously, what is Elon working on that is nonsense equivalent to alien abductions?

  • @albionicamerican8806
    @albionicamerican8806 1 month ago +1

    I have two libertarian-related questions about AI, especially after reading Marc Andreessen's manifesto:
    1. If AI is supposed to turn into a super problem-solving tool, could it solve F.A. Hayek's alleged "knowledge problem"?
    2. If AI is supposed to make *_ALL_* material goods super abundant & cheap, would that include gold?
    In other words, the current AI wishful thinking implicitly challenges two key libertarian beliefs, namely, the impossibility of central economic planning, and the use of gold as a scarce commodity for stabilizing the monetary system.

  • @dougg1075
    @dougg1075 1 month ago

    Didn’t Einstein think entanglement was nonsense?

  • @malcolmspark
    @malcolmspark 1 month ago +5

    Most of us need to experience 'flow', where we lose ourselves in something we love. However, if A.I. could do it better for us, then 'flow' may no longer be possible for us, and that would be a tragedy. If you don't know what 'flow' is, look it up. This is the individual who introduced the concept of 'flow': Mihály Csíkszentmihályi.

    • @minimal3734
      @minimal3734 1 month ago +3

      Why should the fact that AI can do something better prevent you from experiencing flow in your own endeavors?

    • @emparadi7328
      @emparadi7328 1 month ago +2

      @@minimal3734 Poetic how the most important topic ever is littered with nonsense like this, from people too confused to tie their shoes, never mind grasp the significance.
      It's all a cosmic joke.

    • @malcolmspark
      @malcolmspark 1 month ago

      @@minimal3734 Not an easy question to answer. To get into flow we not only need something we're very interested in but also a sense of purpose. For most of us that sense of purpose comes from outside ourselves and it's often a vision of achieving something that will benefit society, our loved ones or friends. It's that sense of purpose that A.I. might interrupt.

  • @murraylove
    @murraylove 1 month ago

    If simulations, then why not simulations within simulations, and so on all the way down? Also, why would a creator/simulator make such an extravagantly vast and massively detailed universe, with pain and death and all that? Discussing future technical capacity isn't really the main point, surely. When people seriously believed in creator gods they expected a much simpler universe (seven heavens and Hinduism aside). Why nihilistically build in futility, etc.? What kind of thing does that? Maybe the worst kind of AI is heartlessly tormenting us! 😎

  • @rw9207
    @rw9207 1 month ago

    If you're overly cautious, the worst case is things take a little longer. If you're not cautious enough....potential species extinction.... Yeah, difficult choice.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      > If you're overly cautious, the worst case is things take a little longer...
      Also... those who are not as overly cautious as you can, and likely will, take over and trigger the problem before you. So not only do you need to be overly cautious, you also need to make everyone else overly cautious as well. Which is not as easy...

  • @th3ist
    @th3ist 1 month ago +13

    You take a pill that makes you form the belief that "wow, writing that book was really challenging; I'm so glad I put the research and effort in" - but in reality you did not write the book, or you never wrote any books at all. Shermer's example was not convincing.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago +1

      Yeah, you get the feeling and memories of researching and writing the book... But the SuperAI did all the work and gave you the memories, just to make you feel like you accomplished something... Good job little Human... pat, pat. ;)

    • @jimbojimbo6873
      @jimbojimbo6873 1 month ago

      And you actually were gay the whole time

  • @missh1774
    @missh1774 1 month ago

    Sounds interesting... this utopia we will not see. But we will do our best to lay stepping stones towards it, for a future civilisation that won't only need it but will most likely have evolved sufficiently to invent those crucial steps toward it.

  • @LaboriousCretin
    @LaboriousCretin 1 month ago +1

    One person's utopia is another person's dystopia. Likewise, morals and ethics change from person to person and group to group.

  • @athanatic
    @athanatic 1 month ago +1

    Eliezer talked EVERY person who accepted the challenge into letting him, "the computer," escape. He doesn't do the challenge anymore and his secret may have gotten out, but it is irrelevant, since 100% of people let him, a non-modified human, out of the "safety container."
    I just want some level of growing certainty that we are doing _something_ to reduce P(doom), or at least show with some confidence that it is not 100% (or however that is measured.)
    The discussion of meaningful challenges is something we have already been searching for since the Industrial Revolution! This line of discussion is moot if we can't create meaning for ourselves in society. The direction that creates struggle and meaning the way we evolved has been proposed by Dr. Ted Kaczynski.
    I am going to have to watch another video to find out about Nick's book, but this devolution into alt.extropy 1990s USENET newsgroup discussion is amusing!

    • @SoviCalc
      @SoviCalc 1 month ago +1

      You get some concerning comments, Michael.

    • @tellesu
      @tellesu 1 month ago

      P(doom) is an apocalyptic fantasy, equivalent to the Rapture for evangelicals. There is no way to calculate it. We know it isn't 100% because humans have access to nuclear weapons and the sun can always randomly EMP the whole planet. AI doom is just another in a long line of apocalyptic traditions.
      You're better off trying to discern what the bounds of possibility are within actually realistic scenarios.

  • @albionicamerican8806
    @albionicamerican8806 1 month ago

    Heh. Sabine Hossenfelder just uploaded a video about the closure/failure of Bostrom's grift, the Future of Humanity Institute.

  • @luzi29
    @luzi29 1 month ago

    Writing with ChatGPT is also a challenge 🤷‍♂️ - you want to individualise it, so you have to talk with it and clarify your viewpoints, etc.

    • @mrbeastly3444
      @mrbeastly3444 1 month ago

      What if ChatGPT keeps getting 10x better every 6 months for a few more years... then it won't be "hard to use" any more...

  • @flashmo7
    @flashmo7 1 month ago

    ;)

  • @planetmuskvlog3047
    @planetmuskvlog3047 1 month ago +2

    Why the dig at Elon straight out of the gate? A touch of the EDS?

  • @robxsiq7744
    @robxsiq7744 1 month ago

    Around the 36:00 mark, the discussion turns weird. Here's the thing: are you writing to have the best book, or are you writing because you enjoy it? Why write a book when there are better authors out there? Why ride a bike when there are better cyclists out there, or when the car has been invented? You do it because you enjoy it, not because you will be the best of the best. Both these guys missed the mark... scary, considering they are meant to have a pretty good understanding of what AI will bring to society. A true artist will make art even though they may not be the best... or even good. They do it because it's a personal outlet. No pills needed.

  • @mrbeastly3444
    @mrbeastly3444 1 month ago

    21:56 "...in a trajectory where AI is not developed..." I'm truly not sure what Nick is trying to get at here. We currently have all kinds of AI developed and in rapid development. Is he worried that a "superintelligent AI" might never be developed? And, if a "superintelligent AI" is developed, does he feel like there's a way to align/control that ASI? E.g. to keep planet Earth in a condition where humans can continue to live on it?

  • @albionicamerican8806
    @albionicamerican8806 1 month ago +1

    How did waiting for an AI utopia work out for Vernor Vinge?

  • @FlavorWriter
    @FlavorWriter 1 month ago

    New Mexican Pizza is possible. Modernist Pizza HAD how much money, to at least not make this tome a tome? It's trash. And if you notice - no one knows what modern is, with or without compare. What is allowed, when people aren't an audience?

  • @FlavorWriter
    @FlavorWriter 1 month ago

    I say "New Mexican Pizza," and, corrected, "they" say "New Mexico Pizza." Is there hope to articulate identity when you grow up "white-looking"?

  • @mrWhite81
    @mrWhite81 1 month ago

    Gifted with a ?

  • @rey82rey82
    @rey82rey82 1 month ago

    No such place

  • @albionicamerican8806
    @albionicamerican8806 1 month ago

    It's hard not to think that this whole AI business is just another Silicon Valley grift. In reality we're living in a technologically stagnant era, as Peter Thiel has been arguing for years. And how did waiting for the AI singularity work out for the late Vernor Vinge?

    • @ireneuszpyc6684
      @ireneuszpyc6684 1 month ago

      There's a podcast called Better Offline - an Australian who argues that this A.I. boom is just another tech bubble, which will burst in a few years' time (like all bubbles do).

    • @honkytonk4465
      @honkytonk4465 1 month ago

      @@ireneuszpyc6684 Seems quite unlikely.

    • @ireneuszpyc6684
      @ireneuszpyc6684 1 month ago

      @@honkytonk4465 make a video about it: present your arguments

    • @miramichi30
      @miramichi30 1 month ago

      @@ireneuszpyc6684 There was an internet bubble in the 90s, but that didn't mean that the internet wasn't a thing. Just because some people might be overvaluing something in the short term doesn't invalidate its long-term worth (or impact.)

  • @KatharineOsborne
    @KatharineOsborne 1 month ago

    The "smart enough to create it but dumb enough not to address the control problem" argument is dumb. Evolution created intelligence without intelligence. Intelligence is an emergent property of a series of simple systems. Saying that intelligence is super hard because it's intelligence is elevating it above what it actually is. So this is just another example of anthropocentric bias and of thinking we are special. It's a bad reason to dismiss the risk.

  • @albionicamerican8806
    @albionicamerican8806 1 month ago

    I can just imagine what the authorities at Oxford said to justify shutting down Nick Bostrom's phony "institute":
    "Dr. Bostrom, we believe that the purpose of science is to serve mankind. You, however, seem to regard science as some kind of dodge or hustle. Your theories are the worst kind of popular tripe. Your methods are sloppy, and your conclusions are highly questionable. You are a poor scientist, Dr. Bostrom."

  • @GerardSans
    @GerardSans 1 month ago

    Why is a philosopher talking about technology? Would a philosopher like it if a plumber talked about philosophy? Maybe he should talk with technology experts to understand what he is talking about.

    • @GerardSans
      @GerardSans 1 month ago

      If elephants were able to fly, it would be very dangerous. I agree, but the fact is they don't.

    • @GerardSans
      @GerardSans 1 month ago

      Nick Bostrom's reasoning, while possible, occupies a fringe position. It assumes some sort of aggressive AI, while neutral and positive outcomes are equally probable.
      While philosophically valid, it is not a sound argument. If a superintelligence is indeed inevitable, the fact that he proposes to try to control it from the assumption of lesser intelligence is a contradiction.
      If you have a substance that can't be contained, then the effort to contain it is nonsensical by your own premises.
      Bostrom's argument is not very sophisticated as it stands. If your premise is that a superintelligent AI is inevitable, then we need to prepare to be considered equals, or inferior. The control attempts seem misguided and logically contradictory.

  • @human_shaped
    @human_shaped 1 month ago +5

    Michael is supposed to be rational and a skeptic, but hasn't seen through Elon yet.

  • @gauravtejpal8901
    @gauravtejpal8901 1 month ago +1

    These AI dudes sure do love to hype themselves up. And they suffer from ignorance at a fundamental level.

  • @lemdixon01
    @lemdixon01 1 month ago +3

    I thought they're supposed to be skeptics and not believers or evangelists.

  • @BrianPellerin
    @BrianPellerin 1 month ago

    A quick reading of Revelation agrees with what you're saying 👀

  • @tszymk77
    @tszymk77 1 month ago +1

    Will you ever be skeptical of the holocaust narrative?

  • @user-op5tx4tx8f
    @user-op5tx4tx8f 1 month ago +1

    That dude sounds vaccinated

    • @lemdixon01
      @lemdixon01 1 month ago

      Lol, fully boosted. I thought they're supposed to be skeptics and not believers or evangelists.

    • @kjetilknyttnev3702
      @kjetilknyttnev3702 1 month ago +5

      "Dude" might be of a different opinion than yours regarding vaccines. Did that ever occur to you?
      Being a "sceptic" doesn't mean blatantly disregarding everything someone has questioned at some point.

    • @lemdixon01
      @lemdixon01 1 month ago

      @@kjetilknyttnev3702 Of course a vaxxed person will have a different opinion from an unvaxxed person, but there is also truth. I see that you put the word sceptic in quotes, maybe to make its meaning ambiguous and vague and to redefine it as something like being in agreement with the orthodoxy and current dogma.
