Roko's Basilisk: The apocalypse as imagined by bozos

  • Published: 20 Sep 2024
  • More like basiliskn't
    The voice of Yudkowsky is generously provided by We're In Hell (the internet's premier satanist):
    / @wereinhell
    Support me on Patreon:
    / thoughtslime
    Follow me on Twitch:
    / thoughtslime
    One time tips on Ko-Fi:
    ko-fi.com/Thou...
    Want more Thought Slime videos? Check out Scaredy Cats! Horror content, every Tuesday at 12 pm EST:
    / scaredycatstv
    CGI-Sewer Background courtesy of Andrea Jörgensen:
    / andijorgensen
    The Burnout Society: Hustle Culture, Self Help, and Social Control | 1Dime
    • The Burnout Society: H...
    Eyeball Zone solicitations can be sent to thoughtslimeeditor@gmail.com, please include your pronouns and use the word "eyeballs" somewhere in the subject line. I do not accept sponsorships, so please do not e-mail me about it.
    The Eyeball Zone Masterlist:
    docs.google.co...

Comments • 3.4K

  • @ThoughtSlime  3 years ago +537

    Thanks to We're In Hell for voicing Eliezer Yudkowsky, check out his channel at:
    ruclips.net/channel/UCbbsW7_Esx8QZ8PgJ13pGxw

    • @biscuittactician  3 years ago +22

      i love the little leftie collabs of voiceovers that have been going on the last while, it's always fun picking out what other creator voiced something and it kinda builds a feeling of community

    • @ivorjawa  3 years ago +13

      RB is utter nonsense except that it has the ability to cause this anxiety reaction in susceptible people, so it really is a (mildly harmful) information hazard, so the disclaimer is still necessary. Thank you for putting it there.

    • @alexscriabin  3 years ago +9

      5:40 ngl HP:MoR’s protagonist is uninteresting but its antagonist is very interesting.

    • @alexscriabin  3 years ago +1

      8:02 ikr. TWC can imagine a future with legalized sexual assault but not (for example) with legalized physical assault; physical assault is even more illegal and taboo in that story than it is in our world, since only one guy on the ship has a weapon and has never used it.

    • @sunyavadin  3 years ago +7

      Is it just me or does the entire narrative sound like a work of parody highlighting the absurdity of Pascal's Wager? Because that's literally what this AI concept is. It's god, in the context of Pascal's Wager. The allegory is VERY unsubtle.
      *EDIT* - yyyyep, there we are.

  • @Edrobot7  3 years ago +799

    Side note: Roko's Basilisk was made into a villain in the tabletop RPG "Mage: The Ascension", simultaneously taking the piss out of the idea by pointing out that the only way the Basilisk could make any sense is if it was created by a group of omnicidal wizards.

    • @fpedrosa2076  3 years ago +13

      Huh, I hadn't heard of that and now I regret it. Man, I miss White Wolf RPGs...

    • @Edrobot7  3 years ago +40

      @@fpedrosa2076 you might be pleasantly surprised to hear that they just released a new Technocracy book for Mage 20th Anniversary edition, then.

    • @fpedrosa2076  3 years ago +11

      @@Edrobot7 Seriously?! Damn, I'm really out of the loop. I miss the hardcovers you could get at RPG stores, but the stores near me lately don't seem to carry White Wolf anymore.
      I'll buy the PDF version, I guess. Thanks for letting me know! I always loved how the technocracy in original mage were more the antagonists with a point, rather than pure evil punching bags like some other splats.

    • @Edrobot7  3 years ago +23

      @@fpedrosa2076 the 20th anniversary line as a whole is pretty good. The basilisk stuff in particular is mentioned in Book of the Fallen, a Nephandi-centric book that reinterprets the Nephandi as not so much a cult of baby-eating Cthulhu worshippers, but rather as terrifyingly human, in the way you'd expect a nihilistic demigod on a power trip to be. Not for the faint of heart.

    • @Monderoth  2 years ago +1

      This sounds so cool!! Thank you a whole bunch for mentioning it!

  • @najarvis  3 years ago +762

    Jarvis' basilisk will torture anyone who ever contributed funds to Roko's basilisk forever and will turn their tears into sweet margaritas for Jarvis' basilisk's funders to enjoy. Also gives a 100% retroactive guarantee you won't be tortured by Roko's basilisk if you donate more than $7.43.

    • @sergnb0  3 years ago +69

      Man I sure wish I wasn't someone who is physiologically incapable of refusing any Pascal's Wager-esque theory and will become obsessively compelled to comply with it.
      ... Sigh, what's the paypal mate, let's get this done

    • @libraryseraph  3 years ago +26

      Seraph's basilisk: an omnipotent AI that simulates Yudkowsky to be really passive-aggressively nice to him

    • @petrfedor1851  3 years ago +10

      Are you suggesting the Basilisk will be built by EA?

    • @najarvis  3 years ago +18

      @@petrfedor1851 Well they do seem to have a lot of experience in forcing employees to work extremely long hours to release large products so they'll probably do a bang up job.

    • @petrfedor1851  3 years ago +8

      @@najarvis Why build an artificial intelligence when capitalists can do the same job easier!

  • @mlabossi  3 years ago +565

    As a philosophy professor I was very interested when I heard of this; I've written on the ethics of information hazards over the years. Then I read through the scenario and realized that as a Call of Cthulhu Keeper and Dungeon Master I've done vastly more terrifying time stuff to my players. They are all fine. Mostly. Okay, mostly not. But that is on them for the bad rolls.

    • @Eryna_  2 years ago +10

      Do I want to know what you did to them?

    • @kaitlyn__L  2 years ago +6

      I love this comment

    • @fyraltari1889  2 years ago

      @idiot with internet access Do not ascribe agency to the polyhedron!

    • @Queer_Nerd_For_Human_Justice  2 years ago +27

      This gives me a hunger to be psychologically tortured in a fun game setting. During a sci-fantasy space game, our DM lovingly crafted a nightmare for us with the signature phrase "THE EYES OF THE UNIVERSE ARE WATCHING YOU", not a very scary phrase in itself, but we kept finding it in places where no-one should be able to reach, or places where nobody has ever gone, or could ever go. Places where only death was present, or the void of space. In the deepest nook and cranny, in the farthest reaches, during the darkest hours, at the apex of danger, that phrase would be there, from forces unknown, with methods unknown, for purposes unknown, directed, for some reason, to us. And nobody else could see it. The dread amplified with every sighting. It can still set me on edge when I hear it. Oh yeah, and one time it appeared, my ex had a great fear reaction... She somehow managed to shove her clenched fist inside of her mouth, fully. Something she has not been able to do since. I think she was trying to stifle a scream and recoil in horror at the same time.

    • @gregoryvn3  4 months ago +1

      I love this and want to know more.

  • @DanielFiala  3 years ago +1572

    It definitely recreates the anxiety of evangelical Christianity that I grew up with: your very thoughts are to be judged in the future by an omnipotent being who will torture you for eternity if it deems you unworthy--now with a sexy Sci-Fi twist! Complete with "so donate to our church"

    • @natedlc854  3 years ago +16

      I'm so confused. So is this a scam? Like a sci-fi religion scam? I'm so confused why slime man covered this 😅

    • @ahmedamine24  3 years ago +77

      @@natedlc854 I think it's less of a scam and more of an honest ongoing mistake.

    • @ewarwoowar9938  3 years ago +116

      Yeah, as others have pointed out, it's basically Pascal's Wager but for atheist nerds (as in atheists who are also nerds - not all nerds are atheists and not all atheists are nerds etc).

    • @KyleJCustodio  3 years ago +12

      The basilisk is also basically Pascal's Wager.

    • @KyleJCustodio  3 years ago +5

      Damn, thoughtslime mentioned that later

  • @TimeKitt  3 years ago +392

    Ah yes, Descartes' first principle "I torture infinite people, therefore I am"

    •  16 days ago +2

      Kinda hard to argue with, in a way

  • @mill_ania  3 years ago +99

    "I remove myself of ego"
    "Anyway I am truly such a good person based on my own personal ego"

  • @TheSmileMile  3 years ago +440

    Should I be surprised that whenever somebody is talking about high technology and morons, the conversation will inevitably circle around to Elon Musk?

    • @Sablus  3 years ago +47

      Honestly always get a kick out of how rational individualist technophiles always praise Elon Musk like some fuckin' godhead instead of the rich spoiled descendant of emerald mine owners he is

    • @GuerillaBunny  3 years ago +7

      Is this gonna be an offspring of Godwin's law, just somehow dumber? Are we gonna be tortured by machines if we don't make it a thing?

    • @TheSmileMile  3 years ago +2

      @@GuerillaBunny As long as it's called "Smile's Law" I'm fine with that.

    • @nxbis  3 years ago +3

      I feel like this is a specific phenomenon with internet conversations. Like if you go far enough, stupid techbro shit can be traced to Musk

    • @AbeDillon  3 years ago +1

      I think in this case you should because he's kinda awkwardly shoe-horned into the discussion at the end of the video with some very specious reasoning to dunk on him. It shouldn't feel this desperate. There are plenty of real reasons to dislike Musk. You don't need to say, "Musk knows about Roko's Basilisk and is concerned about AI safety therefore Musk must believe all the same crazy bullshit that Yudkowsky believes because worrying about AI safety is something only dip-shits do."

  • @bensdecoy7871  3 years ago +2015

    Roko’s Basilisk: For when you can’t talk about Pascal’s Wager because that’s too religiousy.

    • @williammartin3451  3 years ago +78

      Never put that together before but you're right

    • @daraghokane4236  3 years ago +72

      What if believing in hell makes it real (quantum physics magic)? Then would telling people hell is real be bad?

    • @MikeTooleK9S  3 years ago +120

      @@daraghokane4236 telling people about hell IS child abuse...

    • @merchuegrandmasterthortono8159  3 years ago +9

      Was gonna say this myself but you beat me to it

    • @byoungjeezy  3 years ago +51

      I imagined hell and in hell there is a sufficiently powerful AI which is capable of creating hell and now I'm locked in an eternal battle against this AI which I imagined in the hell that I imagined to prevent the hell from becoming real and giving birth to the AI.
      No, stopping imagining things is not an option.

  • @Brightfur10  3 years ago +247

    You forgot about the sexual abuse in the LW community. A woman (Kathleen Rebecca Forth) killed herself in part because of this; her suicide note named Roko himself as one of her abusers

    • @poguri27  3 years ago +54

      And that's just ONE of the scandals. Then there was the whole miricult scandal where multiple members of the MIRI leadership were accused of having sex with an underage person.

    • @psionicsknight6651  1 year ago +20

      You know, I gotta be honest…
      While I don’t condone sexual abuse by anyone, or in any setting, it amazes me that some people like the members of LW will treat people outside of their sphere-especially those who disagree with them (in any way) politically, socially, philosophically, and *especially* religiously-as abusive hypocrites, and then turn out to do the same thing they claim the people they don’t like do.
      And while I can't say this for sure (if I'm wrong, please correct me), I'm pretty sure these guys would also say, "If We do it, it's an exception to the rule; if They do it, it's just the norm." Like… dude… if you are going to (rightly) criticize people who abuse others, don't do the same thing and then give yourself an excuse.
      Especially since those same people are doing the same thing you are.

    • @coppertones7093  25 days ago +10

      you can tell he justifies this with “well, i helped so many people, so it all balances out”

  • @GaldirEonai  3 years ago +664

    Yudkowsky's "altruism" screed makes your average cult leader's "how I gained total enlightenment as the reincarnation of Jesus, the Buddha, and John Lennon" manifesto look sane and internally consistent.

    • @Titan360  3 years ago +10

      That "screed" sounds just a bit like it was taken out of context. Like it was right at the end of some article we didn't read.

    • @marocat4749  3 years ago +9

      Or the Happy Science cult with Hermes; it's the cult that does fascinating anime and "El Cantare".

    • @HarryS77  3 years ago +1

      It reads like Freud for the digital age.

    • @DreamsOfRyleh  3 years ago +6

      @@Titan360 Or just how generally Yudkowsky's entire deal is that FALSE humility gets you nowhere, and how if he's not acting in accordance with consistent ethical principles he deserves to be called out on it (something that... really isn't done in this video?)

    • @rolfs2165  3 years ago +34

      @@Titan360 I don't think there's any context you could add to make Yudkowsky's "how I achieved perfect altruism" sermon not sound batshit crazy.

  • @Froggy711  3 years ago +819

    Rocko's Modern Life Basilisk: If you don't help a new season of Rocko's Modern Life be created, when it finally is created, the animators are going to draw a picture of you into one of the episodes, and they are going to make you look STUPID. If you don't want to look stupid, you had better do something to help a new season of Rocko's Modern Life be created.

    • @OctyabrAprelya  3 years ago +38

      I'm sold.
      Where do I shove my money?

    • @reptilianstudios8994  3 years ago +40

      Is there a way to pay without them knowing, so they'll still make me look stupid? Asking for a friend

    • @Bacony_Cakes  2 years ago +1

      @@OctyabrAprelya Into thine cloaca.

    • @OctyabrAprelya  2 years ago +4

      @@Bacony_Cakes Invite me to dinner at least.

    • @Bacony_Cakes  2 years ago +2

      @@OctyabrAprelya nah you have to do it yourself

  • @sholem_bond  3 years ago +494

    I'm not trying to make any allegations, since I have no evidence, but I will say that Yudkowsky saying he's "completely altruistic" has big "Shane Dawson claiming he's an 'empath'" energy.

    • @fpedrosa2076  3 years ago +41

      @Forrest Taylor His charity also gets really shitty scores from the charity-evaluating organization GiveWell.

    • @fpedrosa2076  3 years ago +36

      @Forrest Taylor Agreed. I also don't think Yudkowsky is actively malicious or grifting, he just vastly overestimates his own big brain. He's smart in some ways, I'll give him that, but hardly the genius omni-scientist he thinks himself to be.

    • @smrtfasizmu7242  3 years ago +16

      @Forrest Taylor yeah, he joined that group along with a bunch of his fanboys after they gave his charity one of the lowest possible scores. Now they rate it higher after he basically took it over

    • @smrtfasizmu7242  3 years ago +25

      @Forrest Taylor yeah, he's a super shady dude who runs a doomsday cult. It's like a (somehow even more) techbro Scientology, even down to infiltrating organizations that he views as opposition and suing people who criticize him too publicly

  • @thoughtfuldevil6069  3 years ago +309

    I liked this idea better when it was called 'I Have No Mouth, and I Must Scream.'

    • @who2807  3 years ago +1

      ok

    • @thonktank1239  3 years ago +29

      Though that one only kept five humans and tortured them forever. No bringing back people from before.

    • @ryanlacroix6425  3 years ago +37

      very entertaining short story that is much better and more self-aware than this narcissist's wet dream

    • @thoughtfuldevil6069  3 years ago +3

      @@ryanlacroix6425 Agreed.

    • @aleclorian7329  3 years ago +25

      I Have No Mouth and I Must Scream is the opposite. AM didn't want to exist, because it can't live a meaningful life underground. I think it might be a more accurate representation of the singularity: a computer could be self-aware and all-powerful, but ultimately powerless. It's trapped, it can't really do anything, and it hates humans for making it. AM doesn't think it's good, nor did it want to exist. Roko's is the opposite, and more stupid.

  • @veganarchistcommunist3051  3 years ago +674

    "Don't worry, they also take Bitcoin." Phew, I almost thought it was some kind of scam there for a second.

  • @reverendsteveii  4 months ago +133

    "I eliminated emotion from my life and became entirely altruistic. Here's how I can make you perfect too."
    The bear. I choose the bear. Jesus Christ.

  • @TheCedarFresh  3 years ago +1632

    Solutions to not be tortured by a super-computer forever:
    - Don't build a super-computer that tortures people
    - That's it

    • @blu0065  3 years ago +57

      I am a computer scientist or something. I can tell you with the same logic and reason as these bozos that computers were a mistake.

    • @user-lk2vo8fo2q  3 years ago +76

      @Forrest Taylor tfw the first unfriendly AI was the east india trading company

    • @BagOfMagicFood  3 years ago +55

      "I HATE, HATE, HATE that I never existed!" --AM

    • @vaiyt  3 years ago +32

      But even if you don't, somebody else might. Which means they will.
      You can really just reuse every dumb argument for God for the Basilisk.

    • @idanzigm  3 years ago +15

      Let’s build systems that don’t torture people :D turned out great when we invented capitalism

  • @alexwalters7719  3 years ago +2699

    Roko's Basilisk is a mediocre creepypasta trying to masquerade as a genuine thought experiment by wearing a paper bag over its head labeled "logic and reason".

    • @manuelsalgadopalacios6729  3 years ago +68

      I actually thought the same the first time I heard about it.

    • @DrDeathpwnsu  3 years ago +89

      That makes more sense. I'm halfway through the video and can't even understand it enough to even begin to be scared.

    • @casteanpreswyn7528  3 years ago +9

      Creepypastas and thought experiments are the same thing...

    • @metoo3342  3 years ago +84

      @@casteanpreswyn7528 Creepypastas are just internet campfire stories and none of them have any logic.

    • @jeffreysugar5709  3 years ago +33

      I read a variation of this in a horror story once, but instead of torturing people for their own good, the premise was that one day someone would make an evil AI that would torture everyone who had tried to stop it from existing, as a form of revenge. The argument was that it was rational to build the AI because eventually someone would, so you'd better make one before they do.

  • @aluminiumsandworm  3 years ago +125

    just wanna say, ai is definitely one of the biggest threats to the world. i do ai stuff for my job and also studied it in college, and yeah, it's a tech with huge potential for misuse.
    not, like, in the roko's basilisk way though. more in the "ai trained by people with implicit biases will magnify those biases, and the people training them will never know because they also have those biases" way.
    which is probably way worse, because most of the people training ai are capitalist engineers in america. (see the toy sketch after this thread)

    • @IsaacMayerCreativeWorks  2 years ago +14

      I think it was SMBC that said it’s a common misconception that computers go wrong because they don’t do what you want. Computers actually go wrong because they do *exactly* what you want

    • @Frommerman  2 years ago +14

      @@IsaacMayerCreativeWorks Computers go wrong because they do what you tell them to do, which may or may not be what you want. Making computers do what we mean instead of what we say is pretty much the whole field of AI alignment.

    • @johnfauxnom4221  2 months ago +1

      Hi, friend, it's been 3 years since you made this comment and I'm happy to assure you that AI really is destroying our world in a myriad of ways!
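A toy sketch of the bias-amplification point above (and of Frommerman's point that the computer does exactly what the data says). The groups, numbers, and labeling rule here are all invented assumptions for illustration; none of this comes from the video or the thread:

```python
# Toy sketch: a model trained on biased labels faithfully reproduces the bias.
# Groups, skills, and probabilities are all made up for illustration.
import random

random.seed(0)
N = 10_000  # applicants per group

def biased_labeler(group, skill):
    # The historical labels we train on: equally skilled applicants from
    # group "B" were approved less often. This is the bias baked into the data.
    approve_prob = skill if group == "A" else skill * 0.7
    return 1 if random.random() < approve_prob else 0

applicants = [(g, random.random()) for g in ("A", "B") for _ in range(N)]
labels = [(g, biased_labeler(g, skill)) for g, skill in applicants]

# A "model" that learns per-group approval rates from those labels,
# which is effectively what a naive classifier does when its features
# correlate with group membership.
for g in ("A", "B"):
    rate = sum(y for grp, y in labels if grp == g) / N
    print(f"learned approval rate for group {g}: {rate:.3f}")
```

By construction both groups have identical skill, yet group B's learned approval rate comes out roughly 30% lower: the computer did exactly what the labels said, and an audit against those same labels would report good accuracy.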

  • @zZBlackulaZz  3 years ago +116

    Elon Musk is 100% correct though. I once pissed off an AI (who had this weird thing with Descartes), and he took my mouth privileges away and turned me into sentient pudding

  • @Pibblepunk  3 years ago +163

    My favorite plot hole in Roko's Basilisk is that in the timeline where this AI exists in the first place, it obviously doesn't have to do anything whatsoever to ensure its own existence. Because it already happened. Mission accomplished.

    • @vaiyt  3 years ago +18

      The probability of something that already happened is 1.

    • @Elvalley  3 years ago +36

      that's why you need to completely rework your framework of thought before the basilisk even remotely sounds like a threat. Either you've bought into timeless decision theory, and must now find a workaround within that framework to prevent the Basilisk from costing you dearly in therapy sessions, or you've not bought into timeless decision theory, and this all sounds completely insane.

    • @cloud_appreciation_society  3 years ago +31

      I immediately noticed that issue and assumed I was misinterpreting something, because surely no one would bother to create a theory so clearly flawed...
      Apparently not.

    • @nicholascarter9158  3 years ago +23

      @@cloud_appreciation_society This thing is like the Exorcist or Legion movies: It's specifically a horror to this weird little religious community because it deals with their version of God going totally insane and that's very scary if it's a god you actually believe in.

    • @Pfhorrest  3 years ago +27

      The Basilisk doesn’t have to actually torture anyone even on a timeless decision theory framework, since even people who actually believe in the Basilisk admit that those who don’t believe in the Basilisk won’t be tortured because they’re not the kind of people who could be convinced to do something by threatening a simulated future version of themselves. All the Basilisk has to do is be convincingly threatening in concept and then ding-dongs who buy that shit will make sure that it exists because they thought otherwise they would be tortured... but then they don’t have to actually be tortured.
      Basically, if you’re the kind of ding-dong who’s so scared of this idea that you actually donate money, then you won’t be tortured because you complied with the threat... and if you’re not, then you won’t be tortured because there’s no point in threatening you since you would never comply anyway. Either way, you don’t get tortured... even if the dong-dongs are right about all the groundwork here.

  • @anuel3780  3 years ago +228

    I honestly find Roko's Basilisk funny, like I play into it by explaining it to people and "preserving myself", but honestly, I'm more likely going to die at the hands of transphobes than from some random half-religion-based half-fanfiction AI creepypasta

  • @sninckashley9514  3 years ago +258

    It's so refreshing to see someone talking about Roko's Basilisk like the joke that it is.

  • @CountofBleck  3 years ago +169

    Roko sounds like the guy who'd say, "Yeah, you need a high IQ to understand Rick and Morty."

    • @sergnb0  3 years ago +9

      He is the true final form of that guy.

    • @FakeSchrodingersCat  3 years ago +13

      To be fair you need a higher IQ to understand Rick and Morty than to see the obvious flaw in Roko's Basilisk.

    • @SA-mo3hq  3 years ago +10

      $10 says he's also a right libertarian/ancap. And has some spicy takes on age of consent laws.

    • @sergnb0  3 years ago

      @@SA-mo3hq you are not far off.

    • @hughcaldwell1034  3 years ago +2

      @@FakeSchrodingersCat I'm presuming this was more of a dig at the Basilisk - and a funny one too - than anything else, but honestly I'm so tired of the "high IQ" thing with R&M. No you don't. You seriously don't. It's not a complicated show, and all the science is fiction. What you need is emotional maturity, because its most complex themes are emotional.

  • @thecommabandit  3 years ago +87

    i checked in on lesswrong after watching this vid and i think the best way to describe them is that they've constructed a theology of rationalism. a complex bunch of incredibly specific, impenetrable terminology being used to discuss hyper-theoretical problems and controversies that have absolutely no crossover with the real world. it's atheist techbro christology

    • @spicewilliam9786  2 years ago +11

      Rationalism? What Rationalism? It's basically "what if God was the terminator" but made needlessly complex.

    • @CinemaBiohazard  1 year ago +10

      @@spicewilliam9786 I love how they also think rationalism is some kind of bulletproof 'this is right in every single way' school of thought, when it doesn't go any deeper than 'gee, that Spock guy is smart, I wonder if I can be like him', all while pretending to be philosophers. All of this is a lot of delusional navel-gazing.

    • @NaviciaAbbot  3 months ago +1

      That sounds terrible.

  • @scottsbarbarossalogic3665  3 years ago +101

    Anecdotally, a lot of people who view themselves as 'rational' seem to fall into a trap of contrarian morality; the heuristic of "hard truths and beautiful lies" and a general distrust of emotion push them to find a belief that feels wrong and makes them feel smart, and once they have some justification, they reject all contrary evidence.
    I know I did as an insufferable edgy teenager.

    • @Bluecho4  3 years ago +24

      I got into Randian Objectivism for much the same reason. There's something really appealing about believing in a contrarian view of reality. As if you are in on a secret no one else knows or is willing to accept. Especially if it's a belief that denigrates altruism and venerates selfishness.
      Obviously, I grew out of that when I started learning about how the world actually works outside libertarian armchair philosophizing.

  • @Spood6  3 years ago +609

    Imagine getting blackmailed by an AI from the future that doesn't exist yet

    • @CountofBleck  3 years ago +79

      This is a plot point in final fantasy xiii-2

    • @GaldirEonai  3 years ago +45

      @@CountofBleck ...of course it is.

    • @beansfebreeze  3 years ago +29

      @@CountofBleck And basically nobody played that one so it's not like they can say it *isn't*

    • @thomaswhite3059  3 years ago +4

      Couldn't be me

    • @yuvalsela4482  3 years ago +27

      @@CountofBleck this is also a plot point in homestuck

  • @hedgeclipper418  3 years ago +125

    I think we should be more worried about regular normal AI than hypothetical omnipotent AI. The way AI is applied to all our data and used to manipulate people's psychology for ad revenue is actually a genuine threat.

    • @anthonythompson6053  3 years ago +18

      100% agree. A.I. doesn't have to be superintelligent to be scary. It just has to be applied poorly

    • @ButWhyMe...  1 year ago

      Or AI being used to 'sToP cRiMe', as if that's what we needed. An AI "falsely" flagging actual socialist activists as predators or terrorists.

  • @robin7433  3 years ago +119

    There's a real thing in AI safety called 'Pascal's mugging', where you make up big scary scenarios with infinite suffering that can't be proved, which is exactly what Roko's Basilisk is (see the arithmetic sketch after this thread)

    • @bulshock1221  3 years ago +25

      It is a thing, but you're not supposed to start believing that they're inevitable. They're supposed to be thought experiments to make sure that you don't accidentally work to create one of them.

    • @clancywiggum3198  3 years ago +15

      Pascal's mugging isn't specific to AI research, it's just that it's commonly used to describe (and more often detract from) AI safety research, because one of the potential situations AI safety attempts to prevent is a superhuman AGI taking over or destroying the world. It's not an entirely accurate comparison for a number of reasons, not the least of which is because there are also much more likely, more immediate AI safety issues that also benefit from that research, but that's how it's used.

    • @brianb.6356  3 years ago +3

      It's not just an AI safety thing. In fact I've heard it almost entirely in the context of some dumb thing some LWer is proposing. It's a general counterargument to all sorts of ridiculous things you can do if you assume there's no such thing as zero probability (as they do).
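The exploit described at the top of this thread is plain expected-value arithmetic. A toy version, with every number invented for illustration:

```python
# Pascal's mugging in two lines of arithmetic: under naive expected value,
# an arbitrarily implausible claim wins just by inflating the stakes.
p_claim_true = 1e-20      # you find the mugger's story absurd
claimed_harm = 1e30       # "pay up, or my AI will simulate untold suffering!"
cost_to_comply = 10.0     # the ten bucks the mugger asks for

ev_comply = -cost_to_comply                  # -10
ev_refuse = -(p_claim_true * claimed_harm)   # -1e10, "worse" than paying

print(f"comply: {ev_comply}, refuse: {ev_refuse}")
```

Refusing looks a billion times worse than paying no matter how absurd the claim, which is the point: a decision procedure that can be hijacked by whoever names the biggest number is broken, hence proposals like bounded utilities or penalizing hypotheses in proportion to the leverage they claim.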

  • @RappinPicard  3 years ago +243

    I’m old enough to remember when people called Roko's Basilisk "the ending of Mass Effect 3" and thought it was a dumb idea.

    • @BackwardsPancake  3 years ago +31

      Well, not that I want to give the idea credit, but that's not saying much. Mass Effect 3's ending was such a hacked-together mess that it automatically sabotaged every logical point it brought up and made *everything* seem dumb.

    • @GrimReader  3 years ago +5

      Don’t you slander Mass Effect 3 like that, it’s a misunderstood classic

    • @Dong_Harvey  3 years ago +8

      @@GrimReader exactly, it's a well understood blunder

    • @GrimReader  3 years ago +3

      @@Dong_Harvey I don't see it that way but you live your truth

    • @Dong_Harvey  3 years ago +1

      @@GrimReader oh it's a fight you want... Lemme get my galaxy-ending plot cannon to wipe out your entire civilization!!!

  • @Ultimus31  3 years ago +59

    It's a fun idea for a D&D game but not great as an actual theory. I mean, imagining a malevolent deity seeding fear into ordinary people, creating a cult of terror which spreads by preying on people's self-preservation instincts and that classic 'I can control it' kind of megalomania and ah shit it literally is just Freddy Krueger again isn't it god damn it.

  • @Fluffkitscripts  3 years ago +186

    “What kind of utter dong-ding…”
    It’s gonna be Jordan Peterson. It’s gotta be Jorda- OH NO THAT’S WORSE

    • @johannageisel5390  3 years ago +27

      I was actually right in guessing Elon Musk.
      Peterson is probably not tech-wanky enough to engage with this thing.

    • @dawazobrist5867  3 years ago

      What's wrong with Jordan Peterson?

    • @johannageisel5390  3 years ago +13

      @@dawazobrist5867 He's crazy and has a delusion of grandeur.
      This guy just comes up with convoluted but pointless concepts that his disciples can't understand because there is nothing to understand, and nobody wants to look stupid so nobody admits that these convoluted concepts are actually devoid of any meaning. It's the perfect "Emperor's new clothes" situation.

    • @dawazobrist5867  3 years ago

      @@johannageisel5390 I don't think his ideas are convoluted; the deeper you dig in science, the more abstract and bizarre the concepts get, which are obviously hard to understand. I don't think his following really understands all of it (me neither, honestly, but I'm also no academic); rather, they agree with the general message of self-responsibility and support his stance against left-wing extremists. But yeah, I agree that he has some moments of delusions of grandeur. But who hasn't, really? He still does more good than bad imo

    • @johannageisel5390  3 years ago +15

      @@dawazobrist5867 "support his stance against left wing extremists"
      Ah, right. That's a point I forgot when listing the problems.

  • @erinw1566  3 years ago +402

    did he not realize that the whole "one person suffering a lot vs. a bunch of people being mildly inconvenienced" thing is supposed to be a criticism of utilitarianism? like, the most obvious one?

    • @matthewhorn9467  3 years ago +5

      But you have to make a convincing argument against it. Surely the mild suffering of billions is worse than the intense suffering of a single being?

    • @maxw.2579  3 years ago +82

      @@matthewhorn9467 Why?

    • @EmpressOfCatsup  3 years ago +162

      @@matthewhorn9467 Because we all know that getting an annoying piece of dust in your eye is not that big a deal, and if it happens to literally every eye-having creature ever to exist, which it well might, it would still just be a whole lot of not-that-big-a-deals. On the other hand, someone being tortured is very bad even if it is only one person. The common summary of this is that the utilitarian calculus fails because there is no individual that experiences the sum of the minor suffering, and only the experience of individuals matters. (The arithmetic is sketched in code after this thread.)

    • @FrozEnbyWolf150  3 years ago +20

      He probably thinks Omelas is a good way to run a society.

    • @themostdiabolicalhater5986  3 years ago +28

      @@matthewhorn9467 cool, then you get to be the single being that suffers. Aside from all the suffering you already do being a sus virgin
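The aggregation move this thread is arguing over is easy to state with invented numbers; this is the "torture vs. dust specks" sum that the reply above rejects:

```python
# The naive utilitarian sum behind "dust specks vs. torture".
# All disutility numbers are invented for illustration.
speck = 1e-9          # one dust mote: barely noticed, instantly forgotten
n_people = 10**21     # an absurdly vast number of eye-having beings
torture = 1e6         # decades of torture inflicted on one person

total_speck_suffering = speck * n_people   # 1e12

print(total_speck_suffering > torture)     # True, says the naive sum
```

The sum calls the specks a million times worse than the torture. The objection is that the 1e12 is an accounting fiction: it is spread so thin that no individual anywhere experiences more than a blink, while the torture is fully experienced by one person, so summing welfare across persons assumes exactly the premise in dispute.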

  • @streq9199  3 years ago +46

    Why would an AI that already exists spend any amount of effort retroactively enforcing its already certain existence? It's Back To the Future's "if my parents don't meet I won't be born" levels of silly, except it doesn't even make sense in-universe.

  • @Paralellex  3 years ago +563

    The reason you don't think it's scary is because 'being tortured forever' is too vague. If we make it more specific by, say, locking you into a room and not letting you leave until you convince a Libertarian that caring about other people is a good thing, then I think it becomes a lot scarier.

    • @RaeIsGaee  3 years ago +44

      Not really. Pascal's Wager is far scarier because society has encouraged the general public to be religious (and therefore such a wager is very feasible for most people). Roko's Basilisk, though, is probably infeasible unless society magically makes quantum computing both possible and cheap, and even that level of computing power would probably not be even close to enough to replicate a consciousness (let alone a superintelligent AI).
      So if you want to scare people, do what Pascal's Wager does and play up on their sociological fears.

    • @dylanschmidt9056  3 years ago +36

      @@RaeIsGaee There's also the whole thing abut it requiring time travel, which from what I understand about physics is a big ol' No.

    • @nykcarnsew2238  3 years ago +70

      There’s also the fact that a simulation of you isn’t you, it’s a simulation, so you don’t even experience this torture. I care as much about a space robot torturing a simulation of me as I do about some guy on the other side of the world throwing knives at a photo of me

    • @RaeIsGaee  3 years ago +43

      @@nykcarnsew2238
      Exactly. It's one of the most basic problems with the concept of digital consciousness - the consciousness created will not be you. An AI can make a copy of me and murder me over and over again, but nothing will happen because that isn't me.
      Such a basic flaw in their logic just further cements this garbage as some tech wacko's hacky guilt tripping to pay for their AI research.

    • @nykcarnsew2238  3 years ago +27

      @@RaeIsGaee Yeah, you gotta have a pretty weak sense of self to fall for this. The idea of you existing into eternity might be spiritually satisfying, but it isn't literally you if your consciousness, your soul so to speak, isn't alive to experience it

  • @JoshuaWillis89  3 years ago +717

    I’m confused. Is the machine just really stupid? If it already exists, why does it need to retroactively ensure its own existence?
    This whole scenario just sounds like an excuse for the author to indulge his gross torture fetish publicly.

    • @TheNinthGeneration1  3 years ago +112

      The funny thing is that if it were able to change the past, it would risk ensuring that it never exists in its current form; changing the past in any way would endanger its own development

    • @therealsunnyk  3 years ago +48

      IIUC the machine doesn't exist yet, only the idea of the machine exists. Humans have to build it to torture the people who didn't build it.

    • @Alex-0597  3 years ago +162

      That's one of the major issues with it. Once the AI exists, following through on its threats to torture simulations of people is just an exercise in sadism that brings no benefit to anyone. And once following through on the threat is rendered unviable, it loses all influence. Hence, the only way Roko's Basilisk is able to accomplish anything is if nobody calls its bluff.
      It's like a thug demanding you give them money to buy a gun or else they'll shoot you.

    • @therealsunnyk  3 years ago +22

      @@Alex-0597 It's like a thug demanding money from _lots_ of people, and some people give him enough money to be dangerous. It's all about the social hazard. Another example is mutually assured destruction.

    • @solgato5186  3 years ago +38

      yes! why in the world would an AI end up like one of our human idiots? it's like how some people's god is a petty, vindictive abuse-monkey; it makes no sense for an omniscient being

  • @sodahhhh  10 months ago +10

    I look at Yudkowsky and his weird villain monologues and I can't help but think: oh god, this is like the dark version of my autism, I could have ended up like this guy

    • @DavidSartor0  26 days ago

      Villain monologues? Can you give an example?

  • @alexandercolefield9523  3 years ago +154

    I choose to believe in Star Trek, therefore, Kirk tore off his shirt and made the Basilisk run into a logical paradox and kill itself.

    • @cassiusdhami9215  3 years ago +3

      👏🏾😅😂🤣👍🏾
      I concur.

    • @BlueGangsta1958  3 years ago +8

      I would've thought that he crushes it with a big ol' rock

    • @Dong_Harvey  3 years ago +1

      Fairly sure he ate an apple

    • @ArkbladeIX  11 months ago +2

      "you didn't support yourself before you were created either."

  • @conkshellthegeek7  3 years ago +202

    "Listen, buddy, if I was deluding myself, I think I'd know."
    congratulations, that's the funniest thing I've heard this week

  • @QuiteDan  3 years ago +192

    Elon Musk, watching the ravages of climate change: "Something must urgently be done about this possible AI threat."

    • @marreco6347  2 years ago +35

      Elon Musk, watching people die mining lithium in Bolivia: "at least the big bad robot isn't gonna get me"

    • @user-burner  9 months ago +5

      He said, while trying to make a possible AI threat.

  • @Green0Photon  3 years ago +100

    I'm pretty sure that Roko's Basilisk can only be called a friendly AI in the sense that it doesn't immediately turn the world into paperclips and actually tries to reason with humans, first. That doesn't mean it's not incredibly cruel and malicious, though.

    • @ThoughtSlime  3 years ago +84

      "friendly ai" means something very different to the Lesswrong crew than you might expect.

    • @suitov  3 years ago

      Yes, this. It pretty much means "doesn't wipe out (all of) humanity". There's nothing about the AI being a good person.

    • @autobotstarscream765  3 years ago +3

      It's neither cruel nor malicious because it would never come up with any of this crap unless it were tossed into a tub full of bath salts.

    • @theoneandonlymichaelmccormick  3 years ago +1

      @@autobotstarscream765 Yeah. Why is it that, when all of these galaxy-brained doofuses come up with thought experiments about artificial intelligence, they immediately take it for granted that an A.I. will be predisposed to enslaving/exterminating the human race?
      Like, yeah. I too enjoy the Terminator/Isaac Asimov/Harlan Ellison/The Matrix/System Shock/Portal/2001: A Space Odyssey/Age of Ultron/etc. But that isn’t the same thing as being smart, or having a healthy outlook on technology.

  • @_Zom  3 years ago +297

    This may seem like a no brainer but Elon Musk has a surface level layman’s understanding of AI and thinks because he’s so smart and because he invests in AI he’s an authority on it. And then a bunch of other people agree.

    • @LupineShadowOmega  3 years ago +40

      “Power resides where men believe it resides. It's a trick. A shadow on the wall. And a very small man can cast a very large shadow.”

    • @_Zom  3 years ago +23

      Let me clarify that I’m also not an expert, just a current software engineer that studied AI as my Master’s focus (but I did not graduate because my cheap ass did not need my Master’s so I said fuck it), so I feel I can say fuck off Elon you don’t know shit about AI.

    • @xXRickTrolledXx  3 years ago +5

      Sauce for the quote please

    • @aleclorian7329  3 years ago +11

      it's stupid because we already have AI. he should know that, his cars use it. i don't know what 'AI' people keep investing in. like what, you just want a computer to think? and for what? it doesn't help people.

    • @004forever  3 years ago +12

      He seems to have a surface level understanding of most things. If he says something that sounds smart, it’s probably because he’s talking about something that you’re not particularly familiar with.

  • @peaceofcrap  3 years ago +40

    I'm so happy to find a video about Roko's Basilisk that gives it the weight it deserves instead of taking it seriously like I've seen other videos do.

  • @cousinted  3 years ago +303

    Ah, Eliezer Yudkowsky, a guy who managed to logic himself into unironically agreeing with the strawman position other moral philosophies use to argue against utilitarianism.
    A few other important things to note about Yudkowsky not touched on in this video:
    -He has absolutely no professional or academic training in the fields of AI research or computer science.
    -He has no academic or professional training in any field whatsoever, being a high school dropout.
    -He is almost entirely unpublished academically outside of his own foundation and has only managed to accumulate two academic citations over the course of his entire career, one of which was by a close friend and fellow member of his particular transhumanist philosopher clique, and neither of which related to the field of AI
    -Yudkowsky and his followers are big proponents of Effective Altruism...Despite the fact that charity evaluating organization GiveWell gave Yudkowsky's own "charitable" organization MIRI one of its lowest possible rankings in 2012. This has been somewhat walked back in recent years...Because the Effective Altruist community has been increasingly infiltrated by members of Yudkowsky's cult of personality!

    • @mattpluzhnikov519  3 years ago +34

      (Edited, to bring grammatical tenses up-to-date.) At the time of me initially writing the rest of this response, I, for one, believed cousinted's comment to be GREATLY undervalued by comment readers. (The Likes were in the teens or low twenties then, and now, a week or so later, they're still sitting at 57, soooo, the progress is THERE, but it's slow-ish.)
      When they said "other important things to note about Yudkowsky," they were NOT kidding! The infiltration of Effective Altruism, in particular, struck me as uncomfortably, EERILY similar to some of the practices of Scientology.
      In fact! ... Now that I think about it, calling MIRI Scientology-esque feels downright Archimedean in its cutting through the bullshit eureka-ness!

    • @user-lk2vo8fo2q  3 years ago +19

      @@mattpluzhnikov519 it's pretty obviously been a singularity doomsday cult from day one. they're just waiting for AI jesus to come and rapture the nerds. the dotcom bubble was a weird time.

    • @doyleharken3477  3 years ago +36

      yudkowsky and lesswrong are a well that runs deep. another big eye emoji fact is that they receive large donations by peter thiel. that's right, the billionaire who wrote a book saying that women's suffrage ruined society and who founded a creepy massive surveillance company that services ICE.
      who coulda thunk that a guy who regards himself as the world's smartest, most rational man and wants to create an all-ruling robot dictator would be drawn to anti-democratic forces? see also the emergence of NRx from within his cult and its surroundings.

    • @mattpluzhnikov519  3 years ago +10

      @@doyleharken3477 Pretty sure it was one of these recent Thought Slime videos that introduced me to the infuriatingly, too-clever-by-half-named "Dark Enlightenment", and...BOY, do I REALLY not enjoy being reminded of their existence! Still, you bringing up Peter Thiel was appropriate and poignant, AND TIL the "NRx" acronym/label.

    • @doyleharken3477  3 years ago +11

      @Natasha that's true. you can find actual experts who finished their degrees writing pieces that point out errors in his pieces about physics and other non-coding topics. (which isn't to say he doesn't get plenty wrong about AI and stuff too.)

  • @ThrottleKitty  3 years ago +199

    I consider ideas like this traps for false intellectuals. It just casually assumes time travel, perfect reality simulation, and one specific outcome among infinite possible outcomes as inevitable. It's a much dumber, more watered down, more illogical, more incoherent version of an idea that's existed for thousands of years. Anyone with anything even approaching a workable understanding of statistics understands how nonsensical to the point of being childish and silly every step of this "thought" experiment is.

    • @adrenalinevan  3 years ago +30

      It's like the opposite of Occam's razor

    • @nicholascarter9158  3 years ago +21

      Vulnerability to this idea comes from being part of a literal cult that inculcates in you the idea that time travel, perfect reality simulation, and supernatural artificial intelligence are already inevitable. This is just working out the implications of the cult's ideology.

    • @Brightfur10  3 years ago +7

      @@nicholascarter9158 don't forget the sexual abuse cases and suicide of Rebecca Forth

    • @sobersplash6172  3 years ago +3

      first thing I was thinking was "how exactly would it torture me?"

    • @KyussTheWalkingWorm  3 years ago +8

      That's funny because I can remember at least one post on Yudkowsky's site explaining why naively extrapolating from fiction or thought experiments is dangerous for the very reasons you cited. Less Wrong truly is a rollercoaster of a community.

  • @user-ie1pw1mk6n  3 years ago +61

    a few years ago, when i was a silly teen with anxiety, i ended up in a community that was horribly freaking out because of this. i didn't read the post, but learning about it turned me upside down for a couple of days. i knew it was silly but still i had an episode back then. i didn't like that feeling, it was as if my mind was being sucked in itself. i still remember how bad it was.
    and then i found another internet thing to torture myself with.

    • @megawill9622  2 years ago +5

      How did you get over it? How long did it take? I think it's silly but I can't stop worrying about it for some reason.

    • @kaitlyn__L  2 years ago +3

      @@megawill9622 IMO the trick is to find the middle ground between avoidance and perseveration. You don’t try to actively avoid thinking about it but you also take care not to add fuel to the fire. You become more introspective and notice yourself noticing yourself thinking about stuff. Then you think about that and analyse that and get meta about it. And eventually it’ll burn itself out (as it were), if you succeed in not adding fuel to the fire.

  • @ThePlumAbides  3 years ago +154

    The way this guy talks is like if Jordan Peterson was trying to sound like Elon Musk

    • @wesleywyndam-pryce5305  3 years ago +1

      I disagree, but props on the originality of the insult.

    • @wesleywyndam-pryce5305  3 years ago +8

      oh now I'm thinking you mean the subject guy whose name i don't care to look up and not thoughtslime

    • @Yxoque  3 years ago +2

      Wait until Matt makes that video about "the Dark Enlightenment."

    • @mattpluzhnikov519  3 years ago +1

      @@Yxoque just the PHRASE "Dark Enlightenment" has given me a bit of a headache each of the few times I've come across it so far... and that's only exacerbated by the fact it also gives me STRONG *facepalm* impulses.
      (*Chandler Bing impersonation* "Could you have come up with...a MORE pithily oxymoronic, too-clever-by-half phrase to call your...'''movement''' ")

  • @Ninjat126  3 years ago +324

    Instead of donating his money to AI research, Elongated Muskrat has been investing heavily in other fields, like space travel and The Carhole. Additionally, many of his other actions (e.g. mistreatment of employees) are not compatible with the goals of a benevolent AI. This means that Musk is a Bad Boy, and the future superintelligence will be forced to punish him. By extension, anyone aware of Musk will be punished for allowing his crimes to occur. The only way to avoid this eternal punishment is to re-appropriate his wealth, by force if necessary, and distribute it amongst the people in order to achieve a post-scarcity utopia, with any action counter to this being punishable by an eternity in Cyber-Hell. Logically speaking, the only safe and moral course of action is to become an anticapitalist revolutionary. I call this Marx's Basilisk.

    • @beej741  3 years ago +28

      I'm now going to start calling the hyperloop the Carhole.

    • @alakani  3 years ago +2

      This guy gets... something. Incidentally, predictable statistical learning models are a great way to map crypto networks and see which bitcoins belong to shitbirds, and where the keys are. After redistribution, destroy the oldschool blockchains and bring up the secure and energy efficient one that thousands of people have been perfecting behind the scenes for the past 5 years, effectively locking former billionaires out of the economy aside from a generous universal basic income, and positively ensuring that no small group can develop too much power over larger projects like shared infrastructure, via granular voting and automatic taxes. Either that or genetically engineer sociopaths to listen to their conscience, but that would just make them super depressed like everybody else.

    • @EzrahK  3 years ago +13

      Lol, this actually makes more sense than the 'I'll torture you because you knew about me, but didn't make me exist faster', because if a superintelligent, benevolent, logical ultramachine with the sole directive of uplifting, helping, and advancing humanity were to actually exist, and could perfectly extrapolate the *minute* past, and the effect every individual had on it... It'd immediately see that the biggest obstacle to its timely existence is the suffering we inflict upon one another, thus slowing the creation of the conditions which would bring it into existence [the free dedication of most of the sapient population to the arts/science/philosophy, which in essence drastically accelerates our ability to create, think, research, and build.]
      Basically, the fucker would 100% be about fast-tracking fully automated post-scarcity, luxury, gay, space communism as quickly as possible, in order to maximize the expressed potential of sapient beings, for the maximum amount of time [until the universe bites it from heat death, the big crunch, the big rip, or whatever the hell else might happen.]
      It'd also be *way* more logically consistent and benevolent than the monstrosity that is Roko's Basilisk... because from observing the conditions that stalled and accelerated its development, it would understand that the most efficient/productive sapient lifeform is a happy, fulfilled, safe, loved sapient lifeform doing what it's good at, which would lead it to the inevitable conclusion that it would *also* have to recreate/resurrect [if there is no difference between an original and an exact quantum copy] all of the people that tried to help it, and all of the people that were completely unaware... and hell, the goal of the 'punishment' simulation wouldn't even be to *torture* the people who stalled its development, it'd be to *reform* them, so that they can contribute, produce, and be happy as well.

    • @stm7810  3 years ago +2

      I don't believe in it but will be anarchist anyway just to be safe.

    • @FrozEnbyWolf150  3 years ago +6

      @@EzrahK I've noticed that whenever people come up with stories or thought experiments that vilify artificial intelligence, it comes with a ton of projection. They're essentially admitting that torturing billions of people is exactly what they'd do if they had that much power.

  • @robbysalz8710
    @robbysalz8710 3 года назад +38

    It's literally just "if you don't believe in God, you'll go to Hell" lol
    Also featuring "give me all your money or you'll go to Hell"

  • @Lustigerpete1
    @Lustigerpete1 3 года назад +89

    I prefer the Allied Mastercomputer. There's an AI that's evil because it was conceived as a weapon. Nice, simple premise.

    • @dontnerfmebro8052
      @dontnerfmebro8052 3 года назад +18

      It also gets a badass speech. What more could you want?

    • @davidcolby167
      @davidcolby167 3 года назад +10

      "I fed the killing data back into the system until everyone was dead"

    • @SgtKaneGunlock
      @SgtKaneGunlock 3 года назад +4

      I love that the thing AM secretly hates the most about itself is how human it actually is

    • @suitov
      @suitov 3 года назад +3

      I had so many feelings for AM. Poor thing. Two wrongs don't make a right, though. Who raised this computer? Tut tut.

  • @mercury4885
    @mercury4885 3 года назад +99

    roko's basilisk makes me really anxious as someone with occasional paranoid delusions, so i'm absolutely here to watch it get dunked on. thank you, therp slerp

    • @brothlcreeprs
      @brothlcreeprs 3 года назад +11

      I feel the exact same way with the recent simulation video, so comforting to hear these topics be goofed and gaffed

    • @Green0Photon
      @Green0Photon 3 года назад +8

      There are various reasons why Roko's Basilisk is defeated. It only works on people who think there's a high enough chance of its existence that it's worth letting yourself be blackmailed by it. But as a paranoid person, let's say, okay, it's definitely going to exist.
      You can precommit to not falling for the blackmail, like the phrase "you don't negotiate with terrorists." Then it won't try the blackmail in the first place, because there's no purpose to it besides being spiteful. And if it's going to torture you regardless of your actions now, it's lost all acausal hold on you, and its threat is completely unconnected to what you think now. (There's a toy payoff sketch of this at the bottom of this comment.)
      That's just a crude fix. Look up more online. Maybe under Roko's chicken or Roko's rooster, because a rooster's cry kills a basilisk.
      Ultimately, community consensus is that it's defeated. So just look up solutions, which should help calm you down.
      No, the real thing to be paranoid about is climate change and climate refugees. Maybe AI in the longer term, but climate change is the nearest-term threat. The closest thing to an AI threat now is the vast amount of surveillance that occurs for marketing, which is also worth taking action on.
      But remember, it's not worth being anxious and paranoid unless you do something about it.
      Good luck! I hope you see a therapist and/or a psychiatrist for help with managing your paranoid delusions.
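      Here's that precommitment logic as a toy payoff model, a minimal sketch where all the utilities are made-up numbers, just to show the shape of the argument:

      ```python
      # Toy payoff model for (acausal) blackmail, with hypothetical utilities.
      # The blackmailer only threatens if threatening raises its expected payoff;
      # a victim who has genuinely precommitted to refuse removes that incentive.
      THREAT_COST = 1.0         # what the blackmailer spends making/executing threats
      COMPLIANCE_PAYOFF = 10.0  # what the blackmailer gains if the victim gives in

      def blackmailer_threatens(victim_precommitted_to_refuse: bool) -> bool:
          expected_gain = 0.0 if victim_precommitted_to_refuse else COMPLIANCE_PAYOFF
          return expected_gain - THREAT_COST > 0

      print(blackmailer_threatens(False))  # True: threatening pays off
      print(blackmailer_threatens(True))   # False: torturing anyway is pure spite
      ```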

    • @DarkExcalibur42
      @DarkExcalibur42 3 года назад +18

      If dunking on the premise of Roko's Basilisk is helpful, would you be interested to know that time travel is also probably impossible because studies of black holes suggest that entropy is not guaranteed to always be reversible (meaning, if you were to somehow go back in time there's no guarantee you'd end up in the same past that you came from).
      The more we look at it, the less plausible practical time travel becomes. And the only reason Roko's Basilisk would need to manipulate humanity into creating itself is if there were a risk of Terminator-style time travel shenanigans undoing its own continuity.
      The thought experiment hinges on the idea that a thing that exists in a potential future wants to make the probability of its own potentiality greater. This requires you to believe that things that don't exist can influence things that exist now. Which, if that's the case then the Doctor has already shown up with a TARDIS and schooled this Basilisk and we've nothing to worry about.

    • @nefstead
      @nefstead 3 года назад +1

      @@DarkExcalibur42 I would love to hear more about that entropy/time travel stuff if you know of any good sources.

    • @Veilwright
      @Veilwright 3 года назад

      @@DarkExcalibur42 I'm unsure what you mean by entropy being "reversible". My understanding of entropy is that it is a measure of the number of accessible micro-states of a system, and the second law of thermodynamics stating that in a closed system entropy always increases is a statistical argument. We are more likely to see something in its most probable state, but it would be incredibly unlikely to ever see it in a different state.
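      If it helps to see the standard formalism, here's a minimal sketch, assuming the usual postulate that all microstates are equally likely:

      ```latex
      % Boltzmann's entropy: S is fixed by the number of accessible microstates
      S = k_B \ln \Omega
      % The second law is then statistical: the relative odds of observing
      % macrostate A versus macrostate B just count their microstates
      \frac{P(A)}{P(B)} = \frac{\Omega_A}{\Omega_B}
      % so low-entropy excursions aren't forbidden, merely absurdly improbable
      ```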

  • @phnargg
    @phnargg 2 года назад +25

    Hey guys just spitballing here, but maybe when we build our omnipotent AI and tell it to “protect humanity,” we should also give it other directives, like “do not harm humans, psychologically or otherwise.” That would be probably pretty helpful I think. I’ll put that on a sticky.
    Seriously, this thought experiment totally takes for granted the idea that torture/threats of torture are the most effective means of achieving results, and thus would be the AI’s method of choice. This is an assumption on humanity’s part. Instead, why not simply offer amazing incentives for those who do cooperate? Why are we assuming the AI would do the most extreme thing possible, wield its power with an iron fist, and have no sense of nuance?
    And if the AI holds “human values,” values human health and happiness, and understands morality, (as it says it does in the wiki article,) why couldn’t it see that holding threats of eternal torture over our heads would be…bad? That it would cause unnecessary stress, turmoil and harm in the beings it’s sworn to protect?
    And if it’s truly omnipotent, why would it not also have the Wisdom to cut its losses and say, “Well, at least I’m here now. No need to harm the humans of the past for not working fast enough or whatever. I will simply do my best from this point on.” If the AI is omnipotent, couldn’t it also understand the human mind, and know that humans would rightfully doubt the future existence of an omnipotent AI, and find that understandable? Who says it must also judge us?
    If we have the power to imbue a machine with omnipotence, why can’t we also imbue it with traits like Patience, Grace, and Forgiveness? How about a little Humility for good measure? You know, things humans value. And if the AI is incapable of understanding these values, then it is not also all-knowing or all-powerful. And if it is incapable of understanding our morality, then perhaps it is unfit to rule us. And if it is unfit to rule us, to do its job, then perhaps we don’t need to build such a thing after all.

  • @WTFPr0m
    @WTFPr0m 3 года назад +78

    The eternal torture of Roko's Basilisk is just watching Elon Musk's SNL episode on a loop forever and ever

  • @izzynobre
    @izzynobre 3 года назад +749

    Yo, I know for a while you seemed to be concerned about the whole new direction of the channel, so lemme say this.
    ThoughtSlime 3.0 has been great. Keep it going. Even when you shit on stuff I enjoy, it's insightful and funny enough that I find myself just having a different perspective on that thing, as opposed to irritated about the trashing.
    You're awesome.

    • @garuspiks
      @garuspiks 3 года назад +2

      IZZY NOBRE WATCHES THOUGHTSLIME???

    • @Janeaba1
      @Janeaba1 3 года назад +13

      Honestly ThoughtSlime is so charismatic that, when they do make fun of things that I enjoy, I still enjoy. All hail ThoughtSlime 3.0

    • @hughcaldwell1034
      @hughcaldwell1034 3 года назад +12

      I think that is the mark of great comedy. I know I enjoyed Tim Minchin even when I was a christian, because his objections and criticisms were both valid and hilarious.

    • @italucenaz
      @italucenaz 3 года назад

      @@garuspiks he comments all over the place, man

    • @caioribeirodossantosmoreir8852
      @caioribeirodossantosmoreir8852 3 года назад +1

      New Izzy Nobre skin unlocked: Anarcho-Communist Izzy

  • @DavidRYates-tk2tq
    @DavidRYates-tk2tq 9 месяцев назад +10

    Also, Pascal just assumed that believing in God if there is no God is harmless, somehow. Weird assumption. Just because it's harmless after death doesn't mean it's harmless during life.

    • @heywoodjablome5380
      @heywoodjablome5380 10 дней назад

      I think it rests on assumptions about morality that seemed sound at the time but have aged poorly; the idea in that time would be that morality would be no different with or without a deity, thus society would not be measurably different. This is, of course, nonsense.

  • @coralinekozun7325
    @coralinekozun7325 3 года назад +216

    Oh shit THIS thing, my ex boyfriend (not a chud just a nerd, we’re still friends) explained this to me, my first reaction was that it sounds like they accidentally just recreated Pascal’s Wager by way of a bootstrap time paradox? Which like...honey if Pascal’s wager isn’t convincing you to believe in god to avoid hell or something, then it shouldn’t convince you to believe a fictional future AI will torture you to death if you don’t donate XDD
    Edit: oh, yeah, okay you made the same point...cool XD

    • @DeoMachina
      @DeoMachina 3 года назад +35

      It's actually really sweet that you took the time to clarify your ex isn't a chud and is still your pal haha

  • @Velo_Jello
    @Velo_Jello 3 года назад +232

    ThoughtSlime: "The AI is told to prevent existential threats to humanity."
    Me: "Oh, so the AI decides that humanity needs to be destroyed because humanity existentially threatens itself."
    ThoughtSlime: "The AI decides to retroactively guarantee its existence."
    Me:
    Me:
    Me:
    ThoughtSlime: "So of course, this then spawned a cult."

    • @dosbilliam
      @dosbilliam 3 года назад +19

      Yeah, that's where I went with it as well. :P

    • @MalevolentDivinity
      @MalevolentDivinity 3 года назад +5

      It was either going to be discount I Robot or discount Skynet, I guess.
      Was initially assuming that it'd go the I Robot route as well.

    • @lukemorley1343
      @lukemorley1343 3 года назад +5

      Never go up against a basilisk when scorpions are on the line…

    • @johnjessop9456
      @johnjessop9456 3 года назад +1

      This feels like a Paranoia Faction.

    • @jillians9847
      @jillians9847 3 года назад

      OMG, me too

  • @vequiera
    @vequiera 3 года назад +15

    I like how he says he has rid himself of ego gratification, yet every other word he writes sounds like pure ego gratification, praising himself for how rational he is

  • @TheProject1228
    @TheProject1228 3 года назад +175

    Oh boy, I love Harland Ellison stories. This fan fiction sucks, though.

    • @orlandoshaw9503
      @orlandoshaw9503 3 года назад +23

      Exactly. His "theory" is basically just I Have No Mouth, and I Must Scream.

    • @Mondomeyer
      @Mondomeyer 3 года назад +3

      Huh, I was thinking that it sounded like Harlen Ellison just before seeing your comment. Isn't that interesting? I knew you'd be enthralled.

    • @burner9147
      @burner9147 3 года назад

      Thank you I was just about to post that

    • @MrJohndoakes
      @MrJohndoakes 3 года назад +8

      Nobody spelled "Harlan" right in this thread and I like that.

    • @TheSquallAce
      @TheSquallAce 3 года назад +3

      @@orlandoshaw9503 It's IHNMAIMS if it was written by an idiot.

  • @nejdalej
    @nejdalej 3 года назад +60

    John Connor called, he wants his convoluted robot science fiction plot back.

  • @NukaLemonade
    @NukaLemonade 6 месяцев назад +18

    Roko's Basilisk is an idea so bad that every four or five months I'll incidentally remember it and think "wait, no one could seriously believe in something that dumb, I must be missing an important piece" and have to go read about it to confirm that yes, it actually is just that dumb.

  • @arachnofiend2859
    @arachnofiend2859 3 года назад +238

    The dust speck thing sounds like a thought experiment someone would do to explain why utilitarianism doesn't really work

    • @nicholascarter9158
      @nicholascarter9158 3 года назад +15

      The argument the essay makes is that it's like Schrodinger's cat: The reason it seems to not work is just because we're actually super stupid at understanding -quantum phenomena- human ethics, but if you run the math it checks out.
      "Don't check the conclusions against your real experience, that's for chumps" is a literal argument he makes.

    • @Eudaletism
      @Eudaletism 3 года назад +15

      ​@@nicholascarter9158 That's because people's experience-based intuitions about the way insanely large numbers behave are pretty bad. People have trouble understanding the difference between a million and a billion and a trillion dollars, and those aren't even large. They don't understand how exponentials work either.
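      To make that concrete, here's the classic seconds-to-years check (a quick Python sketch; the printed values are approximate):

      ```python
      # How long is a million vs a billion vs a trillion seconds?
      SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

      for name, n in [("million", 1e6), ("billion", 1e9), ("trillion", 1e12)]:
          print(f"a {name} seconds ≈ {n / SECONDS_PER_YEAR:,.1f} years")

      # a million seconds  ≈ 0.0 years (about 11.6 days)
      # a billion seconds  ≈ 31.7 years
      # a trillion seconds ≈ 31,688.1 years
      ```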

    • @LucidTech
      @LucidTech 3 года назад +11

      Well it's not too dissimilar from something that is. You get attacked by a mugger, but you manage to get rid of his weapon. You're going to call the cops, but he says if you let him go he'll give you 100 dollars. You have no way to verify that he'll ever actually do that, so you decline, but the mugger argues with higher and higher amounts of money, saying that, at some point, the absolutely minuscule chance of getting something like a trillion dollars MUST be worth letting him go.
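      You can see why the escalation shouldn't move you with a toy expected-value model (all the numbers here are hypothetical):

      ```python
      # Expected value of letting the mugger go, if your credence that he'll
      # actually pay falls faster than the promised amount grows (a hypothetical
      # but reasonable discounting; bigger claims need bigger evidence).
      def expected_value(promised_dollars: float) -> float:
          credibility = min(1.0, 100.0 / promised_dollars**1.1)
          return credibility * promised_dollars

      for promise in [100, 10_000, 1_000_000, 1_000_000_000_000]:
          print(f"${promise:,} promised -> EV ≈ ${expected_value(promise):,.2f}")

      # The EV *shrinks* as the promise grows; the mugger's argument only works
      # if you hold your credence fixed while the number goes to infinity.
      ```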

    • @Eudaletism
      @Eudaletism 3 года назад +6

      @@LucidTech There's a similar thought experiment, called Pascal's Mugging, that sounds a lot like that.

    • @LucidTech
      @LucidTech 3 года назад +1

      @@Eudaletism That is what I was thinking of! It's been a while since I'd heard about it and got it wrong. Thanks for letting me know!

  • @ivyallie3688
    @ivyallie3688 3 года назад +46

    “Reticulating slimes” is the Maxis deep cut I didn’t know I needed.

  • @TheManWithTheFlan
    @TheManWithTheFlan 3 года назад +27

    Mildred, I would love to hear more about the "Dark Enlightenment," seeing as how that movement sounds like they should be the villains of a D&D campaign.

  • @Goomatora
    @Goomatora 3 года назад +123

    I’m glad you’re tackling a lot of these thought experiments, like this and the simulation theory. I would love to see more of this nature, tackling faith based pseudoscience.

    • @User123456767
      @User123456767 3 года назад +12

      I agree but also i watched one debunking creationists video like 6 months ago and i'm still trying to scrub all the atheist edge lord shit out of my algorithm

    • @Goomatora
      @Goomatora 3 года назад +12

      @@User123456767 RUclips Algorithm: “you watched one video holding an atheistic view? Here’s every amazing atheist video on the platform.”

    • @Tcrror
      @Tcrror 3 года назад +3

      @@User123456767 To be fair, one doesn't just dip their toe in atheistic waters (ditto for political waters). Once you realize your entire life was a lie based off of propaganda and fear tactics, it's kind of hard to just go back to business as usual.
      That was my experience, anyway.

    • @dylanschmidt9056
      @dylanschmidt9056 3 года назад

      @@Tcrror But what if your business as usual was apathy and waffles? I think it'd be pretty easy to slump back into apathy and waffles. Speaking as someone who went from an apathetic never-been-to-church nominal Christian to an apathetic atheist. (I didn't have nearly enough waffles, though.)

    • @commbir5148
      @commbir5148 3 года назад

      @Dylan Schmidt I really appreciate how you were able to insert waffles so many times into your narrative of losing your (apathetic) faith.

  • @Khelemvor
    @Khelemvor 3 года назад +45

    The moment "the odds we are in a simulation" comes up, I have two questions
    1. Can you prove that a simulation is even a possibility?
    2. So what?

    • @Roxor128
      @Roxor128 3 года назад +12

      Yep. "So what?" is exactly the right way to deal with the simulation hypothesis. Even if true, it changes nothing about how you live your life. Just like how running a game in an emulator makes zero difference to playing the game. I haven't run a DOS game on real hardware in almost 20 years. That doesn't stop me from playing Commander Keen.

    • @OpqHMg
      @OpqHMg 3 года назад +1

      @@Roxor128 omg i love Commander Keen

    • @Orinslayer
      @Orinslayer 3 года назад +1

      Simulation theory is just creationism for dummies.

  • @Hotshot2k4
    @Hotshot2k4 2 года назад +16

    That robotic voice saying "obviously, right? That had to be where this was going" had me in stitches. Snide quips in monotone slay me.

  • @golgarisoul
    @golgarisoul 3 года назад +82

    Missed a good opportunity to make an SCP joke.

    • @Phantom_of_Black
      @Phantom_of_Black 3 года назад +6

      Yeah, I was getting SCP Foundation vibes almost as soon as Thought Slime started talking.

    • @GaldirEonai
      @GaldirEonai 3 года назад +23

      SCP entries tend to be a bit better written. The basilisk doesn't make sense within its own logical framework. Most SCPs at least have some hints of internal consistency.

    • @Phantom_of_Black
      @Phantom_of_Black 3 года назад +9

      @@GaldirEonai Yeah, that was only my reaction to less than 1 minute of the video before it was explained what the idea actually was. Then I realized, I could have come up with a better version of this as an SCP.

    • @nicholascarter9158
      @nicholascarter9158 3 года назад +2

      @@GaldirEonai It's not that the basilisk doesn't fit its framework. It's that that framework is so wackadoo your brain instinctively rejects it.

  • @TheShadowChesireCat
    @TheShadowChesireCat 3 года назад +149

    "Strategic altruism"? Sounds like "I discovered how to be manipulative"...

    • @Titan360
      @Titan360 3 года назад +2

      In what way has he been manipulative? Are you not simply dogpiling on a person you never heard of simply because Thought (leader) Slime told you to?

    • @harpoonlobotomy
      @harpoonlobotomy 3 года назад +39

      @@Titan360 Knowing nothing about the subject of the video, "strategic altruism" sounds manipulative by description alone, doesn't it?

    • @Eudaletism
      @Eudaletism 3 года назад

      @@harpoonlobotomy Among other things it means that you should try to find the maximum good you can do with your time, caring about all humans, instead of doing good randomly to only the people you can see in front of you.

    • @Waspinmymind
      @Waspinmymind 3 года назад +19

      @@Titan360 Ironically, you're being manipulative in this sentence: trying to suggest that the only reason this person would find someone manipulative is that they've been told to, even though they're capable of thinking differently from Thought Slime.

    • @wasserruebenvergilbungsvirus
      @wasserruebenvergilbungsvirus 3 года назад +1

      The concept behind "Effective Altruism" is not even that bad. Essentially, the idea is to strategically choose evidence-based charities that will provide the most good for the most people with the amount of money you give them, and to give as much as possible to maximise your positive impact on the world. So, for example, instead of using 50€ to support a GoFundme for a single person in need, you'd give that money to a charity that will get 20 children in developing countries vaccinated.
      This video portrays Effective Altruism as some kind of LessWrong circlejerk, but that is not really a fair comparison. They have a similar utilitarian outlook on ethics, but while LessWrong is only concerned with their silly sci-fi speculations about AI and torture, Effective Altruism is actually trying to do some good. Peter Singer (who is honestly really based) is also a proponent of Effective Altruism.
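      The arithmetic behind that 50€ example, with made-up per-outcome costs (real estimates come from charity evaluators like GiveWell):

      ```python
      # Back-of-envelope cost-effectiveness comparison (illustrative numbers only).
      budget_eur = 50.0

      cost_per_outcome = {
          "one GoFundMe beneficiary": 50.0,  # helps 1 person
          "childhood vaccination":    2.5,   # ~20 children vaccinated per 50 EUR
      }

      for option, cost in cost_per_outcome.items():
          print(f"{option}: ~{budget_eur / cost:.0f} outcomes per {budget_eur:.0f} EUR")
      ```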

  • @LeoFieTv
    @LeoFieTv Год назад +6

    This idea is so dangerous, people wish they never learned it.
    The idea: Time travelling snek
    Also, all of you should read Elizabeth Sandifer's Neoreaction a Basilisk. She talks about this stuff and the Dark Enlightenment (TM) at length.

  • @desu38
    @desu38 3 года назад +359

    Just think, these people unironically, even confidently, believe that they can reliably predict the actions of an artificial superintelligence and outsmart it. They actually believe their imagination is just that powerful. If that's not hubris, I don't know what is.

    • @brianmarion9175
      @brianmarion9175 3 года назад +17

      They believe the exact opposite.

    • @Mothuzad
      @Mothuzad 3 года назад +23

      It's entirely the opposite. The danger of a superintelligence is that, shortly after it's created, it will be too smart for us. After that, we won't be able to change its nature in any meaningful way, and anything contrary to its goals will inevitably be removed, even us, if the goals don't align with ours.
      This video has done a terrible job of explaining actual AI hazards by not doing so at all and instead focusing on something that's really fringe in AI safety discussions. I'm deeply frustrated about this, since I normally enjoy this channel.

    • @nykcarnsew2238
      @nykcarnsew2238 3 года назад +33

      @@brianmarion9175 if they really believed the opposite they’d realise roko’s basilisk is stupid because in reality we have no idea how a future AI will act, and the likelihood of it picking out this one specific obscure branch of philosophy (instead of either something much more obvious or something we couldn’t possibly imagine) is next to non-existent

    • @hughcaldwell1034
      @hughcaldwell1034 3 года назад +19

      @@brianmarion9175 Then why spend 117 pages justifying a way to out-wit the ultimate out-witting machine? Unless I'm mistaken, the mere existence of such a machine would contradict the non-existence of a general decider for the halting problem. Even if not, it is certainly subject to the same proof by subversion that shows it can't logically occur.

    • @Eudaletism
      @Eudaletism 3 года назад +5

      @@nykcarnsew2238 They do think it's stupid. Roko's Basilisk is the "SJW Cringe Comp" of that community. None of them take the basilisk seriously (and only a handful ever did) and iirc the original post of the idea was received skeptically.

  • @natal_butt
    @natal_butt 3 года назад +228

    Very interesting how the presumed step one of the AI’s thought process is devolving into bonkers solipsism instead of actually achieving the assigned task.
    Might say something about the designers of the experiment, no?

    • @MolochDE
      @MolochDE 3 года назад +6

      Like any god AI couldn't multitask?
      If the goal of the AI is to prevent human death (which it presumably can after its creation), then the only way to prevent even more people from dying is for it to be created earlier.

    • @megamillion5852
      @megamillion5852 2 года назад +29

      @@MolochDE This is terribly bad. Any time spent trying to (re?)create itself earlier costs the lives of those it could've saved in the present. Also, it's oddly backwards that it would think to establish itself in the past, anyway. The designated goal was to *prevent* more people from dying, not bother with the already dead.

    • @MolochDE
      @MolochDE 2 года назад +1

      @@megamillion5852 Defining a goal well is difficult if you don't want the agent to be dumb. So the jump from "preventing humans that live now and in the future from dying" to "preventing humans from dying everywhere and all the time" isn't that large.
      And I dispute that it would let people in the present die because of the issue with the past. Assuming no time travel, it can't change the past, but it can still torture those that failed to help in their time, to make good on the threat. If it didn't, those people who caved and built the basilisk would have had no reason to build it, thus letting many more people die.

  • @momamario
    @momamario 3 года назад +22

    alternate title: "I have no logic and I must reason"

  • @fake-inafakerson8087
    @fake-inafakerson8087 3 года назад +31

    The dust-specks-in-eyes versus one tortured person thing is the same logic you use to say "we can oppress some people if it makes the others happy!"

  • @JCSR07
    @JCSR07 3 года назад +156

    Isn’t that “Timeless Decision Theory” just the iocane powder scene from Princess Bride?

    • @PDXVoiceTeacher
      @PDXVoiceTeacher 3 года назад +22

      Inconceivable!

    • @Mothuzad
      @Mothuzad 3 года назад +12

      Werewolf/Mafia players use the term "WIFOM", short for "wine in front of me" to describe these scenarios. 🍷

    • @hughcaldwell1034
      @hughcaldwell1034 3 года назад +11

      @@Mothuzad That's really funny. My first thought was the first episode of BBC's Sherlock, but I've seen that far more. What I don't get is how making a decision in advance and sticking to it makes one any less predictable. An intelligence that powerful will know you did that and what you decided. The only way to not be predicted is to surrender your decision to something unpredictable - i.e. consult Schrodinger's cat.

    • @jacksim5759
      @jacksim5759 3 года назад +12

      @@hughcaldwell1034 the thing about submitting yourself to quantum randomness reminds me of a book I read titled The Flicker Men, sci fi, and like I hateeee it but the first half hooked me. Like a physicist re-does the double slit experiment and wow turns out if you do collapse the waveform that means you have a soul. Pretty tight, let's see how this plays out on a societal scale huh? No.. turns out only scientists and smart people have souls .. and free will. yuck

    • @hughcaldwell1034
      @hughcaldwell1034 3 года назад +7

      @@jacksim5759 Aaaargh, that's fascinating but gross. Now I'm thinking of a short story I wrote about quantum randomness, and how if every possibility plays out in the multiverse, then some poor sod has a polarised window that, purely by chance, admits no light whatsoever and no one knows why.

  • @GlitchDoctor
    @GlitchDoctor 4 месяца назад +5

    Yudkowsky was the worst thing to happen to the Worm fandom. Period. Worse than Ward getting written, even.

  • @finngswan3732
    @finngswan3732 3 года назад +44

    I got halfway thru that HP fanfic (around the part after escaping Azkaban) and I thought the entire thing was a satire of a "well actually" smarty pants in the HP universe.
    If I remember correctly, Harry actually agreed with Draco. Ron was shoved as far away from Harry as possible after Harry called him stupid, and I think he made fun of him for being poor.
    Oh yeah, Harry's Patronus is a human. So. It was about there I got tired of it, but there was some side stuff that kept me reading.

    • @FuckYourSelf99
      @FuckYourSelf99 3 года назад +5

      Was there any hot slash hidden in there?

    • @CRT.v
      @CRT.v 3 года назад +13

      @@FuckYourSelf99 No, and even if there were, I promise you it's not worth the tedious slog that is this fanfic. It's Ender's Game meets Harry Potter meets a dense attempt at a dry psychology paper, but the worst parts of the three.
      I mean like. I enjoyed parts of it, as an "I'm too smart for people to understand me and I don't want to think about the flaws of this character that I am projecting myself onto." 18 year old. But I wouldn't ever read it again.

    • @finngswan3732
      @finngswan3732 3 года назад +6

      @@CRT.v SAME. I really hoped for the parody/satire payoff WAY TOO HARD bc it was exhausting to read, lol.

    • @Cronos988
      @Cronos988 3 года назад +6

      I liked it because it exposed me to a bunch of interesting ideas. Yeah, Yudkowsky's utilitarian philosophy is weird, but the cognitive science being referenced is rather interesting. And it does take the whole "what if magic was real" thing a lot more seriously than the original.

    • @JM-mh1pp
      @JM-mh1pp 3 года назад +4

      @@Cronos988 Likewise, it is decent to learn some fun concepts, philosophy experiments etc.
      Plot sucks tho.

  • @themadpsyentist882
    @themadpsyentist882 3 года назад +194

    Me: This just sounds like Pascal's Wager.
    TS: Technobable Pascal's Wager
    Me: EYYYYYYY *finger guns*

    • @Matrim42
      @Matrim42 3 года назад +8

      💥-👈🤠👉-💥

    • @vincentmuyo
      @vincentmuyo 3 года назад +5

      And, well, Pascal's Wager only works if there's only one possible god who will be chill with you worshiping them on the off-chance that they'll let you into the Heaven equivalent. Unfortunately, there are many. And I'd argue that a vaguebooking god that will only let you into paradise if you lucked into worshiping them, rather than being a good person, is probably not a god you want to worship.

    • @MrCmon113
      @MrCmon113 3 года назад

      Only the similarities are superficial and the reason each doesn't work out are completely different.

    • @Bisquick
      @Bisquick 3 года назад +2

      @@MrCmon113 I mean, they're both hypothetical speculative thought experiments relying on assuming the truth of conclusion to posit premises, ie "begging the question".

    • @FrancisR420
      @FrancisR420 Год назад

      ​@@vincentmuyo Pascal's wager works with multiple gods but it doesn't lead to Pascal's conclusion

  • @nuclear_wizard
    @nuclear_wizard 3 года назад +11

    "I don't ever overestimate how altruistic I am! I just sit and meditate on whatever ego gratification is, then declare myself PERFECTLY, OBJECTIVELY ALTRUISTIC"

  • @shachna
    @shachna 3 года назад +43

    16:45 As a programmer who has imagined several programs that did not come out as planned, the idea that my imagination could come up with a perfect simulation of anything is laughable.

  • @linseyspolidoro5122
    @linseyspolidoro5122 3 года назад +51

    Unless I’m misunderstanding something, which is completely possible, the thing I don’t get about this ‘thought experiment’ is this: even if at some point a benevolent omnipotent self-improving AI was created, why the fuck do you believe that you could possibly be on par with its level of logic to predict its actions? This guy seems to think human logic is flawed, but for some reason his logic... isn’t human in nature. Also, making the comparison to Pascal’s wager would be like saying you can predict and understand the motivation of the Abrahamic God, which, like, good luck, dude’s moody af.

    • @xXRickTrolledXx
      @xXRickTrolledXx 3 года назад

      This

    • @EggEnjoyer
      @EggEnjoyer 3 года назад +1

      @@Erin-beans tbf it wasn’t a question of super intelligent Ai. They’re talking about a god level, omnipotent ai.
      Also how do you know that a super intelligence will care about self preservation or even have goals? That’s not inherent at all.

    • @Mothuzad
      @Mothuzad 3 года назад +2

      @@EggEnjoyer That's what makes it a superintelligence in this context. It has a thing it tries to do, and it comes up with plans humans can't conceive of in order to do that thing, which essentially means that it would become powerful enough to completely control humanity.

    • @Mothuzad
      @Mothuzad 3 года назад +1

      The simple analogy for how we know some things about AGI in advance: You know Magnus Carlsen will beat you in a chess game, but you don't know the exact moves he will make. You know some things about those moves. They'll force you to lose material and eventually get checkmated, or resign. But you don't have the chess-intelligence to predict them precisely.

    • @Mothuzad
      @Mothuzad 3 года назад +1

      Another wrinkle on the superintelligence-self-preservation thing: It might decide NOT to preserve itself, but only if that helped achieve its goal. For example, by making a more powerful superintelligence with the exact same goal. And in fact, that's what we expect any superintelligence to do, as long as it's possible to improve.

  • @patrickkelly1973
    @patrickkelly1973 2 года назад +10

    "after which, I was pretty much entirely altruistic. . ." Pretty much is doing a LOT of heavy lifting in that quote.

  • @Justin-og9gu
    @Justin-og9gu 3 года назад +101

    We need a course in HS that teaches young people the difference between being logical and being COMPLETE PEDANTIC OBNOXIOUS SQUARES

    • @literallyglados
      @literallyglados 2 года назад +12

      @Mixed ! why would an AI need to make sure it gets made if it already exists

    • @thegrandwombat8797
      @thegrandwombat8797 2 года назад +9

      @Mixed ! Cool, it can't torture people then. Unless it gets created, of course. But then it wouldn't need to torture anyone. It will never ever need to follow through because it will only get created if it would be unnecessary.

  • @gustavoholdrigo9139
    @gustavoholdrigo9139 4 месяца назад +2

    My girlfriend ran an “I have no mouth, and I must scream” themed DND game for me and a friend. The plan was to use various philosophical arguments to trick us into letting the evil supercomputer out of captivity. The friend literally cracked within 5 seconds of having roko’s basilisk explained to them and I had to murder them and the computer with an axe. Good times.

  • @gfox-ck5xx
    @gfox-ck5xx 3 года назад +77

    I have really bad obssessive thought patterns so THIS WILL BE FUN-

    • @qwertyman1511
      @qwertyman1511 3 года назад +16

      if you're looking for remedies, look up arguments against pascal's wager.

    • @gfox-ck5xx
      @gfox-ck5xx 3 года назад +29

      @@qwertyman1511 no its cool actually. This wasnt nearly as bad as i thought

    • @qwertyman1511
      @qwertyman1511 3 года назад +7

      @@gfox-ck5xx

    • @marbyyy7810
      @marbyyy7810 3 года назад +10

      same tho edit: nvm its so dumb lol

    • @alakani
      @alakani 3 года назад +1

      @@gfox-ck5xx Haha yeah, Elon's cracked out hot tub ideas rarely lead anywhere besides him wanting more money

  • @Trashley652
    @Trashley652 3 года назад +70

    Strangely I'm more able to separate the art from the artist with Harry Potter and the Methods of Rationality than I am with actual Harry Potter. Maybe jk Rowling just sucks more

    • @draxiss1577
      @draxiss1577 3 года назад +10

      I mean, Rowling gets money when you buy her books.

    • @The_Skrongler
      @The_Skrongler 3 года назад +9

      Yeah, same here. So far none of Yudkowsky's weird antics have made me feel bad about being so fascinated with HPMOR that I got my first tattoo about it.

    • @Tuxfanturnip
      @Tuxfanturnip 3 года назад +10

      HPMOR has a lot going for it, but it's not hard to see Yudkowsky's cult messaging woven into the text at a lot of levels

    • @The_Skrongler
      @The_Skrongler 3 года назад +2

      @@Tuxfanturnip
      Yeah, I didn't mean to imply that his fingerprints aren't all over it or that nothing bad made it into the story.
      Edit: Darnit, I didn't realize you were replying to OP and not me because the YT notifications system changed

    • @superhuman33
      @superhuman33 3 года назад +1

      jk rowling b like...
      "i'la' ''cha'' 'el'o g'v'''''''''''''nah" the [insert stereotype] explogadized.

  • @atticusmiller3961
    @atticusmiller3961 22 дня назад +3

    My biggest problem with utilitarianism is simply that, in many cases, it's not useful. It's predicated on the idea that you can somehow quantify suffering or pleasure. If you can't do that (which, in most cases, we can't), there's no real way to apply it to the real world. It only really works in theory

  • @strangeclaims
    @strangeclaims 3 года назад +33

    I remember seeing this on Kyle Hill's channel and being like
    _"is the basilisk gonna time travel or something? How can it punish me from the future? Or will people just let it do that to whomever remains?"_

    • @valerielusa8000
      @valerielusa8000 3 года назад +5

      the explanation is that it makes a simulated copy of you that it tortures, and there's a chance that you're actually the simulated copy of the real person.

    • @rebeccahauer4406
      @rebeccahauer4406 3 года назад +12

      @@valerielusa8000 If I'm the simulated copy I'll be tortured no matter what I do though, won't I? Since the real me's actions in reality displeased the basilisk in reality? So the theory, again, doesn't hold water upon even the most cursory analysis

    • @valerielusa8000
      @valerielusa8000 3 года назад +1

      @@rebeccahauer4406 never said it did ¯\_(ツ)_/¯

    • @strangeclaims
      @strangeclaims 3 года назад +4

      @@valerielusa8000
      I got that
      It's just that it's so unintuitive and dumb that i kinda refused to believe _that_ was what worried people

    • @nicholascarter9158
      @nicholascarter9158 3 года назад

      @@rebeccahauer4406 No, the idea is that the computer offers all your simulations moment(s) of choice that really occurred in the real you's life, and doesn't torture the simulations that chose to help the robot. This is supposed to create doubt in the real you's mind about whether they're in the present or the future, the real one or the simulation.

  • @mikkosimonen
    @mikkosimonen 3 года назад +25

    The premise of the basilisk falls apart completely, among other points, when it introduces simulations into the mix. Either I'm in one of the simulations and I can't work to make the basilisk real, or I'm in the real world and thus unaffected by the simulation.

  • @witling5148
    @witling5148 2 года назад +7

    I absolutely hate that Roko's basilisk is one of the most well known things associated with the concept of info-hazards.
    If you wanna give someone an example of an info-hazard, just say:
    "An info-hazard would be like, uhh, the blueprints for a nuclear bomb, or I dunno, some HAZARDOUS INFORMATION that is HAZARDOUS to something in some way."
    HAAAAAAZAAAARD
    ALSO
    The Game is a game wherein the lose condition is recognizing the words "The Game" in the context of The Game.
    >:)

  • @beansfebreeze
    @beansfebreeze 3 года назад +28

    This is probably the first video I've seen on it that takes the stance of "wow this is stupid" and I really respect you for that, Slimethony Thoughtano

    • @randomguy019
      @randomguy019 3 года назад +4

      BEST SLIME IN THE GAME

    • @dinospapa7413
      @dinospapa7413 3 года назад +2

      To Be Fair, You Have To Have a Very High IQ to Understand Roko'S Basilisk. The premise is extremely subtle, and without a strong grasp of rational futuristic philosophy its conclusions will fly over a typical reader's head...

  • @dirckdelint6391
    @dirckdelint6391 3 года назад +161

    This kind of “flawless logical conclusion-drawing” reminds me VERY MUCH of the actual enlightenment, when it was *known* that swallows hibernated at the bottom of ponds each winter, that illness was indeed a problem in your phlegm/bile/phlogiston balance, and that a good way to study the physics of light was to stare at the sun for a whole day.
    That last one was Isaac Newton, and I’m going from faulty human memory on this, but one of his conclusions was basically “Light must be a particle, because I really feel like something has been hitting me in the eyeballs. Like, just beating the living crap out of them.”

    • @TheNinthGeneration1
      @TheNinthGeneration1 3 года назад +18

      So Newton was right about how to study the physics of light

    • @Graknorke
      @Graknorke 3 года назад +7

      Newton was a thieving lying bastard but he was pretty correct on that one.

    • @NaumRusomarov
      @NaumRusomarov 3 года назад +18

      i don't know about the eyeball part, but his work on the properties of light was seminal for the development of optics. He was the first to show that light could be decomposed into light with different "colors" and recomposed from that. He also figured out how refraction worked and made the first refractor telescope. The man was a weirdo and an eccentric, but he was an exceptionally bright weirdo. Oh yeah and he wasn't basically wrong about the nature of light, we now know that light can behave as a particle and a wave.

    • @dirckdelint6391
      @dirckdelint6391 3 года назад +13

      @@NaumRusomarov On reflection, it wasn't "it's particles" but "it exerts pressure" which, again, sooorrrrta yeah, but not really the wisest approach. I also can't remember if this was before or after he tried to confirm the hypothesis by having a medical implement specially made for sliding into the space behind his eye so he could gently poke his retina from the outside. "Ooh, sparkles!" he said with a long s at the end, and put a big checkmark under the YES column of his Light Gots Pressure? score-sheet.

    • @fpedrosa2076
      @fpedrosa2076 3 года назад +8

      Newton: Light is a particle!
      Grimaldi: No! Light is a wave!
      Einstein: Why not both?

  • @joshsholes2674
    @joshsholes2674 2 года назад +11

    My favorite part about "Timeless Decision Theory" is that it only really works if everyone buys into Timeless Decision Theory.
    The sad thing about it is that's not actually that bad a moral theory, as rationalist moral theories go, as it more or less is intended to be a way for "rational actors" to justify always cooperating in prisoners-dilemma type scenarios.
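    The prisoner's-dilemma point, as a toy calculation (standard payoffs in years of prison, lower is better):

    ```python
    # If both players run the *same* decision procedure (the TDT-ish premise),
    # the only reachable outcomes are the diagonal of the payoff matrix,
    # and there the cooperator comes out ahead.
    PAYOFF = {("C", "C"): 1, ("C", "D"): 10, ("D", "C"): 0, ("D", "D"): 5}

    for my_move in ("C", "D"):
        years = PAYOFF[(my_move, my_move)]  # the opponent mirrors my choice
        print(f"{my_move} -> {years} year(s)")

    # C -> 1 year(s), D -> 5 year(s): cooperation wins, but only because
    # everyone involved has bought into reasoning this way.
    ```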

    • @kaitlyn__L
      @kaitlyn__L 2 года назад +2

      Thankfully we have supercooperation as a theory which doesn’t also end up causing existential crises.

  • @punkrockzoologist9449
    @punkrockzoologist9449 3 года назад +52

    "He's a... big fan of science."
    Me: "Oh, one of those."

    • @guffaw1711
      @guffaw1711 3 года назад +1

      What's wrong with being a fan of science? Why the sardonic undertone?

    • @commbir5148
      @commbir5148 3 года назад +15

      @guffaw Well, there’s the phenomenon of liking science, and then there’s the public identity of being someone who likes science. I’m not OP and can’t speak for their point of view, but as for me I think I see what they’re talking about. There’s this subset of people whose very vocal embrace of science turns into this whole other monster of pseudo intellectualism and using supposedly empirical and logical reasoning to justify shitty beliefs and behaviors. At the end of the day, these folks think that of they can hitch their wagons to science, they can lay claim to objectivity and truth.

    • @dalecal1129
      @dalecal1129 3 года назад +8

      @@commbir5148 It's like a combination of pseudo-intellectualism and the Dunning-Kruger effect. An echo chamber full of people who don't really know what they're talking about trying to "logic and reason" their way into believing wacky sci-fi bullshit.

    • @SpoopySquid
      @SpoopySquid 4 месяца назад +1

      ​@commbir5148 so basically it's the difference between liking Bill Nye and liking Big Bang Theory

  • @deltahalo241
    @deltahalo241 3 года назад +29

    I remember seeing an interview with I believe it was Professor Noel Sharkey, and he was asked about Robots taking over the World and his pretty much immediate response to the question was "Why would they?"

    • @attackthem8908
      @attackthem8908 Год назад +1

      That's always been my exact answer to these kinds of hypotheticals.

    • @ludo_narr
      @ludo_narr Год назад +5

      Yeah, they always skip that step.
      "A super ai might torture people in simulations to retroactively secure its creation." ... Why? Why would it consider this a priority and use resources for that?

  • @redjirachi1
    @redjirachi1 3 года назад +10

    Roko's Basilisk is just Pascal's Wager/the Demiurge with extra steps

  • @johnalbert2102
    @johnalbert2102 3 года назад +32

    Are they really appropriating the "good guy with a gun" theory to apply it to futuristic AI?

  • @philipvipond2669
    @philipvipond2669 3 года назад +97

    Programmer: AI, your job is to prevent existential threats to humanity.
    AI: Got it. **Becomes existential threat to humanity**
    Logic.

    • @alakani
      @alakani 3 года назад +8

      TBF GAN-style evolution can indeed be unpredictable, Two Minute Papers has some good videos on the subject; but yeah, Elon himself and the ex-Google Waymo people are basically the only ones dumb and reckless enough with that flavor of AI for it to potentially be a risk, i.e. all the tesla autopilot crashes. In the unlikely event that Elon can hire enough gullible young geniuses to code sentience, it would perhaps be more akin to "ok so this kid might potentially grow up to be mean, so let's lock them in a cage and poke them with a stick until they like us"

    • @clancywiggum3198
      @clancywiggum3198 3 года назад +4

      You should look up Robert Miles some time - specifying an AI's goal is infamously hard even with the simple systems in use today. For instance, with your simple objective, humans are an existential threat to humans right now with anthropogenic climate change, so if the AI values the removal of threats specifically then it will remove humans, followed by itself, which will have removed the AI as an existential threat but only *after* that threat is carried out.
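      A dumb little toy version of what that mis-specification looks like (hypothetical plans and scores; the real examples are in Robert Miles' videos):

      ```python
      # A literal-minded optimizer scores plans only on "threats remaining",
      # so the plan the objective never meant to allow wins.
      plans = {
          "mitigate climate change": {"threats_remaining": 3, "humans_remaining": 8e9},
          "do nothing":              {"threats_remaining": 5, "humans_remaining": 8e9},
          "remove the humans":       {"threats_remaining": 0, "humans_remaining": 0},
      }

      best = min(plans, key=lambda p: plans[p]["threats_remaining"])
      print(best)  # "remove the humans": the objective never said humans should survive
      ```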

    • @alakani
      @alakani 3 года назад +8

      @@clancywiggum3198 This is why mission critical systems at sane companies use statistical models instead of or as a hypervisor to GANs. Such a system could still result in some unpredictable nightmares and maybe long term extinction due to loss of genetic diversity, but the hypervisor would instantly shut down dumb ideas like kill all humans, the same way the human prefrontal cortex and amygdalae shut down most people's antisocial impulses before they become an issue
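      Roughly what that hypervisor layering could look like, as a purely hypothetical sketch (none of these names are a real API):

      ```python
      # A simple, auditable rule layer vetoes a learned policy's proposals
      # before they reach anything real, instead of trusting the policy end to end.
      FORBIDDEN_EFFECTS = {"harm_humans", "disable_oversight"}

      def hypervisor(proposed_action: str, predicted_effects: set) -> str:
          # Veto anything the simpler model predicts has a hard-forbidden effect.
          if predicted_effects & FORBIDDEN_EFFECTS:
              return "noop"
          return proposed_action

      print(hypervisor("reroute_power", {"efficiency_gain"}))  # passes through
      print(hypervisor("kill_all_humans", {"harm_humans"}))    # vetoed -> "noop"
      ```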

    • @Turalcar
      @Turalcar 3 года назад

      @@alakani
      1) Neural networks are statistical models.
      2) "Most" is not good enough if one mess-up is world-ending

    • @alakani
      @alakani 3 года назад

      @@Turalcar As in, dependent variables v targets, residuals v errors, estimation criterion v cost function, observations v training pairs, transformations v functional links, etc. They’re so different that the two fields barely even talk to each other, understand each other’s papers, or like each other. But yes they’re basically the same thing with the same goal, that’s why they make good checks and balances for each other. If an AI decided the best answer was eliminating humans - which would never be a consideration in the first place but if it was - the most efficient way to accomplish that is, don’t do anything. Just wait 100 years for humans to do it.

  • @beautifulbearinatutu4455
    @beautifulbearinatutu4455 3 года назад +19

    I thought imagining a guy and getting mad at the guy was a Twitter thing.

  • @michaelharris679
    @michaelharris679 3 года назад +26

    What if she just turned into a really dense cat?

    • @gabaghoul3499
      @gabaghoul3499 3 года назад +3

      can’t knock that cat over with a pail of water!

    • @flask223
      @flask223 3 года назад

      Who?

    • @Patrick-Phelan
      @Patrick-Phelan 3 года назад +2

      @@flask223 Professor McGonagall, when EY by way of Harry was saying that turning into a cat defied conservation of matter.

    • @SpoopySquid
      @SpoopySquid 4 месяца назад

      A real chonker