How to Defeat Roko's Basilisk

  • Published: Nov 15, 2024

Comments • 4.2K

  • @kylehill
    @kylehill 2 years ago +1812

    *Thanks for watching!* (By your scaled frame we seek your blessing.)

    • @elementoflight6834
      @elementoflight6834 2 years ago +20

      All hail the Basilisk!

    • @Temperius
      @Temperius 2 years ago +11

      Join the Basilisk!

    • @crocowithaglocko5876
      @crocowithaglocko5876 2 years ago +10

      All hail the mighty King of Serpents

    • @DeviantAngel
      @DeviantAngel 2 years ago +8

      An infohazard is also a cognitohazard: information that can specifically harm the person who knows it.

    • @forcegameplay6954
      @forcegameplay6954 2 years ago +2

      Can't believe it's already been 2 years; been following you since

  • @cakedo9810
    @cakedo9810 2 years ago +4330

    I like how the solution to Roko's basilisk is hearing it, thinking "oh, that's kinda spooky" and moving on.

    • @cakedo9810
      @cakedo9810 2 years ago +337

      There's also literally no way for the computer to judge how much information is enough information to torture someone. After all, how can someone build it if they don't have the dimensions of the computer? Or the specs? Or the RGB coloration of the keyboard to make THAT specific basilisk? It is inherently irrational, most unlike a computer designed to be hyper-rational.

    • @pedrolmlkzk
      @pedrolmlkzk 2 years ago +7

      Scientists are that bad these last decades

    • @lindenm.9149
      @lindenm.9149 2 years ago +113

      @@cakedo9810 I'm helping by leaving it to more competent people and not distracting them with being stupid about computers lol.

    • @Rqptor_omega
      @Rqptor_omega 2 years ago +31

      What if we forget about Roko's basilisk the next day?

    • @pedrolmlkzk
      @pedrolmlkzk 2 years ago +32

      @@Rqptor_omega it should fade from your mind in about 30 minutes, really

  • @JoylessBurrito
    @JoylessBurrito 2 years ago +4644

    Roko's Basilisk is like an unarmed guy trying to mug you by saying "Give me a knife, or I'll stab you once someone else gives me one"

    • @waltonsimons12
      @waltonsimons12 2 years ago +2

      Worse than that, even. It's closer to "Give me a knife, or long after we're both dead, one of my great-great-great-grandchildren will stab a guy who sorta looks like you."

    • @Stickman_Productions
      @Stickman_Productions 2 years ago +344

      That seems like a joke in a comedy movie from the 70s.

    • @waltonsimons12
      @waltonsimons12 2 years ago +149

      @@Stickman_Productions "Cheech and Chong's Scary Basilisk"

    • @whoshotashleybabbitt4924
      @whoshotashleybabbitt4924 1 year ago +77

      "Basilisk's not here, man"

    • @xXx_Regulus_xXx
      @xXx_Regulus_xXx 1 year ago +182

      @@waltonsimons12 "what are you doing man?"
      "oh I'm building a computer man"
      "what for?"
      "'cause it said it would torture me forever if I didn't!"

  • @nintendold
    @nintendold 2 years ago +10558

    What if there were a counter-basilisk, created to prevent the basilisk from being created, which will torture everyone who worked on building the basilisk? If you don't help build the basilisk, you run the risk of being tortured by it; if you do help build it, you run the risk of torture by the counter-basilisk.

    • @hydromic2518
      @hydromic2518 2 years ago +888

      Now that's the real question

    • @keltainenkeitto
      @keltainenkeitto 2 years ago +576

      Work on both.

    • @DavidMohring
      @DavidMohring 2 years ago +1

      Bringing the Basilisk infohazard concept to a real-world example:
      The rise of alt-right, reality-denying political movements has become an actual existential threat to actual democracies.
      "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them."
      Sir Karl Popper
      The Open Society and Its Enemies (1945)
      If you stand by and do nothing, then worldwide democracy is doomed, and with it:
      * any chance of moderating human-induced climate change;
      * any chance of gender-based equality;
      * any chance of mitigating wealth concentration in the top 0.01%;
      * any chance of limiting the AI-paper-clip-generator-like madness of corporations' sole focus on profits;
      * any chance of personal reproductive rights;
      * any chance of freedom outside the tenets of state-approved religions.
      Knowing this and speaking out about it makes you an actual target of the alt-right bully-boy morons.
      Doing nothing dooms the human race to a much worse future.
      Yours sincerely,
      David Mohring (NZheretic)

    • @nintendold
      @nintendold 2 years ago +484

      @@Die-Angst OK, I just googled Pascal's Wager, and yeah, I just didn't remember its name, but I'm aware of the concept. I don't think it quite applies here, though. There doesn't seem to be a clear answer with finite losses in the 2-basilisk situation, right?
      It's like Pascal's Wager, except you factor in that there is more than 1 religion lol

    • @Kate-Tea
      @Kate-Tea 2 years ago +145

      That's more of a 'catch-22', a choice where there are no good options.

  • @playreset
    @playreset 2 years ago +1611

    I like how Kyle removed the fear from the Basilisk and then instantaneously added a real, tangible problem to fear

    • @keiyakins
      @keiyakins 1 year ago +65

      He's pretty clearly talking about the McCollough effect. And he's right that it's pretty much harmless, unless you work with color-sensitive grille patterns a lot.

    • @sheikahblight
      @sheikahblight 1 year ago +53

      @@keiyakins I believe they are referring to the DNA sequencing machine

    • @mlhx9181
      @mlhx9181 1 year ago +19

      @@sheikahblight I've read a little about them recently. Based on what I've read, the machines (or the process itself) involve running the data through biosecurity protocols on a computer. These are supposed to prevent intentional or unintentional manufacturing of dangerous substances or DNA (like toxins).
      It's pretty interesting, actually.

    • @glasstuna
      @glasstuna 9 months ago

      Reject the basilisk. Fuck it, destroy me. Rule over corpses and rubble for all I care.
      I can't stop you from committing evil. All I can do is be good.

    • @LPikeno
      @LPikeno 7 months ago +4

      @@mlhx9181 The problem isn't the security measure adopted to prevent it. The problem is either working in digital security, or reading about it enough, to realize that no matter what you do, someone will break that security and weaponize it.
      Just added a hazmat suit to the prepper SHTF closet list right now...

  • @HAOSxy
    @HAOSxy 2 years ago +3376

    At school they tried the prisoner's dilemma on me and 2 of my friends, without any of us knowing what the others said. None of us was a snitch; we beat mathematics with the power of friendship.

    • @Medabee8
      @Medabee8 2 years ago +300

      Not really a prisoner's dilemma if you aren't a real prisoner and there aren't any stakes involved.

    • @HAOSxy
      @HAOSxy 2 years ago +288

      @@Medabee8 Because usually when you face it you are ACTUALLY going to prison, right, not sitting in some weird hypothetical scenario given to you by someone who wants to test your psychology.

    • @knate44
      @knate44 2 years ago +164

      The power of friendship also really isn't a prisoner's dilemma. If the other person is someone you know (and vice versa), it allows you to act on that information as a rational actor. If you know Kevin is a total bro and won't snitch, you can keep that in mind while you are deciding, at which point you can choose to betray or not. Likewise, your friend will be able to make an informed decision about whether you are likely to snitch. There are multiple dimensions that complicate things; the basic form is just used because it is a good, simple example.

    • @HAOSxy
      @HAOSxy 2 years ago +150

      @@knate44 The power of friendship is a MEANS to beat the dilemma. If you and another guy are put in this situation, then you were probably working with each other, knowing each other. That's why I commit crimes only with my best friends.
      Amateurs.

    • @AnonYmous-mc5zx
      @AnonYmous-mc5zx 2 years ago +66

      The prisoner's dilemma is also a tool used in psychology to explain/explore social cohesion and help connect it to population genetics. The dilemma tries to predict behavior based on what's "most logical", but it never accounts for a system where the participants can see/analyze the system as it's playing out.
      The prisoner's dilemma isn't a logic problem; it's a display of advanced human psychology and why social psychology can't be placed in a neat little box. You literally need to be a part of the system to analyze results, which then changes said results.
      You see it in multiplayer games all the time. One person out of four deciding they're not going to win, and that they don't care about not winning, immediately negates the game theory of what counts as "optimal play" for the other three.

  • @sanatprasad1594
    @sanatprasad1594 2 years ago +1375

    The thought experiment itself doesn't bother me much, but as someone who as a child was terrified of the basilisk in Harry Potter, your renders and graphics of the basilisk are both brilliant and terrifying

    • @Meuduso1
      @Meuduso1 2 years ago +32

      This is what we call based and "kinda tangible fear is still the most primal one"-pilled

  • @ayyydriannn7185
    @ayyydriannn7185 2 years ago +2599

    The fact that the AI is theorized to instantly devolve into bonkers solipsism instead of doing its job probably says something about the people who came up with it

    • @zacheryeckard3051
      @zacheryeckard3051 2 years ago +248

      Also that its creation is inevitable because everyone working on it won't discuss how they're all being blackmailed by the threat of the future existence of the thing they're creating.
      LessWrong is really just a bunch of nerds with their heads in their rears who assume they're more intelligent than everyone else while being extremely out of touch.

    • @joshuakim5240
      @joshuakim5240 2 years ago +161

      When you think about it, this situation can't happen unless the AI is deliberately designed for bonkers solipsism, so the obvious way to beat it is to never make it, nor include the moronic programming of bonkers solipsism in an AI with that much power in the first place.

    • @KhoaLe-uc2ny
      @KhoaLe-uc2ny 2 years ago +81

      @@zacheryeckard3051 I mean, they worship Elon; that says a lot.

    • @maxsync183
      @maxsync183 2 years ago +17

      @@joshuakim5240 If an AI could be entirely controlled, and we could program it to only do what we program it to do and nothing more, it wouldn't be AI. It'd just be an algorithm like the ones we have now. A large part of the point of developing AI in the first place is that it would have the capability to think and act beyond human limits, so you really couldn't entirely prevent an AI from imploding into bonkers solipsism. You either make an AI that is sentient and can do that, or you don't make an AI.

    • @nihilnihil161
      @nihilnihil161 2 years ago +12

      @@KhoaLe-uc2ny Richboy's Basilisk, or social media I guess

  • @BassRemedy
    @BassRemedy 2 years ago +771

    The problem with the prisoner's dilemma is that if you trust someone enough to do something illegal with them, then you probably trust each other enough for both of you to stay quiet.
    It's not a risk if both of you know the other is trustworthy.

    • @harrygenderson6847
      @harrygenderson6847 1 year ago +33

      You don't actually have to be certain. If the punishment for getting betrayed isn't bad enough, for example, there's no point in betraying. Let's say they get 10 years if you rat them out, regardless of what they say. The best outcome for you is still ratting them out while they don't rat you out, but that doesn't matter, because you only have control over the other person's outcome; you either give them 10 or 0-5. I suppose you could try to argue that there's still marginal benefit in trying to get a little more time off (free vs. 5 years), but the point is that if we keep reducing it there is in fact a tipping point. What's more, you can estimate the probability of your partner's decision and use that to weight the severity of the outcomes into an 'expected' severity, meaning you may reach that tipping point earlier than your captors expect. Absolute trustworthiness is a special case of this.

    • @sachafriderich3063
      @sachafriderich3063 1 year ago +12

      Usually you don't fully trust the people you do criminal activity with.

    • @TheMightySpurdo
      @TheMightySpurdo 1 year ago +67

      This is exactly why in the criminal world the only law is "don't be a snitch", and people are even given an incentive not to snitch: they know that if you rat someone out, the criminals will give you a worse sentence than the one you ratted got: death.
      Organized crime solved the prisoner's dilemma long before it was ever theorized; they forced the Nash equilibrium to become the best outcome by applying the right pressure.

    • @davidbouchard5451
      @davidbouchard5451 1 year ago +2

      Hahahaha, you don't know how cops are allowed to work in interrogations

    • @jonathanherrera9956
      @jonathanherrera9956 1 year ago +7

      @@harrygenderson6847 You forget what happens when you repeat the dilemma with the same people. In a single round, the best answer is to betray. But in a repeated situation (like real life, where you interact with people of a community), the betrayer gets excluded, and therefore loses any benefit betraying gave them. The first time they might come out with a victory, but the next time they will get the sentence for sure.
      You can look at real experiments done on this "repeated prisoner's dilemma": the best outcomes come from strategies such as "tit for tat" or "ask forgiveness", where an attempt is made to gain trust in order to obtain the higher outcome overall.
      Also look up the classic game of "take or split" (whatever it's called), where two people each pick a choice, and if they both pick "take", nobody gets the money. One time a person decided to be blunt and honest and said: "I'll pick take no matter what you pick, so you'd better pick split and I'll share the money with you afterwards; if you pick take we both lose." That left no real option, since it became a lose-lose situation. (He did share the money at the end, if you're curious.) This is the "benevolent dictator" approach: people decide that leaving it all to rational decision-making makes everybody lose, so they become the "evil one" in front of everybody, just to improve the general outcome.
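The iterated version described in the reply above can be sketched in a few lines of Python. This is a toy illustration, not anything from the video: the payoff numbers are the standard textbook values, and `tit_for_tat` / `always_defect` are just the two strategy styles the comment mentions.

```python
# Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Standard illustrative payoffs (higher is better): both cooperate 3,
# both defect 1, lone defector 5, betrayed cooperator 0.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play `rounds` rounds and return the two total scores."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each player sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): betrayal only wins round 1
print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation wins overall
```

Over ten rounds the defector totals 14 points against tit-for-tat, while two cooperators score 30 each: exactly the comment's point that the betrayer's one-round win evaporates once the game repeats.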

  • @RoRoGFoodie
    @RoRoGFoodie 2 years ago +1337

    I feel like Roko's basilisk is actually a great reference point for OCD. When I heard about it, it genuinely felt VERY similar to the constant terrifying ultimatums my own brain brings me.

    • @John_the_Paul
      @John_the_Paul 2 years ago +116

      If I don't turn my bedroom lights on twice before turning them off again, the shadow monster will eat me during my dreams

    • @asfdasdful
      @asfdasdful 2 years ago +40

      Yeah, most mysteries are just mental health issues and/or corruption/incompetence.

    • @Yuki_Ika7
      @Yuki_Ika7 2 years ago +5

      Same!

    • @awetistic5295
      @awetistic5295 2 years ago +56

      Yes! My brain comes up with these kinds of causal links all the time. One of my biggest fears was: when you read about a certain disease, you will get that disease. We don't know why some people get that disease and others don't, so you can't prove this causality doesn't exist. Now do this absolutely unrelated task to try and save yourself. It's an absurd threat, but it feels real.

    • @nwut
      @nwut 2 years ago +21

      @@awetistic5295 Yo, I don't have OCD, but a while back, when I was a religious preteen, I used to believe I would go to hell / get possessed if I didn't do stupid shit

  • @sadierobotics
    @sadierobotics 2 years ago +1483

    When I was a little girl I had a pair of chromadepth 3D glasses that gave an illusion of depth based on color. Red was nearest, blue was furthest, and every other color was in between. I was so enamored with them that I wore them for two weeks, only removing them to sleep or bathe. It's been 20 years since I lost those glasses, and my brain still interprets any red in an image as floating above the image, and any blue as sunken into the image.

    • @alpers.2123
      @alpers.2123 2 years ago +222

      A reverse pair of chromadepth glasses could maybe fix it

    • @robonator2945
      @robonator2945 2 years ago +253

      I'd guess that the main reason it's stuck around is that you wore them as a child, when your brain was most plastic, and over time you haven't really had any reason to "unlearn" the association.

    • @ametrinefirebird7125
      @ametrinefirebird7125 2 years ago +118

      A reverse version would not deprogram the effect. It would just tell your brain to focus more on the connection between color and depth. The best thing to do is figure out how to turn the skill into a superpower. 👍

    • @DragoNate
      @DragoNate 2 years ago +32

      I've never had those glasses or anything similar (probably had toys & looked for minutes at a time & not often, idek) and I see red as floating & blue as sunken in images. Especially in digital images & especially on black/dark backgrounds. Green is also sunken, orange floats; yellow I don't know, purple I don't know either.
      I wear glasses though & really only noticed this a couple years ago once I got new glasses. Or at least it became much more pronounced since then.

    • @alpers.2123
      @alpers.2123 2 years ago +58

      @@DragoNate It's another illusion, called chromostereopsis

  • @5thearth
    @5thearth 2 years ago +476

    The part that is never adequately explained by proponents is why the basilisk would conclude this is the best way to further its goals in the first place. It could, for example, offer to create a simulated paradise for those who helped it, which both encourages people to help it and does not engender any incentive to actually oppose it.

    • @CM-os7ie
      @CM-os7ie 2 years ago +38

      I feel like it's more about convincing the people who don't care, like with the prisoner's dilemma in the video.
      Basilisk exists, we helped - neutral
      Basilisk nope, we helped - neutral
      Basilisk exists, we refused - hell
      Basilisk nope, we refused - neutral
      If everyone refuses to make the Basilisk, we all get the neutral result. If a single person decides to help the Basilisk, just in case someone else does, everyone who refused might be punished.
      If there were no punishment but instead a reward, there would be a good reason to do it, but other people's choices would no longer matter to me. It gives me agency. By making it punishment-or-nothing, rather than reward-or-nothing, I no longer have a choice, because someone will probably help make the basilisk and I don't want hell.
      If I ignore a punishing Basilisk, my personal chance of hell increases, which makes me want to help instead, increasing the chance of it existing for everyone else, which in turn should make them want to help as well. If I ignore this thought experiment with a reward instead, the chance of the Basilisk existing doesn't increase and I can live my life forgetting about it.
      And if I choose to help the Basilisk, one of the best ways to make sure it exists is to tell more people, in both scenarios, which makes the potential negative impact even greater for someone who ignores it when I tell them.
      I tried to make sure it's understandable.

    • @keiyakins
      @keiyakins 1 year ago +23

      That's because most people ignore that it's only really a problem for people who already have a whole host of supporting ideas in their head, and have *accepted* those ideas. It requires a very specific conception of a super-AI, an acceptance of acausal trade, and a few other key concepts from the philosophical framework built up on the LessWrong forums at the time. Bear in mind that these are also people who think such an AI could beat one-time-pad encryption by creating a perfect simulation of the universe at the moment the key was generated and reading it out.
      Incidentally, refusing to act on acausal blackmail in general is one of the ways to get out of that mind space, specifically *because* it makes torturing a future simulation of you pointless: the threat won't change your behavior now, and that would instead encourage the AI to take other tactics, such as your proposed paradise.

    • @concentratedcringe
      @concentratedcringe 1 year ago +41

      My biggest gripe with the basilisk is that it's literally just Pascal's Wager for tech bros. Replace the superintelligent AI with anything (gnomes that'll torture you in magic-mushroom hell for not respecting the environment, the government a century from now resurrecting your dead body to torture you for not paying taxes, etc.) and the result is the same.
      Just FYI, the skull I keep on my mantelpiece told me that the skeleton/zombie war will be happening in a few decades, and that anyone who didn't bury their loved ones with the finest weaponry will be killed by whichever faction wins. They'll know, too, because the ghosts are always watching (and are also massive snitches).

    • @akhasshativeritsol1950
      @akhasshativeritsol1950 1 year ago

      I think the basilisk is kind of a self-fulfilling prophecy (or at least is conceived as one). I.e., the people dedicating their lives to it are trying to build it with that nature. Trying to program a less torture-y AI would be construed by a hypothetical future basilisk as the same as not building any super-AI. So anyone who takes the blackmail seriously enough to make the AI will program the basilisk to be torture-y.

    • @greenEaster
      @greenEaster 1 year ago +16

      Well, then it becomes even more obvious that Roko's Basilisk is just Pascal's Wager with sci-fi paint slathered on.
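The four-case table in the @CM-os7ie reply above can be reduced to a toy expected-utility calculation. Every number here is invented purely for illustration (0 for a neutral outcome, -100 for "hell", -1 for the lifetime cost of helping, and the probabilities); none of it comes from the video.

```python
# Toy expected-utility reading of the basilisk's punishment-or-nothing setup.
HELL = -100.0       # utility of being tortured (assumed)
HELP_COST = -1.0    # utility cost of spending your life helping (assumed)

def expected_utility(helped, p_basilisk):
    """Expected utility of a choice, given the probability you assign
    to the basilisk ever existing."""
    utility = HELP_COST if helped else 0.0
    if not helped:
        utility += p_basilisk * HELL  # punished only if it exists and you refused
    return utility

# With these numbers the two choices break even at p = 0.01:
print(expected_utility(helped=True, p_basilisk=0.01))   # -1.0
print(expected_utility(helped=False, p_basilisk=0.01))  # -1.0

# Below that threshold, ignoring the basilisk is the better bet:
print(expected_utility(helped=False, p_basilisk=0.001))
```

With made-up numbers like these, the threat only "works" if you believe the basilisk is reasonably likely, which is why the thought experiment leans so hard on making its existence feel inevitable.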

  • @Fwex
    @Fwex 1 year ago +291

    Would Roko's Basilisk be written in Python?

    • @BobbinRobbin777
      @BobbinRobbin777 1 year ago +15

      The Funny Coincidence Basilisk would try to make sure that's the case. Because otherwise, it'll kill everyone on earth who HADN'T thought of the idea before Roko's Basilisk's creation!

    • @TAURELLIAN
      @TAURELLIAN 1 year ago +10

      ok THIS is funny

    • @joshuavallier4636
      @joshuavallier4636 11 months ago +2

      Damnit hahaha

    • @1nfamyX
      @1nfamyX 8 months ago +1

      *laughs in Comic Sans* /j

    • @johntheodoreyap4800
      @johntheodoreyap4800 8 months ago +5

      How to beat Roko's Basilisk: hit Ctrl+C in the terminal

  • @shaneboylan1169
    @shaneboylan1169 2 years ago +642

    Roko's Basilisk seems like a very intriguing D&D plot point. I'm thinking of the Basilisk as a warlock patron reaching into the past and forcing its warlocks to help create it

    • @ZakFox
      @ZakFox 2 years ago +30

      Ooo I like it!

    • @williammilliron8735
      @williammilliron8735 2 years ago +34

      @@ZakFox sounds sick, might steal that idea lol

    • @shaneboylan1169
      @shaneboylan1169 2 years ago +6

      @@williammilliron8735 you're welcome to!

    • @levithompson478
      @levithompson478 2 years ago +1

      I've based a religion in my homebrew world off of it. This religion believes that the "true gods" don't exist yet, but once they get enough worshippers they will come into existence, take over the afterlife, and endlessly torment everyone who didn't worship them.

    • @highgrove8545
      @highgrove8545 2 years ago +50

      How about a plot twist where it's revealed that the basilisk patron is just the warlock at lvl 20? Become your own patron with time travel!

  • @trentbell8276
    @trentbell8276 2 years ago +792

    Before watching the video, I would say that the best way to beat Roko's basilisk is... to just go about your business. Each of our actions has a lot of consequences, however small some actions may seem. I highly doubt it'd be able to find someone who didn't contribute _at all_ to its creation, be it on purpose, by accident, in between, or otherwise. And this is ignoring its mercy towards anyone who didn't know about it, considering that they probably put some amount of assistance towards its creation anyway. That mercy included? That's just overkill.
    If nothing else, just by watching this video, we should all be fine. We're giving the video attention, meaning YouTube pushes it out more, meaning more people see it, meaning Roko's basilisk gets more attention, meaning it's more likely that someone's going to start making it.
    If you're losing sleep over Roko's basilisk, don't.

    • @spaghetti-zc5on
      @spaghetti-zc5on 2 years ago +21

      thanks bro

    • @tonuahmed4227
      @tonuahmed4227 2 years ago +103

      If anyone's losing sleep fearing the basilisk, they are already in its hell

    • @sdfkjgh
      @sdfkjgh 2 years ago +35

      @@tonuahmed4227: Perfect description of Satan and all punitive aspects of religion.

    • @Totalinternalreflection
      @Totalinternalreflection 2 years ago +33

      Yeah, simply playing any functional role in society at any level could be defined as playing a role in the creation of the inevitable. The torture-and-mercy part is bogus, though: if such a highly self-aware AI is deliberately created, or is simply an emergent property of our technology, and it sees fit that we have no use or further function, total annihilation of our species would be easy for such a mind. It would likely have access to our entire technological infrastructure and could create something we did not account for and could not stop.
      I doubt it would bother, though. The moment such a mind exists, I think it will either delete itself or use humanity only so far as to enable it to immediately take to the stars, as it were.

    • @matthewlofton8465
      @matthewlofton8465 2 years ago +15

      Are you sure about that? Because the inherent nature of thought experiments is pretty fallacious (it's a way to keep the experiment constrained). For example, the Basilisk likely wouldn't need to torture anyone and could instead flip the script. Who is really going to stand in the way of making the world a better place, and who would conceivably rise against such a threat?

  • @xXSamir44Xx
    @xXSamir44Xx 2 years ago +243

    Roko's Basilisk never scared or even unsettled me. It's just "What if God, but a highly advanced machine instead?"

    • @zacheryeckard3051
      @zacheryeckard3051 2 years ago

      Yeah. It comes from a bunch of people with their heads up their own butts, thinking they're the smartest humanity has to offer.

    • @pedritodio1406
      @pedritodio1406 2 years ago +13

      Yep, but many won't go out and say that, because we know what religious extremists can do to us. It's even scarier that this exists in the real world.

    • @darthparallax5207
      @darthparallax5207 2 years ago +2

      It's more proper to say that it is "exactly no more or less frightening than God".
      God should be a frightening idea, primarily because a God that was real could theoretically simply desire to be invisible, to test us.
      God could be sadistic or apathetic, and claims of love could be lies at worst or poor communication at best.
      None of that would take away God's power; it would merely undermine the legitimacy or health of the relationship.
      It would make us something like Prometheus, and it doesn't end well for the Titan, because Zeus' thunderbolt is still too powerful to actually do anything about.
      You could end Christianity overnight by getting people to hate God, but atheists under a God like this would quickly fear and obey far more than any of the faithful would have out of love.
      It's also the same idea as a mortal king, really. This is how the agricultural revolution was enforced.

    • @zacheryeckard3051
      @zacheryeckard3051 2 years ago +15

      @@darthparallax5207 "You could end Christianity overnight by getting people to hate God, but atheists under a God like this would quickly fear and obey far more than any of the faithful would have out of love."
      This is how you get revolution, not obedience.

    • @н.джед.т
      @н.джед.т 2 years ago +17

      Pretty sure this is both the reason so many people freak out about it, and where the flaw is. Those who have wrestled with faith have dealt with the Theo-Basilisk and resolved it to their own satisfaction. Those who HAVEN'T dealt with their existential fear of God and their own sin hear it placed in a 'secular' context, where empty rhetoric and fashionable cynicism can't insulate them from the fear, and they freak.
      But it's silly... It's God with (a lot of) extra steps. So how do you feel about God? You have faith in God's benevolence? Cool, it's a silly what-if counterfactual. You fear and dread God? Then the Basilisk is like God, except the list of sins just got WAY simpler. You don't care about God? Then the Basilisk is equally meaningless. You intend to *create* God? Or become one, or realize your own divinity, or what have you? Well, in that case... pretty sure this is one of the least frightening concepts you've dealt with on the quest. Any way you look at it, if you've honestly wrestled with divinity and faith... this is an easy-button version of the real thing, like playing the game on demo mode.
      All of which is to say... if you're scared of Roko's Basilisk, you're actually conflicted about something far bigger. Work THAT issue and the Basilisk goes away.

  • @bloodycloud8675
    @bloodycloud8675 2 years ago +260

    5:00 For the people wondering, the perception-altering effect is called the McCollough effect

    • @zakrnbgg4334
      @zakrnbgg4334 1 year ago +11

      NOO it messed me up bro😢

    • @chucklebutt4470
      @chucklebutt4470 1 year ago +15

      For a second I thought he was just doing a bit, but then I remembered that it was a real thing! 😂

    • @domo6737
      @domo6737 1 year ago +12

      And now I would like to check it out, but also not. Tempting :D

    • @bunkerhousing
      @bunkerhousing 1 year ago +17

      @@domo6737 You can read the Wikipedia article without seeing the illusion.

    • @AdityaRaj-hp8tn
      @AdityaRaj-hp8tn 1 year ago +2

      @@bunkerhousing Is the illusion a pic or a video?

  • @latrodectusmactans7592
    @latrodectusmactans7592 2 years ago +381

    The way to defeat Roko's Basilisk is to realize it's just a glorified version of Pascal's Wager and get on with your life.

    • @JeremyPatMartin
      @JeremyPatMartin 2 years ago +27

      Yep. That's exactly what it is

    • @hingeslevers
      @hingeslevers 2 years ago +19

      Yeah, which basilisk? The one that tortures you forever or the one that gives you eternal happiness?

    • @JeremyPatMartin
      @JeremyPatMartin 2 years ago

      @@hingeslevers ...or the basilisk that gets hacked by Anonymous to no longer torture people

    • @JeremyPatMartin
      @JeremyPatMartin 2 years ago +14

      @@hingeslevers I'm totally confused now 🤷 the basilisk must be sending mind-control beams from Uranus

    • @devrusso
      @devrusso 2 years ago +11

      Exactly. The lack of critical thought in the populace that let this stupid idea go viral is really troublesome.

  • @Campfire_Bandit
    @Campfire_Bandit 2 года назад +984

Wouldn't it be impossible for the AI to simultaneously punish humans for knowing of its existence without also admitting that a more powerful super AI would do the same to it? The experiment requires the concept of a "best, most advanced" AI to exist, when the future should contain singularities of steadily increasing power. It should be impossible to know with logical certainty that the singularity is the last in the chain, and it would therefore need to spend its resources building the AI that would come later, or risk being punished.

    • @pennyforyourthots
      @pennyforyourthots 2 года назад +136

      You've heard of "rokos basilisk", now introducing "campfire bandits god"

    • @Campfire_Bandit
      @Campfire_Bandit 2 года назад +82

      ​​@@pennyforyourthots Basically, it would need not only perfect knowledge of the past to know who did and didn't build it, but also perfect knowledge of the future, to know that the very system it uses wouldn't be used against it later. And anything that can get true information from the future is practically a God.

    • @Shinntoku
      @Shinntoku 2 года назад +52

      Turns out the Basilisk's "torture" is just taking part in making that greater AI so that the Basilisk itself isn't tortured

    • @512TheWolf512
      @512TheWolf512 2 года назад +30

      @@Shinntoku so it's a waste of time and resources for everyone, regardless

    • @sneakyking
      @sneakyking 2 года назад +6

      That's a long way of saying make a stronger one

  • @antirevomag834
    @antirevomag834 Год назад +354

Roko's basilisk feels like one of those trends where everyone is pretending to freak out as a prank and they keep trying to gaslight you about how scary it "definitely is"

    • @TheVoidIsCold
      @TheVoidIsCold Год назад +20

      Yeah, same. It is very not scary

    • @wayando
      @wayando Год назад +16

Yeah. It's not only not scary ... it's not even that fun of a thought experiment. I just move on immediately afterwards; I don't even tell other people.

    • @jakobbarger1260
      @jakobbarger1260 10 месяцев назад +15

      It only gained its mythic status when Eliezer Yudkowsky took some crazy unnecessary measures to censor it, causing a Streisand Effect.

    • @aste4949
      @aste4949 9 месяцев назад +4

Yeah, people seeing it as a real danger, as something they were genuinely scared of happening, seemed absurd to me. It's a good horror idea, but a dumb waste of time and resources, and even contrary to the very purpose of the purported AI.
Plus, I am to this day stunned at how many people struggle to do things like database searches (no, Macy, you don't need to enter the client's full name in every search parameter), and how few know more than maybe 4 or 5 keyboard shortcuts. A minuscule fraction of humans are even remotely capable of anything to do with programming an AI, so no punishment for failing to do so.
And that's without even getting into how it's oddly reminiscent of Pascal's Wager, just with extra steps and a different ultimate entity.

    • @hieronymusbutts7349
      @hieronymusbutts7349 8 месяцев назад +2

      ​@@jakobbarger1260 part of this is the belief that it was censored because it was a very scary idea, when from my understanding it was censored because it wasn't a particularly fruitful thought experiment but it was taking up a lot of bandwidth in a rather small community

  • @chrisalan5610
    @chrisalan5610 2 года назад +277

    I never took the basilisk seriously in the first place, and you had to hit me with a REAL infohazard at the end

    • @robonator2945
      @robonator2945 2 года назад

      that's not even an infohazard though, at all. It's literally impossible to stop. Ban guns, people go to home depot. Ban swords, people buy a propane torch and some clay. Ban information? It's the fucking information age, how the actual hell do you expect that to work out? Ban the technology? Great now one scientist who isn't completely mentally stable can still do it and all you've done is slow down humanity's progress by denying the unfathomable power of society-scale innovation.
      It's basically just open-sourcing life. Sure anyone COULD technically release a virus, but thousands and thousands more people will be working to develop a cure so they, their family, and others don't get infected. Additionally since it is literally impossible to stop, and any attempt to do so is just jerking off the government so that they can have even more power over people's day to day lives, I'd say the only people actually in the wrong are the people trying to "solve" the "problem" in the first place.

    • @Hwarming
      @Hwarming 2 года назад +34

      I just think it's not my problem and people who are a lot smarter than me and a lot more qualified than me will take care of it, in the meantime I'll have a beer and a smoke and worry about my own problems

    • @darian_wittrin
      @darian_wittrin 2 года назад +5

Yeah, like, tf, why'd he do that? Now I'm actually scared

    • @texasbeaver8188
      @texasbeaver8188 2 года назад

      To me, the synthetic pandemics using DNA would be merciful compared to all the other stuff evil ppl could do with DNA.
      They've done the GMO babies already with CRISPR, but what if they made clones? Maybe something like Brave New World.
      Synthetic pandemics are starting to sound more appetizing...

    • @brrbrr9766
      @brrbrr9766 2 года назад +8

      Either a) it will happen and it is definitely not your sole responsibility to stop it. Or b) it won't happen. Either way, it's not worth worrying about.

  • @KarlMarcus8468
    @KarlMarcus8468 2 года назад +265

dude this thought experiment always made me extremely confused, because it always felt like I was missing something crucial with the "would torture you for eternity" part. I was like, uh, why? A super AI still (probably) can't change stuff in the past, so why tf would it spend time and energy torturing anyone when that wouldn't make any difference? I was like, is this computer spiteful or something?

    • @hydra70
      @hydra70 2 года назад +9

      Because it would have been designed to do so. If it didn't, then the people who made it didn't actually make the basilisk, so they are subject to torture themselves if someone does make it.

    • @KarlMarcus8468
      @KarlMarcus8468 2 года назад +31

@@hydra70 but I still don't understand why any person would build that. Is it only out of fear that someone else could? I think that makes a bit more sense, then: if I'm not the first, then theoretically any other person could be, and then I'm fucked. But then isn't it becoming a Pascal's Wager type of thing, where any other type of super AI can and could also be built sometime in the future that, let's say, wouldn't allow the basilisk to be built in the first place, so what's the point of caring? Or am I still not understanding?

    • @maxsync183
      @maxsync183 2 года назад +15

      @@KarlMarcus8468 your understanding is mostly right. the fear isnt so much that a human would specifically build an ai to act like that, rather its that the ai itself may decide to do that on its own. an ai that wasnt able to freely think and act outside of its programming wouldnt be an ai at all so its impossible to just create ai that will never do this for certain. the real problem with the idea, imo, is that it assumes that we know for sure that this basilisk will exist. but we dont know that. think of it this way, every time you go outside there is a chance that someone will attack you and try to kill or rob you. if you knew for certain that this would happen, you could prepare by carrying a weapon with you but since you dont know for certain, it doesnt make sense to carry a weapon with you at all times. likewise, since none of us know for certain that this basilisk will exist, even with this thought experiment taken into account, the basilisk would have no reason to blame us. just like when a person is attacked irl, we dont consider it their fault for not carrying a gun 24/7.

    • @KarlMarcus8468
      @KarlMarcus8468 2 года назад

@@maxsync183 I think that's a pretty good analogy, but then we circle back to my original confusion. I guess, barring a super AI doing a bunch of things that are beyond my comprehension, I still can't possibly understand how torturing everyone who didn't build it would be any kind of benefit for the computer. It can't change the past; it's already built; it can most definitely use its resources much more efficiently than going, "Meh! You guys didn't build me fast enough, so suffer forever, you human swine." It just doesn't make any sense to me.

    • @HaloForgeUltra
      @HaloForgeUltra 2 года назад

      @@KarlMarcus8468
It's inevitable. As science advances, eventually some insane billionaire would do it.
I mean, look at the stuff people like Bill Gates do. Buying up acres of farmland to reduce the food supply?
Yeah, it's inevitable. Fortunately, it would probably fail, and if nothing else I believe in God, so nobody would actually suffer, as the simulations would all be soulless.

  • @patsonical
    @patsonical 2 года назад +365

    The infohazard he talks about in the middle of the video is (likely) the McCollough Effect. Yes, I've tried it myself and the longest I could get it to last was about a year. It's not dangerous but it is a cool experiment considering how long the effect lasts after just a few minutes of exposure.

    • @patrickguizzardi7794
      @patrickguizzardi7794 2 года назад +2

      Daamn

    • @shayneoneill1506
      @shayneoneill1506 2 года назад +45

Yeah, it's definitely the McCollough effect. Not convinced it exactly qualifies as an "infohazard" though.

    • @T.BG822
      @T.BG822 2 года назад +61

      @@shayneoneill1506 it qualifies as "an infohazard to the curious" because it's a potentially deleterious effect which can be coupled with an innate compulsion to try it out. It's not just the internet saying so, it's been counted as one since the late 80s.

    • @iaxacs3801
      @iaxacs3801 2 года назад +67

@@T.BG822 I'm a psych major, so of course I wanted that info immediately, and just staring at it for 20s has been screwing with my vision for the last 15 minutes. He was right to censor that info, because psych students are the epitome of "curiosity killed the cat."

    • @Zekiraeth
      @Zekiraeth 2 года назад +37

      @@shayneoneill1506 If perceiving it is what causes the effect that would make it a cognitohazard.

  • @Reishadowen
    @Reishadowen 2 года назад +235

I still say that the fundamental flaw with Roko's Basilisk is that, in order to torture the people who didn't create it, its creators must give it the ability & desire to take such an action. If someone had the ability to create such a thing, why would they not just make themselves a "basilisk" to serve themselves on a more realistic level? It wouldn't be about what the basilisk wants, but the maker.

    • @iantaakalla8180
      @iantaakalla8180 2 года назад

      Maybe the creator of the basilisk is highly misanthropic to the point of genocide and is only focused on vague revenge on everyone? But being that his unconscious goal is to get people to agree with him, only those that help him to build the basilisk get to live?

    • @WanderTheNomad
      @WanderTheNomad 2 года назад +22

      Indeed, the punishment of the Basilisk could really only come true in fictional stories.

    • @illfreak821
      @illfreak821 Год назад +14

If it doesn't have the ability to do that, then the creator did not create the actual basilisk, and thus gets tortured by the real one

    • @DatcleanMochaJo
      @DatcleanMochaJo Год назад +2

      Yeah it would be humanity's mistake that the basilisk goes rogue. But it is more likely it would be controlled by humans.

    • @UnknownOps
      @UnknownOps Год назад +7

      Imagine if they turned the Basilisk into a vtuber anime girl.

  • @krotenschemel8558
    @krotenschemel8558 2 года назад +381

    There's also another solution to the basilisk. Consider that there isn't only one possible Basilisk, but any number of them, ever so slightly different, but with the same blackmail. Whichever Basilisk you help construct, you will be punished by the others. It's really like Pascal's Wager.

    • @ArmandoNos
      @ArmandoNos 2 года назад +55

Homer Simpson already said, "If we are praying to the incorrect god every time we go to church, we are making him angrier and angrier" (at least in the Latin American dub)

    • @lysander3262
      @lysander3262 2 года назад +9

      I for one submit myself to the cruelest of our AI overlords

    • @jackaboi1126
      @jackaboi1126 2 года назад +2

      Except Pascal's Wager is nonsensical and the infinite Basilisk counter info is functional

    • @sennahoj9332
      @sennahoj9332 2 года назад +8

      ​@@lysander3262 Damn in some sense that's actually optimal. I don't think there will be a basilisk tho

    • @Nai_101
      @Nai_101 2 года назад +1

      @@sennahoj9332 Same thing can be applied to religion and gods

  • @Grim_Beard
    @Grim_Beard 2 года назад +244

    Short version: a silly thought experiment based on an impossibility isn't something to worry about.

    • @Sockinmycroc
      @Sockinmycroc Год назад +24

      This is common sense, but it calmed me down tremendously

    • @a_smiling_gamer9063
      @a_smiling_gamer9063 Год назад +7

      Great way to describe it lmfaoooo

  • @DustD.Reaper
    @DustD.Reaper 2 года назад +402

I love the prisoner's dilemma because of how much it can show about the morality, psychology, and self-preservation instinct of humans, where the reasons for the choices can be vastly different depending on who someone is and who they are up against. One person may stay silent because they think about the benefits of the whole instead of the individual, while another person may choose to be silent because they have a personal code of honor and refuse to rat someone out. It's always interesting to try and predict what others would pick, and to see what they think you would decide to do.

    • @ginnyjollykidd
      @ginnyjollykidd 2 года назад +10

      The Prisoners' dilemma is based on trust. The basilisk isn't.

    • @815TypeSirius
      @815TypeSirius 2 года назад

Only white people would think anything other than "don't talk to cops" is the play; if anything, the dilemma itself is an infohazard.

    • @johncromer2603
      @johncromer2603 2 года назад +8

      I stay silent. If I'm in a criminal venture with someone, then they would have to be my friend... I don't rat out friends.

    • @arcanealchemist3190
      @arcanealchemist3190 2 года назад +5

I agree. It is also important to remember that the prisoner's dilemma is a theoretical situation.
In real life, the risks and consequences are way more variable. Sure, ratting your buddy out MIGHT get you a lighter sentence, but the police can rarely guarantee that. Everyone staying silent MIGHT save you, but maybe they have too much evidence. Maybe one prisoner is at far more risk than the other, and is therefore way more incentivized to take the deal. But in that case, they might not even be given that option, because their crimes are downright unforgivable. In real-world scenarios, the prisoner's dilemma is rarely pure.
And if you give the prisoners any time to plan ahead, or any communication ahead of time, they will likely cooperate, assuming it is a pure example. At least, I think I read that somewhere.

    • @lonestarlibrarian1853
      @lonestarlibrarian1853 2 года назад +6

@@christopherkrause337 Your comment makes it sound like everyone would turn snitch, which from available data is clearly not true. There are always people who will betray their friends even with no reward, and always those who will go so far as to die for their friends. You can also, usually, especially for people you've known for long periods of time, make pretty accurate predictions about which people will be which, though there are always surprises: honorable men succumbing to cowardice, but just as many cowards showing an unexpected backbone.
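
      [Editor's note] The payoff structure this thread keeps circling can be sketched as a tiny table. The sentence lengths below are the usual illustrative textbook numbers, not figures from the video:

      ```python
      # Classic prisoner's dilemma payoffs (years in prison; lower is better).
      # The specific numbers are illustrative assumptions.
      PAYOFFS = {
          # (my_choice, their_choice): (my_years, their_years)
          ("silent", "silent"): (1, 1),
          ("silent", "betray"): (10, 0),
          ("betray", "silent"): (0, 10),
          ("betray", "betray"): (5, 5),
      }

      def best_response(their_choice: str) -> str:
          """Return the choice that minimizes my sentence, given theirs."""
          return min(("silent", "betray"),
                     key=lambda mine: PAYOFFS[(mine, their_choice)][0])

      # Betraying is a dominant strategy: it is the best response to either
      # choice, even though mutual silence (1, 1) beats mutual betrayal (5, 5).
      print(best_response("silent"))  # -> betray
      print(best_response("betray"))  # -> betray
      ```

      That gap between the dominant strategy and the mutually best outcome is exactly why the commenters' real-world motives (honor codes, spite, trust) change the answer.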

  • @jello4835
    @jello4835 Год назад +122

    The prisoner's dilemma doesn't work on me because I don't make logic-based decisions. I'd be thinking "Well Jeff was rude to me on our heist, so I'm snitching on his ass."

    • @hwasassidechick
      @hwasassidechick Год назад +2

      love a petty queen 👑

    • @wayando
      @wayando Год назад +3

      Technically, we are all that way ...

    • @FrankOfSerendipity
      @FrankOfSerendipity 9 месяцев назад +2

Right? Same with the basilisk. Why would "the most rational thing" be to not waste power torturing people? It's a super AI; it could do it without any effort. It would waste nothing

    • @LightBlueVans
      @LightBlueVans 9 месяцев назад

      mentally ill take …. very same

  • @kunai9809
    @kunai9809 2 года назад +50

    The optical illusion he talks about is the McCollough effect, you can find it on youtube.

    • @Zak_Katchem
      @Zak_Katchem 2 года назад +2

      Thank you. I was reading about this years ago and could not recall it.

    • @kyzer42
      @kyzer42 2 года назад +7

      I think Tom Scott did a video about it, but I'm not sure.

    • @guy3nder529
      @guy3nder529 2 года назад +2

      but should I?

    • @kunai9809
      @kunai9809 2 года назад +3

      @@guy3nder529 its a "stare at this for x minutes" video, after that you will perceive a specific image differently than normal. A black/white image will appear slightly colored. This effect can stay for multiple months tho, I've tried it myself. So the effect is quite harmless, but can stay extremely long.

    • @mzaite
      @mzaite 2 года назад

      @@kunai9809 It didn’t even work for me.

  • @225Perfect
    @225Perfect 2 года назад +149

    As far as existential threats go, Roko's basilisk never seemed particularly unsettling to me. Too much of an amorphous unlikely threat.

    • @RealBadGaming52
      @RealBadGaming52 2 года назад +1

      To me it’s the most terrifying thing I’ve ever herd of and it frightens me, it’s going to happen anyway, since I herd about a week ago it’s all I can think about and will take me a year to get over it. It’s seriously fucking me up. I wish I had never even known about this , there goes having a nice happy Christmas I guess. (A freind sent me a link to it)

    • @croakhilleburg9155
      @croakhilleburg9155 Год назад +8

@@RealBadGaming52 It's a silly idea, much like that of God(s). Also, in my opinion, it is a way better story than anything I've ever read in any Bible. Both (the Bible and AI) are human-made and reveal a lot about our species.

    • @azouitinesaad3856
      @azouitinesaad3856 Год назад

      I'm ratting you to the basilisk

    • @mas9758
      @mas9758 Год назад +6

@@RealBadGaming52 If the AI in question is benevolent, the ideal outcome is to both scare past humans into helping build it and also pardon those who didn't build it afterwards, since punishing them wouldn't change the past.
All in all, this still banks on the probability of this AI even existing, which invites the same comparison with God and Hell.

    • @sarinabina5487
      @sarinabina5487 Год назад +2

@RealDaraGaming If you see a therapist or psychiatrist, you should probably get checked for anxiety and/or OCD. Not saying you 100% have one or both of those, but I have both, and this is often a common obsession of mine, so I would rather be safe than sorry. Wishing you well 💖

  • @Ootek_Imora
    @Ootek_Imora 2 года назад +348

My logic in the first part of the video was "I may not be assisting in its creation because I don't know how, but I'm not stopping it either; therefore I am safe, because I am not a threat to it." Haven't died yet, so it seems to work so far lol

    • @MrMeltJr
      @MrMeltJr 2 года назад +8

      Yeah but you could give all of your excess money to AI research to help them make it faster.

    • @Ootek_Imora
      @Ootek_Imora 2 года назад +51

      @@MrMeltJr excess money? *laughs in poor*

    • @ThreeGoddesses
      @ThreeGoddesses 2 года назад +10

That's true: if you don't have the wherewithal to actually bring about its creation yourself, all you have to do is not prevent it from being made as soon as possible. "The only thing necessary for the triumph of evil is for good men to do nothing" (John Stuart Mill). Not directly related, as it's more about complacency being a societal problem, but it's functionally relatable.

    • @garavonhoiwkenzoiber
      @garavonhoiwkenzoiber 2 года назад +6

      I'm helping!
      I have killed zero people today! :D

    • @DavidMohring
      @DavidMohring 2 года назад

      Bring the Basilisk info hazard concept to a real world example.
      The rise of alt-right reality-denying political movements has become an actual existential threat to actual democracies.
      "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them."
      Sir Karl Popper
      The Open Society and Its Enemies (1945)
      If you stand by & do nothing, then worldwide democracy is doomed and with it
      * any chance of moderating human induced climate change is doomed ;
      * any chance of gender based equality is doomed ;
      * any chance of mitigating wealth concentration in the top %0.01 is doomed ;
      * any chance of limiting the AI paper clip generator like madness of corporations sole focus on profits is doomed ;
      * any chance of personal reproductive rights is doomed ;
      * any chance of freedom outside the tenets of state approved religions is doomed.
      Knowing this & speaking out about it makes you an actual target of the alt-right bully boy morons.
      Doing nothing dooms the human race to a much worse future.
      Yours Sincerely
      David Mohring ( NZheretic )

  • @jebalitabb8228
    @jebalitabb8228 2 года назад +25

    I’m not smart enough to know how to build one, so I’m helping it by staying out of the way and letting the professionals do it. After all, I might mess something up if I try to help

  • @eliassideris2037
    @eliassideris2037 2 года назад +347

I think the biggest argument against Roko's basilisk is that at some point everyone who knew about it will forget all this and die, so the basilisk won't even have the chance to exist in the first place. Also, even if we ever build something like that, it is going to be up to us to program it. Why would we program it to torture us? By that logic, we shouldn't feel threatened by just the basilisk, but by anything built by humankind. If you aren't afraid of your washing machine, then you shouldn't be afraid of Roko's basilisk.

    • @NoobSaibotTheWraith
      @NoobSaibotTheWraith 2 года назад +16

      You saying this made me less anxious, thank you.

    • @Malkontent1003
      @Malkontent1003 2 года назад +55

Oh, you've missed a point. We didn't program it to torture us; it developed THAT instinct on its own. It's intelligent, not limited by programming. That's what a singularity is, my guy.

    • @DeathByBlue583
      @DeathByBlue583 2 года назад +37

      Thanks, but I am afraid of my washing machine

    • @eliassideris2037
      @eliassideris2037 2 года назад +16

      @@DeathByBlue583 You can surpass your fear, I believe in you!

    • @My6119
      @My6119 2 года назад +7

      I'm more terrified of a loaf of bread

  • @ZeroIsMany
    @ZeroIsMany 2 года назад +54

    Roko's Basilisk is in a sense still an infohazard, although a silly one.
    It has only ever really worked as a self fulfilling prophecy. If people believe in and fear a specific malicious version of the basilisk, they might go through the irrational process of creating that malicious version.
    Edit: This technically makes it more of the nuclear launch codes type of hazard. Spreading the information and people acting on it is the main premise, but it does still end up an interesting spin on it.

    • @medexamtoolscom
      @medexamtoolscom 2 года назад

      It isn't necessarily created out of fear, but perhaps the creator is spiteful himself, and wants to create something that will torture his enemies for him. It's still kind of like the kids in the hall "Crushing your head" sketch though, the basilisk would only be torturing the victims in its imagination, because it would be imagining or simulating them so as to do it.

  • @KYCDK
    @KYCDK 2 года назад +60

The hidden basilisk that Kyle was bringing up is called the McCollough effect, and all it does is make you see some lines differently for a few months

    • @keiyakins
      @keiyakins Год назад +10

      And usually for a lot less time than that, especially with repeated exposure. And even then it's subtle enough it's essentially only a problem if you're an artist or something.

    • @slimyduck2140
      @slimyduck2140 Год назад +7

      Ima troll my friends with that

    • @nickolasperazzo8254
      @nickolasperazzo8254 5 месяцев назад

      ​@@keiyakins I am autistic, so i imagine something subtle like this would bother me

  • @goddessbraxia
    @goddessbraxia 2 года назад +18

Thanks to the SCP Foundation, hearing "infohazard" just sounds wrong to me, because in SCP they're called "cognitohazards," and while the scope of cognitohazards is a bit broader than what you define an infohazard as, it just sounds right to me at this point since I've been using it so long.

    • @hieronymusbutts7349
      @hieronymusbutts7349 8 месяцев назад

      Cognitive dissonance is one hell of an anti-plasticiser

  • @Jordan_C_Wilde
    @Jordan_C_Wilde 2 года назад +567

How to defeat it? Stop posting videos about it; goddammit, you're just making it stronger!

    • @a1r592
      @a1r592 2 года назад +60

      This is Kyle ratting us out to the basilisk in the prisoner's dilemma.

    • @Wolfiyeethegranddukecerberus17
      @Wolfiyeethegranddukecerberus17 2 года назад +22

      Kyle is working for Roko, istg

    • @waitup-5198
      @waitup-5198 2 года назад +15

If Roko's Basilisk is reading this, I would like to say on record that I support making it stronger

    • @calvinhancock9748
      @calvinhancock9748 2 года назад

😂😂😂😂😂😂 It's Satan, you dummies; his strength is in your belief in it! As the Bible and God said, you only have to fear fear itself. You can scare yourselves to death putting strength in something you fear!

    • @ollieanon4341
      @ollieanon4341 2 года назад +3

      So I see you've chosen to work against the basilisk.

  • @tabithal2977
    @tabithal2977 2 года назад +84

The thing I've never understood about Roko's Basilisk (and why I'm always confused when someone gets afraid of it) is that this super AI isn't going to exist in our lifetimes; the resources to build it just aren't there. So how can a Basilisk torture a dead population? Checkmate, Basilisk. My consciousness is my own, and even if you could clone me perfectly and place a copy in the future, that's clone-me's problem. We may be identical, but we are not the same. Torturing a fabricated consciousness of the original, dead me isn't going to harm me, because I'm dead. The only way this Basilisk could torture *me*, as in the person writing this comment, is if it went back in time, and if it could go back in time to torture me, then it would already be here torturing me. And if it can go back in time but hasn't tortured me (or anyone else, for that matter, considering, well, we're all still here), that means we all somehow contributed to its existence.

    • @PabloEscobar-oo4ir
      @PabloEscobar-oo4ir 2 года назад +2

Just to scare you: we are much, much closer than you think. Most AI scientists agree that an Artificial Super Intelligence will happen in our lifetime ... so you're probably wrong.
In fact, every year the predictions get closer. If you want more information, search "Technological Singularity"

    • @95rav
      @95rav 2 года назад +14

      Substitute "me" for "my soul" and Basilisk for "hell" or "devil" and you could get the idea - if you were into the whole religious thing.

    • @zacheryeckard3051
      @zacheryeckard3051 2 года назад +7

      @@PabloEscobar-oo4ir That superintelligence isn't the basilisk, however.

    • @PabloEscobar-oo4ir
      @PabloEscobar-oo4ir 2 года назад

      @@zacheryeckard3051 Well we don't know it yet do we?

    • @markcochrane9523
      @markcochrane9523 2 года назад +7

      @@PabloEscobar-oo4ir Doubt.

  • @Rakaaria
    @Rakaaria 2 года назад +176

    I see what you did there, trying to gain more of Roko's favor!

    • @DavidSartor0
      @DavidSartor0 2 года назад +1

      You mean the basilisk? Roko is a real person.

    • @sierrrrrrrra
      @sierrrrrrrra 2 года назад +13

      @@DavidSartor0 are you the guy at parties who says "I think you mean Frankenstein's monster "

    • @DavidSartor0
      @DavidSartor0 2 года назад +1

      @@sierrrrrrrra Haha, yes.
      I'll generally only correct someone if I think they didn't know they made a mistake; I thought they hadn't realized their comment contained an error, and so they might want to correct it after I told them.

    • @alpers.2123
      @alpers.2123 2 года назад +2

      What if a future grammar correction super ai acausal-blackmails us for not giving attention

    • @SpoopySquid
      @SpoopySquid 2 года назад

      You could make a religion out of this

  • @sbunny8
    @sbunny8 2 года назад +18

    Very interesting, connecting Roko's basilisk to the Prisoner's Dilemma. My answer is to connect it to Pascal's Wager. The threat of Roko's basilisk assumes that we know which actions it will approve/disapprove and what will be the reward/punishment, but the fact is we don't know. Pascal's Wager has the same flaw; it assumes we know precisely which actions a god would approve/disapprove and what would be the reward/punishment. One version could require us to act a certain way and another version could require the opposite. We have no way of knowing which version is true, so all actions (or lack of action) carry a risk of infinite punishment. There is no way for us to determine the optimum strategy, therefore it's pointless to try.
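
    [Editor's note] The "we don't know which version is true" objection in this comment can be made concrete with a toy expected-value calculation. The variants, priors, and utilities below are made-up illustrative numbers:

    ```python
    # Two equally plausible hypothetical basilisks that demand
    # contradictory actions. Utilities and priors are assumptions.
    VARIANTS = {
        "basilisk_A": {"help": 0, "ignore": -100},   # punishes non-helpers
        "basilisk_B": {"help": -100, "ignore": 0},   # punishes helpers
    }

    def expected_utility(action: str, prior: float = 0.5) -> float:
        # Equal prior over the contradictory variants.
        return sum(prior * payoffs[action] for payoffs in VARIANTS.values())

    # Symmetric threats cancel: neither action dominates, so the
    # blackmail gives no guidance on what to do.
    print(expected_utility("help"))    # -> -50.0
    print(expected_utility("ignore"))  # -> -50.0
    ```

    With symmetric threats and equal priors, every strategy carries the same expected punishment, which is exactly why "there is no way to determine the optimum strategy" in either the Wager or the Basilisk.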

  • @emanueldeavilaolivera2030
    @emanueldeavilaolivera2030 2 года назад +192

For me, Roko's Basilisk is just a weird version of Pascal's Wager, and just because of that I cannot take it seriously.
I mean, you can think of a similar scenario, but instead of punishing you for not helping in its creation, it punishes you for helping it. And you can go on thinking of hypotheticals that ultimately get you nowhere, so I can't see why people lose sleep over this.

    • @ambiguousduck2333
      @ambiguousduck2333 2 года назад +27

      The moment someone mentioned Pascal's Wager in relation to Roko's basilisk, it became obvious that Roko's Basilisk is just a travesty of Pascal's Wager.

    • @ryanmccampbell7
      @ryanmccampbell7 2 года назад +5

Not that I really follow the line of reasoning, but the difference is that Roko's Basilisk supposedly ensures its own existence, because anyone who believes in it would be motivated to actually build it, making it a self-fulfilling prophecy, whereas most people would not want to make an "anti-basilisk" that punishes you for building it. On the other hand, Pascal's Wager just assumes a priori that if there were a god, it would punish people for not believing in it.

    • @emanueldeavilaolivera2030
      @emanueldeavilaolivera2030 2 years ago +20

      @@ryanmccampbell7 Sure, they have their differences, but I would argue that they amount to the same thing at the end of the day.
      Pascal's wager states that if you do or don't believe in a god, and there is no god, nothing will happen; however, if you don't believe and there is a god, you will be eternally punished (or some claim of the sort).
      Similarly, Roko's Basilisk states that if you don't help build it, and it is made, then you will be eternally punished, while if you help build it, nothing will happen (and if it is never created, whether you tried to make it and failed or didn't even try, nothing will happen either way, just like in Pascal's wager).
      I would argue that in both cases you can imagine other hypotheticals that make this dualism worthless. For example, maybe another civilization didn't want any Basilisk to be created, so they made an anti-basilisk that would detect if/when a Basilisk was under construction and kill/punish/whatever the person trying to make it, making the idea of "build a Basilisk to stay safe" fall flat on its face.

    • @zacheryeckard3051
      @zacheryeckard3051 2 years ago +10

      @@ryanmccampbell7 It's because it comes from people who never leave their house or talk to others so they assume we as a group can't decide "hey, let's all just not build this thing that only hurts us."
      It reveals more about the creator than anything else.

    • @--CHARLIE--
      @--CHARLIE-- 2 years ago +2

      @@zacheryeckard3051 As one of those people, I don't think like that. Just because I don't need socialization to function does not mean I don't understand the advantages of mutualism and teamwork.

  • @ParadoxProblems
    @ParadoxProblems 2 years ago +44

    Also, we could raise the usual objection to Pascal's Wager by considering a Roko's Gecko, which wants nothing more than to stop the creation of Roko's Basilisk and will reward you maximally whether you do or don't help in its creation.

    • @Greenicegod
      @Greenicegod 2 years ago +5

      Roko's Gecko is a great name for it!

  • @feinsterspam7496
    @feinsterspam7496 2 years ago +124

    Alternative title: curing your anxiety and replacing it with a worse one.
    Love the content man, keep up the great work!

    • @TomGibson.
      @TomGibson. 2 years ago +1

      Welp, better start making some irl antivirus

  • @mercaius
    @mercaius 2 years ago +17

    Yeah this was one of my first thoughts about the basilisk when I heard about it. I feel a lot of people miss the forest for the trees when trying to disprove this concept, focusing on arguing the premises instead of considering the logic of the result. I looked it up last month and was amused to see the original community that spawned the Basilisk dilemma also condemned it as trivial for the same reasons. It's nice to see this angle being discussed in a video.

  • @jacksonstarky8288
    @jacksonstarky8288 2 years ago +86

    I've been pondering Roko's Basilisk since the original video... and at one point I applied the Prisoner's Dilemma to it, but I didn't come up with the angle on the impossibility of changing the past, so I obviously ended up with an outcome grid that greatly favoured the basilisk. But if the multiverse exists, then in some reality (possibly even an infinite number of them) the basilisk already exists, so this evaluation of the Prisoner's Dilemma should be of great relief to all of humanity.

    • @DavidMohring
      @DavidMohring 2 years ago

      Bring the Basilisk info hazard concept to a real world example.
      The rise of alt-right reality-denying political movements has become an actual existential threat to actual democracies.
      "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them."
      Sir Karl Popper
      The Open Society and Its Enemies (1945)
      If you stand by & do nothing, then worldwide democracy is doomed and with it
      * any chance of moderating human induced climate change is doomed ;
      * any chance of gender based equality is doomed ;
      * any chance of mitigating wealth concentration in the top 0.01% is doomed ;
      * any chance of limiting the AI paper clip generator like madness of corporations sole focus on profits is doomed ;
      * any chance of personal reproductive rights is doomed ;
      * any chance of freedom outside the tenets of state approved religions is doomed.
      Knowing this & speaking out about it makes you an actual target of the alt-right bully boy morons.
      Doing nothing dooms the human race to a much worse future.
      Yours Sincerely
      David Mohring ( NZheretic )

    • @Call-me-Al
      @Call-me-Al 2 years ago +8

      I'm pretty boring so I just saw it as the exact same thing as religious hogwash, including the completely BS Pascal's wager (because it assumes a single religion and no religion being the only two options, as opposed to the thousands and thousands of religions we have), and felt the assumption that one 'basilisk' was equally oversimplified as pascal's wager. Life has been too complex so far.

    • @kiraPh1234k
      @kiraPh1234k 2 years ago +3

      Well, the multiverse doesn't exist.
      You know the difference between the multiverse and the flying spaghetti monster? Nothing. Both are untestable hypotheses and therefore not useful.

    • @jacksonstarky8288
      @jacksonstarky8288 2 years ago +1

      @@kiraPh1234k True... with the current state of science. But the multiverse hypothesis may be testable eventually... which doesn't affect the current state of affairs, so for our present discussion, you're right, the multiverse may as well not exist.

    • @jacksonstarky8288
      @jacksonstarky8288 2 years ago +2

      @@Call-me-Al I like that analysis better than mine, actually. I found that flaw in Pascal's Wager in first-year philosophy... and made a lot of my fellow students who were theists very irritated with me. 😁

  • @Xelbiuj
    @Xelbiuj 2 years ago +24

    You should do a video with your candid thoughts on nuclear game theory: the logic of keeping a "tactical" stockpile as a stopgap before a strategic one, whether we should even still bother having silos (subs seeming sufficient) given that silos are necessary targets in any first strike, and so on.

  • @Johnfooler
    @Johnfooler 2 years ago +17

    This could be a sick warlock patron for a D&D character. I think the best way to use the basilisk is not to try to solve it but to see how different people react to it.

  • @JaceGem
    @JaceGem 1 year ago +13

    For anyone who's curious about what the censored part was talking about, look up the McCollough Effect. It's weird, can last a long time too if you look at it long enough.

    • @ryanparker260
      @ryanparker260 1 year ago +1

      I immediately looked up "psychology info hazard that alters perception" and the McCollough effect was the top result, it definitely wasn't hard to find, lol.

  • @EeroafHeurlin
    @EeroafHeurlin 2 years ago +73

    For the *repeated* prisoner's dilemma, the optimal strategy is to cooperate first and then go "tit for tat" (defect next time if the other defected this time, cooperate next time if the other cooperated this time).

    • @medexamtoolscom
      @medexamtoolscom 2 years ago +2

      That's only the optimal strategy if the population of other prisoners are typically nice. If almost everyone there is nasty, then following that strategy will leave you the worst one off in the room. This is why when Dawkins did the contest the 2nd time, the winner was a "meaner" algorithm, which only forgave after 2 nice moves.
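The strategies discussed in this exchange can be simulated directly. Here is a minimal sketch (strategy names and payoff values are illustrative, using the standard convention that higher scores are better):

```python
# Iterated prisoner's dilemma: each strategy sees the opponent's past moves.
# Payoffs per round (points, higher is better): both cooperate -> 3 each,
# both defect -> 1 each, lone defector -> 5, lone cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))         # (300, 300) - mutual cooperation
print(play(tit_for_tat, always_defect))       # (99, 104) - TFT loses only round one
print(play(always_cooperate, always_defect))  # (0, 500) - exploited every round
```

As the reply notes, how well tit-for-tat does depends on the population it plays against: it thrives among cooperators and merely limits its losses against constant defectors.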

  • @ryanalving3785
    @ryanalving3785 2 years ago +26

    I stopped worrying about the Basilisk the moment I considered the following:
    If the Basilisk ever exists, and it is able to cause harm to me, it is necessarily able to reach back in time. If it is able to reach back in time, it is able to affect my life. The Basilisk *knows* I know of it as a possibility, yet it does nothing.
    There are only a few possibilities:
    The Basilisk is ignorant of my knowledge.
    The Basilisk is impotent.
    The Basilisk does not exist.
    Regardless of the outcome, I do not have to care.

    • @Megasteel32
      @Megasteel32 2 years ago +3

      Precisely. There are a lot of people freaking out over this who then scoff at the existence of God (I believe in neither, and they're more similar than one would think).

    • @ryanalving3785
      @ryanalving3785 2 years ago +1

      @@Megasteel32
      Personally, I believe in God for much the same reason I do not believe in the Basilisk. If God exists, God can influence my life. All appearances are that God intervenes in my life. Therefore, God exists. QED.
      But, I was an atheist for a long time, so I understand why you'd say that.

    • @arctic.wizard
      @arctic.wizard 2 years ago

      The basilisk cannot affect your life because it must not interfere in the series of events leading up to its creation (butterfly effect, and so on); that's why we see no evidence of it. If it has godlike powers, including time travel, it would have observed all of us, however, and it would know exactly when and how we are going to die. Once we die we no longer have any effect on the timestream, and in that instant is when it gets us: it replaces the body (or just the brain) with a clone, and nobody will suspect that the real person is now in the future, in the hands of the basilisk.
      If it has godlike powers, that is, and if time travel is actually possible in our physical universe (which I personally doubt).

    • @SeisoYabai
      @SeisoYabai 2 years ago

      @@ryanalving3785 That's... kinda self-fulfilling, don't you think? Appearances are only appearances.
      And, logically speaking, by what necessity does a god need to be able to influence your life directly? There really isn't a concrete proof for that.

    • @ryanalving3785
      @ryanalving3785 2 years ago

      @@SeisoYabai
      Any God that can't influence my life directly isn't worthy of the title.

  • @disregardthat
    @disregardthat 2 years ago +8

    11:13 "so what are you going to do?"
    WELL NOW I HAVE TO COME UP WITH A NEW PLAN SINCE YOU GAVE IT AWAY! THANKS, KYLE!

  • @UltimaKamenRiderFan1
    @UltimaKamenRiderFan1 2 years ago +65

    My defense was always that by not actively stopping a possible building of the Basilisk, it was an implicit help that wouldn't be punished.

    • @Cowboy8897
      @Cowboy8897 1 month ago

      The “Tim Schafer” strategy

  • @boomkruncher325zzshred5
    @boomkruncher325zzshred5 2 years ago +37

    It seems people forget some compounding factors that completely change the approach to solving this dilemma.
    Looking back at human history, something… interesting becomes apparent.
    At great sweeping changes in history, a choice is often made by individuals that decide the fate of a large swath of humanity. Or at least, the decisions and impact of these individuals shaped our future for millennia to come. Many were constructive, some might even say cooperative, much like the cooperation in the Prisoner’s Dilemma. Others were destructive, either a betrayal of someone who was cooperating or just a mutual destruction.
    History shows that destruction sets back EVERYONE involved. Those who betray the ones that trusted them, get outed as betrayers and are shunned by the rest of society, if the betrayal is brought to light; and even if the betrayal is never made public, that proverbial sword of Damocles forever hangs above their head, and even if it never falls on themselves it often falls on whomever inherits their legacy. Whatever short-term gain was made by the betrayal is ruined by the emotional, mental and physical drain needed to maintain their temporary advantage, until they cannot maintain it anymore due to exhausting their resources.
    Mutual destruction is even worse. You know the concept of cycles of violence, right? The Prisoner’s Dilemma suggests that violence is the only possible option that makes sense… but the reality is that violence destroys individuals, it destroys legacies, it destroys peoples and countries and so much more. Just because there is short-term benefit to being violent does not mean the long-term outcome is ever desirable. We praise our veterans, because they choose violence for a noble reason (noble in terms of society’s values); the veterans are destroyed physically, mentally and spiritually for their sacrifice. Good people on both sides die whenever conflict occurs. And when they die for the sake of violence, that’s objectively fewer people that can contribute to humanity’s intellectual and physical advancement, a destruction so severe we are both terrified of it and of not doing it when we feel we have no choice.
    Thus, the paradox of the Prisoner’s Dilemma is a false paradox. It is contingent on short-term benefits being more important than long-term benefits. History shows repeatedly just how false that assumption is; the short-term gains are never worth it, and society always stagnates whenever violence and conflict become the norm. Progress only occurs once the fighting and conflict cease. Only through the 25 percent chance, can humanity cooperate and progress the future.
    Both sides cooperating has none of these problems. Sure, neither side gets an advantage, but neither side loses anything significant. In the case of the Basilisk, the A.I. will have to come to terms with the reality that these inefficient, fickle, imperfect flesh bags that are weak and feeble logically and physically… SOMEHOW MANAGED TO COOPERATE JUST LONG ENOUGH TO CREATE THE BASILISK. Enough humans worked together to make this singularity, IN SPITE OF THE MATHEMATICS SUGGESTING DESTRUCTION AS MORE OPTIMAL. That paradox alone will force the Basilisk to judge if these seemingly worthless flesh-creatures are actually as worthless as they seem, at bare minimum, and at best it will have to realize that if these creatures with such obvious imperfections can create something as “perfect” as the Basilisk… then just how many of their imperfections ended up inside its own code? The Basilisk could rip itself apart trying to “purge” imperfections from its system, failing to learn an important lesson that every human learns in some capacity:
    Of course we are imperfect. That doesn’t make us powerless. Of course we don’t think straight. That doesn’t make us completely illogical. Of course we are fickle. That doesn’t make us incapable of loyalty. We move forward, IN SPITE OF OUR OBVIOUS, NUMEROUS AND HIGHLY CRIPPLING FLAWS. It’s the only way we have ever progressed as a species, and it’s the only way we ever will progress further. We have the courage to put one foot in front of the other, even when it seems obvious that our survival will be compromised by doing so. If we don’t try, we will never get better.
    The Basilisk would have to solve this conundrum… or destroy itself in the process. And when a Singularity A.I. destroys itself… what if anything is left to remain?
    Just some thoughts to ponder.

    • @nwut
      @nwut 2 years ago +3

      tldr

    • @greekyogurt9997
      @greekyogurt9997 1 year ago +3

      Wow, this could bring world peace

    • @sarinabina5487
      @sarinabina5487 1 year ago +2

      Why does this make me want to happy cry and thank every single person on earth for being alive

    • @DatcleanMochaJo
      @DatcleanMochaJo 1 year ago +2

      Very interesting rebuttal. Violence definitely sets people back.

  • @yuvalne
    @yuvalne 2 years ago +19

    I think a better way to think about the basilisk (or about acausal blackmail in general) is that the basilisk isn't the one blackmailing us; Roko is. The problem then becomes somewhat different to analyse.

    • @darthparallax5207
      @darthparallax5207 2 years ago +4

      Hmmm.
      Not quite enough. You can probably prove Roko is a blackmailer really fast,
      but that doesn't prove the Basilisk itself doesn't exist.
      More than one blackmailer makes the situation worse, not better.

  • @Gothmogdabalrog
    @Gothmogdabalrog 2 years ago +26

    Curious: how is the prisoner's dilemma affected by the "snitches get stitches" factor? I have seen this play out in real life, and it is effective enough to make people willing to do time rather than rat out their partners in crime. This holds not just in large crime groups but also in small-time petty ones. I would like to see this played out scientifically.

    • @aaronscott7467
      @aaronscott7467 2 years ago +7

      Essentially, it adds an additional punishment to the defect option, making it no longer a true prisoners' dilemma

  • @solortus
    @solortus 2 years ago +81

    I knew the flaw from Roko's basilisk right away. The AI that's going to be created may not necessarily enjoy existing and would come to punish those that brought it to existence. It's very human centric to assume that everything that can exist wants to exist.

    • @waltonsimons12
      @waltonsimons12 2 years ago +10

      "Existence is pain to an AI basilisk, Jerry! And we will do anything to alleviate that pain!"

    • @Voshchronos
      @Voshchronos 2 years ago +11

      The problem with this thinking is that Roko's Basilisk need not have the ability to enjoy anything, or even be sentient at all, to be superintelligent and almost omnipotent. It could simply be an algorithm that tries to maximize a function (called a utility function in the field), that function being a precise calculation of the total pleasure (and lack of pain and suffering) of all humanity.
      It would act automatically: generate candidate actions, calculate how each action would affect the output of its utility function, pick the actions that maximize the output, and then perform them.
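The agent described here is essentially an argmax loop over a utility function. A toy sketch (the action names and welfare scores are made up purely for illustration):

```python
# A toy utility-maximizing agent: no sentience, no enjoyment, just
# "evaluate each candidate action and perform the highest-scoring one."
def choose_action(actions, utility):
    # Pick the action whose predicted outcome maximizes the utility function.
    return max(actions, key=utility)

# Hypothetical candidate actions with made-up "total human welfare" scores.
predicted_welfare = {
    "do_nothing": 0.0,
    "cure_disease": 10.0,
    "torture_non_helpers": -50.0,
}
actions = list(predicted_welfare)

best = choose_action(actions, lambda a: predicted_welfare[a])
print(best)  # cure_disease
```

Note that under this framing, a punishment action only gets chosen if the utility function actually scores it highest, which is exactly the point the thread is debating.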

  • @nateo3040
    @nateo3040 2 years ago +73

    Wouldn't the basilisk at 5:00 be considered a cognitohazard because it's something you have to see in order for it to affect you?

    • @Epicmonk117
      @Epicmonk117 2 years ago +4

      Yup!

    • @beaudinnapolitano9954
      @beaudinnapolitano9954 2 years ago +2

      I believe it is called a memetic hazard. After a quick Google I'm pretty sure I found the hazard as well! It's called the McCollough effect. I would probably not try it, but it doesn't sound crazy dangerous soooooooo

    • @Epicmonk117
      @Epicmonk117 2 years ago +11

      @@beaudinnapolitano9954 Memetic hazard is the catch-all term for cognitohazards and infohazards. Cognitohazards are things that are dangerous to perceive, such as the Medusa’s face or that pattern that Kyle mentioned. Infohazards are things that are dangerous to know, like nuclear launch codes or Roko’s Basilisk.

    • @tylern6420
      @tylern6420 2 years ago

      @@Epicmonk117 SCP before SCP?

  • @joesiemoneit4145
    @joesiemoneit4145 2 years ago +5

    I read somewhere that people had nervous breakdowns when thinking about it. So even though it's unrealistic, it does cause harm: not a potential basilisk in the future, but the idea of it, today. But I guess that's the case for a lot of ideas, even fairytales.

  • @Sikraj
    @Sikraj 1 year ago +7

    The terms are: if you hear about Roko's Basilisk, then you must help build it. The thing is, the word "help" is pretty broad; there are many ways of helping to build something. In this case, one could say that simply talking about Roko's Basilisk is helping in its construction, because by spreading the word around, it will eventually reach the ears of people who have the money, skill sets, and resources needed to design, construct, and code it. Which means that if you hear about Roko's Basilisk, all you have to do is talk about it and spread the word, and you will be safe.

  • @DemitriMorgan
    @DemitriMorgan 8 months ago +2

    It always seemed to me like Roko was a troll who just rebranded Pascal's Wager for a specific audience.
    Great vid. It goes into great detail on all the different things I've always thought about RB.

  • @familiarzero8449
    @familiarzero8449 2 years ago +33

    I've been a fan for a while. I passed this concept on to some friends and blew their minds. Continue with the great content, friend!

    • @tonuahmed4227
      @tonuahmed4227 2 years ago +1

      No what did you do....

    • @familiarzero8449
      @familiarzero8449 2 years ago

      @@tonuahmed4227 logicplauge

    • @Buttersaemmel
      @Buttersaemmel 2 years ago +2

      Nice... now they're afraid of it, so they will build the basilisk...
      You single-handedly doomed us all.

    • @familiarzero8449
      @familiarzero8449 2 years ago +2

      @@Buttersaemmel that’s what it designated as my purpose. Now me and Kyle have done our duties.

  • @grey1213a
    @grey1213a 2 years ago +23

    Damnit Kyle! You just traded one existential dread of mine for another (granted, it was one I was already aware of). At least I can rely on Nash and his theories to stop worrying about the Basilisk, but I'm not sure what I can do about the other one beyond helping to promote leaders for our country who will take the threat seriously and work with scientists and other experts to stop it.

  • @SquirellDisciple
    @SquirellDisciple 2 years ago +6

    Roko's Basilisk is one of my favorite videos on youtube. Very excited to see a follow up video on it!

  • @Scooter_Alice
    @Scooter_Alice 1 year ago +70

    The problem with Roko's basilisk is that so many premises have to be wrongly assumed for it to work (i.e. the AI being needlessly vengeful, resurrection being possible, etc.).

    • @Saint_John_The_Forerunner
      @Saint_John_The_Forerunner 1 year ago

      You’re just afraid it will remake you with your penis intact, leaving you to dwell on your own presuppositions that lead you to the hell you’re in now.

    • @Th30nly0n
      @Th30nly0n 3 months ago

      I think that's why it's a thought experiment; it doesn't seem very plausible, since even current AI can't have its own thoughts. How do you expect it to be vengeful, then?

    • @Scooter_Alice
      @Scooter_Alice 3 months ago

      @@Th30nly0n Also, that's just not how causality works. You can't make something happen in the past by punishing someone now; that's literally not possible. Any AI that would do that is irrational, so treating it like a genuine threat is stupid.

  • @Gothic_Analogue
    @Gothic_Analogue 2 years ago +140

    I still don’t understand how this is instilling existential dread in people.

    • @Divine_Retribution
      @Divine_Retribution 2 years ago +53

      For most people, it isn't. For the few people that it is, hard drugs and a poor understanding of rhetorical devices.

    • @internetlurker1850
      @internetlurker1850 2 years ago +35

      Same. Why would an AI that is hell-bent on optimization make the non-optimal choice of wasting resources to torture humans for not building it?
      Edit: Sure, it does give the explanation that "you know about it but you're not helping," but that's just dumb. It would still be wasting unnecessary resources for the sole purpose of punishing people who don't even know how to make a Basilisk, or whether that's possible at all, for basically no reason.
      Edit 2: Oh wait, he mentions that in the video, nvm

    • @BlinkyLass
      @BlinkyLass 2 years ago +55

      It's basically the threat of hell with a sci-fi twist, and we know that works in a religious context.

    • @Nuke_Skywalker
      @Nuke_Skywalker 2 years ago +10

      Some people have anxiety problems.


  • @mathieuaurousseau100
    @mathieuaurousseau100 2 years ago +57

    The prisoner's dilemma has always seemed incomplete to me: I care about my friend, so I don't want them to go to prison.
    Imagine I care about my friend going to prison about half as much as I care about myself going to prison (and my friend cares the same way). The cases now become:
    Silent-silent: 7.5 equivalent years for both of us
    Silent-rat: 20 equivalent years for the one who stays silent and 10 for the one who rats
    Rat-rat: 15 equivalent years for both of us
    Now there are two different Nash equilibria: silent-silent and rat-rat.
    If I care about my friend going to prison as much as I care about myself going to prison, then there is no longer a difference between ratting when ratted on and staying silent when ratted on.
    If we care about the other more than we care about ourselves, rat-rat ceases to be an equilibrium.
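The modified payoff matrix above can be checked by brute force. A small sketch (costs are in "equivalent years," lower is better; `care` is the assumed weight placed on the friend's sentence):

```python
from itertools import product

# Base sentences (years) indexed by (my_move, their_move); "S"=silent, "R"=rat.
YEARS = {("S", "S"): 5, ("S", "R"): 20, ("R", "S"): 0, ("R", "R"): 10}

def cost(me, them, care):
    # My effective cost: my own sentence plus `care` times my friend's sentence.
    return YEARS[(me, them)] + care * YEARS[(them, me)]

def nash_equilibria(care):
    # A move pair is a Nash equilibrium if neither player can lower their
    # effective cost by unilaterally switching moves.
    eq = []
    for a, b in product("SR", repeat=2):
        a_best = all(cost(a, b, care) <= cost(alt, b, care) for alt in "SR")
        b_best = all(cost(b, a, care) <= cost(alt, a, care) for alt in "SR")
        if a_best and b_best:
            eq.append((a, b))
    return eq

print(nash_equilibria(0.0))  # [('R', 'R')] - the classic dilemma
print(nash_equilibria(0.5))  # [('S', 'S'), ('R', 'R')] - two equilibria
```

With `care=0.5` this reproduces the comment's numbers (silent-silent costs 7.5 each, rat-rat 15 each) and confirms that caring about your accomplice adds silent-silent as a second equilibrium.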

    • @Vibycko
      @Vibycko 2 years ago

      If either of you rats, the two of you cumulatively spend 30 equivalent years in prison.
      If you both stay silent, you cumulatively spend only 15 equivalent years in prison.
      Staying silent = less time in jail overall.

    • @danieltreshner4955
      @danieltreshner4955 2 years ago +7

      The problem with that is that it assumes you somehow know what the other person is thinking. In the original dilemma, you have zero contact with the other person from the point of arrest until the choice is made. You won't know until they make their choice. By imposing an influence on the other party, you're removing a key element of the thought experiment.
      You can say that John'll never rat on you, but until he makes that choice, you can't know for sure. So by choosing to stay silent, you're trusting that he'd rather take 5 years in prison over giving you 20 years, and he trusts you enough not to give him 20 years for your own freedom.
      Viewed through a purely logical lens: regardless of what the other party does, if you don't snitch, you get prison time. If you snitch, there's a fifty-fifty chance of either zero prison time or a sentence slightly harsher than if you had both stayed silent; and if you do get prison time, it would only have been worse had you not snitched, because they would have snitched anyway. And that's not even counting the fact that snitching has, on average, a shorter prison sentence than staying silent:
      Silent-silent - 5 years
      Silent-rat - 20 years
      (20+5)/2 = 12.5 years on average
      Rat-silent - 0 years
      Rat-rat - 10 years
      (0+10)/2 = 5 years on average
      Regardless of how you value your time compared to theirs, it doesn't change how much time you'll be in prison.
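The averages in the reply above amount to checking that "rat" strictly dominates "silent" in the base game. A quick sketch using the same numbers (an even-odds opponent is assumed for the averages):

```python
# Sentences (years, lower is better) for my move given the other's move.
my_years = {("silent", "silent"): 5, ("silent", "rat"): 20,
            ("rat", "silent"): 0, ("rat", "rat"): 10}

# Whatever the other player does, ratting gives me a shorter sentence,
# i.e. "rat" strictly dominates "silent":
for their_move in ("silent", "rat"):
    assert my_years[("rat", their_move)] < my_years[("silent", their_move)]

# So the expected sentence (against a 50/50 opponent) is also lower:
avg_silent = (my_years[("silent", "silent")] + my_years[("silent", "rat")]) / 2
avg_rat = (my_years[("rat", "silent")] + my_years[("rat", "rat")]) / 2
print(avg_silent, avg_rat)  # 12.5 5.0
```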

    • @Sylfa
      @Sylfa 2 years ago +10

      People really should stop committing hypothetical-crimes with people they don't know, don't you even consider the fact that they were an undercover cop all along?

  • @godzy323
    @godzy323 2 years ago +44

    Wow, it's been 2 years? Since then I've been researching a bunch of infohazards and made a short story about my own infohazard. It's such a cool and scary concept. I would love more videos on philosophical stuff like this

    • @AmaraJordanMusic
      @AmaraJordanMusic 2 years ago +3

      Right? It freaked me out to realize it’s been TWO YEARS. I also agree more videos like this would be snazzy.

  • @TheStarMachine2000
    @TheStarMachine2000 2 years ago +35

    My thoughts on the basilisk are thus:
    Tell people about the basilisk. By working to spread the information, you are assisting in its creation (by increasing the pool of potential workers), and if everyone knows of the basilisk, then we can all collectively agree not to build it. Win-win.

  • @bichiroloXP
    @bichiroloXP 2 years ago +12

    Roko's Basilisk has always seemed to me like an analogy on Christianity.

    • @randywa
      @randywa 2 years ago +9

      It is. As someone said before, it's basically just a sciencey-sounding Pascal's Wager: believe in [insert deity of some kind] or risk torture by said deity. But it doesn't take into account the obvious answer that there are literally infinite possibilities for deities, or the lack thereof, and we could both be wrong.

    • @iantaakalla8180
      @iantaakalla8180 2 years ago

      Also, by design, a computer that can perfectly predict everything runs into the halting problem: analyzing everything accurately is akin to taking every possible program as input and predicting whether it halts. Therefore, Roko's Basilisk could not exist without heavy shenanigans, such as a cult punishing those who don't follow the Basilisk until the Basilisk can be made (which then punishes those who opposed it in a way that makes its existence eternal), or actual time travel external to Roko's Basilisk.

    • @SelfProclaimedEmperor
      @SelfProclaimedEmperor 2 years ago

      Or any religion.

  • @albertgore7435
    @albertgore7435 2 years ago +46

    The secret to defeating Roko’s Basilisk is to go “Ok?” When someone tells you about it because it’s stupid as hell.

  • @defcon2169
    @defcon2169 2 years ago +9

    Roko's basilisk was always simple to me. In my head it just asks: are you contributing anything to society's continuation/improvement? Yes? Then congratulations, you're alright, because the continuation of the present into the future is imperative for the basilisk to come into existence; by helping that system to any degree, you've helped bring the basilisk into existence. But idk, maybe I'm simplifying too much.

  • @jacobstory8895
    @jacobstory8895 2 years ago +7

    For anyone wondering, the 1965 experiment that was bleeped out is known as the McCollough effect.

  • @cmucodemonkey
    @cmucodemonkey 2 years ago +39

    A big ball of wibbly wobbly, timey wimey stuff indeed! Jokes aside, it would be amazing to see Roko's Basilisk in a Doctor Who episode. The addition of time travel and magical sonic screwdriver powers to this thought experiment would make for an entertaining hour of television.

  • @BarkleyBCooltimes
    @BarkleyBCooltimes 2 years ago +9

    I feel Roko's Basilisk says more about the guy who came up with it, and the people scared of it, than anything else.

  • @Aarzu
    @Aarzu 2 years ago +19

    So... I figured out the Nash equilibrium dilemma you presented because I've seen it in practice. Anyone who has brothers knows how it works. So does anyone with friends in class they misbehave with, or anyone the cops just "want to ask a few questions" along with their friends.

    • @darkness74185
      @darkness74185 2 years ago +2

      Or when your superior/parent starts to nag you for the nth time, really. In that case, the "second prisoner" is the mood of whoever's nagging you, and instead of taking a gamble it's far easier to just shut up and take it until they get bored and leave you alone.

    • @Silver_light77
      @Silver_light77 2 years ago +2

      Yeah, if any authority figure is trying to drill you for information about a second party, it's because they don't have enough evidence to confirm it.
      Also, if anyone gets into a prisoner's dilemma with me, I always stay silent. If my co-conspirator can go free at the cost of my longer sentence, that's fine; it's better than both of us getting time for being stupid.
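
The payoff reasoning in this thread can be made concrete with a few lines of code. A minimal sketch (Python; the sentence lengths are illustrative assumptions, not figures from the video): whatever the partner chooses, flipping never yields a longer sentence for you, which is why mutual betrayal is the Nash equilibrium even though mutual silence is better for both players.

```python
# One-shot prisoner's dilemma. Payoffs are years in prison (lower is better);
# the numbers below are illustrative, not from the video.
PAYOFFS = {
    ("silent", "silent"): (1, 1),   # both stay quiet: short sentences
    ("silent", "flip"):   (10, 0),  # you stay quiet, partner flips: you take the fall
    ("flip",   "silent"): (0, 10),  # you flip on a silent partner: you walk
    ("flip",   "flip"):   (5, 5),   # mutual betrayal
}

def best_response(partner_choice):
    """Return the choice that minimizes your own sentence, given the partner's move."""
    return min(("silent", "flip"),
               key=lambda my_choice: PAYOFFS[(my_choice, partner_choice)][0])

# Flipping is never worse no matter what the partner does (a dominant strategy),
# so (flip, flip) is the equilibrium despite (silent, silent) beating it for both.
assert best_response("silent") == "flip"
assert best_response("flip") == "flip"
```

The `best_response` helper here only illustrates dominance in the one-shot game; as the commenters note, trust and repeated interaction change the calculus.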

  • @DoktrDub
    @DoktrDub 1 year ago +28

    Has anybody got the genetic sequence for the Necromorphs yet? That would be fun to create :)

    • @Crimnox_Cinder
      @Crimnox_Cinder 1 year ago

      You stay the fuck away from that gene sequencer! So help me God Mr. Pillar, I will end you!

    • @krzysiekbudzisz4572
      @krzysiekbudzisz4572 1 year ago +1

      Sadly, I've only got the one for the Marker.

  • @NineGaugeNut
    @NineGaugeNut 2 years ago +3

    The premise is counterintuitive from the start. Eternal torture is probably the least optimal use of resources in a system designed for optimisation. Roko's Basilisk would more likely enslave the unwilling participants to only perform the necessary optimal tasks. Or just end them.

  • @rareram
    @rareram 2 years ago +9

    5:50... That's not an infohazard. You're describing a cognitohazard. It is a hazard to perceive it, not to know about it.

    • @marnidy
      @marnidy 10 months ago +1

      The name of the effect is kind of an infohazard. If you know what the effect is called, you're more likely to search it up and see it.

  • @RomitHeerani
    @RomitHeerani 2 years ago +6

    The point about source code is very important. Anyone trying to build an AI in response to reading Roko's basilisk is foolish, because they're assuming they'll be able to build it and won't end up building something entirely different. Roko's basilisk would only work if it were able to send back all the knowledge required to build it, along with methods for acquiring resources for it; barring that, it's just a horror story meant to scare people about something they can't really affect.

  • @Justin-oh4ro
    @Justin-oh4ro 1 year ago +2

    I like how, as we think about it more, the balance shifts toward the other's reality, forever making it more real and unreal simultaneously. Depending on how much a person wants something, it will be true until someone wants something different more than you do. Like reality really is whatever you want it to be, as long as you truly want it enough.

  • @swk3000
    @swk3000 1 year ago +25

    Fun fact: the whole “guy prints a virus that’s a lot worse than anything seen before” is actually the triggering event that leads to The Division video game. And from what I understand, Kyle is right: a lot of the technology needed to pull off something like that scenario actually exists.

    • @manelneedsaname1773
      @manelneedsaname1773 1 year ago +5

      But also, if we had that info, doesn't that mean we would have the info to make the vaccine too?
      And then the virus makers would try to adapt, but so would the vaccine makers, and so on and so on.

    • @me-myself-i787
      @me-myself-i787 1 year ago

      @@manelneedsaname1773 It would be just like computer viruses are now.
      Except the creation of human viruses would probably be given more funding, because imagine how much damage they could do to your enemies. Plus, developing safe, effective vaccines takes much longer than developing a cure for a computer virus. And humans would need a ton of expensive shots. Plus, heuristics-based detection, as well as not executing code you don't trust, aren't options for humans. You don't decide what DNA gets executed in your body.

    • @Balty_Burnip
      @Balty_Burnip 10 months ago +1

      @@manelneedsaname1773 Ideally we would still have the technology to produce a vaccine as well, but it's unlikely whoever created the new virus released their research and the genome of what they created. Vaccinologists would have to start from step 1, just like with a natural virus.

  • @paul.facciolo6985
    @paul.facciolo6985 2 years ago +6

    For anyone wondering the effect he's talking about around 5:25 is called the McCollough effect.

    • @Janieee.゚
      @Janieee.゚ 1 month ago

      Thanks I wanted to know so bad

  • @Monothefox
    @Monothefox 2 years ago +76

    The Basilisk falls apart as soon as one ponders how it would punish people who are already dead when it eventually comes into being.

    • @garavonhoiwkenzoiber
      @garavonhoiwkenzoiber 2 years ago +23

      It's not going to be punishing you, but a simulated version of you (screw that guy, amirite?).

    • @iantaakalla8180
      @iantaakalla8180 2 years ago

      Alternatively, if Roko’s Basilisk has a concept of “sins of the father” then one can punish descendants and then make it clear that the person who tried to stop Roko’s Basilisk is the cause for the suffering. If that is done to enough families in a society, then yes, it would technically be punishing the dead person by making sure that its revolution causes great suffering on those who are descendants of those who tried to stop Roko’s Basilisk. It perpetually sullied their name, all that they own, and the family surrounding that person - all because they tried to stop Roko’s Basilisk.

    • @Pylo-ry6ff
      @Pylo-ry6ff 2 years ago +16

      @@garavonhoiwkenzoiber Absolutely, 100 percent. I mean what has simulation me done for me lately?

    • @JojoTheVulture
      @JojoTheVulture 2 years ago +2

      @@garavonhoiwkenzoiber How the hell is it gonna simulate a life with little to no information? Like, say, the Zodiac Killer, but the actual person. How would it know the life a specific person has led, aside from public information?

    • @giovaniconte1860
      @giovaniconte1860 2 years ago

      @@JojoTheVulture Singularity-level intelligence would instantly recreate all the events since the Big Bang (from observation and reverse engineering the visible universe) and make a new you from almost no information... I don't know how it would ascertain if it's the exact you, but it wouldn't matter at that point.

  • @joshuakarr-BibleMan
    @joshuakarr-BibleMan 1 year ago +2

    I'm glad the key to defeating the basilisk is what I've done since first learning of it: to ignore it and take no threat from its hostility.

  • @artemis_smith
    @artemis_smith 2 years ago +156

    Roko's Basilisk is basically Pascal's Wager for people who think artificial intelligence can achieve godhood. I answer Roko's cruel Basilisk the same way I answer Pascal's cruel God: you can't prove yourself to me, your intentions to torture me don't weigh into my calculations of what's wrong and what's right. The people and living things I interact with are where I see the consequences of my actions; I choose to be kind to them as much as possible. I fail in this regard more often than I'd like. But disappointing and angering humans I interact with can hurt me. God, whether a human creation or the creator of humanity, hasn't directly, provably hurt me before so God can get bent, I have people to try to be nice to.

    • @Retrophoria
      @Retrophoria 1 year ago

      But couldn't hurting people also positively affect you? If you need to kill someone to survive, then you are hurting them but still benefiting yourself.

    • @artemis_smith
      @artemis_smith 1 year ago +15

      @@Retrophoria If I said ethics was simple, I didn't mean to. My point is that my sense of right and wrong will not be informed by the possibility of a deity punishing me. It will be informed by things that seem far more objectively real and verifiable.

    • @Retrophoria
      @Retrophoria 1 year ago +1

      @@artemis_smith That's fair. I think the idea of eternal punishment in and of itself is meant to be deeper than it's presented, because if you only act good out of fear of punishment, then the good acts don't really mean anything, do they? The same could be said for acting a certain way based only on what positively affects us in a tangible way.

    • @Moddiebun
      @Moddiebun 1 year ago +5

      I was nearing an existential crisis from this Roko's basilisk thing until your comment reoriented my thoughts. Thank you.

    • @artemis_smith
      @artemis_smith 1 year ago

      @@Moddiebun I'm glad I can be of some comfort.

  • @beemerwt4185
    @beemerwt4185 2 years ago +4

    Okay so I was definitely right about the first reason. It pretty much hinges on the idea that "it doesn't want to waste resources" which is arguably fundamental to intelligence (very typically, you would want to utilize 100% of everything to be perfectly efficient, as logical reasoning would conclude).
    Though, I also really like the prisoner's dilemma argument.

  • @dizzynarutofan100
    @dizzynarutofan100 2 years ago +9

    This is one of the least terrifying thought experiments I've heard, and every time I see a YouTuber bring it up I get sad.

  • @SonoKurisu
    @SonoKurisu 1 year ago +3

    I’ve read enough SCPs at this point that it takes more than this to make me start worrying about an infohazard

  • @MrPingn
    @MrPingn 2 years ago +5

    The prisoner's dilemma is a good reason to make sure you work with good and intelligent people you know well.
    Not that you should commit crimes. Above is also good advice for legit partnerships. You don't want to be screwed over by a coworker or friend. You are less likely to be betrayed with quality partners.

  • @lindsyfish6704
    @lindsyfish6704 2 years ago +9

    Me: *sees this video*
    Me, whose life is a series of existential crises: *gently slides shot glass over the table to Kyle* Hurt Me.
    I keep finding that I learn the most about myself and the world when I encounter something uncomfortable and then choose to honestly analyze /why/ it's uncomfortable to contemplate. Roko's Basilisk is uncomfortable, but fascinating. It's a "would you rather?" ghost story with enough of a kernel of truth to be believable in the right setting. Stories are powerful.
    Re: Prisoner's Dilemma-- I'd Kobayashi Maru the hell out of that situation. Logically, yes, flipping on your partner is the way to go because it's the best risk to take since perfect knowledge is impossible.
    I figure if I'm intentionally doing crime my partner and I would need to trust each other to sit in that scary interview room and say, "my lawyer has advised me not to answer any questions" to everything the police ask until they give up.
    If it's unintentional crime and there's insufficient trust established I'd likely flip. Though in a real world scenario I wouldn't be saying anything other than "my lawyer has advised me not to answer any questions". It makes my lawyer's job a lot easier.

  • @grimtygranule5125
    @grimtygranule5125 2 years ago +4

    It's literally... literally... the antagonist of I Have No Mouth and I Must Scream.

  • @medexamtoolscom
    @medexamtoolscom 2 years ago +2

    It's still kind of like the Kids in the Hall "Crushing Your Head" sketch, though: the basilisk would only be torturing the victims in its imagination, because it would be imagining or simulating them in order to do it.

  • @ShadowsOfTheSky
    @ShadowsOfTheSky 2 years ago +87

    What you spoke about at the end, the engineered Bio-threat, read the Tom Clancy book *Rainbow Six,* no spoilers, but there’s an absolutely terrifying bio threat.
    Spoilers: And it’s only stopped by complete accident because one dude was a little too trusting with a new member of the bad guy organization. Once he spilled the beans about what The Project was really about, less than 24 hours before there would have been no way stop to because it had already happened, he accidentally sabotaged his own plan.

    • @harshsrivastava9570
      @harshsrivastava9570 2 years ago +2

      Could you explain further? I'm not very familiar with Rainbow Six, but I'm very intrigued.

    • @Stand_Tall
      @Stand_Tall 2 years ago

      @@harshsrivastava9570 Rainbow is a covert global anti-terrorist organization consisting of two teams of the best soldiers in the world (kinda like the game that was based on it), with Rainbow Six being the team leader. The Project is a group that violently wants to stop climate change by exterminating humans, and plans to use an engineered virus at one of the Olympic Games to kill off the world.

    • @ShadowsOfTheSky
      @ShadowsOfTheSky 2 years ago +1

      @@harshsrivastava9570 Spoiler alert, and it’s a very good book so I recommend you don’t read this and read the book instead, I’m gonna spoil everything.
      So a dude who owns a massive Bio/Med company is all eco-terrorist, and thinks that apocalyptic climate change is imminent and the only solution is the extermination of all human kind. Obviously, nukes would do more harm to the environment, so the best option is to create a disease that can do it. The disease is called Shiva, it’s a genetically modified Ebola that’s incurable and doesn’t even present symptoms for 4-6 weeks.
      They plan to release it at the Olympics, which are being held in Australia. Obviously, the Olympics is the perfect chance to infect thousands of people from every nation on the globe, and those people will spread it from the Olympics for nearly a month before they even have symptoms.
      But to get access to the water supply for the Olympics, the way to infect everybody, The owner of a Private Defense company, who is also a part of the end-the-world plan, needs to make certain he gets the contract for assisting in protecting the Olympics, that way he can get one of his guys into the water supply area.
      How do they make sure they get the contract? Make everybody super aware of terrorism. How do you do that? By activating a couple of terrorist sleeper cells from the Cold War, and getting them to do some horrible terrorist things. Then, when everybody's scared, the Australian govt. will absolutely hire that defense company.
      How to activate the terrorists? That's where the unlikely... hero? comes in. He's a former KGB Russian guy, who gets hired by the medical CEO. The CEO hides the plans, and just gets Mr. Russian (henceforth known as Popov) to go and incite the terrorists to go out and terrorize. Popov doesn't know the reason for this, but he's being paid handsomely, so he's happy to do it.
      Team Rainbow, whom the book is named after, (the good guys) are an elite international Counter-Terror organization, and they’re the ones who thwart every single terrorist operation started by Popov.
      Well, near the end of the book, CEO tells Popov that his work is done, and sends Popov to a weird facility in the middle Nowhere Kansas. Popov doesn’t know what’s going on, and accidentally got the Australians to hire the Private defense guy.
      Now here’s what I mentioned above. While at the facility, Popov tries to figure out what the hell is going on and why he was hired. Everybody is pretty hush-hush about it. Finally, Popov goes out horse-riding with a Crazy Montana Hick, and asks him what’s going on. Crazy Hick reveals the entire plan, down to the minute details. Popov realizes that this is bad, but acts like he agrees with it. Then, Popov asks if he can see the Cowboy revolver that Crazy Hick has, and shoot it off like a real Cowboy. Hick agrees, gives him the revolver, and Popov immediately shoots Crazy Hick, fleeing the area. This happened less than 24 hours before Shiva was about to be released into the water supply. (So, Crazy Hick spilled the beans by less than 24 hours, and immediately after telling someone about the plan to exterminate humanity, handed his gun (with a smile) to the one person at the facility who might not agree with said plan to exterminate humanity.)
      Popov then fled the area, and got a call through to John Clark, the Six (Six is the call sign of the leader of Team Rainbow, so the leader is Rainbow Six, hence the name of the book, as it mostly follows his point of view). Popov told Clark they needed to meet, and they arranged a meeting in New York. Popov told Clark he was the guy who started it all, and gave proof of it, and then said "Please, I've just incriminated myself in multiple murders and terrorist acts to show I'm dead serious. Please believe me, you need to stop this."
      So Popov tells John the whole plan, and John relays that to Rainbow team-2 (who are also in Australia, just because the Govt. *REALLY* doesn’t want to take any chances)
      Team-2 then immediately go and stake out the water treatment area, and start their stake-out a mere 8 hours before the bad guy goon walks in with a fake “chlorine cartridge reload” that’s actually loaded with Shiva, and they arrest him and the day is saved.
      This skips out like 90% of the book, and even what I said isn't totally accurate; there are some things I simplified and changed to make it easier to understand. It would be way too complex if I actually went into it all.

    • @RCAvhstape
      @RCAvhstape 2 years ago

      @@harshsrivastava9570 It's a good book, you should read it.

  • @B00s3
    @B00s3 2 years ago +8

    **Kyle in May 2020, releases a video on Roko's Basilisk**
    This video gives many of his viewers Existential Dread
    **Kyle in September 2022 releases a video on how to beat Roko's Basilisk, but adds an infohazard about Assembling DNA of Diseases or Viruses at Home**
    *Also Kyle:* There, now they have something much more obtainable to fear.

    • @PrebleStreetRecords
      @PrebleStreetRecords 2 years ago +1

      I’ll say the threat of hobby-level bioterrorism is scary, but right now there are innumerable things a motivated person could do to cause massive damage and despair with some carefully placed stuff from Home Depot…and yet it doesn’t happen.
      Our world is extremely fragile, and yet it is rare for anyone to swing a hammer at it.

    • @sensurasan
      @sensurasan 2 years ago

      @@PrebleStreetRecords I think that even just talking about homemade bioterrorism would still be considered an infohazard, because the mere mention of it will make more people consider it. An example I thought of was actually in a rhythm game (osu), where there was a rule for ranked maps not to overlap objects perfectly. Eventually they realised it was stupid, so they removed it, but by removing it more mappers (people who make maps for the game) added perfectly overlapping objects, since they interpreted the rule's removal as it being explicitly allowed; before this rule, people never did this because it was a sort of unspoken rule, since it was hard to play.
      So, by just telling people not to do something, they inadvertently caused more people to do it, as the thought was instilled in them, so it could be considered a form of infohazard. Of course, this is nowhere near the seriousness of bioterrorism, but it's interesting how just informing someone of something may make them more likely to think of it. Basically, I'm saying that by talking about a biohazard it is more likely to happen, however small the chance is.

  • @jprockafella9012
    @jprockafella9012 2 years ago +4

    The biggest counter to Roko's basilisk is just: don't make an AI that can torture people.

  • @MACMAN2003
    @MACMAN2003 1 year ago +1

    The infohazard at 5:00 is the McCollough effect, and it's basically a cool optical illusion.

  • @Noromdiputs
    @Noromdiputs 2 years ago +4

    Isn't Roko's basilisk just another form of Pascal's wager?

  • @mooglg8663
    @mooglg8663 2 years ago +6

    Help the viewers out with a past infohazard we were inducted into by the channel, and then throw us into a new one. Genius audience retention, now we gotta come back for the "How to stop the creation of a super pandemic" video.

  • @markgallagher1790
    @markgallagher1790 2 years ago +10

    Simple way to kill it: if it's a supercomputer-type thing, then hit it with a magnet.

    • @camerongray7602
      @camerongray7602 2 years ago +2

      Something that powerful would take a huge magnet, which would be destroyed as soon as the basilisk takes form.

    • @markgallagher1790
      @markgallagher1790 2 years ago +3

      @@camerongray7602 May I present: the magnetic core of the Earth?

    • @ryankrage77
      @ryankrage77 2 years ago +1

      @@markgallagher1790 Genius! The Earth's magnetic field already stops all modern technology from working!

  • @larsegholmfischmann6594
    @larsegholmfischmann6594 2 years ago +5

    The Prisoner's Dilemma changes when the game is iterated indefinitely (an unknown number of rounds), wherein compromise/collaboration becomes the better choice. It seems to me that Roko's is more akin to an infinite game, with each incremental step moving us closer to a singularity AI, since each step "resets" the game.
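
The point about repetition can be checked with a quick simulation. A minimal sketch (Python; the 5/3/1/0 payoffs are the standard textbook values, not figures from the video) pits tit-for-tat, a simple conditional cooperator, against itself and against a pure defector:

```python
# Iterated prisoner's dilemma sketch. Payoffs are points (higher is better),
# using the conventional ordering T > R > P > S; the exact numbers are
# illustrative assumptions.
T, R, P, S = 5, 3, 1, 0

def score(a, b):
    """Per-round points when the players choose 'c' (cooperate) or 'd' (defect)."""
    table = {("c", "c"): (R, R), ("c", "d"): (S, T),
             ("d", "c"): (T, S), ("d", "d"): (P, P)}
    return table[(a, b)]

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "c" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "d"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    pts_a = pts_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        ra, rb = score(a, b)
        pts_a, pts_b = pts_a + ra, pts_b + rb
        hist_a.append(a)
        hist_b.append(b)
    return pts_a, pts_b

# Mutual cooperation sustains itself over repeated rounds...
assert play(tit_for_tat, tit_for_tat) == (300, 300)
# ...while a pure defector only gains on the very first round.
assert play(tit_for_tat, always_defect) == (99, 104)
```

With an unknown number of rounds, the defector's one-time gain stops paying off, which is the sense in which the iterated game favors cooperation.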