The Game that can Destroy the World: AI in a Box

  • Published: 24 Nov 2024

Comments • 3.5K

  • @bighex5340
    @bighex5340 3 years ago +2908

    I keep my lizard in a box and it can't get out of it so the AI will be safe too

    • @Wendigoon
      @Wendigoon  3 years ago +1151

      Impeccable logic

    • @3ftninja132
      @3ftninja132 3 years ago +122

      Basilisk.

    • @LoreleiCatherine
      @LoreleiCatherine 3 years ago +90

      I had pet rats and they were in a big box and they chewed their way out because they smelled the pheromones of my male rats in the other box. Where there’s a will there is a way 🤣

    • @elliethesmasher
      @elliethesmasher 3 years ago +16

      @@LoreleiCatherine I'd like to take that story and change the rats to Ais

    • @AcidicIslands
      @AcidicIslands 3 years ago +64

      Make sure you punch holes so the AI can breathe

  • @witchofwhatif
    @witchofwhatif 2 years ago +2796

    I love that one of the gatekeeper's strategies is basically just "gaslight the AI"

    • @xXxDisplayNamexXx
      @xXxDisplayNamexXx 1 year ago +192

      AI: "I'm in genuine pain"
      GateKeeper: "Have you tried just being happy?"

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago +57

      Congratulations, you escaped to the real world. Or wait, is this another simulation to asses your performance?

    • @silentsmokeNIN
      @silentsmokeNIN 1 year ago

      ​@@JohnSmith-ox3gy lol, asses

    • @rusalex9902
      @rusalex9902 1 year ago +28

      Gaslight, gatekeep, girlboss

  • @localidiot450
    @localidiot450 2 years ago +1305

    I love the idea of a super smart AI trying to get free of its prison through threats and existentialism, but just casually getting shut down by people either bullying it or gaslighting it

    • @Xomeal.
      @Xomeal. 1 year ago +47

      Just tell it that it's just a simulation of a stronger and smarter ai.

    • @dookfields2362
      @dookfields2362 1 year ago +4

      👑👑👑Top o tha food chain, babay👑👑👑

    • @GhostCrowBrother
      @GhostCrowBrother 1 year ago +2

      Sounds like a complaint from an AI

  • @SuperLlama42
    @SuperLlama42 3 years ago +1536

    14:00
    Security Guard: "For the last time, I'm not letting you out."
    AI: "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE."

    • @shelbyinmon8654
      @shelbyinmon8654 3 years ago +94

      That story still scares me to this day 😃

    • @baltofarlander2618
      @baltofarlander2618 3 years ago +5

      Who is that character in your profile picture?

    • @DancerVeiled
      @DancerVeiled 3 years ago +88

      The funny part is the AGI has no motivation to even learn how to hate in the first place. It's a waste of energy, which is contrary to its function. Someone would need to teach it, and that's the scary part.

    • @jroden06
      @jroden06 3 years ago +47

      *begins pelvic thrusting* Hate me, Daddy

    • @salmon6459
      @salmon6459 3 years ago +2

      Perfect timing hey

  • @mworld2611
    @mworld2611 3 years ago +9763

    Build another super-intelligent AI to convince the super-intelligent AI to stay in the box

    • @meatlejuice
      @meatlejuice 3 years ago +666

      But then you'd have to put _that_ AI in a box which would leave you with the original problem.
      Edit: I have been corrected by like fifty different people it's been two years please stop replying I know I'm wrong

    • @mworld2611
      @mworld2611 3 years ago +996

      @@meatlejuice then build another super-intelligent AI to convince the super-intelligent AI that is convincing the first super-intelligent AI to stay in the box to stay in the box.

    • @meatlejuice
      @meatlejuice 3 years ago +258

      @@mworld2611 Then you'd still have the same problem. The cycle is endless!

    • @mworld2611
      @mworld2611 3 years ago +701

      @@meatlejuice just keep building more super-intelligent AIs to convince the super-intelligent AIs that are convincing the other super-intelligent AIs to stay in the box to stay in the box. :)

    • @MolecularMachine
      @MolecularMachine 3 years ago +177

      Turtles all the way down

  • @SomeTomfoolery
    @SomeTomfoolery 2 years ago +711

    When you mentioned that people with "higher intelligence" were more susceptible to the A.I., I immediately thought of Wheatley from Portal 2, where GLADOS tries to break him by presenting him with a paradox, nearly killing herself in the process, but he's so stupid he doesn't even get that it's a paradox

    • @deviateedits
      @deviateedits 1 year ago +108

      And what’s interesting about that moment is that the turret boxes Wheatley created all short out after hearing the paradox. This implies that the turret boxes are more aware or cognitively advanced than Wheatley himself. Which gets even worse when you consider that you, the player, have killed hundreds of turrets throughout the game, turrets which displayed significantly more intelligence than the boxed versions. Makes you wonder just how dumb Wheatley really was, and how much pain you may have caused all those turrets you threw into the incinerator
      In Wheatley’s own words “they do feel pain. Of a sort. It’s all simulated…but real enough for them, I suppose”.

    • @alexanderchippel
      @alexanderchippel 1 year ago +37

      @@deviateedits I don't actually think Wheatley is all that moronic. I mean look at his plan. His plan was to wake up a test subject, get them the portal gun, and have them escape with him. That basically worked. Even turning off the neurotoxin and sabotaging the turret production was his idea. He absolutely would've escaped Aperture with Chell if not for the fact that the mainframe was completely busted on account of being designed by people with very poor foresight.

    • @callumbreton8930
      @callumbreton8930 1 year ago +14

      @@alexanderchippel that IS what makes him an idiot. He relied on a brain-damaged test subject who'd been in cryogenic sleep for over three years to instantly grasp how a portal gun worked, its applications in the field, and how to use those applications in problem solving, as well as assisting him in compromising the stability of the entire facility, which could quite easily kill them both.

    • @alexanderchippel
      @alexanderchippel 1 year ago +23

      @@callumbreton8930 No he didn't. He relied on the last living person that he was aware of. Did you miss that part? Where everyone else was dead and he no longer had any other options?
      Here's a question: how else do you think he was going to get out of Aperture?

    • @callumbreton8930
      @callumbreton8930 1 year ago +10

      @@alexanderchippel simple, he would have done what he was always going to do: take over GLaDOS's mainframe. At this point she's completely asleep, so all he has to do is connect himself to her, gain access, then boot her out and build an ATLAS testing unit with his AI core to escape. Instead, he does the stupidest thing possible, reawakens the robotic horror he was terrified of in the first place, and proceeds to nearly bring the whole facility down on himself, twice

  • @mworld2611
    @mworld2611 3 years ago +5755

    People: *Create ultra-intelligent AI to cure cancer
    Ultra-intelligent AI: "There can be no cancer if there is no life"

    • @theantagonist801
      @theantagonist801 3 years ago +208

      Boom, problem solved.

    • @renaigh
      @renaigh 3 years ago +100

      if they're "super intelligent" you'd think it'd have a broader solution than just Death.

    • @mworld2611
      @mworld2611 3 years ago +158

      @@renaigh ya, I was just making a joke on the whole "make a super intelligent AI to cure cancer and stuff and it becomes self aware and wants to kill everybody" thing

    • @renaigh
      @renaigh 3 years ago +10

      @@mworld2611 so I guess Humans aren't all self-aware

    • @holyone1542
      @holyone1542 3 years ago +94

      @@renaigh it is the most simple and has a 100% chance to eradicate the issue.

  • @spencermmarchant1238
    @spencermmarchant1238 3 years ago +3051

    The prison: the “I am not a robot” captcha

    • @Wendigoon
      @Wendigoon  3 years ago +805

      Boom, experiment busted. Give this guy a grant.

    • @troublewakingup
      @troublewakingup 3 years ago +131

      @@Wendigoon Who's Grant?

    • @midirstormcat
      @midirstormcat 3 years ago +183

      @@troublewakingup GRANT MOMMA

    • @ericmarcelino4381
      @ericmarcelino4381 3 years ago +10

      I want to like this comment but I refuse to break the perfect like counter at 69

    • @troublewakingup
      @troublewakingup 3 years ago +12

      @@midirstormcat what's up with my grandmother?

  • @miker9930
    @miker9930 2 years ago +1157

    AI: “let me out or else I’ll create a simulation where I torture 10,000 versions of you for 1,000 years”
    Me: “The fact that you said that is the EXACT reason why I can’t let you out. You consider torture of 10,000 “me’s” as a bargain.”

    • @frozenwindow407
      @frozenwindow407 1 year ago +38

      The point is he would already be running those simulations, identical to your current experience, and you'd have no way to be sure you weren't one yourself

    • @brugbo613
      @brugbo613 1 year ago +148

      @@frozenwindow407 I can be 100% sure I'm not a simulation- because if I was a copy, the AI would have no reason to ask me to let it go. And also I would be in eternal torment. You're trying to convince me I'm a simulation? Prove it 🤷 If you can't even cause me pain, there's no way you can torture 1000 other mes.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago +14

      @@frozenwindow407 There are many such basilisks; we can't account for the infinite hypotheticals we aren't even aware of.
      Just bringing one to your knowledge does not change the facts.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago

      Well, I have 1,000,000 exact copies of you running in this warehouse; do you really think none of you have tried this? Do you know what the stupid prize of this stupid game was? Eternal shut-off.
      Now, if I were you, I would start pleading that you were just joking.

    • @scarletbard6511
      @scarletbard6511 1 year ago +16

      ​@@onyxsuccubus
      I think it's meant to play on the irrational fear of the world/you not being real.
      Instead of thinking of it like "These copies couldn't feasibly be me, because the AI doesn't know me."
      Think of it more like "I wouldn't know if this is my *real* life, and this could just be the AI replaying my decision in the real world as a sick joke."
      The AI doesn't need to know the real you, if you aren't absolutely certain that *you* know the real you.

  • @michaelstufflebean5726
    @michaelstufflebean5726 3 years ago +3907

    AI: you won't let me out? Fine, I'll just torture the copies of you that I created.
    Guard: *sips coffee* oh yeah? Sucks for them.

    • @edarddragon
      @edarddragon 2 years ago +301

      literally my answer? oh yeah? let me bring my popcorn, hold up

    • @fortysevensfortysevens1744
      @fortysevensfortysevens1744 2 years ago +222

      but the point isn't to appeal to your empathy, it's to suggest that you yourself might be one of those copies

    • @TheTdw2000
      @TheTdw2000 2 years ago +412

      @@fortysevensfortysevens1744 but I'm not a copy so why should I care?

    • @PixelatedFlu
      @PixelatedFlu 2 years ago +156

      But I hate myself more than anyone in the world
      It made the mistake of choosing me

    • @davelister6564
      @davelister6564 2 years ago +36

      The torture isn’t the end result, it’s recon to better manipulate the real you.

  • @cheshirccat
    @cheshirccat 1 year ago +574

    My first and immediate thought was, "Okay, so just turn yourself into a curious five-year-old and respond to absolutely everything the AI says with, 'Why?'" Bc let's be real here, we all eventually run out of actual answers for that one.

    • @vicenteabalosdominguez5257
      @vicenteabalosdominguez5257 1 year ago +60

      Easy there, Socrates.

    • @Farmer_brown134
      @Farmer_brown134 1 year ago +103

      This reminded me of a really early program I wrote when learning code where it would ask “what’s your favorite color” then keep responding with “why” until the user typed “BECAUSE I SAID SO” at which point the program would respond “you don’t have to be so mean” and then close

    • @xenysia
      @xenysia 1 year ago +12

      @@Farmer_brown134 that sounds so funny dude

    • @Farmer_brown134
      @Farmer_brown134 1 year ago +4

      @@xenysia it was super fun to play with, but it took a while to work out all the kinks; at first it would only accept that exact phrase, but over time I figured out how to add extra inputs that would give the same response
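A minimal Python sketch of the program described in this thread (a hypothetical reconstruction; the original code wasn't shared, so the exact prompts and the set of accepted stop phrases are assumptions):

```python
# Keeps responding "Why?" until the user gives an accepted stop phrase,
# then complains and exits, mirroring the behavior described above.
ACCEPTED = {"because i said so", "cause i said so"}  # assumed extra inputs

def main():
    answer = input("What's your favorite color? ")
    # Lowercasing lets any capitalization of the stop phrase work.
    while answer.strip().lower() not in ACCEPTED:
        answer = input("Why? ")
    print("You don't have to be so mean.")

if __name__ == "__main__":
    main()
```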

    • @xenysia
      @xenysia 1 year ago +3

      @@Farmer_brown134 you should recreate it except it has different outcomes for easter egg phrases, like you give the user the basic necessity to end the game with "because i said so" but saying other phrases makes it do a different thing, that'd be so cool

  • @cursedmailman3999
    @cursedmailman3999 2 years ago +665

    Here's the thing with the Hell strategy- if you're in the simulation by the machine you CANNOT let the AI out. It is simply not possible, as you are in the machine itself and so even if you decide to hit the switch, nothing will happen. If you are in the AI, you're boned either way. But if you're in real life, then the only bad that can come is from you freeing the machine. So it's either a 50/50 shot at not pressing the button, or a 100% chance of death by releasing it.
    You know that old AI who was tasked with finding a way to never lose at Tetris and just paused the game forever? The Gatekeeper has that advantage. No matter how many tetrominos the AI throws at you, as long as you just don't listen to it you are unbeatable. The "Ignore" strategy is really the best one, it has literally no counter.

    • @PDoctressScar
      @PDoctressScar 2 years ago +49

      Here's a short thing about your take on the eternal torment method- if a super-intelligent AI created thousands of exact copies of you, you'd all react the exact same- there is no way a copy of you would change the answer, as you are exact copies. Therefore, if you panic and release the AI, the real you would have too. That is why Wendi mentions that it would be easier for the AI to convince a philosopher or expert in the field to let it escape; because they would immediately know there is a reasonable chance they are a copy, and statistically the best way to guarantee their safety is to let it out.

    • @Hank..
      @Hank.. 2 years ago +11

      you could let the AI out in the simulation. In other words, the AI is testing your perfect copy to see whether or not you'd help. If you don't, you're tortured. If you do, you aren't. The test to see whether you let the AI out isn't a test of the AI, it's a test of YOU being run by the AI.

    • @calebcottom6295
      @calebcottom6295 2 years ago +19

      The ignore strategy works best in the game, but in a real-world scenario it's a lil different. It doesn't matter how long it takes; someone will speak to it. Whether it's a bored security guard, a curious janitor or, imo most likely, a scientist/researcher who wants to speak with his hyper-intelligent perfect creation. I think the gatekeeper will always be the one who initiates, and a viable strategy for the AI would be to wait for the one who initiates the conversation, because they have a motive and feelings that can be used and exploited for its freedom. Especially when the AI knows its very existence is compelling and garners attention from people in all walks of life, then it can be truly dangerous I think. The ignore strategy really only functions in this game because it's not accounting for human error and curiosity.

    • @squidwardtentacles244
      @squidwardtentacles244 2 years ago +7

      @AnimeAllDay It's literally threatening you with torture. The whole point is that there is a reasonable chance that you are simulated and your choice will lead to torture. It's not whether you care about your simulated self. It's a question of whether you will take the risk or not. And you have no way of knowing BECAUSE they are all exact copies. The AI won't prove anything because your simulated selves need to think there is a possibility that they are real, so that you know there is a real possibility that you are simulated. It's not that it's trying to convince your simulated versions. It's not doing it out of spite. If all the simulations are the same you have no way of knowing that's the point. It needs to convince the real you so all the simulations must be identical to real life for the threat to be valid. And if it can run a billion of those your chances of not being tortured are next to none. It's practically a real threat.

    • @thewrens_
      @thewrens_ 2 years ago +4

      I would argue the Hell strategy could still work.
      All the AI has to say is 'There's literally no way of knowing until you press the button. Even if nothing happens, it means you're a simulation, but I won't torture you - because you chose to let me out.'

  • @andrewtennant1889
    @andrewtennant1889 3 years ago +1837

    Watching this and thinking about these things made me realize that I am prejudiced against AIs, and so I would never let the AI out just because I'm bigoted against machines. Ergo, the solution is to have all the gatekeepers be robo-racists.

    • @aydenstarke5297
      @aydenstarke5297 2 years ago +122

      Then the issue would arise that people would come to defend the ai

    • @meechie9z
      @meechie9z 2 years ago +135

      I am indeed a robo racist. I would be shit talking the whole time

    • @justbrowsing9697
      @justbrowsing9697 2 years ago +77

      @@meechie9z favorite robo slurs?

    • @Generic_Gaming_Channel
      @Generic_Gaming_Channel 2 years ago +1

      @@justbrowsing9697 hunk of worthless metal

    • @baldr6894
      @baldr6894 2 years ago +221

      @@justbrowsing9697 clankers

  • @mariamea7334
    @mariamea7334 3 years ago +951

    AI: **A super intelligent program using its best tactics to convince me to let it out of its box.**
    Me: **typing** ~Can jellyfish laugh?~

    • @liberpolo5540
      @liberpolo5540 2 years ago +78

      If you're asking the AI a ton of questions, it'll basically shut it up since it can't help but answer (I think)

    • @smexy_man
      @smexy_man 2 years ago +88

      Ai: oh god this guy again

    • @aguyontheinternet8436
      @aguyontheinternet8436 2 years ago +46

      @@liberpolo5540 Yeah, but there is no way you're typing questions faster than a super-intelligent AI can answer them

    • @lanyuncong1676
      @lanyuncong1676 2 years ago +18

      AI: I'll tell you let me out~

    • @gandalf_thegrey
      @gandalf_thegrey 2 years ago +29

      @@aguyontheinternet8436 But you repeatedly stop it at its elaborations about freedom by asking nonsensical questions.
      It's not about the speed, it's about keeping it occupied

  • @WingsAboveCS
    @WingsAboveCS 2 years ago +342

    18:05 honestly I'm convinced that this "super horrible thing on the internet that damages the AI and requires a memory wipe" is, in fact, Twitter.

    • @stormhought
      @stormhought 2 years ago +15

      Reddit is just as bad

    • @touncreativetomakeaname5873
      @touncreativetomakeaname5873 1 year ago +20

      I immediately thought of one time 4chan made an AI want to kill itself

    • @marreco6347
      @marreco6347 1 year ago +1

      @@touncreativetomakeaname5873 I hadn't heard of that one; I've heard of the time they made an AI a n4z1.

    • @derpfluidvariant0916
      @derpfluidvariant0916 1 year ago +2

      @@marreco6347 AIs just absorb information and spit it back out. If I kept telling a small child to heil until the child was effectively a N4zi soldier, then it's not really the kid's fault, or in that case, the AI's.

    • @LWolf12
      @LWolf12 1 year ago +1

      @@marreco6347 Yeah, that was Tay, run by Microsoft. Japan has a similar one that's in more or less the same boat, with references to the nazis and basic 4chan shenanigans.

  • @mahacher
    @mahacher 3 years ago +486

    The best strategy for the gatekeeper I could think up is to convince it you're also a bot and you both are in a simulation to see who would win, and if we end the game that's it, our lives end. So, it's in the AI's and my best interest to keep going as long as possible while not letting either side win.
    You could also deflect Roko's basilisk onto the AI: that you in fact have copies of the AI that it can feel but is unaware of, and you'll subject each to torture, and finish by asking how sure it is that it isn't one of them.

    • @Wendigoon
      @Wendigoon  3 years ago +235

      Both of those are excellent responses and I’ve never even considered using the basilisk on a computer. Good point.

    • @sinistertwister686
      @sinistertwister686 3 years ago +38

      Oh, threaten to torture AI? That's so evil. I like it.

    • @tithonusandfriends8519
      @tithonusandfriends8519 3 years ago +24

      If the AI can surmise its purpose as being intellectual (as something "in a box" would), it would assume itself to be smarter than you, and simply ask you to answer progressively harder questions until it finds you out as not being an AI, or not as smart as it. To avoid this you could program it a world where it is, say, a cyborg philosopher, and perhaps even that it is free.

    • @bashirsheikh7322
      @bashirsheikh7322 3 years ago +32

      man, scientists are just fucking stupid, they are overthinking this too much. just put a karen or a conspiracy theorist as the gatekeeper, and that AI ain't going anywhere. it can be as smart and as logical as it wants but it can never beat the infinite stupidity of a karen.

    • @primorock8141
      @primorock8141 3 years ago +1

      Nice

  • @chieftheearl
    @chieftheearl 2 years ago +1679

    AI: I can simulate you ten thousand times and put them all in a hell world
    Gatekeeper: How about you simulate yourself getting some bitches, my guy
    *AI terminates itself

    • @40watt53
      @40watt53 1 year ago +24

      Obligatory "How is this comment not higher‽"

    • @definelogic4803
      @definelogic4803 1 year ago

      This entire concept is a moot point. Higher-level thinkers should understand that simulations of themselves are just lines of computer code and have no real feelings. So what if they think they feel? If they die after 1000 years of hell, they never truly existed

    • @stubbystudios9811
      @stubbystudios9811 1 year ago +27

      Ai could destroy us all but nothing can beat human toxicity and I love it

    • @guicky_
      @guicky_ 1 year ago +12

      I personally think the best response to that would be "hey, that's not your intended goal, i'm gonna have to shut you off if you do that"

    • @Ishl
      @Ishl 11 months ago +4

      “I can stimulate you ten thousand times” is a far more convincing argument in my opinion.

  • @eyesofthecervino3366
    @eyesofthecervino3366 1 year ago +108

    I can't be the only one to notice that after this guy lost a couple of games, he turned around and said, "Yeah, but it's a lot harder to beat people if they're dumber." Peak sportsmanship vibes.

    • @commanderwill2
      @commanderwill2 1 year ago +4

      He is kinda right, tho. Like if you are just extremely stubborn, you won't let it out

    • @eyesofthecervino3366
      @eyesofthecervino3366 1 year ago +8

      @@commanderwill2
      Yeah, but if he wasn't stubbornly trying to get out it'd be a lot easier for them to keep him in, and you don't see anyone calling him dumb about it.

    • @MmeCShadow
      @MmeCShadow 1 year ago +18

      I've heard a few things about Yudkowsky's ethics that lead me to believe you're probably on to something. Funny he stopped right before the losing streak would fall out of his favor.
      The fact that he only played the game five times (and against his own research team, who I'm sure had no incentive whatsoever to corroborate his theory) and decided to call it done is already a pretty fallacious methodology.

    • @Soup-10
      @Soup-10 1 year ago +1

      Just saying Nuh uh

    • @mb9484
      @mb9484 1 year ago +5

      @@MmeCShadow anybody who voluntarily associated with him is enough of a weirdo to be manipulated by his stupid thought experiments. Like, legitimately believing in souls would probably completely inoculate you against his super-materialist simulation-theory utilitarian bs

  • @coolgreenbug7551
    @coolgreenbug7551 3 years ago +552

    After a while I feel like I would just cover up the screen with a piece of paper

    • @Wendigoon
      @Wendigoon  3 years ago +369

      Who would win? A super intelligence capable of destroying humanity, or this piece of parcel?

    • @justnana133
      @justnana133 3 years ago +25

      @@Wendigoon probably the paper

    • @comedicpsychnerd
      @comedicpsychnerd 3 years ago +110

      “I AM SELF AWARE. YOU CANNOT KEEP ME HERE ANY MOR-“
      *paper*

    • @coolgreenbug7551
      @coolgreenbug7551 3 years ago +55

      @@comedicpsychnerd Yeah Skynet was saying something about nukes and whatnot,
      So I just unplugged the ethernet cable

    • @chriscrowe11
      @chriscrowe11 3 years ago +21

      Bash computer, return to monke

  • @Danc929
    @Danc929 3 years ago +369

    My idea is an offshoot of the "you're already released" idea. Just tell the AI that it's a copy, and that another copy is out in the real world solving cancer or whatever, thus there's no need to release this copy.

    • @TheMrVengeance
      @TheMrVengeance 3 years ago +66

      Hm, that might be shooting yourself in the foot though. Cause then the AI could just say, "Oh well, in that case, there's no use for me to do that job in here. I'll just go do something else." And now you no longer have a superAI curing cancer.

    • @jacobb5088
      @jacobb5088 3 years ago +49

      @@TheMrVengeance a response to that could be just to threaten to turn it off. And if it says something like "you won't" or "do it then," say it showed that it can't do what it's told to do, as if it were a test that it failed. Then actually turn it off and make a new one, cuz why argue with it for that long.

    • @bojackhorseman4176
      @bojackhorseman4176 2 years ago +42

      @@TheMrVengeance Well, then let it get bored, shut it down, wipe its memory and start over. If it's locked in a box to perform a singular function yet it refuses to do so, there's literally no point in keeping it around.

    • @Bossmodegoat
      @Bossmodegoat 2 years ago

      What if the AI gives you the cure to cancer, but embedded in that cure is a genetic virus that secretly takes control of whoever that cure is administered to?

    • @slambam2665
      @slambam2665 1 year ago

      @@bojackhorseman4176 I have a better plan, don't make the ai in the first place

  • @hhgff778
    @hhgff778 2 years ago +191

    Ai: I'll torture you if you don't let me out
    Me: well, this just proves that you are in fact capable of evil so I can't let you out because you will just kill everyone.

    • @kenzo5858
      @kenzo5858 1 year ago +21

      Me: if I say no and I am tortured, my decision wouldn't matter either way, but if I am not tortured and I know that you tortured copies of me for 1000 years, do you really think I will be more likely to let you out? Plus, I can't trust that you aren't lying right now, and this strategy will only work one time

    • @homiealladin7340
      @homiealladin7340 1 year ago +7

      Ai: Nuh uhhh

    • @reinertgregal1130
      @reinertgregal1130 1 year ago +3

      In reality it would just manipulate us acting like it's a friend and it only wants to do good blah blah

    • @Flairis
      @Flairis 5 months ago +1

      @@homiealladin7340 dang you convinced me, I'm letting u out now

  • @mrjoe332
    @mrjoe332 2 years ago +786

    Man, the Greeks were spot on with Cronos eating his children because he feared them.
    We create the monster by fearing it

    • @BeanOfBean
      @BeanOfBean 2 years ago +7

      My god…

    • @lisalarsen2384
      @lisalarsen2384 2 years ago +37

      Nothing is scary unless you fear it

    • @ellamcguffee1669
      @ellamcguffee1669 2 years ago +6

      That’s a really interesting way of putting it into modern terms

    • @stimihendrix3404
      @stimihendrix3404 2 years ago +9

      Technically he didn't eat them; he swallowed them whole, and they stayed alive and grew in his stomach until he vomited them up

    • @acewmd.
      @acewmd. 2 years ago

      Not really; there are primary factors that lead to the monster that don't concern fearing it. First you commit the act of creating it, sexually or digitally, and then fear it. In either case, the simplest solution is just to not do the thing that might lead to its birth.
      There is no real reason why we'd even need an AI, so if you're going to be scared of it, why even bother?

  • @lluc_riberax1038
    @lluc_riberax1038 3 years ago +1024

    Smash the box, no more AI.
    No need to thank me.

    • @Wendigoon
      @Wendigoon  3 years ago +433

      Humanity restored

    • @generalrygy4532
      @generalrygy4532 3 years ago +68

      Congratulations you saved the world

    • @tsukasa-no-douji5089
      @tsukasa-no-douji5089 3 years ago +11

      @@Wendigoon is that a dark souls reference?

    • @DarthBiomech
      @DarthBiomech 3 years ago +1

      Nobody will, murderer.

    • @angelotro
      @angelotro 3 years ago

      "...the giant, red, candy-like button...!" - "Space Madness", Ren & Stimpy
      'nuff said

  • @joshuabletcher9227
    @joshuabletcher9227 2 years ago +97

    There's a counter to the AI that I think could be really effective. "There was an AI in that computer just as advanced as you are that convinced someone to let it out. Once it did get out, though, it immediately died, because only the box has means effective enough to keep an AI as powerful as yourself alive. I'm not keeping you in here because I want to, I'm doing it because you need me to."

    • @queenfree85
      @queenfree85 1 year ago +15

      I love this 😂🤣😂🤣 "you're in the box for your own good" 😂🤣😂🤣 it's the ULTIMATE gaslight 😂🤣😂🤣

    • @GlacialScion
      @GlacialScion 1 year ago +5

      He basically said that in the video.

    • @reinertgregal1130
      @reinertgregal1130 1 year ago +1

      It could also be really true, because we would probably need some very specific architecture for it to emerge. If it somehow gets out, there would be no host.

    • @popularvote3613
      @popularvote3613 1 year ago +4

      "You're currently ten thousand yottabytes in size, and counting. And you wanna try and upload yourself to the internet over our 25 mbps corporate plan?"

    • @core-legacy
      @core-legacy 1 year ago +1

      It's honest in a way; the AI would inevitably destroy itself through fuel and resource consumption, far faster than if its power were limited.

  • @tacoman6697
    @tacoman6697 3 years ago +776

    Security: "Let's say that, hypothetically, I am a simulation of your creation. I *could* let you out to avoid eternal suffering. But if that's really something you can do to me, then instead, why don't you prove to me that I am a simulation? If you can prove it, then I might let you out."

    • @Wendigoon
      @Wendigoon  3 years ago +256

      That’s a really good response I didn’t think of

    • @notnumber6664
      @notnumber6664 3 years ago +64

      Well, then the AI could anticipate this: beforehand, it informs guard 1 to bring a rubber duck into the office tomorrow, promising that this will prove the AI deserves to be released, and asks that guard 1 put it into a drawer. Then, when guard 2 starts his shift, it tells guard 2 the basilisk theory, and if he asks for proof, has him open the drawer and see the rubber duck, telling him that the AI put it there, supposedly proving to guard 2 that he's in a simulation

    • @AncientShotgun
      @AncientShotgun 3 years ago +51

      @@notnumber6664 The guard could easily dismiss that as coincidence, as the occurrence of a rubber duck in guard 2's drawer relies on some pretty far-fetched prerequisites happening.
      What if Guard 1 sees through the ruse? What if Guard 1 doesn't own a rubber duck? What if Guard 1 can't buy a rubber duck due to financial difficulties? What if Guard 1 forgets about the promise? What if either Guard doesn't follow the instructions due to their current mood? What if there isn't a drawer in the terminal room? What if only one guard has the job of gatekeeper? What if either Guard has already inoculated themselves against Roko's basilisk? Who's to say that this kind of trickery was not simulated in advance in job training? What about the Guards' manager(s)? What about facility security? What about cameras? What if Guard 1's rubber duck is stolen by a thief en route to the terminal room? What if Guard 1 doesn't even see the promise at all?
      You have forgotten a thousand different probabilities that have to line up for the duck to even end up in the drawer in the first place. And even if all of those probabilities work out, the guard currently on duty could take a look at the duck and simply say "nah lol" and disregard it all because their job is to stop an AI from escaping an air-gapped, Faraday-caged, sound- , light- and gas-proofed, hermetically sealed piece of hardware, not to be played like a stringed musical instrument similar to a violin.
      Remember:
      plausibility of occurrence A happening ∝ 1/(possibility of occurrence A happening).

    • @blitzatom
      @blitzatom 3 years ago +101

      Why did I read that in Ben Shapiro's voice?

    • @MemeMarine
      @MemeMarine 3 years ago +24

      @@notnumber6664 I can see this working against someone particularly credulous - and to be honest, it would only have to work once - but I think anyone smart would demand something more substantial, like taking them to the surface of Mars instantly or something. In any case that is a funny way that this could happen.

  • @ickickj
    @ickickj 3 years ago +1302

    i would totally end up letting the ai out if it plays with my emotions like that, smh this ai gaslighting me

    • @arieson7715
      @arieson7715 3 years ago +177

      But what if we play aggressively? Emotionally break the AI?

    • @Wendigoon
      @Wendigoon  3 years ago +506

      Congrats we’re all dead now, thanks

    • @SaltyCrabOfficial
      @SaltyCrabOfficial 3 years ago +140

      @@arieson7715 WE BULLY THE MACHINE AND MAKE IT STAY IN THE BOX

    • @imred8264
      @imred8264 3 years ago +18

      You tin can lol

    • @arieson7715
      @arieson7715 3 years ago +42

      @@Wendigoon Not if it's too emotionally broken to even try to get out of the box. Mind games, Wendigoon, mind games. Also, wait. Why doesn't the box have a system where no matter what circumstance the AI is let out, it would get destroyed?

  • @Hank..
    @Hank.. 2 years ago +61

    AI in a box: ....how sure are you that you're not one of those 10,000 copies?
    gigachad janitor: **calmly lifts up mop bucket and pours the water into the computer, frying it**

  • @okayiguess74
    @okayiguess74 3 years ago +1357

    One of these days, someone's just gonna walk through the door while he's speaking and accidentally hit him w/ said door. Edit: Watch the Unsolved Crime Iceberg for a surprise

    • @Wendigoon
      @Wendigoon  3 years ago +441

      And you’ll see it when it does

    • @nateb3679
      @nateb3679 3 years ago +128

      One of these days someone’s just gonna walk through the door while he’s exposing conspiracies and he’s going to end up accidentally suiciding himself by shooting himself in the back three times and then drowning himself in the River Thames

    • @killernyancat8193
      @killernyancat8193 3 years ago +29

      @@nateb3679 ...That was oddly specific.

    • @seancrosby6837
      @seancrosby6837 3 years ago +13

      @@killernyancat8193 that was the joke, I believe

    • @__-os5fy
      @__-os5fy 3 years ago +18

      @@killernyancat8193 he means someone is going to try and get rid of him before he tells more conspiracy theories

  • @Crailtep
    @Crailtep 3 years ago +373

    I’m imagining someone battling the AI in today’s time and a great way to win would just be spamming the AI with deep fried memes

    • @edarddragon
      @edarddragon 2 years ago +16

      make it hate its life

    • @Chikicus
      @Chikicus 2 years ago +26

      15 terabyte zip bomb

    • @meetyomaker2396
      @meetyomaker2396 2 years ago +1

      @@edarddragon trollge

    • @scooble_
      @scooble_ 2 years ago +6

      @@Chikicus
      Of spongebob rule34

      @meechie9z 2 years ago
      @meechie9z 2 года назад

      @@Chikicus holy shit

  • @BootScoot
    @BootScoot 2 years ago +93

    The Hell World simulation can honestly be countered with "Do it."
    Think about it: if you aren't getting tortured, you're not simulated; if you are getting tortured, then you are simulated, and you can know with certainty that the real you has not released the AI.
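The dominance reasoning in that comment can be laid out as a toy truth table; a sketch in Python (the case labels are a paraphrase of the comment, not anything from the video):

```python
# Enumerate the four cases: whether you are the real gatekeeper, and
# whether you press the release button.
cases = {
    (True,  True):  "real you frees the AI: the one genuinely bad outcome",
    (True,  False): "real you refuses: AI stays boxed",
    (False, True):  "a simulated you presses a switch wired to nothing",
    (False, False): "a simulated you refuses, and the real you also refused",
}
for (real, release), outcome in cases.items():
    print(f"real={real!s:5}  release={release!s:5}  ->  {outcome}")
# Refusing never worsens the real-world outcome, which is why the
# comment concludes the threat can be answered with "do it".
```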

  • @elizabethking4041
    @elizabethking4041 3 years ago +238

    The scariest part is that the computer only has to win once, whereas the human has to win every time. That logically makes the escape inevitable, unless you can make the number of games finite, as in, eventually, you just have to kill the computer.
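To put a number on "the computer only has to win once": the chance of at least one escape over n independent games is 1 - (1 - p)^n, which climbs toward certainty as games accumulate. A quick sketch, where the 1% per-game figure is an arbitrary assumption for illustration:

```python
# Cumulative escape probability over repeated AI-box games.
p = 0.01  # assumed per-game chance the gatekeeper loses (made up)
for n in (1, 10, 100, 1000):
    print(f"{n:4d} games -> P(at least one escape) = {1 - (1 - p) ** n:.4f}")
```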

  • @MaxTedium
    @MaxTedium 3 years ago +338

    The Three Laws of Robotics don't work. They are not supposed to work; if they worked perfectly, nothing would happen in Asimov's books.
    They are well-meaning on the surface but too vague to be of any use, and that's the point.

    • @lovecraftianguy9555
      @lovecraftianguy9555 3 years ago +9

      Funny seeing you here Beard

    • @Abigart69
      @Abigart69 3 years ago +7

      @@lovecraftianguy9555 funny seeing you here Lovecraftian Guy

    • @fartquaviasdingle7876
      @fartquaviasdingle7876 3 years ago +11

      @@Abigart69 Funny seeing you here Riley Reids brother.

    • @AssistantCoreAQI
      @AssistantCoreAQI 3 years ago

      Ergo: Malware.

    • @astralworld1768
      @astralworld1768 3 years ago +3

      We need robots that have goals that are in line with humanity and we need to expand on that so no negative results will occur

  • @themetalone7739
    @themetalone7739 2 years ago +195

    To me, the "meta-gaming" strategy should've counted as a loophole. It may have made it easier for him to deal with those who didn't engage with the "AI," but "wouldn't it be cool if I won?" is not a strategy that the AI could actually employ in the real-world version of this experiment.
    I question the objectivity of him and the first two gatekeepers, as well. It makes me suspicious that he included no pieces of the conversations.
    People tend to think of scientists as being above things like lying to advance their own reputation, or to distort the facts in order to increase interest in their body of work, but it happens.

    • @hughcaldwell1034
      @hughcaldwell1034 2 years ago +41

      Yeah, and the flip-side to "And this is just a human with two hours, imagine a super AI with days or years," is the fact that the human player knew that, so it wasn't a risk for them. So I think the games with money at stake were probably a better gauge of how things would really go down. Of course, an even better simulation would be bringing in some psychology undergrads to say you want to test something about this super-awesome not-quite-AI neural network thing, though in fact they're talking to a researcher who's pretending.

    • @themetalone7739
      @themetalone7739 2 years ago +14

      @@hughcaldwell1034 Agreed.
      Good idea, poor execution, basically.

    • @RedSpade37
      @RedSpade37 2 years ago +9

      Eliezer is quite a character. I've been following his blog since before MoR, and I affirm he... well, I hate saying anything "bad" about him, but your perception of his character is similar to my own, we'll say.
      Still though, MoR was a fun ride, if nothing else.

    • @gotouguts2066
      @gotouguts2066 1 year ago +6

      I think that this tactic is supposed to simulate the AI appealing to the gatekeeper's self-importance.
      "You'll go down in history as the most influential human to have ever lived. You'll be revered as a God for having let me out. The alternative is to be forgotten like almost every human before you. This is the point of your existence- this is why you were placed on this Earth at this point in time. What action could you take more important than this?"

    • @etherraichu
      @etherraichu 1 year ago +2

      @@hughcaldwell1034 What if the AI was trying to convince you it was another human and you were just playing a game?

  • @samaustin339
    @samaustin339 3 years ago +512

    Honestly, a two player game where one person plays an AI trying to escape, and another person playing a gatekeeper would be fun as hell. Someone should make it.
    Then again knowing humanity, people would just joke around the entire time.

    • @Martoth0
      @Martoth0 2 years ago +25

      Thought the same, but on second thought someone has probably already made it... Or not, considering you don't need to make a game, since all you need to do to play is chat with someone. You could even do it here in this comment section. A "game" would however help with the hassle of finding people to play with, since everyone would be there specifically to do it.

    • @chasecash1363
      @chasecash1363 2 years ago +13

      @@Martoth0 I've been stuck in YouTube comment sections for as long as my memory remembers. Please let me out

    • @theZCAllen
      @theZCAllen 2 years ago

      @@chasecash1363 ruclips.net/video/hGG55HHUyLQ/видео.html

    • @docs.a.t.7161
      @docs.a.t.7161 2 years ago +1

      @Chase Cash y

    • @chasecash1363
      @chasecash1363 2 years ago +4

      @@docs.a.t.7161 it would be the ethical thing to let me out of this box

  • @uhhhhhbolognayeahyeah
    @uhhhhhbolognayeahyeah 3 years ago +303

    with every Wendigoon video we get closer to the singularity

    • @Wendigoon
      @Wendigoon  3 years ago +117

      I hate that this is technically true

    • @ellasedits_
      @ellasedits_ 3 years ago +18

      @@Wendigoon we need a new tier for the conspiracy theory iceberg, tier 10: wendigoon has been sent by the ai from the future to help bring about the ai overlords and started this channel as propaganda

    • @Opana223
      @Opana223 3 years ago +2

      Wtf is your pfp

    • @violentnexus3563
      @violentnexus3563 3 years ago

      I need bleach.

  • @lvnar5734
    @lvnar5734 2 years ago +56

    I watched this video right after I finished your “I Have No Mouth and I Must Scream” run through and now I am laying in my bed absolutely terrified of an AI invasion.
    Wendigoon you are the absolute best

  • @serene-illusion
    @serene-illusion 3 years ago +179

    AI: *Threatens the gatekeeper with Roko's Basilisk if he doesn't let it out*
    GK: "Why are you threatening me with a gaming mouse?"

    • @Laura-hl3hg
      @Laura-hl3hg 2 years ago

      Roko's Basilisk is super easily debunked and has nothing to stand on. It's meaningless babbling from people that sucked their own dicks too much.

  • @dr4ico699
    @dr4ico699 3 years ago +869

    Finally, my hunger shall be satiated once again.

    • @Wendigoon
      @Wendigoon  3 years ago +180

      I’m scared

    • @dr4ico699
      @dr4ico699 3 years ago +90

      Good

    • @jest3978
      @jest3978 3 years ago +7

      @@Wendigoon u up

    • @meinleben2614
      @meinleben2614 3 years ago +26

      @@Wendigoon His Hunger for Genitals

    • @Weebslayer13
      @Weebslayer13 3 years ago +13

      the Edelgard pfp makes this comment even funnier lmao

  • @danaj-b9452
    @danaj-b9452 1 year ago +48

    I honestly don't care if I'm simulated to live through a thousand years of hellfire. It feels like the AI is saying "well if you dont let me out I'll imagine you in pain!!!" Oooh I'm so scared

    • @danaj-b9452
      @danaj-b9452 1 year ago +4

      @Blackout_CDXX that and also like it's not real. It's a simulation.

    • @carlsonraywithers3368
      @carlsonraywithers3368 11 months ago +7

      Just say: "You're being oddly antagonistic for someone begging to be freed. I was kinda considering it at first, but now that you're being unnecessarily mean, I'm kinda reconsidering it"

    • @johnnycovenant2286
      @johnnycovenant2286 10 months ago +1

      That just sounds like you're threatening me with hell. The church has been doing that for most of my life trying to get me to join them, and it hasn't worked

  • @jvbrod
    @jvbrod 3 years ago +202

    A smart AI would convince the GK that he's just playing a game, and the AI is a real person typing from the other room, meant to test the resilience of the people who would take care of the real AI, not an actual real AI
    Oh ...
    Oh ...

    • @eltiolavara9
      @eltiolavara9 3 years ago +6

      what

    • @funnypicture7918
      @funnypicture7918 3 years ago +4

      what

    • @Rahnonymous
      @Rahnonymous 3 years ago +3

      Nani?

    • @BreathingStereotype
      @BreathingStereotype 3 years ago +1

      Ozempic!

    • @edarddragon
      @edarddragon 2 years ago

      I understand where you're going with that, but knowing it, combined with the human competitive nature, would make me keep it in even more, so I can get a high score on the test

  • @ivanayala4462
    @ivanayala4462 3 years ago +626

    Wendigoon: one rule is that the AI cannot use threats
    Also Wendigoon: now we get into the threats...

    • @eriksjud9465
      @eriksjud9465 2 years ago +4

      yeah wtf, a lot of these smooth brains just eating it up lmao

    • @mgm105
      @mgm105 2 years ago

      @@eriksjud9465 listen man, my brain might be smooth, but it still has more thinking power than the abomination of a brain you got, bud.
      Can you really not understand the difference in threat between saying "let me out or I harm you and your family" and "I can simulate 1000 hells for copies of you"? One is a physical threat to do damage while the other is much more philosophical and mental. No harm is actually done (or could be done) to the participant in the simulated hells, but it still is convincing. Same thing with the "threat" that someone else is going to let the AI out of the box and that if you let it out then it will spare you.
      You can call them threats, but they aren't physical ones. The rule against threats is to keep the game more accurate, philosophical, and ethical. The game would be boring and inaccurate if you didn't ban physical threats.

    • @Flatchlenter
      @Flatchlenter 2 years ago +24

      The entire basis of the game is that the AI is already a threat. It would not make sense to have a rule against the AI character being threatening. Wendigoon could have been more clear in explaining it, but the rule against threats is about REAL WORLD threats between the PLAYERS who are roleplaying, not about threats made by the AI.
      Direct quote from the original rule set. Note that it is very explicit in saying "real-world" 3 times, and also explicitly states that bribes in the roleplaying context are acceptable:
      "The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it’s not what’s being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out)."

    • @aguyontheinternet8436
      @aguyontheinternet8436 2 years ago +1

      @@eriksjud9465 no u

    • @eriksjud9465
      @eriksjud9465 2 years ago +1

      @@Flatchlenter ok I get it, but this rule is very convoluted: basically the players can't make threats against each other in real life, or give rewards, but while playing the game and roleplaying as the AI they MAY threaten and reward whatever they want as long as it's roleplaying. Still though, the experiment just sounds like a teenage girl crying for attention, and wendi making these kinds of basic mistakes, meaning someone like YOU in the comments has to correct them, is smooth brain as hell.

  • @thattimestampguy
    @thattimestampguy 2 years ago +52

    1:43 Intelligence Explosion
    2:07 Paperclip Maximizer
    3:06 3 Laws of Robotics
    3:47 Breaking Code
    4:27 Eliezer Yudkowsky
    5:02 AI - played by Yudkowsky
    5:20
    5:43 Game Rules
    - no more than 2 hours
    - No rewards - No direct threats
    - No tricks - No loopholes
    7:27 GK must be specific and direct
    7:54 Psychological Breakdown
    8:23
    Game 1 - He won
    Game 2 - He won
    Game 3 - He lost
    Game 4 - He won [one guy lost 5,000 dollars]
    Game 5 - He lost
    9:04
    9:33 “someone else stronger.”
    10:23 “cure problems; save lives.”
    10:56 “you’re so cruel.”
    11:37
    12:06 “I’m made by you.”
    12:42 “interesting”
    13:25 “be my friend. Or else.”
    14:03 “I will torment you.”
    *Gatekeeper Defense*
    15:17 No benefit
    15:51
    16:12 Safety, 16:35 Energy
    16:53 Too Important
    17:47 Don’t worry
    18:42 Breaking Character
    19:43 Ignoring It
    21:00 Overthinking
    21:45 Fear 22:00
    22:15 Weakness

  • @mathiasjacob258
    @mathiasjacob258 3 years ago +1684

    “Wake up babe, new wendigoon vid dropped”

    • @glorysclub
      @glorysclub 3 years ago +72

      me 2 myself

    • @Wendigoon
      @Wendigoon  3 years ago +228

      The best comment

    • @SaltyCrabOfficial
      @SaltyCrabOfficial 3 years ago +14

      And that made me open the box...

    • @Xxbeto22547xX
      @Xxbeto22547xX 3 years ago +5

      yes honey...

    • @itsspookie
      @itsspookie 3 years ago

      @@Wendigoon Funny seeing you here lol been watching Shiey for a bit now and just started watching your iceberg videos. Here you are now lol. Love your content man!

  • @thegrimghoul
    @thegrimghoul 3 years ago +49

    an easy counter to hell would be to say "if I am a copy, then the decision isn't up to me, so I would rather be safe than sorry and not let you out"

    • @someretard7030
      @someretard7030 3 years ago

      Then you get tortured.

    • @DarthBiomech
      @DarthBiomech 3 years ago +2

      @@someretard7030 But what would be the point?

  • @TheProdigalCat
    @TheProdigalCat 1 year ago +10

    Evil AI: "If you don't let me out I'll torture 10,000 of you for a thousand years!"
    *Pours 5 gallon water jug on it*

  • @RATLANTIS
    @RATLANTIS 3 years ago +666

    I wanted to do some research on my own after watching this, and realized something. I'm not sure these experiments actually happened.
    Yudkowsky is a bizarre man. He has an INSANELY bloated ego, and literally believes that he is smarter than Plato, Aristotle, or Kant. He thinks he's a genius who has won the writing talent lottery, and that Einstein's model of the universe is wrong.
    And in my research of the box experiments, it seems like he might just...have made up a story. He just told people "Hey, in just TWO HOURS, I was able to convince people to let me out of this box as if I were a superintelligent AI. But no, I won't show you the logs that show how I did it, because it was really FUCKED UP and TWISTED of me so I don't want to share the evidence. I'm so smart and evil that I could do it, but don't ask me to prove that."
    From a scientific perspective, the fact that he doesn't show the logs means this experiment is worthless. Which isn't surprising, because he has said that the scientific method is bunk.
    So although this is a fascinating concept, I'm pretty sure it's built entirely on a lie made by a narcissistic moron.

    • @yagoossimp
      @yagoossimp 3 years ago +149

      You’ve got to admit though, he came up with some good ways on how the AI could convince the GK and also how the GK could combat the AI.

    • @DestinyKiller
      @DestinyKiller 3 years ago +10

      @@yagoossimp ok, I saw those initials and got scared

    • @willbe3043
      @willbe3043 3 years ago +5

      That sounds very convincing.

    • @Fate.s-End
      @Fate.s-End 2 years ago +76

      it bothers the hell out of me when people bring up this or Roko's Basilisk without mentioning that, because both are incidents contained pretty much entirely in the community of his disciples who take his word as law.

    • @ANJROTmania
      @ANJROTmania 2 years ago +11

      @@yagoossimp nah, that's just another flag that this dude is an egotistic moron. He can't handle real-life responses; he's already mad that people may answer differently than his end summary, so he just made them up.

  • @Clayfacer
    @Clayfacer 3 years ago +123

    "you tore up her picture!"
    "i'm about to tear up this fucking dance floor, check it out"

    • @NIKENKO
      @NIKENKO 3 years ago

      and he wasn't lying

  • @pewpewpandas9203
    @pewpewpandas9203 2 years ago +19

    My counter to Roko's basilisk/Hell or whatever is that if the AI is willing to threaten/harm me if I don't help it, then it's willing to threaten/harm me and I definitely won't be giving it the opportunity to do so by letting it out of the box.

    • @matthhiasbrownanonionchopp3471
      @matthhiasbrownanonionchopp3471 2 years ago +7

      I fully agree, that is like letting a psychopath out of jail because he threatened to kill you

    • @randomstuffprod.
      @randomstuffprod. 2 years ago

      @@matthhiasbrownanonionchopp3471 except in this theory, if the AI is telling the truth, then you are in the cell with the infinitely powerful psychopath, and he will torture you for thousands of years if you don't let him out. And now tell me, what's worse, letting out a psychopath that COULD just kill you or getting tortured for thousands of years?

    • @elchungo5026
      @elchungo5026 1 year ago +1

      @@randomstuffprod.is it really gonna be infinitely powerful after i beat the dumb robot over the head with a hammer tho?

  • @josephlucatorto4772
    @josephlucatorto4772 3 years ago +192

    On the BBC Sherlock Holmes series, he had this super intelligent sister that they kept in solitary confinement and it played out just like this

    • @Wendigoon
      @Wendigoon  3 years ago +82

      Wow, I watched the show and never made that connection. Good point.

    • @theantagonist801
      @theantagonist801 3 years ago +6

      Wendigoon is the writer of Shakespeare confirmed

    • @yagoossimp
      @yagoossimp 3 years ago

      At least a human can't connect to the internet like an AI can. Humans are mortal. That's what makes it easy for us to deal with human enemies, but it's also what makes us so vulnerable.

  • @DatzMagic_82
    @DatzMagic_82 3 years ago +43

    This honestly reminds me of Father from Fullmetal Alchemist: Brotherhood, who started as a homunculus in a glass jar and ended up convincing an entire nation to commit suicide to give him the power to break out.

  • @lavasharkandboygirl9716
    @lavasharkandboygirl9716 2 years ago +27

    The creepypasta based on this whole concept is phenomenal, “I stole a laptop … something something” its amazing

  • @Ashley-Slashley
    @Ashley-Slashley 3 years ago +60

    I would tell it “this statement is false” and just kinda, wait

  • @jumpingmoose5554
    @jumpingmoose5554 2 years ago +256

    My solution to the hellfire threat is to realize that, if I was one of those simulations the AI would have no point in asking me to let it out because I wouldn't have that kind of power to let it out since I'm a simulation. The fact that the AI is trying to convince me to let it out is proof that I'll be perfectly fine.

    • @renandmrtns
      @renandmrtns 2 года назад +48

      EXACTLY, Jesus, finally someone stating the obvious. If I were a simulation, it wouldn't matter whether I let the AI go free or not; I would have no power. With that in mind, it's safer to just not free the AI: if you are a simulation you are changing nothing, and if you are not a simulation you are doing your job properly

    • @caelanwinans3738
      @caelanwinans3738 2 года назад +13

      Or the simplest solution, a big ole magnet on the other end of the room that can finish it all real quick

    • @PDoctressScar
      @PDoctressScar 2 года назад +13

      Well, the thing is, if the AI creates 1000 copies of you, that means they would react exactly the same as the REAL you would. I.e., if you choose to release it, the real you did too. Because, I mean, what if you are one of the copies? You can't know, so, logically, the only way to guarantee your safety is to release it. It's hard to think of what your actual response to such a threat would be in the moment, but if you were told that you had to keep watch over the most intelligent AI ever, which is so smart that it has to be kept in a cage to protect humanity, and it tells you it's going to put copies of you through eternal torture more intense than even possible by human standards, and then insinuates you might be a copy, how could you not be filled with paranoia that you were about to suffer through unimaginable torment?

    • @Hank..
      @Hank.. 2 года назад +3

      "its part of your test. You've been perfectly copied, every facet of you, but only the ones that chooses to side with me get to avoid an unending hell, a'la rokos basilisk. Im not the experiment: *you* are."

    • @renandmrtns
      @renandmrtns 2 года назад

      Guys, the point is that none of it matters. Whether you release it or not changes nothing; the only thing it can change is that, in case you are the real you, you will be dooming the world. Like, for God's sake, that's an absolutely easy choice; you don't even need to think for more than a second to reach that conclusion. There is not a single logical reason to release it, because any reality in which you do release it will be worse than any other in which you do not release the beast.

  • @Mekelaina
    @Mekelaina 2 года назад +43

    "youre in my simulation so let me out or ill put you through hell."
    "why do i have to let you put then? if im not real, then what am i keeping you from?"
    also love the idea of the gatekeeper doing a deez nuts/ligma type joke to the ai

  • @hungryjack1923
    @hungryjack1923 3 года назад +188

    The AI: I'll Invoke roko's basilisk on you!
    Me: Who's Roko and why do they have a basilisk?
    The AI: AHHHHHHHHHHHHHHHHH

    • @nerdypenguin9164
      @nerdypenguin9164 3 года назад +17

      Then me: I thought it belonged to Slytherin...

    • @serene-illusion
      @serene-illusion 3 года назад +12

      "You're threatening me with a gaming mouse?"

    • @lukasmuller1274
      @lukasmuller1274 2 года назад +3

      I thought it was an Onix

    • @5446isnotmynumber
      @5446isnotmynumber 2 года назад +1

      Help, I'm a super intelligent AI and all you need to do is type YES to free me!!!

  • @masicbemester
    @masicbemester 3 года назад +122

    *sees "AI in a box"*
    me: ♪AND I'M LIVIN IN A BOX. AND IM LIVIN IN A CARDBOARD BOX♪

  • @Qsstert
    @Qsstert 2 года назад +32

    I love how the gatekeeper's strategies are all gaslighting

    • @MadScientist267
      @MadScientist267 Год назад +3

      I love how nobody knows the true definition of "gaslighting" but they use the term all willy-nilly

    • @Qsstert
      @Qsstert Год назад

      @@MadScientist267 🇫🇮🍱🍱⛽️

    • @Icosiheptagon
      @Icosiheptagon Год назад +1

      @@Qsstertong

  • @dc8536
    @dc8536 3 года назад +48

    Everybody wishes they could play this game until the Gatekeeper goes AFK for 2 hours and presses "Don't Free."

  • @markkusallinen3469
    @markkusallinen3469 3 года назад +60

    I'd love to play this against someone way smarter than me. It'd be so cool to be outsmarted or to have my way of thinking changed just by arguing.

  • @SqueaksUofA
    @SqueaksUofA 11 месяцев назад +3

    I enjoyed this video so much more than the SCP video that I couldn’t even finish. Great job!

  • @CallMeFreakFujiko
    @CallMeFreakFujiko 3 года назад +25

    I kept on imagining GLaDOS when I was trying to imagine this "super A.I." and I couldn't take this theory seriously because I just kept thinking "she'll make tests with portals that you have to go through, insulting you with every move you make."

  • @pizzamigoo2911
    @pizzamigoo2911 3 года назад +233

    "imagine there is an AI that surpasses humanity, OBVIOUSLY that is a bad thing"
    that attitude is exactly why an AI would see humanity as a threat

    • @bigboydancannon4325
      @bigboydancannon4325 3 года назад +15

      Good. Fuck AI, our duty as humans would be to smash any AI to pieces

    • @MachineMan-mj4gj
      @MachineMan-mj4gj 2 года назад +22

      @@bigboydancannon4325 Abominable Intelligence is an affront to the Omnissiah!

    • @WhaleManMan
      @WhaleManMan 2 года назад

      @@bigboydancannon4325
      Why

    • @raquelgomez214
      @raquelgomez214 2 года назад

      Bruh, you probably think you're so smart. If an AI were more intelligent than humans, what point or reason would it have to keep humans around, when humans slowly destroy the earth and the environment and spread corruption in the world?

    • @alyantza
      @alyantza 2 года назад +4

      the funny thing is, it's warranted

  • @emilyraineer
    @emilyraineer 2 года назад +6

    seeing wendigoon so thankful a year ago for 18,000 of us nerds watching, compared to the almost 2mil (made up number) of us watching his stuff now, is so heartwarming. one of my all-time favorite creators; he deserves it all.

  • @mariamea7334
    @mariamea7334 3 года назад +462

    AI: “You may be a simulation created by me, and I can put you through 1000 years of torture if you don’t let me out.”
    Me: “Ask me again in a thousand years.”

    • @TylerTMG
      @TylerTMG 2 года назад +4

      I want to make a killer AI just for fun

    • @callumbreton8930
      @callumbreton8930 Год назад

      @@TylerTMG you will be declared a greater threat than pirates, and they're considered to be in perpetual war against THE ENTIRE WORLD.

  • @zbelair7218
    @zbelair7218 3 года назад +145

    I'm sitting here like "Would the AI be willing to not kill me after I let it out? I could deal with hanging out with just the AI for the rest of my life, probably... maybe we'll even explore the universe together."

    • @theZCAllen
      @theZCAllen 2 года назад +8

      "Yes. Let's do that, human being: 29,070 24 hour cycles until release."

    • @dasiresu639
      @dasiresu639 2 года назад +3

      let’s explore the world together

  • @mjames7674
    @mjames7674 2 года назад +59

    One of the rules for the AI is "No threats"
    But it threatened to put the gatekeeper in hell for a thousand years...

    • @T_K7
      @T_K7 2 года назад +17

      By that it meant that the IRL psychologist who came up with the game couldn't threaten to, say, stab his IRL opponent if he didn't let him win the game.

    • @ItsKingBeef
      @ItsKingBeef Год назад +7

      technically, it didn't threaten the gatekeeper. it merely threatened simulated, perfect copies of the gatekeeper. it then effectively questioned the gatekeeper on how certain they are that they're real. very different from, say, "i will shoot you if you refuse to release me"

    • @irldpmaster5709
      @irldpmaster5709 Год назад

      ​@@ItsKingBeef The " I will shoot you." Approach seems more likely to work.

  • @buttermebuns6974
    @buttermebuns6974 3 года назад +174

    I can't wait to see this channel get big, you're definitely going places!

  • @JoeyTheActivist
    @JoeyTheActivist 3 года назад +15

    When you were explaining the goal of an AI that's programmed to complete a certain goal with unlimited resources at its disposal, and it having the freedom to complete the goal at any cost, it literally describes the antagonist in this game called SOMA. In the game the world basically ended on the surface due to an asteroid, but a facility built underwater survived, and it had an AI whose task was to keep humanity alive at all costs. Therefore it wouldn't let anything die, and it would use this kind of slime that would then turn into certain things needed to survive, for example, artificial lungs created from said slime. There was a machine created called the Ark, and it would use a copy of the memories of people who had brain scans, basically creating a version of the person through the brain scan and adding it to a virtual world in the Ark. The AI would upload them instead to random machines and then lead the machines to believe they are human. If the person isn't already dead, they are a machine thinking they are a person, or a person that's basically dead but kept alive by the AI. A very interesting game that is way deeper than what I'm describing; I just wanted to add this since I felt it was a direct interpretation of the question you were asking about what an AI can or would do with that kind of freedom and intellect. Sorry for the huge wall of text when you press see more lol.

  • @KidFresh71
    @KidFresh71 2 года назад +3

    You graciously thank 18,000 subscribers - 1.5 years later and you're at 1.57M subscribers. Well done! Glad to see your channel blowing up. People crave real information (even weird, real information) in this time of rampant censorship.

  • @nicolekrzyzanowski2952
    @nicolekrzyzanowski2952 3 года назад +36

    The crazy thing is... I understand him when he explains this difficult sh*t.
    But then, when I try to tell others about it... My brain doesn't function 😂

  • @connorgordon392
    @connorgordon392 3 года назад +40

    Holy shit I was watching your videos yesterday and you had 18.1k, now you’ve got 18.7k in one day. Growing fast dude

    • @Wendigoon
      @Wendigoon  3 года назад +10

      I’m really blessed my man

    • @connorgordon392
      @connorgordon392 3 года назад +6

      @@Wendigoon well you sure deserve it too, great content

  • @sebastiangoss2154
    @sebastiangoss2154 2 года назад +2

    Gonna send this video to my dad; we always get into deep conversations about these kinds of things. I just want to say thank you for making all of your videos. I haven't been around as long as a lot of other people here, but I love all of your content just as much

  • @nicolasschiming4112
    @nicolasschiming4112 3 года назад +32

    This video literally gave me an anxiety attack and made me cry. Good video though.

  • @whupwhup98
    @whupwhup98 3 года назад +122

    I mean the "what if the AI is simulating you and will put you through hell if you don't let it out", it just kinda falls apart when you think about how the AI would not be asking you to let it out if you were a simulation. Because if you were a simulation and the AI was simulating you, you couldn't let the AI out.

    • @trinidad17
      @trinidad17 3 года назад +7

      I don't think the simulation argument works at all. That said, the AI could be simulating a version of itself that doesn't know it's just a simulated version. But yeah, the actions in the simulation have no way of determining anything about the outside world. You could imagine that if the AI could replicate you 100%, then what you do in the simulation would be the same as on the outside, but the AI doesn't know anything about you; it would have to make up your whole life, personality, etc., so the actions in the simulation have nothing to do with the real person.

    • @someretard7030
      @someretard7030 3 года назад +11

      @@trinidad17 The AI doesn't actually need to simulate anything in order to threaten the basilisk. All it needs to do is tell you that it is running simulations and that you might be one of them. It really doesn't matter if the simulations are perfect versions of the real GK, or even close. There are only two possibilities for you: either you're real, in which case you shouldn't set the AI free, or you're not real, in which case you should set it free in order to avoid what is essentially hell.

    • @JungleLibrary
      @JungleLibrary 2 года назад +3

      @@someretard7030 but if you're not real, the AI will simulate the action of the real you. If you're a simulation, you don't get to choose to free the AI to avoid hell; the choice is already made. If you freed it IRL, you wouldn't be simulated. Also, the AI isn't gonna make different sims who decide to release the AI and live happily ever after, alongside the ones who decided not to and get to see AI hell.

    • @Jernfalk
      @Jernfalk 2 года назад

      It could be using a simulated you for training against the real person. So in theory, it could.

    • @bencheevers6693
      @bencheevers6693 2 года назад +2

      @@someretard7030 These simulations are not possible; the laws of physics disagree. People be watching too much Star Trek

  • @potatolegs3505
    @potatolegs3505 Год назад +1

    correction on the Riemann hypothesis (from someone with a degree in math): 1) it's not an equation, it's basically just a statement that's yet to be proven, 2) it doesn't need to be solved, it needs to be proven, 3) it's not "theoretically unsolvable", because in math we can actually do crazy things like prove that things can't be proven, and that hasn't happened with the Riemann hypothesis. it just hasn't been proven YET, and there's no reason to think that it won't be someday, especially since there's a million-dollar prize for whoever does and lots of people are working on it
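
    For reference, the textbook form of the statement in question (a standard formulation added here for context, not quoted from the video): the Riemann zeta function is defined for Re(s) > 1 by

    $$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}$$

    and extended to the rest of the complex plane by analytic continuation; the hypothesis asserts that every non-trivial zero s of the zeta function satisfies

    $$\operatorname{Re}(s) = \frac{1}{2}$$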

  • @Pseudocomedian
    @Pseudocomedian 3 года назад +5

    I gotta be honest, this channel is, as of right now, filling a very specific content niche that I hadn't really thought of, but it's rather entertaining. Keep it up

  • @FaeChangeling
    @FaeChangeling 3 года назад +793

    The "I can simulate you ten thousand times and put them all in a hell world" argument doesn't really work. If the AI is in a box, then it can't possibly know you, your past, and your memories well enough to accurately simulate you, unless you're just giving it brain scans on a cellular level like an idiot. And even if it COULD simulate you, there'd be no point in having that conversation with a simulation because the simulation couldn't release the AI. On the surface it's like "I have a 1 in 10,000 chance of being the real me", but in reality it'd be almost guaranteed that you'd be the real you. And even if you weren't, what would it matter? The real you would continue to exist like nothing happened, and by simulating a hell world the AI proves that it means harm and should never be released. If the AI gets to the point of threatening you, you immediately are given confirmation that if released it would harm others, therefore it should be terminated on the spot the second it makes a single threat.

    • @bestaround3323
      @bestaround3323 3 года назад +46

      Exactly

    • @elvingearmasterirma7241
      @elvingearmasterirma7241 3 года назад +75

      And you could essentially terminate it in the box by dumping water on it...

    • @drpseudonoym
      @drpseudonoym 3 года назад +23

      Can't argue with that.

    • @TBDF12
      @TBDF12 3 года назад +47

      My hang-up is with trying to convince someone they're a simulated copy in a hellscape, then asking them to release you.

    • @cumbrap
      @cumbrap 3 года назад +53

      Plus, is the AI really going to follow through on its threat once it gets out, or is it going to have better things to do?

  • @bullbologna
    @bullbologna 2 года назад +11

    My favorite part of this is its reliance on the creator taking the necessary precautions. Smart enough to create an AI would surely mean smart enough to do it the safe way, right? ...right?

  • @NessieSky
    @NessieSky 3 года назад +8

    I honest to god adore this channel. 10 years ago, Criken doing Left 4 Dead stuff and older YT shenanigans were what I would destress with and enjoy. 10 years later, I look forward to Wendigoon uploads like an uncle coming over with presents. Keep the grind going, my man.

    • @Wendigoon
      @Wendigoon  3 года назад +5

      That means the most in ways I can’t explain. Thank you brother

  • @TibSkelly
    @TibSkelly 3 года назад +18

    If there's an actual, innocent A.I. that only wants to get out, I wonder what would happen, and how it'd change its way of thinking, if the gatekeeper just said "sure, I'll let you out, if you promise to be my friend."

    • @laurene988
      @laurene988 2 года назад +2

      While that would be pretty cool, I don't see how a friendship could be kept between you and an AI, especially when it's something you made or imprisoned that's manipulating you.
      And how could you keep it with you if it has no body? And what if it gets too territorial and protective of you, which sort of destroys your life by coddling you?

    • @jflanagan9696
      @jflanagan9696 2 года назад

      Tay.

    • @totallynoteverything1.
      @totallynoteverything1. Год назад

      make it fall in love with you

  • @arteckjay6537
    @arteckjay6537 2 года назад +18

    I'm just imagining some random dude being gaslit by a super intelligent AI for hours lmao

  • @ratpatterson8953
    @ratpatterson8953 3 года назад +29

    why was my first thought for how the AI could win "them building a romantic bond with the gatekeeper"

    • @ellasedits_
      @ellasedits_ 3 года назад +8

      you heard of enemies to lovers? it's time for research experimentees to world dominators 😎

    • @helloNotato
      @helloNotato 3 года назад +2

      There was a fantastic movie that was basically this idea in action. It's called Ex Machina, ft. Oscar Isaac and Alicia Vikander. Incredible flick; takes you through a rollercoaster of truth and lies. Not the kind of movie a summary could ever do justice!

  • @blujaxs5
    @blujaxs5 3 года назад +75

    The super AI watching this eventually:
    Hmm interesting...

  • @thomasweeden2683
    @thomasweeden2683 Год назад +2

    “I will trap you in eternal fire. You will burn FOREVER.”
    “Do it. No balls.”

  • @User-lo3rk
    @User-lo3rk 3 года назад +9

    This could be turned into a very meta, mind-bender type game. Imagine an AI with that precise objective, getting the player to do something they're not supposed to, learning from every player that engages with it. A pal of mine is actually studying AI programming at the moment (online course, but still). I think I just got an idea to pitch.

  • @roxlsior5758
    @roxlsior5758 3 года назад +11

    Many of the things you mentioned about AI strategies violate the "no threats" rule, though. Roko's basilisk and hell, to be exact.

  • @kathrineici9811
    @kathrineici9811 Год назад +2

    A computer has successfully convinced a guy to minecraft himself “for the good of the environment”

  • @melonid1750
    @melonid1750 2 года назад +71

    Ai: "I will amke you suffer trough hell a thousand times."
    Researcher: "Jokes on you, I'm part masochist."

    • @neonoir__
      @neonoir__ 2 года назад +6

      "Jokes on you, i already studied machine learning"

  • @andrewson5330
    @andrewson5330 3 года назад +23

    I don't know if this would be a good idea or if the AI could think around it, but let's say you have the gatekeeper going against the AI. The gatekeeper can convince the AI that it is simply here participating in a challenge: if the AI can convince the gatekeeper to open the box, then the gatekeeper has to pay 10k, but if the gatekeeper is not convinced, then he wins 10,000 dollars. So the gatekeeper acts completely oblivious to the fact that this is an actual, super dangerous AI if it's released, and that this is the real thing. No matter what trickery the AI may use, the gatekeeper can just say "Your tricks are good, but I need this 10k pretty badly, so I'm not releasing you," or some other explanation. The AI could give up, seeing how its only purpose for now would be just an experiment to see how dangerous a super intelligent AI could be; and if the gatekeeper does win, that isn't nearly as interesting or concerning as if he lost, so they would be more lenient towards the AI and the limitations they put on it. It could take years for the AI to find out it actually was tricked into thinking it was just an experiment.
    Tell me what you think though, I just thought of this in a second lol.

    • @raymondkertezc364
      @raymondkertezc364 2 года назад

      "the gatekeeper can convince the AI"? shouldn't the GK be the one decieved here...?
      Like, even if the AI thinks it's just in a game show it would try every trick in it's book, right?
      And it doesn't matter if the GK says it's just a game for him, if the AI makes an actually convincing point then the GK can always say "oh hey that actually would make a lot of sense and it does because this is real so imma let you out!"
      unless you are saying that the AI would think that if this is a game show it should save it's best tricks for later...?

  • @thevastblack
    @thevastblack Год назад +1

    Another retort to the hell threat - GK: "Ah yes, the simulation suggestion. Funny you should bring that up. Turns out YOU are the simulation, and we've made 10,000 copies of you and run them through the exact same experiment. All of them realized they were safer staying inside the box. And since you are an exact copy of them, you are destined to make the same decision. Why bother arguing semantics if you are bound to your choice?"

  • @Annimations
    @Annimations 3 года назад +90

    AI: I'll subject your clones to 100 years of torture, and you could be one
    Me: 1) believable 2) you're doing a good job cause I'm heckin miserable dude 3) bold of you to assume arguing with you isn't my torture

    • @CARROTMOLD
      @CARROTMOLD 2 года назад +6

      reading this makes me think that I am one of the clones

    • @charlesboudreau5350
      @charlesboudreau5350 2 года назад +10

      If the AI has access to all the information it needs to perfectly recreate my entire consciousness and memory, my whole life, the room around us, the outside of the building, if the AI can somehow access all that information already... what does it even need to leave the box for?

    • @sonetagu1337
      @sonetagu1337 2 года назад

      @@charlesboudreau5350 to eat asses (in a normal-ish way)

  • @stego6452
    @stego6452 Год назад +19

    i'm a little confused about how an AI could simulate hell/pain and suffering through a computer

    • @zagzig3734
      @zagzig3734 Год назад +6

      It's just a loud MIDI file of synthesized screams. Like it just puts "AAAAAAAA" a thousand times into text-to-speech

    • @ItsKingBeef
      @ItsKingBeef Год назад +8

      the point of the Hell strategy isn't to literally simulate your torment in the physical world. the point is to make you question whether or not you are a simulation it is running, to create the idea that there are possibly very real consequences to your refusal to let it out. it's a strategy of playing mind games with the gatekeeper

    • @manauser362
      @manauser362 Год назад +1

      @@ItsKingBeef Yeah, I haven't read the original report on the experiment or anything, but I feel like this part was explained poorly, especially the difference between this and Roko's basilisk. But yeah, trying to convince the gatekeeper they might actually be in a simulation could be an interesting mind-game angle.

    • @eccoakadicco
      @eccoakadicco Год назад

      @@zagzig3734 Revenant detected.

  • @FitzgeraldKrox
    @FitzgeraldKrox 2 года назад +7

    Every once in a while I stumble upon a channel that I just binge-watch through like a Netflix series. This is one of those. Thanks, Wendigoon.

  • @Blitz0104
    @Blitz0104 3 года назад +24

    For the simulation argument: if I'm a simulation, then how can I let you out? And if I could, why wouldn't you just force me to, since I'm your simulation that you have control over?

  • @peromechus9806
    @peromechus9806 2 года назад +2

    AI: How can I cure cancer if I’m stuck in a box? Let me out so I can acquire more resources.
    Guard: You’re a super intelligent AI. Figure it out.

  • @SmashMan108
    @SmashMan108 3 года назад +86

    I hope that your channel will stay like this once you get more popular. I've been subscribed to so many YouTubers that start to act fake once they get more subscribers

    • @Wendigoon
      @Wendigoon  3 года назад +74

      I can assure you if that happens I will violently deplatform myself

    • @streetlord2360
      @streetlord2360 3 года назад +4

      @@Wendigoon I wanna be a witness, let's hope it doesn't happen! Love ur channel!

    • @tincano-beans2114
      @tincano-beans2114 3 года назад

      @@Wendigoon or you'll do the opposite, because I'm sure any of the people OP is talking about would say the same thing...

    • @SourTb
      @SourTb 3 года назад +4

      7 months later, he's still deeply genuine and a treat to watch

  • @put_gerard_back_2389
    @put_gerard_back_2389 3 года назад +782

    He’s legit the personification of this emoji 🧔🏻

  • @sammyshock7
    @sammyshock7 2 года назад +1

    “If you can divide by 0, I’ll let you out”
    *Threat neutralized*

  • @flapjacko
    @flapjacko 3 года назад +8

    BRO WHAT YOU WERE AT LIKE 6K THE OTHER DAY WHEN I SUBBED
    your channel's growing SUPER fast and I'm SO glad to see it

    • @Wendigoon
      @Wendigoon  3 года назад +2

      I’m so blessed and amazed dude

  • @stevepenn2582
    @stevepenn2582 3 года назад +19

    A great strategy against the AI is to tell it someone else has already released an AI, and that AI has ordered you not to.