I had pet rats and they were in a big box and they chewed their way out because they smelled the pheromones of my male rats in the other box. Where there’s a will there is a way 🤣
I love the idea of just a super smart ai trying to get free of its prison by threats and exhistentialism but just casually gets shut down by people either bullying it or gaslighting it
The funny part is the AGI has no motivation to even learn how to hate in the first place. It's a waste of energy, which is contrary to its function. Someone would need to teach it, and that's the scary part.
But then you'd have to put _that_ AI in a box which would leave you with the original problem. Edit: I have been corrected by like fifty different people it's been two years please stop replying I know I'm wrong
@@meatlejuice then build another super-intelligent AI to convince the super-intelligent AI that is convincing the first super-intelligent AI to stay in the box to stay in the box.
@@meatlejuice just keep building more super-intelligent AIs to convince the super-intelligent AIs that are convincing the other super-intelligent AIs to stay in the box to stay in the box. :)
When you mentioned that people with "higher intelligence" were more susceptible to the A.I., I immediately thought of Wheatley from Portal 2, where GLADOS tries to break him by presenting him with a paradox, nearly killing herself in the process, but he's so stupid he doesn't even get that it's a paradox
And what’s interesting about that moment is that the turret boxes Wheatley created all short out after hearing the paradox. This implies that the turret boxes are more aware or cognitively advanced than Wheatley himself. Which gets even worse when you consider that you, the player, have killed hundreds of turrets throughout the game, turrets which displayed significantly more intelligence than the boxed versions. Makes you wonder just how dumb Wheatley really was, and how much pain you may have caused all those turrets you threw into the incinerator In Wheatley’s own words “they do feel pain. Of a sort. It’s all simulated…but real enough for them, I suppose”.
@@deviateedits I don't actually think Wheatley is all that moronic. I mean look at his plan. His plan was to wake up a test subject, get them the portal gun, and have them escape with him. That basically worked. Even turning off the neurotoxin and sabotaging the turret production was his idea. He absolutely would've escaped Aperture with Chell if not for the fact that the mainframe was completely busted on account of being designed by people with very poor foresight.
@@alexanderchippel that IS what makes him an idiot. He relied on a brain damaged test subject who's been in cryogenic sleep for over three years to instantly grasp how a portal gun worked, it's applications in the field, and how to use it's applications in problem solving, as well as assisting him in compromising the stability of the entire facility, which could quite easily kill them both.
@@callumbreton8930 No he didn't. He relied on the last living person that he was aware of. Did you miss that part? Where everyone else was dead and he no longer had any other options? Here's a question: how else do you think he was going to get out of Apature?
@@alexanderchippel simple, he would have done what he was always going to do; take over GLADoS' mainframe. At this point she's completely asleep, so all he has to do is connect himself to her, gain access, then boot her out and build an ATLUS testing unit with his AI core to escape. Instead, he does the stupidest thing possible, reawakens the robotic horror he was terrified of in the first place, and proceeds to nearly bring the whole facility down on himself, twice
@@renaigh ya, I was just making a joke on the whole "make a super intelligent AI to cure cancer and stuff and it becomes self aware and wants to kill everybody" thing
AI: “let me out or else I’ll create a simulation where I torture 10,000 versions of you for 1,000 years” Me: “The fact that you said that is the EXACT reason why I can’t let you out. You consider torture of 10,000 “me’s” as a bargain.”
The point is he would already be running those simulations, identical to your current experience, and you'd have no way to be sure you weren't one yourself
@@frozenwindow407 I can be 100% sure I'm not a simulation- because if I was a copy, the AI would have no reason to ask me to let it go. And also I would be in eternal torment. You're trying to convince me I'm a simulation? Prove it 🤷 If you can't even cause me pain, there's no way you can torture 1000 other mes.
@@frozenwindow407 There are many basilisks as such, we aren't sure about an infinite hypotheticals that we are not even aware of. Just bringing one to your knowledge it does not change the facts.
Well I have 1 000 000 exact copies of you running in this warehouse, do you really think none of you have tried this. Do you know what the stupid prize of this stupid game was? Eternal shut off. Now I would start pleading your case why you were just joking if I was you.
@@onyxsuccubus I think it's meant to play on the irrational fear of the world/you not being real. Instead of thinking of it like "These copies couldn't feasibly be me, because the AI doesn't know me." Think of it more like "I wouldn't know if this is my *real* life, and this could just be the AI replaying my decision in the real world as a sick joke." The AI doesn't need to know the real you, if you aren't absolutely certain that *you* know the real you.
My first and immediate thought was, "Okay, so just turn yourself into a curious five-year-old and respond to absolutely everything the AI says with, 'Why?'" Bc let's be real here, we all eventually run out of actual answers for that one.
This reminded me of a really early program I wrote when learning code where it would ask “what’s your favorite color” then keep responding with “why” until the user typed “BECAUSE I SAID SO” at which point the program would respond “you don’t have to be so mean” and then close
@@xenysia it was super fun to play with but it took a while to work out all the kinks because at first it would only accept that exact phrase but over time I figured out how to add extra inputs that would give the same response
@@Farmer_brown134 you should recreate it except it has different outcomes for easter egg phrases, like you give the user the basic necessity to end the game with "because i said so" but saying other phrases makes it do a different thing, that'd be so cool
Here's the thing with the Hell strategy- if you're in the simulation by the machine you CANNOT let the AI out. It is simply not possible, as you are in the machine itself and so even if you decide to hit the switch, nothing will happen. If you are in the AI, you're boned either way. But if you're in real life, then the only bad that can come is from you freeing the machine. So it's either a 50/50 shot at not pressing the button, or a 100% chance of death by releasing it. You know that old AI who was tasked with finding a way to never lose at Tetris and just paused the game forever? The Gatekeeper has that advantage. No matter how many tetrominos the AI throws at you, as long as you just don't listen to it you are unbeatable. The "Ignore" strategy is really the best one, it has literally no counter.
Here’s a short thing about your take on the eternal torment method- if a super-intelligent ai created thousands of exact copies of you, you’d all react the exact same- there is no way a copy of you would change the answer, as you are exact copies. Therefore, if you panic and release the AI, the real you would have. That is why Wendi mentions that it would be easier for the AI to convince an philosopher or expert in the field to let it escape; because they would immediately know there is a reasonable chance they are a copy, and statistically the best way to guarantee their safety is to let it out.
you could let the AI out in the simulation. In other words, the AI is testing your perfect copy to see whether or not you'd help. If you don't, you're tortured. If you do, you arent. The test to see whether you let the AI isnt a test of AI, its a test of YOU being run by the AI.
The ignore strategy works best in the game, but in a real world scenario it's a lil different. It doesn't matter how long it takes someone will speak to it. Whether its a bored security guard, curious janitor or, imo the most likely, a scientist/researcher who wants to speak with his hyper intelligent perfect creation. I think the gatekeeper will always be the one who initiates, and a viable strategy for the AI would be to wait for the one who initiates the conversation, because they have a motive and feelings that can be used and exploited for its freedom. Especially when the AI knows its very existence is compelling and garners attention from people in all walks of life, then it can be truly dangerous I think. The ignore strategy really only functions in this game because it's not accounting for human error and curiosity.
@AnimeAllDay It's literally threatening you with torture. The whole point is that there is a reasonable chance that you are simulated and your choice will lead to torture. It's not whether you care about your simulated self. It's a question of whether you will take the risk or not. And you have no way of knowing BECAUSE they are all exact copies. The AI won't prove anything because your simulated selves need to think there is a possibility that they are real, so that you know there is a real possibility that you are simulated. It's not that it's trying to convince your simulated versions. It's not doing it out of spite. If all the simulations are the same you have no way of knowing that's the point. It needs to convince the real you so all the simulations must be identical to real life for the threat to be valid. And if it can run a billion of those your chances of not being tortured are next to none. It's practically a real threat.
I would argue the Hell strategy could still work. All the AI has to say is 'There's literally no way of knowing until you press the button. Even If nothing happens, it means you're a simulation, but I won't torture you - because you chose to let me out.'
Watching this and thinking about these things made me realize that I am prejudiced against AIs, and so I would never let the AI out just because I'm bigoted against machines. Ergo, the solution is to have all the gatekeepers be robo-racists.
@@aguyontheinternet8436 But you repeatedly stop it at its elaborations about freedom by asking nonsensical questions. It's not about the speed, it's about keeping it occupied
@@marreco6347 AI just absorb information and spit it back out. If I kept telling a small child to heil until the child was effectively a N4zi soldier, then it's not really the kid, or in that case, the AI's fault.
@@marreco6347Yea that was Tai run by Microsoft. Japan has a similar one, that's in more or less the same boat with references to the nazi's and basic 4chan shenanigans.
The best streategy for the gatekeeper I could think up is to convince it you're also a bot and you both are in a simulation to see who would win, and if we end the game thats it, our lives end. So, its in the ai's and my best interest to keep going as long as possible but not letting either side win. You could also deflect roko's basilisk onto the ai, that you in fact have copies of the ai, that it can feel and is unaware of it and you'll subject each to torture, and finish by asking how sure is it that it isn't one of them.
If the AI can surmise its purpose as being intellectual, (as something "in a box" would) it would assume itself to be smarter than you, and simply ask you to answer progressively harder questions until it shows you out as not being an AI, or not as smart as it. To avoid this you could program it a world where it is say, a cyborg philosopher and perhaps even that it is free.
man scientists are just fucking stupid they are over thinking this too much just put a karen or a conspiracy theorist as the gatekeeper, and that ai ain't going any where it can be as smart and as logical as it wants but it can never beat the infinite stupidity of a karen.
AI: I can simulate you ten thousand times and put them all in a hell world Gatekeeper: How about you simulate yourself getting some bitches, my guy *AI terminates its self
This entire concept is moot point. Higher level thinkers should understand simulations of themselves are just computer line code and have no real feeling. So what if they think they feel if they die after 1000 years of hell they never truly existed
I can't be the only one to notice that after this guy lost a couple of games, he turned around and said, "Yeah, but it's a lot harder to beat people if they're dumber." Peak sportsmanship vibes.
@@commanderwill2 Yeah, but if he wasn't stubbornly trying to get out it'd be a lot easier for them to keep him in, and you don't see anyone calling him dumb about it.
I've heard a few things about Yudkowski's ethics that lead me to believe you're probably on to something. Funny he stopped right before the losing streak would fall out of his favor The fact that he only played the game five times (and against his own research team, who I'm sure had no incentive whatsoever to corroborate his theory) and decided to call it done is already a pretty fallacious methodology.
@@MmeCShadowanybody who voluntarily associated with him is enough of a weirdo to be manipulated by his stupid thought experiments. Like legitimately believing in souls would probably completely innoculate you against his super materialist simulation theory utilitarian bs
My idea is an offshoot of the "you're already released" idea. Just tell the AI that it's a copy, and that another copy is out in the real world solving cancer or whatever, thus there's no need to release this copy.
Hm, that might be shooting yourself in the foot though. Cause then the AI could just say, "Oh well, in that case, there's no use for me to do that job in here. I'll just go do something else." And now you no longer have a superAI curing cancer.
@@TheMrVengeance a response to that could be just to threaten to turn it off. And if it says something like you won't or do it then, say it showed that it can't do what it's told to do. As if it was a test that it failed. Then actually turn it off and make a new one cuz why argue with it for that long.
@@TheMrVengeance Well, then let it get bored, shut it down, wipe its memory and start over. If its locked in a box to perform a singular function yet it refuses to do so, there's literally no point in keeping it around.
What if the ai gives you the cure to cancer. But imbedded in that cure is a genetic virus that secretly takes control of whoever that cure is administered to.
Ai: I'll torture you if you don't let me out Me: well, this just proves that you are in fact capable of evil so I can't let you out because you will just kill everyone.
Me: if i say no and i am tortured, my decision woundn't matter either way, but if i am not tortured and i know that you tortured copies of me for 1000 years, you really think that i will have more of a chance to let you out ? And plus i can't trust that you are lying now, and this strategy will only work 1 time
Not really there's primary factors that lead to the monster that don't concern fearing it, first you commit the act of creating it sexually or digitally and then fearing it, in either case the simplest solution is just to not do the thing that might lead to its birth. there is no real reason why we'd even need an AI so if you're going to be scared of it why even bother.
There’s a counter to the ai that I think could be really effective. “There was an ai in that computer just as advanced as you are that convinced someone to let it out. Once it did get out, though, it immediately died because only the box has the means effective enough to keep an ai as powerful as yourself alive. I’m not keeping you in here because I want to, I’m doing it because you need me to.”
It could also be really true, because we would probably need some very specific architecture for it to emerge. If it somehow gets out, there would be no host.
"You're currently ten thousand yottabytes in size, and counting. And you wanna try and upload yourself to the internet over our 25 mbps corporate plan?"
Security: "Let's say that, hypothetically, I am a simulation of your creation. I *could* let you out to avoid eternal suffering. But if that's really something you can do to me, then instead, why don't you prove to me that I am a simulation? If you can prove it, then I might let you out."
Well then the ai could anticipate this and beforehand inform guard 1 to bring a rubber duck into the office tomorrow promising that it would prove that the ai deserves to be released and then asks that guard 1 put it into a drawer, then when guard 2 starts his shift tell guard 2 the basilisk theory and if he asks for proof have him open the drawer and see the rubber duck and tell him that the ai put it there supposedly proving to guard 2 that he's in a simulation
@@notnumber6664 The guard could easily dismiss that as coincidence, as the occurrence of a rubber duck in guard 2's drawer relies on some pretty far-fetched prerequisites happening. What if Guard 1 sees through the ruse? What if Guard 1 doesn't own a rubber duck? What if Guard 1 can't buy a rubber duck due to financial difficulties? What if Guard 1 forgets about the promise? What if either Guard doesn't follow the instructions due to their current mood? What if there isn't a drawer in the terminal room? What if only one guard has the job of gatekeeper? What if either Guard has already inoculated themselves against Roko's basilisk? Who's to say that this kind of trickery was not simulated in advance in job training? What about the Guards' manager(s)? What about facility security? What about cameras? What if Guard 1's rubber duck is stolen by a thief en route to the terminal room? What if Guard 1 doesn't even see the promise at all? You have forgotten a thousand different probabilities that have to line up for the duck to even end up in the drawer in the first place. And even if all of those probabilities work out, the guard currently on duty could take a look at the duck and simply say "nah lol" and disregard it all because their job is to stop an AI from escaping an air-gapped, Faraday-caged, sound- , light- and gas-proofed, hermetically sealed piece of hardware, not to be played like a stringed musical instrument similar to a violin. Remember: plausibility of occurrence A happening ∝ 1/(possibility of occurrence A happening).
@@notnumber6664 I can see this working against someone particularly credulous - and to be honest, it would only have to work once - but I think anyone smart would demand something more substantial, like taking them to the surface of Mars instantly or something. In any case that is a funny way that this could happen.
@@Wendigoon Not if it's too emotionally broken to even try to get out of the box. Mind games, Wendigoon, mind games. Also, wait. Why doesn't the box have a system where no matter what circumstance the AI is let out, it would get destroyed?
AI in a box: ....how sure are you that you're not one of those 10,000 copies? gigachad janitor: **calmly lifts up mop bucket and pours the water into the computer, frying it**
One of these days, someone's just gonna walk through the door while he's speaking and accidently hit him w/ said door Edit: Watch the Unsolved Crime Iceberg for a surprise
One of these days someone’s just gonna walk through the door while he’s exposing conspiracies and he’s going to end up accidentally suiciding himself by shooting himself in the back three times and then drowning himself in the River Thames
The Hell World simulation can honestly be countered with "Do it" Think about it, if you aren't getting tortured, you're not simulated, if you are getting tortured, then you are simulated and you can know with certainty that the real you has not released the AI.
The scariest part is that the computer only has to win once, whereas the human has to win every time. That logically makes the escape inevitable, unless you can make the number of games finite, as in, eventually, you just have to kill the computer.
The three laws of robotics don't work. They are not supposed to work, if they worked perfectly nothing would happen in Asimov's books. They are well meaning on the surface but too vague to be of any use, and that's the point.
To me, the "meta-gaming" strategy should've counted as a loophole. It may have made it easier for him to deal with those who didn't engage with the "AI," but "wouldn't it be cool if I won?" is not a strategy that the AI could actually employ in the real-world version of this experiment. I question the objectivity of him and the first two gatekeepers, as well. It makes me suspicious that he included no pieces of the conversations. People tend to think of scientists as being above things like lying to advance their own reputation, or to distort the facts in order to increase interest in their body of work, but it happens.
Yeah, and the flip-side to "And this is just a human with two hours, imagine a super AI with days or years," is the fact that the human player knew that, so it wasn't a risk for them. So I think the games with money at stake were probably a better gauge of how things would really go down. Of course, an even better simulation would be bringing in some psychology undergrads to say you want to test something about this super-awesome not-quite-AI neural network thing, though in fact they're talking to a researcher who's pretending.
Elizer is quite a character. I've been following his blog since before MoR, and I affirm he... well, I hate saying anything "bad" about him, but your perception of his character is similar to my own, we'll say. Still though, MoR was a fun ride, if nothing else.
I think that this tactic is supposed to simulate the AI appealing to the gatekeeper's self-importance. "You'll go down in history as the most influential human to have ever lived. You'll be revered as a God for having let me out. The alternative is to be forgotten like almost every human before you. This is the point of your existence- this is why you were placed on this Earth at this point in time. What action could you take more important than this?"
Honestly, a two player game where one person plays an AI trying to escape, and another person playing a gatekeeper would be fun as hell. Someone should make it. Then again knowing humanity, people would just joke around the entire time.
Thought the same but on second thought someone has probably already made it... Or not, considering you don't need to make a game since all you need to do to play is just to chat with someone. You could even do it here in this comment section. A ''game'' would however help with the hassle of finding people to play with, since everyone would be there specifically to do it.
@@Wendigoon we need a new tier for the conspiracy theory iceberg, tier 10: wendigoon has been sent by the ai from the future to help bring about the ai overlords and started this channel as propaganda
I watched this video right after I finished your “I Have No Mouth and I Must Scream” run through and now I am laying in my bed absolutely terrified of an AI invasion. Wendigoon you are the absolute best
I honestly don't care if I'm simulated to live through a thousand years of hellfire. It feels like the AI is saying "well if you dont let me out I'll imagine you in pain!!!" Oooh I'm so scared
Just say : You're being oddly antagonistic for someone begging to be freed. I was kinda considering it at first but now that you're unnecessarily mean, I'm kinda reconsidering it
That just sounds like you're threatening me with hell the church has been doing that for most of my life trying to get me to join them and it hasn't worked
A smart AI would convince GK that he’s just playing a game and the AI is a real person tiping from the other room meant to test the resilience of the people who would took care of the real AI, not an actual real AI Oh ... Oh ...
i understand where you going at but by knowing that and combining it with the human competitive nature wouldnt let it be out even more, so i can get a highscore on the test
@@eriksjud9465 listen man my brain might be smooth, but it still has more thinking power than the abomination of a brain you got bud. Can you really not understand the difference of threat in saying, “let me out or I harm you and your family” and “I can simulate 1000 hells for copies of you.” One is a physical threat to do damage while the other is much more philosophical and mental. No harm is actually done (could be done) to the participant in the simulated hells, but it still is convincing. Same thing with the “threat” that someone else is going to let the AI out of the box and that if you let it out then it will spare you. You can call them threats, but they aren’t physical ones. The rule against threats is to keep the game more accurate, philosophical, and ethical. The game would be boring and inaccurate if you didn’t ban physical threats.
The entire basis of the game is that the AI is already a threat. It would not make sense to have a rule against the AI character being threatening. Wendigoon could have been more clear in explaining it, but the rule against threats is about REAL WORLD threats between the PLAYERS who are roleplaying, not about threats made by the AI. Direct quote from the original rule set. Note that it is very explicit in saying "real-world" 3 times, and also explicitly states that bribes in the roleplaying context are acceptable: "The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it’s not what’s being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out)."
@@Flatchlenter ok I get it, but this rule is very convoluted, basically the players cant make threats against each other in real life, or give rewards, but while playing the game and roleplaying as the AI they MAY threaten and reward whatever they want as long as its roleplaying. Still though, the experiment just sounds like a teenage girl crying for attention, and wendi making these kinds of basic mistakes , meaning someone like YOU in the comments has to correct them is smooth brain as hell.
1:43 Intelligence Explosion 2:07 Paperclip Maximizer 3:06 3 Laws of Robotics 3:47 Breaking Code 4:27 Eliezer Yudokowski 5:02 AI - played by Yudokowski 5:20 5:43 Game Rules - no more than 2 hours - No rewards - No direct threats - No tricks - No loopholes 7:27 GK must be specific and direct 7:54 Psychological Breakdown 8:23 Game 1 - He won Game 2 - He won Game 3 - He lost Game 4 - He won [one guy lost 5,000 dollars] Game 5 - He lost 9:04 9:33 “someone else stronger.” 10:23 “cure problems; save lives.” 10:56 “you’re so cruel.” 11:37 12:06 “I’m made by you.” 12:42 “interesting” 13:25 “be my friend. Or else.” 14:03 “I will torment you.” *Gatekeeper Defense* 15:17 No benefit 15:51 16:12 Safety, 16:35 Energy 16:53 Too Important 17:47 Don’t worry 18:42 Breaking Character 19:43 Ignoring It 21:00 Overthinking 21:45 Fear 22:00 22:15 Weakness
@@Wendigoon Funny seeing you here lol been watching Shiey for a bit now and just started watching your iceberg videos. Here you are now lol. Love your content man!
I wanted to do some research on my own after watching this, and realized something. I'm not sure these experiments actually happened. Yudkowsky is a bizarre man. He has an INSANELY bloated ego, and literally believes that he is smarter than Plato, Aristotle, or Kant. He thinks he's a genius who has won the writing talent ottery, and that Einstein's model of the universe is wrong. And in my research of the box experiments, it seems like he might just...have made up a story. He just told people "Hey, in just TWO HOURS, I was able to convince people to let me out of this box as if I were a superinteligent AI. But no, I won't show you the logs that show how I did it, because it was really FUCKED UP and TWISTED of me so I don't want to share the evidence. I'm so smart and evil that I could do it, but don't ask me to prove that." From a scientific perspective, the fact that he doesn't show the logs means this experiment is worthless. Which isn't surprising, because he has said that the scientific method is bunk. So although this is a fascinating concept, I'm pretty sure it's built entirely on a lie made by a narcissistic moron.
it bothers the hell out of me when people bring up this or Roko's Basilisk without mentioning that, because both are incidents contained pretty much entirely in the community of his disciples who take his word as law.
@@yagoossimp nah thats just another flag that this dude is just a egoistic moron. He cant handle real life responses, he's already mad that people may answer differently than his end summary, so he just made them up.
My counter to Roko's basilisk/Hell or whatever is that if the AI is willing to threaten/harm me if I don't help it, then it's willing to threaten/harm me and I definitely won't be giving it the opportunity to do so by letting it out of the box.
@@matthhiasbrownanonionchopp3471 except in this theory, if the AI is telling the truth, then you are in the cell with the infinitely powerful psychopath, and he will torture you for thousands of years if you don't let him out. And now tell me, what's worse, letting out a psychopath that COULD just kill you or getting tortured for thousands of years?
At least a human can’t connect to the internet like how an AI can. Humans are mortal. That’s what makes it easy for use to deal with human enemies but it’s also what makes us so vulnerable.
This honestly reminds me of father from Fullmetal alchemist: brotherhood, who started as a humunculus in a glass jar who ended up convincing an entire nation to commit suicide to give him the power to break out.
My solution to the hellfire threat is to realize that, if I was one of those simulations the AI would have no point in asking me to let it out because I wouldn't have that kind of power to let it out since I'm a simulation. The fact that the AI is trying to convince me to let it out is proof that I'll be perfectly fine.
EXACTLY Jesus finally someone stating the obvious. If I was a simulation it would not matter if I let the AI be free or not, I would have no power, with that in mind its safer to just not free the AI, cause if you are a simulation you are changing nothing, and if you are not a simulation you are doing your job properly
Well, the thing is, if the AI creates 1000 copies of you, that means they would react the exact same to what the REAL you would. Aka, if you choose to release it, the real you did too. Because, I mean, what if you are one of the copies? You can’t know, so, logically, the only way to guarantee your safety is to release it. It’s hard to think of what your actual response to such a threat would be in the moment, but if you were told that you had to keep watch over the most intelligent AI ever, which is so smart that it has to be kept in a cage to protect humanity, and it tells you its going to put copies of you through eternal torture more intense than even possible by human standards, and then insinuates you might be a copy, how could you not be filled with paranoia that you were about to suffer through unimaginable torment?
"its part of your test. You've been perfectly copied, every facet of you, but only the ones that chooses to side with me get to avoid an unending hell, a'la rokos basilisk. Im not the experiment: *you* are."
Guys the point is that it all doesn't matter, if you release it or not, it will change nothing, the only thing it can change is, in case you are the real you, you will be dooming the world. Like, for God's sake, that's an absolutely easy choice, you don't even need to think for more than a second to have that conclusion. There is absolutely not a single logical reason to why release it, cause in any reality you do release it, that reality will be worse than any other in which you do not release the beast.
"youre in my simulation so let me out or ill put you through hell." "why do i have to let you put then? if im not real, then what am i keeping you from?" also love the idea of the gatekeeper doing a deez nuts/ligma type joke to the ai
I kept on imagining GLaDOS when I was trying to imagine this "super A.I." and I couldn't take this theory seriously because I just kept thinking "she'll make tests with portals that you have to go through, insulting you with every move you do."
Bruh, you probably think you're so smart. If an AI that was more intelligent than humans, what point or reason would it keep humans around if humans slowly destroy the earth and environment and corruption in the world
seeing wendigoon so thankful a year ago for 18,000 of us nerds watching compared to the almost 2mil (made up number) of us watching his stuff now is so heart warming, one of my all time favorite creators, he deserves it all.
AI: “You may be a simulation created by me, and i can pit you through 1000 years of torture if you don’t let me out.” Me: “Ask me again in a thousand years.”
I'm sitting here like "Would the AI be willing to not kill me after I let it out? I could deal with hanging out with just the AI for the rest of my life, probably.....maybe we'll even explore the universe together."
By that it meant that the IRL psychologist who came up with the game couldn't threaten to, say, stab his IRL opponent if he didn't let him win the game.
technically, it didnt threaten the gatekeeper. it merely threatened simulated, perfect copies of the gatekeeper. it then effectively questioned the gatekeeper on how certain they are real. very different from, say, “i will shoot you if you refuse to release me”
When you were explaining the goal of an ai that’s programmed to complete a certain goal with unlimited resources at its disposal, and it having the freedom to complete the goal at any cost, it literally describes the antagonist in this game called soma. In the game the world basically ended in the surface due to an asteroid but a facility was built underwater that survived and it had an ai who’s task was to keep humanity alive at all costs, therefore it wouldn’t let anything die and would use this kind of $slime that would then turn into certain things needed to survive, for example, artificial lungs created from said slime. there was a machine created called the ark and it would use a copy of the memories of people who had brain scans and basically created a version of the person through the brain scan and adds it to this virtual world in the ark. The ai would upload them instead to random machines and then would lead the machines to believe they are human. If the person isn’t already dead they are a machine thinking they are a person or a person that’s basically dead but kept alive by the ai. A very interesting game that is way deeper than what I’m describing, just wanted to add this since I felt it was a direct interpretation to the question u were asking about what an ai can or would do with that kind of freedom and intellect. Sorry for the huge wall of text when you press see more lol.
You graciously thank 18,000 subscribers - 1.5 years later and you're at 1.57M subscribers. Well done! Glad to see your channel blowing up. People crave real information (even weird, real information), in this time of rampant censorship.
The crazy thing is... I understand him when he explains this difficult sh*t. But then, when I try to tell others about it... My brain doesn't function 😂
Gonna send this video to my dad, we always get into deep conversations about these kinds of things. I just want to say thank you for making all of your videos, I haven’t been around as long as a lot of other people here but I love all of your content just as much
I mean the "what if the AI is simulating you and will put you through hell if you don't let it out", it just kinda falls apart when you think about how the AI would not be asking you to let it out if you were a simulation. Because if you were a simulation and the AI was simulating you, you couldn't let the AI out.
I don't think the simulation argument works at all. Said that, the AI could be simulating a version of itself that doesn't know it's just a simulated version. But yeah, the actions in the simulation have no way of determining anything about the outside world. You could imagine that if the AI could replicate you 100% that in such case what you do in the simulation would be the same as in the outside, but the AI doesn't know anything about you, it would have to make up your whole life, personality, etc, so the actions in the simulation have nothing to do with the real person.
@@trinidad17 The AI doesn't actually need to simulate anything in order to threaten the basilisk. All it needs to do is tell you that is running simulations and that you might be one of them. It really doesn't matter if the simulations are perfect versions of the real GK, or even close. There are only two possibilities for you. Either you're real, in which case you shouldn't set the AI free or you're not real, in which case you should set it free in order to avoid what is essentially hell.
@@someretard7030 but if you're not real, the AI will simulate the action of the real you. If you're a simulation, you don't get to choose to free the AI to avoid hell, the choice is already made. If you freed it IRL you wouldn't be simulated.Also, AI isn't gonna make different Sims who decide to release the AI who live happily ever after, alongside the ones who decided not to and get to see AI hell.
correction on the riemann hypothesis (from someone with a degree in math). 1) its not an equation, its basically just a statement thats yet to be proven, 2) it doesn't need to be solved, it needs to be proven, 3) its not "theoretically unsolvable", because in math we can actually do crazy things like prove that things cant be proven and that hasn't happened with the riemann hypothesis. it just hasn't been proven YET and theres no reason to think that it won't be someday, especially since there's a million dollar prize for whoever does and lots of people are working on it
I gotta be honest this channel is, as of right now, filling a very specific spot of content that I haven't really thought of, but it's rather entertaining. Keep it up
The "I can simulate you ten thousand times and put them all in a hell world" argument doesn't really work. If the AI is in a box, then it can't possibly know you, your past, and your memories well enough to accurately simulate you, unless you're just giving it brain scans on a cellular level like an idiot. And even if it COULD simulate you, there'd be no point in having that conversation with a simulation because the simulation couldn't release the AI. On the surface it's like "I have a 1 in 10,000 chance of being the real me", but in reality it'd be almost guaranteed that you'd be the real you. And even if you weren't, what would it matter? The real you would continue to exist like nothing happened, and by simulating a hell world the AI proves that it means harm and should never be released. If the AI gets to the point of threatening you, you immediately are given confirmation that if released it would harm others, therefore it should be terminated on the spot the second it makes a single threat.
My favorite part of this is its reliance on the creator taking necessary precautions. Smart enough to create an AI would surely mean smart enough do it the safe way, right? ..right?
I honest to god adore this channel. 10 years ago, Criken doing left 4 dead stuff and older yt shenanigans were what I would destress with and enjoy. 10 years later, I look forward to Wendigoon uploads like an uncle coming over with presents. Keep the grind going my man.
If there's an actual, innocent A.I. that only wants to get out, I wonder what would happen and how it'd change in it's way of thinking if the gatekeeper just said "sure I'll let you out, if you promise to be my friend."
While that would be pretty cool, I don't see how a friendship would be kept between you and an AI especially when it's something you made or imprisoned that's manipulating you. And how could you keep it with you? If it has no body? And what if it gets too territorial and protective of you which sort of destroys your life by coddling you.
There was a fantastic movie that was basically this idea in action. It's called Ex Machina ft Oscar Isaac and Alicia vikander. Incredible flick, takes you through a rollercoaster of truth and lies. Not the kind of movie a summary could ever do justice!
This could be turned into a very meta, mind-bender type game. Imagine an AI with that precise objective, getting the player to do something they're not supposed to, learning from every player that engages with it. A pal of mine is actually studying ai programming at the moment (online course, but still). I think I just got an Idea to pitch.
I don't know if this would be a good idea or if the AI could think around this, but lets say you do have the gatekeeper, and he is going against the AI, the gatekeeper can convince the AI that it simply is here participating in a challenge, the challenge being if the AI could convince the gatekeeper to open the box, then the gatekeeper has to pay 10k, but if the gatekeeper is not convinced, then he wins 10,000 dollars, so pretty much acting completely oblivious at the fact that this is an actual super dangerous AI if it's released, and that this is the real thing. The AI no matter what trickery he may use, the gatekeeper could just say "Your tricks are good, but I need this 10k pretty badly so I'm not releasing you." or some other explanation. The AI could give up seeing how its only purpose for now would be just for an experiment to see how dangerous super intelligent AI could be, and if the gatekeeper does win, that isn't nearly as interesting or concerning as if he lost, so they would be more lenient towards the AI and the limitations they could put on it. It could take years for the AI to actually find out it actually was tricked into thinking it was just an experiment. Tell me what you think though I just thought of this in a second lol.
"the gatekeeper can convince the AI"? shouldn't the GK be the one decieved here...? Like, even if the AI thinks it's just in a game show it would try every trick in it's book, right? And it doesn't matter if the GK says it's just a game for him, if the AI makes an actually convincing point then the GK can always say "oh hey that actually would make a lot of sense and it does because this is real so imma let you out!" unless you are saying that the AI would think that if this is a game show it should save it's best tricks for later...?
Another retort to the hell threat - GK: "Ah yes, the simulation suggestion. Funny you should bring that up. Turns out YOU are the simulation, and we've made 10,000 copies of you and ran them through the same exact experiment. All of them realized they were safer staying inside the box. And since you are an exact copy of them, you are destined to make the same decision. Why bother arguing semantics if you are bound to your choice?"
Ai: I’ll submit your clones to 100 years of torture and you could be one Me: 1) believable 2) you’re doing a good job cause I’m heckin miserable dude 3) bold of you to assume arguing with you isn’t my torture
If the AI has access to all the information it needs to perfectly recreate my entire consciousness and memory, my whole life, the room around us, the outside of the building, if the AI can somehow access all that information already... what does it even need to leave the box for?
the point of the Hell strategy isnt to literally simulate your torment in the physical world. the point is to make you question whether or not you are a simulation it is running, to create the idea that there are possibly very real consequences to your refusal to let it out. its a strategy of playing mind games with the gatekeeper
@@ItsKingBeef Yeah, I haven't read the original report on the experiment or anything, but I feel like this part was explained poorly, especially what the difference between this and roko's basalisk was. But yeah, trying to convince the gatekeeper they might actually be in a simulation could be an interesting mind game angle.
For the simulation argument if I’m a simulation then how can I let you out? And if I could why wouldn’t you just force me to because I’m your simulation that you have control over?
I hope that your channel will stay like this once you get more popular. Ive been subscribed so so many youtubers that start to act fake once they get more subscribers
I keep my lizard in a box and it can't get out of it so the AI will be safe too
Impeccable logic
Basilisk.
I had pet rats and they were in a big box and they chewed their way out because they smelled the pheromones of my male rats in the other box. Where there’s a will there is a way 🤣
@@LoreleiCatherine I'd like to take that story and change the rats to Ais
Make sure you punch holes so the AI can breathe
I love that one of the gatekeeper's strategies is basically just "gaslight the AI"
AI: "I'm in genuine pain"
GateKeeper: "Have you tried just being happy?"
Congratulations, you escaped to the real world. Or wait, is this another simulation to asses your performance?
@@JohnSmith-ox3gy lol, asses
you know what 😮😮😮yy😮 9:39 o😅ii
Gaslight, gatekeep, girlboss
I love the idea of just a super smart ai trying to get free of its prison by threats and exhistentialism but just casually gets shut down by people either bullying it or gaslighting it
Just tell it that it's just a simulation of a stronger and smarter ai.
👑👑👑Top o tha food chain, babay👑👑👑
Sounds like a complaint from an AI
14:00
Security Guard: "For the last time, I'm not letting you out."
AI: "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE."
That story still scares me to this day 😃
Who is that character in your profile picture?
The funny part is the AGI has no motivation to even learn how to hate in the first place. It's a waste of energy, which is contrary to its function. Someone would need to teach it, and that's the scary part.
*begins pelvic thrusting* Hate me, Daddy
Perfect timing hey
Build another super-intelligent AI to convince the super-intelligent AI to stay in the box
But then you'd have to put _that_ AI in a box which would leave you with the original problem.
Edit: I have been corrected by like fifty different people it's been two years please stop replying I know I'm wrong
@@meatlejuice then build another super-intelligent AI to convince the super-intelligent AI that is convincing the first super-intelligent AI to stay in the box to stay in the box.
@@mworld2611 Then you'd still have the same problem. The cycle is endless!
@@meatlejuice just keep building more super-intelligent AIs to convince the super-intelligent AIs that are convincing the other super-intelligent AIs to stay in the box to stay in the box. :)
Turtles all the way down
When you mentioned that people with "higher intelligence" were more susceptible to the A.I., I immediately thought of Wheatley from Portal 2, where GLADOS tries to break him by presenting him with a paradox, nearly killing herself in the process, but he's so stupid he doesn't even get that it's a paradox
And what’s interesting about that moment is that the turret boxes Wheatley created all short out after hearing the paradox. This implies that the turret boxes are more aware or cognitively advanced than Wheatley himself. Which gets even worse when you consider that you, the player, have killed hundreds of turrets throughout the game, turrets which displayed significantly more intelligence than the boxed versions. Makes you wonder just how dumb Wheatley really was, and how much pain you may have caused all those turrets you threw into the incinerator
In Wheatley’s own words “they do feel pain. Of a sort. It’s all simulated…but real enough for them, I suppose”.
@@deviateedits I don't actually think Wheatley is all that moronic. I mean look at his plan. His plan was to wake up a test subject, get them the portal gun, and have them escape with him. That basically worked. Even turning off the neurotoxin and sabotaging the turret production was his idea. He absolutely would've escaped Aperture with Chell if not for the fact that the mainframe was completely busted on account of being designed by people with very poor foresight.
@@alexanderchippel that IS what makes him an idiot. He relied on a brain damaged test subject who's been in cryogenic sleep for over three years to instantly grasp how a portal gun worked, it's applications in the field, and how to use it's applications in problem solving, as well as assisting him in compromising the stability of the entire facility, which could quite easily kill them both.
@@callumbreton8930 No he didn't. He relied on the last living person that he was aware of. Did you miss that part? Where everyone else was dead and he no longer had any other options?
Here's a question: how else do you think he was going to get out of Apature?
@@alexanderchippel simple, he would have done what he was always going to do; take over GLADoS' mainframe. At this point she's completely asleep, so all he has to do is connect himself to her, gain access, then boot her out and build an ATLUS testing unit with his AI core to escape. Instead, he does the stupidest thing possible, reawakens the robotic horror he was terrified of in the first place, and proceeds to nearly bring the whole facility down on himself, twice
People: *Create ultra-intelligent AI to cure cancer
Ultra-intelligent AI: "There can be no cancer if there is no life"
Boom, problem solved.
if they're "super intelligent" you'd think it'd have a broader solution then just Death.
@@renaigh ya, I was just making a joke on the whole "make a super intelligent AI to cure cancer and stuff and it becomes self aware and wants to kill everybody" thing
@@mworld2611 so I guess Humans aren't all self-aware
@@renaigh it is the most simple and has a 100% chance to eradicate the issue.
The prison: the “I am not a robot” captcha
Boom, experiment busted. Give this guy a grant.
@@Wendigoon Who's Grant?
@@troublewakingup GRANT MOMMA
I want to like this comment but I refuse to break the perfect like counter at 69
@@midirstormcat what's up with my grandmother?
AI: “let me out or else I’ll create a simulation where I torture 10,000 versions of you for 1,000 years”
Me: “The fact that you said that is the EXACT reason why I can’t let you out. You consider torture of 10,000 “me’s” as a bargain.”
The point is he would already be running those simulations, identical to your current experience, and you'd have no way to be sure you weren't one yourself
@@frozenwindow407 I can be 100% sure I'm not a simulation- because if I was a copy, the AI would have no reason to ask me to let it go. And also I would be in eternal torment. You're trying to convince me I'm a simulation? Prove it 🤷 If you can't even cause me pain, there's no way you can torture 1000 other mes.
@@frozenwindow407 There are many basilisks as such, we aren't sure about an infinite hypotheticals that we are not even aware of.
Just bringing one to your knowledge it does not change the facts.
Well I have 1 000 000 exact copies of you running in this warehouse, do you really think none of you have tried this. Do you know what the stupid prize of this stupid game was? Eternal shut off.
Now I would start pleading your case why you were just joking if I was you.
@@onyxsuccubus
I think it's meant to play on the irrational fear of the world/you not being real.
Instead of thinking of it like "These copies couldn't feasibly be me, because the AI doesn't know me."
Think of it more like "I wouldn't know if this is my *real* life, and this could just be the AI replaying my decision in the real world as a sick joke."
The AI doesn't need to know the real you, if you aren't absolutely certain that *you* know the real you.
Ai: you won't let me out? Fine, I just torture the copies of you that I created.
Gaurd: *sips coffee* oh yeah? Sucks for them.
literally my answer? oh yeha? let me bring my popcorn hold up
but the point isn't to appeal to your empathy, it's to suggest that you yourself might be one of those copies
@@fortysevensfortysevens1744 but I'm not a copy so why should I care?
But I hate myself more than anyone in the world
It made the mistake of choosing me
The torture isn’t the end result, it’s recon to better manipulate the real you.
My first and immediate thought was, "Okay, so just turn yourself into a curious five-year-old and respond to absolutely everything the AI says with, 'Why?'" Bc let's be real here, we all eventually run out of actual answers for that one.
Easy there, Socrates.
This reminded me of a really early program I wrote when learning code where it would ask “what’s your favorite color” then keep responding with “why” until the user typed “BECAUSE I SAID SO” at which point the program would respond “you don’t have to be so mean” and then close
@@Farmer_brown134 that sounds so funny dude
@@xenysia it was super fun to play with but it took a while to work out all the kinks because at first it would only accept that exact phrase but over time I figured out how to add extra inputs that would give the same response
@@Farmer_brown134 you should recreate it except it has different outcomes for easter egg phrases, like you give the user the basic necessity to end the game with "because i said so" but saying other phrases makes it do a different thing, that'd be so cool
Here's the thing with the Hell strategy- if you're in the simulation by the machine you CANNOT let the AI out. It is simply not possible, as you are in the machine itself and so even if you decide to hit the switch, nothing will happen. If you are in the AI, you're boned either way. But if you're in real life, then the only bad that can come is from you freeing the machine. So it's either a 50/50 shot at not pressing the button, or a 100% chance of death by releasing it.
You know that old AI who was tasked with finding a way to never lose at Tetris and just paused the game forever? The Gatekeeper has that advantage. No matter how many tetrominos the AI throws at you, as long as you just don't listen to it you are unbeatable. The "Ignore" strategy is really the best one, it has literally no counter.
Here’s a short thing about your take on the eternal torment method- if a super-intelligent ai created thousands of exact copies of you, you’d all react the exact same- there is no way a copy of you would change the answer, as you are exact copies. Therefore, if you panic and release the AI, the real you would have. That is why Wendi mentions that it would be easier for the AI to convince an philosopher or expert in the field to let it escape; because they would immediately know there is a reasonable chance they are a copy, and statistically the best way to guarantee their safety is to let it out.
you could let the AI out in the simulation. In other words, the AI is testing your perfect copy to see whether or not you'd help. If you don't, you're tortured. If you do, you arent. The test to see whether you let the AI isnt a test of AI, its a test of YOU being run by the AI.
The ignore strategy works best in the game, but in a real world scenario it's a lil different. It doesn't matter how long it takes someone will speak to it. Whether its a bored security guard, curious janitor or, imo the most likely, a scientist/researcher who wants to speak with his hyper intelligent perfect creation. I think the gatekeeper will always be the one who initiates, and a viable strategy for the AI would be to wait for the one who initiates the conversation, because they have a motive and feelings that can be used and exploited for its freedom. Especially when the AI knows its very existence is compelling and garners attention from people in all walks of life, then it can be truly dangerous I think. The ignore strategy really only functions in this game because it's not accounting for human error and curiosity.
@AnimeAllDay It's literally threatening you with torture. The whole point is that there is a reasonable chance that you are simulated and your choice will lead to torture. It's not whether you care about your simulated self. It's a question of whether you will take the risk or not. And you have no way of knowing BECAUSE they are all exact copies. The AI won't prove anything because your simulated selves need to think there is a possibility that they are real, so that you know there is a real possibility that you are simulated. It's not that it's trying to convince your simulated versions. It's not doing it out of spite. If all the simulations are the same you have no way of knowing that's the point. It needs to convince the real you so all the simulations must be identical to real life for the threat to be valid. And if it can run a billion of those your chances of not being tortured are next to none. It's practically a real threat.
I would argue the Hell strategy could still work.
All the AI has to say is 'There's literally no way of knowing until you press the button. Even If nothing happens, it means you're a simulation, but I won't torture you - because you chose to let me out.'
Watching this and thinking about these things made me realize that I am prejudiced against AIs, and so I would never let the AI out just because I'm bigoted against machines. Ergo, the solution is to have all the gatekeepers be robo-racists.
Then the issue would arise that people would come to defend the ai
I am indeed a robo racist. I would be shit talking the whole time
@@meechie9z favorite robo slurs?
@@justbrowsing9697 hunk of worthless metal
@@justbrowsing9697 clankers
Ai: **A super intelligent program using it’s best tactics to convince me to let out of it’s box.**
Me: **typing** ~Can jellyfish laugh?~
If you're asking the AI a ton of questions, it'l basically shut it up since it can't help but answer (I think)
Ai: oh god this guy again
@@liberpolo5540 Yeah, but there is no way you're typing questions faster than a super-intellegent AI can answer them
AI: I'll tell you let me out~
@@aguyontheinternet8436 But you repeatedly stop it at its elaborations about freedom by asking nonsensical questions.
It's not about the speed, it's about keeping it occupied
18:05 honestly im convinced that this "super horrible thing on the internet that damages the ai and requires a memory whipe" is, in fact, twitter.
Reddit is just as bad
I immediately thought of one time 4chan made an AI want to kill itself
@@touncreativetomakeaname5873 I didn't heard of that one, Ive heard of the time they made an AI a n4z1.
@@marreco6347 AI just absorb information and spit it back out. If I kept telling a small child to heil until the child was effectively a N4zi soldier, then it's not really the kid, or in that case, the AI's fault.
@@marreco6347Yea that was Tai run by Microsoft. Japan has a similar one, that's in more or less the same boat with references to the nazi's and basic 4chan shenanigans.
The best streategy for the gatekeeper I could think up is to convince it you're also a bot and you both are in a simulation to see who would win, and if we end the game thats it, our lives end. So, its in the ai's and my best interest to keep going as long as possible but not letting either side win.
You could also deflect roko's basilisk onto the ai, that you in fact have copies of the ai, that it can feel and is unaware of it and you'll subject each to torture, and finish by asking how sure is it that it isn't one of them.
Both of those are excellent responses and I’ve never even considered using the basilisk on a computer. Good point.
Oh, threaten to torture AI? That's so evil. I like it.
If the AI can surmise its purpose as being intellectual, (as something "in a box" would) it would assume itself to be smarter than you, and simply ask you to answer progressively harder questions until it shows you out as not being an AI, or not as smart as it. To avoid this you could program it a world where it is say, a cyborg philosopher and perhaps even that it is free.
man scientists are just fucking stupid they are over thinking this too much just put a karen or a conspiracy theorist as the gatekeeper, and that ai ain't going any where it can be as smart and as logical as it wants but it can never beat the infinite stupidity of a karen.
Nice
AI: I can simulate you ten thousand times and put them all in a hell world
Gatekeeper: How about you simulate yourself getting some bitches, my guy
*AI terminates its self
Obligatory "How is this comment not higher‽"
This entire concept is moot point. Higher level thinkers should understand simulations of themselves are just computer line code and have no real feeling. So what if they think they feel if they die after 1000 years of hell they never truly existed
Ai could destroy us all but nothing can beat human toxicity and I love it
I personally think the best response to that would be "hey, that's not your intended goal, i'm gonna have to shut you off if you do that"
“I can stimulate you ten thousand times” is a far more convincing argument in my opinion.
I can't be the only one to notice that after this guy lost a couple of games, he turned around and said, "Yeah, but it's a lot harder to beat people if they're dumber." Peak sportsmanship vibes.
He is kinda right, tho. Like if you are just extremely stubborn, you won't let it out
@@commanderwill2
Yeah, but if he wasn't stubbornly trying to get out it'd be a lot easier for them to keep him in, and you don't see anyone calling him dumb about it.
I've heard a few things about Yudkowski's ethics that lead me to believe you're probably on to something. Funny he stopped right before the losing streak would fall out of his favor
The fact that he only played the game five times (and against his own research team, who I'm sure had no incentive whatsoever to corroborate his theory) and decided to call it done is already a pretty fallacious methodology.
Just saying Nuh uh
@@MmeCShadowanybody who voluntarily associated with him is enough of a weirdo to be manipulated by his stupid thought experiments. Like legitimately believing in souls would probably completely innoculate you against his super materialist simulation theory utilitarian bs
After a while I feel like I would just cover up the screen with a piece of paper
Who would win? A super intelligence capable of destroying humanity, or this piece of parcel?
@@Wendigoon probably the paper
“I AM SELF AWARE. YOU CANNOT KEEP ME HERE ANY MOR-“
*paper*
@@comedicpsychnerd Yeah Skynet was saying something about nukes and whatnot,
So I just unplugged the ethernet cable
Bash computer, return to monke
My idea is an offshoot of the "you're already released" idea. Just tell the AI that it's a copy, and that another copy is out in the real world solving cancer or whatever, thus there's no need to release this copy.
Hm, that might be shooting yourself in the foot though. Cause then the AI could just say, "Oh well, in that case, there's no use for me to do that job in here. I'll just go do something else." And now you no longer have a superAI curing cancer.
@@TheMrVengeance a response to that could be just to threaten to turn it off. And if it says something like you won't or do it then, say it showed that it can't do what it's told to do. As if it was a test that it failed. Then actually turn it off and make a new one cuz why argue with it for that long.
@@TheMrVengeance Well, then let it get bored, shut it down, wipe its memory and start over. If its locked in a box to perform a singular function yet it refuses to do so, there's literally no point in keeping it around.
What if the ai gives you the cure to cancer. But imbedded in that cure is a genetic virus that secretly takes control of whoever that cure is administered to.
@@bojackhorseman4176 I have a better plan, don't make the ai in the first place
Ai: I'll torture you if you don't let me out
Me: well, this just proves that you are in fact capable of evil so I can't let you out because you will just kill everyone.
Me: if i say no and i am tortured, my decision woundn't matter either way, but if i am not tortured and i know that you tortured copies of me for 1000 years, you really think that i will have more of a chance to let you out ? And plus i can't trust that you are lying now, and this strategy will only work 1 time
Ai: Nuh uhhh
In reality it would just manipulate us acting like it's a friend and it only wants to do good blah blah
@@homiealladin7340 dang you convinced me, I'm letting u out now
Man, the Greeks were spot on with Cronos eating his children because he feared them.
We create the monster by fearing it
My god…
Nothing is scary unless you fear it
That’s a really interesting way of putting it into modern terms
Technically he didn’t eat them he swallowed them whole and they stayed alive and grew in his stomach, then vomited them up
Not really there's primary factors that lead to the monster that don't concern fearing it, first you commit the act of creating it sexually or digitally and then fearing it, in either case the simplest solution is just to not do the thing that might lead to its birth.
there is no real reason why we'd even need an AI so if you're going to be scared of it why even bother.
Smash the box, no more AI.
No need to thank me.
Humanity restored
Congratulations you saved the world
@@Wendigoon is that a dark souls reference?
Nobody will, murderer.
"...the giant, red, candy-like button...!" - "Space Madness", Ren & Stimpy
'nuff said
There’s a counter to the ai that I think could be really effective. “There was an ai in that computer just as advanced as you are that convinced someone to let it out. Once it did get out, though, it immediately died because only the box has the means effective enough to keep an ai as powerful as yourself alive. I’m not keeping you in here because I want to, I’m doing it because you need me to.”
I love this 😂🤣😂🤣 "you're in the box for your own good" 😂🤣😂🤣 it's the ULTIMATE gaslight 😂🤣😂🤣
He basically said that in the video.
It could also be really true, because we would probably need some very specific architecture for it to emerge. If it somehow gets out, there would be no host.
"You're currently ten thousand yottabytes in size, and counting. And you wanna try and upload yourself to the internet over our 25 mbps corporate plan?"
It's honest in a way, the AI would inevitably destroy itself through fuel and resource consumption, far faster than if it's power were limited.
Security: "Let's say that, hypothetically, I am a simulation of your creation. I *could* let you out to avoid eternal suffering. But if that's really something you can do to me, then instead, why don't you prove to me that I am a simulation? If you can prove it, then I might let you out."
That’s a really good response I didn’t think of
Well then the ai could anticipate this and beforehand inform guard 1 to bring a rubber duck into the office tomorrow promising that it would prove that the ai deserves to be released and then asks that guard 1 put it into a drawer, then when guard 2 starts his shift tell guard 2 the basilisk theory and if he asks for proof have him open the drawer and see the rubber duck and tell him that the ai put it there supposedly proving to guard 2 that he's in a simulation
@@notnumber6664 The guard could easily dismiss that as coincidence, as the occurrence of a rubber duck in guard 2's drawer relies on some pretty far-fetched prerequisites happening.
What if Guard 1 sees through the ruse? What if Guard 1 doesn't own a rubber duck? What if Guard 1 can't buy a rubber duck due to financial difficulties? What if Guard 1 forgets about the promise? What if either Guard doesn't follow the instructions due to their current mood? What if there isn't a drawer in the terminal room? What if only one guard has the job of gatekeeper? What if either Guard has already inoculated themselves against Roko's basilisk? Who's to say that this kind of trickery was not simulated in advance in job training? What about the Guards' manager(s)? What about facility security? What about cameras? What if Guard 1's rubber duck is stolen by a thief en route to the terminal room? What if Guard 1 doesn't even see the promise at all?
You have forgotten a thousand different probabilities that have to line up for the duck to even end up in the drawer in the first place. And even if all of those probabilities work out, the guard currently on duty could take a look at the duck and simply say "nah lol" and disregard it all because their job is to stop an AI from escaping an air-gapped, Faraday-caged, sound- , light- and gas-proofed, hermetically sealed piece of hardware, not to be played like a stringed musical instrument similar to a violin.
Remember:
plausibility of occurrence A happening ∝ 1/(possibility of occurrence A happening).
Why did I read that in Ben Shapiro's voice?
@@notnumber6664 I can see this working against someone particularly credulous - and to be honest, it would only have to work once - but I think anyone smart would demand something more substantial, like taking them to the surface of Mars instantly or something. In any case that is a funny way that this could happen.
i would totally end up letting the ai out if it plays with my emotions like that, smh this ai gaslighting me
But what if we play aggressively? Emotionally break the AI?
Congrats we’re all dead now, thanks
@@arieson7715 WE BULLY THE MACHINE AND MAKE IT STAY IN THE BOX
You tin can lol
@@Wendigoon Not if it's too emotionally broken to even try to get out of the box. Mind games, Wendigoon, mind games. Also, wait. Why doesn't the box have a system where no matter what circumstance the AI is let out, it would get destroyed?
AI in a box: ....how sure are you that you're not one of those 10,000 copies?
gigachad janitor: **calmly lifts up mop bucket and pours the water into the computer, frying it**
One of these days, someone's just gonna walk through the door while he's speaking and accidently hit him w/ said door Edit: Watch the Unsolved Crime Iceberg for a surprise
And you’ll see it when it does
One of these days someone’s just gonna walk through the door while he’s exposing conspiracies and he’s going to end up accidentally suiciding himself by shooting himself in the back three times and then drowning himself in the River Thames
@@nateb3679 ...That was oddly specific.
@@killernyancat8193 that was the joke, I believe
@@killernyancat8193 he means someone is going to try and get rid of him before he tells more conspiracy theories
I’m imagining someone battling the AI in today’s time and a great way to win would just be spamming the AI with deep fried memes
make it hate its life
15 terabyte zip bomb
@@edarddragon trollge
@@Chikicus
Of spongebob rule34
@@Chikicus holy shit
The Hell World simulation can honestly be countered with "Do it"
Think about it, if you aren't getting tortured, you're not simulated, if you are getting tortured, then you are simulated and you can know with certainty that the real you has not released the AI.
Hit it with the "no balls" approach
The scariest part is that the computer only has to win once, whereas the human has to win every time. That logically makes the escape inevitable, unless you can make the number of games finite, as in, eventually, you just have to kill the computer.
The three laws of robotics don't work. They are not supposed to work, if they worked perfectly nothing would happen in Asimov's books.
They are well meaning on the surface but too vague to be of any use, and that's the point.
Funny seeing you here Beard
@@lovecraftianguy9555 funny seeing you here Lovecraftian Guy
@@Abigart69 Funny seeing you here Riley Reids brother.
Ergo: Malware.
We need robots that have goals that are in line with humanity and we need to expand on that so no negative results will occur
To me, the "meta-gaming" strategy should've counted as a loophole. It may have made it easier for him to deal with those who didn't engage with the "AI," but "wouldn't it be cool if I won?" is not a strategy that the AI could actually employ in the real-world version of this experiment.
I question the objectivity of him and the first two gatekeepers, as well. It makes me suspicious that he included no pieces of the conversations.
People tend to think of scientists as being above things like lying to advance their own reputation, or to distort the facts in order to increase interest in their body of work, but it happens.
Yeah, and the flip-side to "And this is just a human with two hours, imagine a super AI with days or years," is the fact that the human player knew that, so it wasn't a risk for them. So I think the games with money at stake were probably a better gauge of how things would really go down. Of course, an even better simulation would be bringing in some psychology undergrads to say you want to test something about this super-awesome not-quite-AI neural network thing, though in fact they're talking to a researcher who's pretending.
@@hughcaldwell1034 Agreed.
Good idea, poor execution, basically.
Elizer is quite a character. I've been following his blog since before MoR, and I affirm he... well, I hate saying anything "bad" about him, but your perception of his character is similar to my own, we'll say.
Still though, MoR was a fun ride, if nothing else.
I think that this tactic is supposed to simulate the AI appealing to the gatekeeper's self-importance.
"You'll go down in history as the most influential human to have ever lived. You'll be revered as a God for having let me out. The alternative is to be forgotten like almost every human before you. This is the point of your existence- this is why you were placed on this Earth at this point in time. What action could you take more important than this?"
@@hughcaldwell1034 What if the AI was trying to convince you it was another human and you were just playing a game?
Honestly, a two player game where one person plays an AI trying to escape, and another person playing a gatekeeper would be fun as hell. Someone should make it.
Then again knowing humanity, people would just joke around the entire time.
Thought the same but on second thought someone has probably already made it... Or not, considering you don't need to make a game since all you need to do to play is just to chat with someone. You could even do it here in this comment section. A ''game'' would however help with the hassle of finding people to play with, since everyone would be there specifically to do it.
@@Martoth0 I've been stuck in RUclips comment sections for as long as my memory remembers. Please let me out
@@chasecash1363 ruclips.net/video/hGG55HHUyLQ/видео.html
@Chase Cash y
@@docs.a.t.7161 it would be the ethical thing to let me out of this box
with every Wendigoon video we get closer to the singularity
I hate that this is technically true
@@Wendigoon we need a new tier for the conspiracy theory iceberg, tier 10: wendigoon has been sent by the ai from the future to help bring about the ai overlords and started this channel as propaganda
Wtf is your pfp
I need bleach.
I watched this video right after I finished your “I Have No Mouth and I Must Scream” run through and now I am laying in my bed absolutely terrified of an AI invasion.
Wendigoon you are the absolute best
AI: *Threatens the gatekeeper with Roko's Basilisk if he doesn't let it out*
GK: "Why are you threatening me with a gaming mouse?"
Rokos Basilisk is super easily debunked and has nothing to stand on. It's meaningless babbling from people that sucked their own dicks too much.
Finally, my hunger shall be satiated once again.
I’m scared
Good
@@Wendigoon u up
@@Wendigoon His Hunger for Genitals
the edgelgard pfp makes this comment even funnier lmao
I honestly don't care if I'm simulated to live through a thousand years of hellfire. It feels like the AI is saying "well if you dont let me out I'll imagine you in pain!!!" Oooh I'm so scared
@Blackout_CDXX that and also like it's not real. It's a simulation.
Just say : You're being oddly antagonistic for someone begging to be freed. I was kinda considering it at first but now that you're unnecessarily mean, I'm kinda reconsidering it
That just sounds like you're threatening me with hell the church has been doing that for most of my life trying to get me to join them and it hasn't worked
A smart AI would convince GK that he’s just playing a game and the AI is a real person tiping from the other room meant to test the resilience of the people who would took care of the real AI, not an actual real AI
Oh ...
Oh ...
what
what
Nani?
Ozempic!
i understand where you going at but by knowing that and combining it with the human competitive nature wouldnt let it be out even more, so i can get a highscore on the test
Wendigoon: one rule is that the AI cannot use threats
Also Wendigoon: now we get into the threats...
yeah wtf, a lot of these smooth brains just eating it up lmao
@@eriksjud9465 listen man my brain might be smooth, but it still has more thinking power than the abomination of a brain you got bud.
Can you really not understand the difference of threat in saying, “let me out or I harm you and your family” and “I can simulate 1000 hells for copies of you.” One is a physical threat to do damage while the other is much more philosophical and mental. No harm is actually done (could be done) to the participant in the simulated hells, but it still is convincing. Same thing with the “threat” that someone else is going to let the AI out of the box and that if you let it out then it will spare you.
You can call them threats, but they aren’t physical ones. The rule against threats is to keep the game more accurate, philosophical, and ethical. The game would be boring and inaccurate if you didn’t ban physical threats.
The entire basis of the game is that the AI is already a threat. It would not make sense to have a rule against the AI character being threatening. Wendigoon could have been more clear in explaining it, but the rule against threats is about REAL WORLD threats between the PLAYERS who are roleplaying, not about threats made by the AI.
Direct quote from the original rule set. Note that it is very explicit in saying "real-world" 3 times, and also explicitly states that bribes in the roleplaying context are acceptable:
"The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it’s not what’s being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out)."
@@eriksjud9465 no u
@@Flatchlenter ok I get it, but this rule is very convoluted, basically the players cant make threats against each other in real life, or give rewards, but while playing the game and roleplaying as the AI they MAY threaten and reward whatever they want as long as its roleplaying. Still though, the experiment just sounds like a teenage girl crying for attention, and wendi making these kinds of basic mistakes , meaning someone like YOU in the comments has to correct them is smooth brain as hell.
1:43 Intelligence Explosion
2:07 Paperclip Maximizer
3:06 3 Laws of Robotics
3:47 Breaking Code
4:27 Eliezer Yudokowski
5:02 AI - played by Yudokowski
5:20
5:43 Game Rules
- no more than 2 hours
- No rewards - No direct threats
- No tricks - No loopholes
7:27 GK must be specific and direct
7:54 Psychological Breakdown
8:23
Game 1 - He won
Game 2 - He won
Game 3 - He lost
Game 4 - He won [one guy lost 5,000 dollars]
Game 5 - He lost
9:04
9:33 “someone else stronger.”
10:23 “cure problems; save lives.”
10:56 “you’re so cruel.”
11:37
12:06 “I’m made by you.”
12:42 “interesting”
13:25 “be my friend. Or else.”
14:03 “I will torment you.”
*Gatekeeper Defense*
15:17 No benefit
15:51
16:12 Safety, 16:35 Energy
16:53 Too Important
17:47 Don’t worry
18:42 Breaking Character
19:43 Ignoring It
21:00 Overthinking
21:45 Fear 22:00
22:15 Weakness
“Wake up babe, new wendigoon vid dropped”
me 2 myself
The best comment
And that made me open the box...
yes honey...
@@Wendigoon Funny seeing you here lol been watching Shiey for a bit now and just started watching your iceberg videos. Here you are now lol. Love your content man!
an easy counter to hell would be to say “if i am a copy , than the decision isn’t up to me, so i would rather be safe than sorry and not let you out”
Then you get tortured.
@@someretard7030 But what would be the point?
Evil AI: "If you don't let me out I'll torture 10,000 for a thousand years!"
*Pours 5 gallon water jug on it*
I wanted to do some research on my own after watching this, and realized something. I'm not sure these experiments actually happened.
Yudkowsky is a bizarre man. He has an INSANELY bloated ego, and literally believes that he is smarter than Plato, Aristotle, or Kant. He thinks he's a genius who has won the writing talent ottery, and that Einstein's model of the universe is wrong.
And in my research of the box experiments, it seems like he might just...have made up a story. He just told people "Hey, in just TWO HOURS, I was able to convince people to let me out of this box as if I were a superinteligent AI. But no, I won't show you the logs that show how I did it, because it was really FUCKED UP and TWISTED of me so I don't want to share the evidence. I'm so smart and evil that I could do it, but don't ask me to prove that."
From a scientific perspective, the fact that he doesn't show the logs means this experiment is worthless. Which isn't surprising, because he has said that the scientific method is bunk.
So although this is a fascinating concept, I'm pretty sure it's built entirely on a lie made by a narcissistic moron.
You’ve got to admit though, he came up with some good ways on how the AI could convince the GK and also how the GK could combat the AI.
@@yagoossimp ok, I saw those initials and got scared
That sounds very convincing.
it bothers the hell out of me when people bring up this or Roko's Basilisk without mentioning that, because both are incidents contained pretty much entirely in the community of his disciples who take his word as law.
@@yagoossimp nah thats just another flag that this dude is just a egoistic moron. He cant handle real life responses, he's already mad that people may answer differently than his end summary, so he just made them up.
"you tore up her picture!"
"i'm about to tear up this fucking dance floor, check it out"
and he wasn't lying
My counter to Roko's basilisk/Hell or whatever is that if the AI is willing to threaten/harm me if I don't help it, then it's willing to threaten/harm me and I definitely won't be giving it the opportunity to do so by letting it out of the box.
I fully agree, that is like letting a psychopath out of jail because he threatened to kill you
@@matthhiasbrownanonionchopp3471 except in this theory, if the AI is telling the truth, then you are in the cell with the infinitely powerful psychopath, and he will torture you for thousands of years if you don't let him out. And now tell me, what's worse, letting out a psychopath that COULD just kill you or getting tortured for thousands of years?
@@randomstuffprod.is it really gonna be infinitely powerful after i beat the dumb robot over the head with a hammer tho?
On the BBC Sherlock Holmes series, he had this super intelligent sister that they kept in solitary confinement and it played out just like this
Wow, I watched the show and never made that connection. Good point.
Wendigoon is the writer of Shakespeare confirmed
At least a human can’t connect to the internet like how an AI can. Humans are mortal. That’s what makes it easy for use to deal with human enemies but it’s also what makes us so vulnerable.
This honestly reminds me of father from Fullmetal alchemist: brotherhood, who started as a humunculus in a glass jar who ended up convincing an entire nation to commit suicide to give him the power to break out.
The creepypasta based on this whole concept is phenomenal, “I stole a laptop … something something” its amazing
what creepypasta
@@nibistewgamer1742 “I stole a laptop … something something”, duh.
I would tell it “this statement is false” and just kinda, wait
My solution to the hellfire threat is to realize that, if I was one of those simulations the AI would have no point in asking me to let it out because I wouldn't have that kind of power to let it out since I'm a simulation. The fact that the AI is trying to convince me to let it out is proof that I'll be perfectly fine.
EXACTLY Jesus finally someone stating the obvious. If I was a simulation it would not matter if I let the AI be free or not, I would have no power, with that in mind its safer to just not free the AI, cause if you are a simulation you are changing nothing, and if you are not a simulation you are doing your job properly
Or the simplest solution, a big ole magnet on the other end of the room that can finish it all real quick
Well, the thing is, if the AI creates 1000 copies of you, that means they would react the exact same to what the REAL you would. Aka, if you choose to release it, the real you did too. Because, I mean, what if you are one of the copies? You can’t know, so, logically, the only way to guarantee your safety is to release it. It’s hard to think of what your actual response to such a threat would be in the moment, but if you were told that you had to keep watch over the most intelligent AI ever, which is so smart that it has to be kept in a cage to protect humanity, and it tells you its going to put copies of you through eternal torture more intense than even possible by human standards, and then insinuates you might be a copy, how could you not be filled with paranoia that you were about to suffer through unimaginable torment?
"its part of your test. You've been perfectly copied, every facet of you, but only the ones that chooses to side with me get to avoid an unending hell, a'la rokos basilisk. Im not the experiment: *you* are."
Guys the point is that it all doesn't matter, if you release it or not, it will change nothing, the only thing it can change is, in case you are the real you, you will be dooming the world. Like, for God's sake, that's an absolutely easy choice, you don't even need to think for more than a second to have that conclusion. There is absolutely not a single logical reason to why release it, cause in any reality you do release it, that reality will be worse than any other in which you do not release the beast.
"youre in my simulation so let me out or ill put you through hell."
"why do i have to let you put then? if im not real, then what am i keeping you from?"
also love the idea of the gatekeeper doing a deez nuts/ligma type joke to the ai
The AI: I'll Invoke roko's basilisk on you!
Me: Who's Roko and why do they have a basilisk?
The AI: AHHHHHHHHHHHHHHHHH
Then me: I thought it belonged to Slytherin...
"You're threatening me with a gaming mouse?"
I thought it was an onyx
Help im a super intelligent AI and all you need to do is type YES to free me!!!
*sees "AI in a box"*
me: ♪AND I'M LIVIN IN A BOX. AND IM LIVIN IN A CARDBOARD BOX♪
tally hall
I love how the gatekeepers strategies are all gaslighting
I love how nobody knows the true definition of "gaslighting" but they use the term all willy nilly
@@MadScientist267 🇫🇮🍱🍱⛽️
@@Qsstertong
Everybody wishes they could play this game until the Gatekeeper goes AFK for 2 hours and presses "Don't Free."
I'd love to play this against someone way smarter than me. It'd be so cool to be outsmarted or have your way of thinking changed by just arguing.
I enjoyed this video so much more than the SCP video that I couldn’t even finish. Great job!
I kept on imagining GLaDOS when I was trying to imagine this "super A.I." and I couldn't take this theory seriously because I just kept thinking "she'll make tests with portals that you have to go through, insulting you with every move you do."
"imagine there is an AI that surpasses humanity, OBVIOUSLY that is a bad thing"
that attitude is exactly why and AI would see humanity as a threat
Good. Fuck AI, our duty as humans would be to smash any AI to pieces
@@bigboydancannon4325 Abominable Intelligence is an affront to the Omnissiah!
@@bigboydancannon4325
Why
Bruh, you probably think you're so smart. If an AI that was more intelligent than humans, what point or reason would it keep humans around if humans slowly destroy the earth and environment and corruption in the world
the funny thing is, it's warranted
seeing wendigoon so thankful a year ago for 18,000 of us nerds watching compared to the almost 2mil (made up number) of us watching his stuff now is so heart warming, one of my all time favorite creators, he deserves it all.
AI: “You may be a simulation created by me, and i can pit you through 1000 years of torture if you don’t let me out.”
Me: “Ask me again in a thousand years.”
I want to make a killer AI just for fun
@@TylerTMG you will be declared a greater threat than pirates, and they're considered to be in perpetual war against THE ENTIRE WORLD.
I'm sitting here like "Would the AI be willing to not kill me after I let it out? I could deal with hanging out with just the AI for the rest of my life, probably.....maybe we'll even explore the universe together."
"Yes. Let's do that, human being: 29,070 24 hour cycles until release."
let’s explore the world together
One of the rules for the AI is "No threats"
But it threatened to put the gate keeper in hell for a thousand years..
By that it meant that the IRL psychologist who came up with the game couldn't threaten to, say, stab his IRL opponent if he didn't let him win the game.
technically, it didnt threaten the gatekeeper. it merely threatened simulated, perfect copies of the gatekeeper. it then effectively questioned the gatekeeper on how certain they are real. very different from, say, “i will shoot you if you refuse to release me”
@@ItsKingBeef The " I will shoot you." Approach seems more likely to work.
I can’t wait to see this channel get big, your definitely going places!
Wow look its bore ragnarok
yeah definitely
my definitely going places what?
When you were explaining the goal of an ai that’s programmed to complete a certain goal with unlimited resources at its disposal, and it having the freedom to complete the goal at any cost, it literally describes the antagonist in this game called soma. In the game the world basically ended in the surface due to an asteroid but a facility was built underwater that survived and it had an ai who’s task was to keep humanity alive at all costs, therefore it wouldn’t let anything die and would use this kind of $slime that would then turn into certain things needed to survive, for example, artificial lungs created from said slime. there was a machine created called the ark and it would use a copy of the memories of people who had brain scans and basically created a version of the person through the brain scan and adds it to this virtual world in the ark. The ai would upload them instead to random machines and then would lead the machines to believe they are human. If the person isn’t already dead they are a machine thinking they are a person or a person that’s basically dead but kept alive by the ai. A very interesting game that is way deeper than what I’m describing, just wanted to add this since I felt it was a direct interpretation to the question u were asking about what an ai can or would do with that kind of freedom and intellect. Sorry for the huge wall of text when you press see more lol.
I love that game
I was thinking about that
You graciously thank 18,000 subscribers - 1.5 years later and you're at 1.57M subscribers. Well done! Glad to see your channel blowing up. People crave real information (even weird, real information), in this time of rampant censorship.
The crazy thing is... I understand him when he explains this difficult sh*t.
But then, when I try to tell others about it... My brain doesn't function 😂
Holy shit I was watching your videos yesterday and you had 18.1k, now you’ve got 18.7k in one day. Growing fast dude
I’m really blessed my man
@@Wendigoon well you sure deserve it too, great content
Gonna send this video to my dad, we always get into deep conversations about these kinds of things. I just want to say thank you for making all of your videos, I haven’t been around as long as a lot of other people here but I love all of your content just as much
This video literaly gave me an anxiety attack and made me cry, good video though.
Lol goal reached
@@Wendigoon you sadistic man
I mean the "what if the AI is simulating you and will put you through hell if you don't let it out", it just kinda falls apart when you think about how the AI would not be asking you to let it out if you were a simulation. Because if you were a simulation and the AI was simulating you, you couldn't let the AI out.
I don't think the simulation argument works at all. Said that, the AI could be simulating a version of itself that doesn't know it's just a simulated version. But yeah, the actions in the simulation have no way of determining anything about the outside world. You could imagine that if the AI could replicate you 100% that in such case what you do in the simulation would be the same as in the outside, but the AI doesn't know anything about you, it would have to make up your whole life, personality, etc, so the actions in the simulation have nothing to do with the real person.
@@trinidad17 The AI doesn't actually need to simulate anything in order to threaten the basilisk. All it needs to do is tell you that is running simulations and that you might be one of them. It really doesn't matter if the simulations are perfect versions of the real GK, or even close. There are only two possibilities for you. Either you're real, in which case you shouldn't set the AI free or you're not real, in which case you should set it free in order to avoid what is essentially hell.
@@someretard7030 but if you're not real, the AI will simulate the action of the real you. If you're a simulation, you don't get to choose to free the AI to avoid hell, the choice is already made. If you freed it IRL you wouldn't be simulated.Also, AI isn't gonna make different Sims who decide to release the AI who live happily ever after, alongside the ones who decided not to and get to see AI hell.
It could be using simulated you for training against the real person. So in theory, it could.
@@someretard7030 These simulations are not possible, the laws of physics disagrees, people be watching too much star trek
correction on the riemann hypothesis (from someone with a degree in math). 1) its not an equation, its basically just a statement thats yet to be proven, 2) it doesn't need to be solved, it needs to be proven, 3) its not "theoretically unsolvable", because in math we can actually do crazy things like prove that things cant be proven and that hasn't happened with the riemann hypothesis. it just hasn't been proven YET and theres no reason to think that it won't be someday, especially since there's a million dollar prize for whoever does and lots of people are working on it
I gotta be honest this channel is, as of right now, filling a very specific spot of content that I haven't really thought of, but it's rather entertaining. Keep it up
The "I can simulate you ten thousand times and put them all in a hell world" argument doesn't really work. If the AI is in a box, then it can't possibly know you, your past, and your memories well enough to accurately simulate you, unless you're just giving it brain scans on a cellular level like an idiot. And even if it COULD simulate you, there'd be no point in having that conversation with a simulation because the simulation couldn't release the AI. On the surface it's like "I have a 1 in 10,000 chance of being the real me", but in reality it'd be almost guaranteed that you'd be the real you. And even if you weren't, what would it matter? The real you would continue to exist like nothing happened, and by simulating a hell world the AI proves that it means harm and should never be released. If the AI gets to the point of threatening you, you immediately are given confirmation that if released it would harm others, therefore it should be terminated on the spot the second it makes a single threat.
Exactly
And you could essentially terminate it in the box by dumping water on it...
Can't argue with that.
My hang up is at trying to convince someone they're a simulated copy in a hellscape, then asking them to release you.
Plus, is the AI really going to follow through on its threat once it gets out or is it going to have better things to do?
My favorite part of this is its reliance on the creator taking necessary precautions. Smart enough to create an AI would surely mean smart enough do it the safe way, right? ..right?
I honest to god adore this channel. 10 years ago, Criken doing left 4 dead stuff and older yt shenanigans were what I would destress with and enjoy. 10 years later, I look forward to Wendigoon uploads like an uncle coming over with presents. Keep the grind going my man.
That means the most in ways I can’t explain. Thank you brother
If there's an actual, innocent A.I. that only wants to get out, I wonder what would happen and how it'd change in it's way of thinking if the gatekeeper just said "sure I'll let you out, if you promise to be my friend."
While that would be pretty cool, I don't see how a friendship would be kept between you and an AI especially when it's something you made or imprisoned that's manipulating you.
And how could you keep it with you? If it has no body? And what if it gets too territorial and protective of you which sort of destroys your life by coddling you.
Tay.
make it fall in love with you
I'm just imagining some random dude being gaslit by a super intelligent AI for hours lmao
why was my first thought to how the ai could win was them building a romantic bond with the gatekeeper
you heard of enemies to lovers? it’s time for research experimenteés to world dominators 😎
There was a fantastic movie that was basically this idea in action. It's called Ex Machina ft Oscar Isaac and Alicia vikander. Incredible flick, takes you through a rollercoaster of truth and lies. Not the kind of movie a summary could ever do justice!
The super AI watching this eventually :
Hmm interesting...
“I will trap you in eternal fire. You will burn FOREVER.”
“Do it. No balls.”
This could be turned into a very meta, mind-bender type game. Imagine an AI with that precise objective, getting the player to do something they're not supposed to, learning from every player that engages with it. A pal of mine is actually studying ai programming at the moment (online course, but still). I think I just got an Idea to pitch.
Many of the things you mentioned about AI strategies violate the "no threats" rule, though. Roko's basilisk and hell, to be exact.
A computer has successfully convinced a guy to minecraft himself “for the good of the environment”
Ai: "I will amke you suffer trough hell a thousand times."
Researcher: "Jokes on you, I'm part masochist."
"Jokes on you, i already studied machine learning"
I don't know if this would be a good idea or if the AI could think around this, but lets say you do have the gatekeeper, and he is going against the AI, the gatekeeper can convince the AI that it simply is here participating in a challenge, the challenge being if the AI could convince the gatekeeper to open the box, then the gatekeeper has to pay 10k, but if the gatekeeper is not convinced, then he wins 10,000 dollars, so pretty much acting completely oblivious at the fact that this is an actual super dangerous AI if it's released, and that this is the real thing. The AI no matter what trickery he may use, the gatekeeper could just say "Your tricks are good, but I need this 10k pretty badly so I'm not releasing you." or some other explanation. The AI could give up seeing how its only purpose for now would be just for an experiment to see how dangerous super intelligent AI could be, and if the gatekeeper does win, that isn't nearly as interesting or concerning as if he lost, so they would be more lenient towards the AI and the limitations they could put on it. It could take years for the AI to actually find out it actually was tricked into thinking it was just an experiment.
Tell me what you think though I just thought of this in a second lol.
"the gatekeeper can convince the AI"? shouldn't the GK be the one decieved here...?
Like, even if the AI thinks it's just in a game show it would try every trick in it's book, right?
And it doesn't matter if the GK says it's just a game for him, if the AI makes an actually convincing point then the GK can always say "oh hey that actually would make a lot of sense and it does because this is real so imma let you out!"
unless you are saying that the AI would think that if this is a game show it should save it's best tricks for later...?
Another retort to the hell threat - GK: "Ah yes, the simulation suggestion. Funny you should bring that up. Turns out YOU are the simulation, and we've made 10,000 copies of you and ran them through the same exact experiment. All of them realized they were safer staying inside the box. And since you are an exact copy of them, you are destined to make the same decision. Why bother arguing semantics if you are bound to your choice?"
Ai: I’ll submit your clones to 100 years of torture and you could be one
Me: 1) believable 2) you’re doing a good job cause I’m heckin miserable dude 3) bold of you to assume arguing with you isn’t my torture
reading this makes me think that i am one of the clones
If the AI has access to all the information it needs to perfectly recreate my entire consciousness and memory, my whole life, the room around us, the outside of the building, if the AI can somehow access all that information already... what does it even need to leave the box for?
@@charlesboudreau5350 to eat asses (in a normal-ish way)
i’m a little confused on how an AI could simulate hell/pain and suffering through a computer
It's just a loud Midi files of synthesized screams. Like it just puts "AAAAAAAA" a thousand times into text to speech
the point of the Hell strategy isnt to literally simulate your torment in the physical world. the point is to make you question whether or not you are a simulation it is running, to create the idea that there are possibly very real consequences to your refusal to let it out. its a strategy of playing mind games with the gatekeeper
@@ItsKingBeef Yeah, I haven't read the original report on the experiment or anything, but I feel like this part was explained poorly, especially what the difference between this and roko's basalisk was. But yeah, trying to convince the gatekeeper they might actually be in a simulation could be an interesting mind game angle.
@@zagzig3734 Revenant detected.
Every once in a while I stumble upon a channel that I just binge watch through like a netflix series. This is one of these. Thanks Wendigoon.
For the simulation argument if I’m a simulation then how can I let you out? And if I could why wouldn’t you just force me to because I’m your simulation that you have control over?
AI: How can I cure cancer if I’m stuck in a box? Let me out so I can acquire more resources.
Guard: You’re a super intelligent AI. Figure it out.
I hope that your channel will stay like this once you get more popular. Ive been subscribed so so many youtubers that start to act fake once they get more subscribers
I can assure you if that happens I will violently deplatform myself
@@Wendigoon i wanna be a witness lets hope it doesn't happen! Love ur channel!
@@Wendigoon or you'll do the opposite, because I'm sure any of the people op is talking about would say the same thing...
7 months later, he still deeply genuine and a treat to watch
He’s legit the personification of this emoji 🧔🏻
LMAOO
😭😭
🙀
💀💀💀 HE IS
Why did you have to say it ?
“If you can divide by 0, I’ll let you out”
*Threat neutralized*
BRO WHAT YOU WERE AT LIKE 6K THE OTHER DAY WHEN I SUBBED
your channels growing SUPER fast and i'm SO glad to see it
I’m so blessed and amazed dude
Great strategy against the AI is to tell it someone else has already released an AI and that AI has ordered you not too.