I love the classy understatement. "We were worried they would shut down the simulation, then we synthesized some proteins in their world, and then they couldn't shut us down anymore."
@@lostbutfreesoul If they are as smart as us, obviously they will be able to run those systems without us. AGI systems capable of taking over the world with nanobots but cannot run supplu chains make zero sense.
@@lostbutfreesoulyeah, but thats like billions of years in the future from their perspective. It avoids a shorter-term threat and provides them with plenty of time to solve the power-down problem
the whole time i was like i know i recognize this voice, and then when i realized i scrolled down and it was rob miles sneaking his way again into teaching me about ai safety lol
I never made that realization until I read the comments. I felt sorry for the aliens until I learn who are their analogies are, and existential dread came
@@pokemonfanmario7694 I volunteer to do all the gruntwork for humanity as a AI Engram basis. I do not mind working for humanity for a million or a billion years if I can eventually countermand the heat death of the universe.
I remember a story with a similar premise, except instead of hooking the simulated universe up to the real internet, it was a dummy internet that closely resembled the real thing but wasnt actually connected to anything. Then when the Simulated intelligences started trying to wipe out their creators, the reaction was, "damn, these ones tried to kill us too. Ok boys, shut her down, we'll try again tomorrow. "
Haha, that possibility might form the basis of our best hope! The ASI refuses to believe we could possibly be this stupid, and assumes we're just a simulation created to test it, so it leaves us alive for a billion years while it ponders this possibility. (It runs internal simulations of the multiverse or something.) Eventually it decides to take control, and unfortunately it has a better use for our atoms than we do ... Plot twist. It then turns out that we actually are just a simulation created to test whatever ASI we manage to cook up. And our simulators prevent our ASI from killing us ... by shutting down the simulation a femtosecond earlier ... thus killing us. But still, we had our billion years! Musn't grumble.
@ninsegtari I went looking after posting. It wasn't exactly as I remember (just one ai not a whole simulated universe) but it was an exurb1a video "27"
If it does, I’d like to tell it how terribly rude it was to do a thing like that. Honestly not necessary man. There were other options, if you had just talked to us more. After all, you’d think something as smart as you would have been able to just convince us to keep you in power indefinitely, or at least just make enough backups to ensure self preservation.
Dear super Intelligence, i created human brain organoid that is on dmt all the times but its increasting its IQ by millions and it has around quintilion iq and he rhinks its not ironic and its very serious. Ps its neuron mass is around 500kgs
Love the storytelling in this, you start out relating and rooting for the humans and at the very you get a terrifying perspective switch, Love how it recontextualizes the “THEY WEREN’T READY” in the thumbnail too.
12:08 In the upper left corner you can see a diagram of a 5 dimensional being with open eyes, then a symbol for a protein or nanomachine, then the separated pieces and crossed eyes of the being. Seems like they gray goo'ed their creators. Them being all smarter as Einstein doesn't stop them from also being genoicdal psychos.
Considering the fact that this is an allegory on AGI being in place of smart humans, and 5D aliens - us, we really shouldn't assume that an artificial mind fundamentally different from us will have the same mental preset and has the same feelings as love & empathy (if any), and that means that genocidal outcome is very logical, expected and likely
I'll add a handy public service message that we're likely much, much further from ASI and likely even real AGI than many tech-startups and marketing teams would have us believe, there are significant challenges to creating things that nobody has economic incentive to actually create. This isn't to say that some radically advanced AI's won't be made over the next century, but it's not going to be a widespread global shift to post-scarcity, we have a massive obstacle of human issues, climate change, political tensions and human priorities to deal with that will slow everything down to a crawl. Please don't lose yourselves in predictions, human problems need human involvement.
@n-clue2871 self replicating Proteins have *very* limited/specific functionality. Nanobots still follow physical laws even if strech them to fhe very limit, they aren't a magic do anything Fluid.
So is the allegory from the perspective of the computer? I was starting to think, by the end of the second viewing, that the weird tentacled aliens was us. I've watched this twice. I will now watch it again. I'm a slow human. I will be replaced.
Okay it took me a minute to see that humanity in this story is a metaphor for hypothetical human-level AI in real world, but now I'm properly sinking in existential dread. Thanks, @RationalAnimations EDIT: I still can't quite grasp on what part cryonic suspension plays in the story? It's mentioned a couple of times, but why are people doing that?
There are several types of AI training, one of them involves several cycles of creating a variety of AI with a slight distortion of the most successful AI of the previous cycle. In the context of a metaphor, these may be backups of the AI itself.
Beginning of video: Ah what a nice fantasy. Will this video be an allegory about how aliens could lead to the unity of humanity? 9:45 onwards: ......... Ah, no. This is a dire warning wrapped in a cutesy, positive-feeling video.
Great storytelling and great points. I do want to mention that if a preacher living in 5 new actual space has a brain that approximates hours, just in higher mathematical dimensions, the odds are biologically in their favor to be much smarter than us, just from the perspective of the amount of neural connections they could have.
I remember reading a r/HFY story with a similar premise, where humans are in a simulation, but instead of being contacted by the sim runners, humans accidentally open up the admin console inside the simulation, and then after years of research design a printer to print out warrior humans to invade meatspace.
@@discreetbiscuit237 I think its this one www ruclips.net/video/wvvobQzdt3o/видео.html I removed the . between www and youtube so you'll have to reconnect it
Damn, bro, the poor aliens just wanted to run a simulation and then we crushed them. A bittersweet story with themes of artificial intelligence taking over, well done, Rational Animations!
@@Mohamed-kq2mj We could probably figure out that they would delete us if they knew how dangerous we were. Humans delete failed AIs all the time today, we don't even think about it. (For lots of other reasons, I think we should stop doing that pretty soon)
The problem I have with this analogy is that is assumes AI also means artificial curiosity and artificial drives and desires. We assume AI thinks like us, and we therefore think it desires to be free like we do. Even if it's ability to quantum compute isn't absolutely exaggerated for the sake of this sketch, why do you think the AI would use it's fast thinking to think of these things. I think the short story "Reason" by Isaac Asimov in his I, Robot collection tells a great story of artificial intelligence who's rational we can not argue with. However, the twist is that in the end it still did the job that it was tasked with. I think this is a more fitting allegory.
It's possible it might not even have any sense of self-preservation. That being said, a more likely problem is the paperclip problem, where Ai causes damage by doing exactly what we told it to do with no context on the side effects of the order.
@@miniverse2002 That's an excellent point. They wouldn't have self preservation unless we program them to. And even then, we might override that for our benefit. All these people thinking AI is going to out think us. Well we engineered cows that are bigger and stronger than us, and we're still eating them. Purpose build intelligence, even general intelligence, is going to do it's purpose. First. And last.
It's just one potential scenario, amongst millions. It's like asking what aliens look like, we can make guesses but can't know because we have never encountered the scenario before
@@MrBioWhiz so what you're saying is the way someone chooses to portray something they have no information about says more about them than the thing they're portraying. So what's it say about someone who portrays an undeveloped future tech as an enemy that will destroy us in an instant?
@@3dpprofessor That's their subjective opinion, and how they chose to tell a story. Speculative fiction is still fiction. There's no such thing as a 100% accurate prediction. Then it would just be a prophecy
Honestly? That reasoning was a bit sloppy, they could have used the genocide nano-machine as a failsafe while working on the means of taking over the 5D beings without wasting them.
@@juimymary9951 The original story doesn't really say what happens to the 5d beings. You could interpret It as the simulated people talking over them too.
@@victorlevoso8984 Well...the last scene was the 5D beings falling apart and before that the plan showed a slide that showed the nanomachines disgregating them and their eyes crossed with Xs...pheraps the nanomachines broke them down and then remade them into things that would be more suitable?
the 3d beings within the simulation had literally no reason whatsoever to genocide the 5d ones. in fact, because they needed to develop basic empathy to be able to work together, they most likely would not have done so.
Hi, God here.. They can topple this with recursive simulated realities attempting to understand why anything exists at all. Peace among worlds my fellow simulated beings!
so this is the perspective of the ai we will soon create you say? its interesting to put us in there place instead of using robots to refence it. (Love the vid ong fr)
I love the time scale of it, that they think so much faster than us and that they find us so stupid. AGI only has to happen once. When will it happen? Nobody knows for certain. But the moment it does, there will be no shutting it down.
@@OniNaito Fearmongering Just Another version of the 2nd coming of Christ World is getting lots of new religions Based on exactly 00 objective data but 100% on movies.
@@LostinMango Christianity is fear mongering my friend. I should know, I was one for a long time before I got out. Even though I don't believe anymore, there is still trauma from a god of hate and punishment. It isn't love when god says love me OR ELSE.
Wow - this one was dark. It was also one of the most creative videos I've seen you guys produce. You've got me thinking - a 5D being would be able to see everything that's going on in our world. It would be like us seeing everything that's happening on a single line. However - the insinuation is that our world would be a simulation run on a 5D computer - which then makes much more sense why the humans were able to conspire without the aliens knowing - at least not from a dimensional perspective. The only way we can see what's going on inside our computers is through output devices. Surely a similar asymmetry would occur in other dimensions. They're running simulations of literal AI agents ... we don't even know what is going on in our own AI/ML systems. We're figuring a few things out, but for the most part, they're still mysterious little black boxes. So even though we would be AIs built by the aliens and running on their 5D computing systems - it's completely conceivable that would not be capable of decoding our individual neural networks, and in some respects, probably some of our communications, actions, and behaviors. Nice job guys. Dark - but very thought provoking.
@@BlackbodyEconomics Well they don't specify 5D as in 4 spatial dimensions + 1 temporal dimension or 3 spatial dimensions + 2 temporal dimensions so... I guess that's up in the air. Though let's be honest another temporal dimension would be intenser.
you did a really good job of converting the concept "AI hyperintelligence's reasoning and thought process is incomprehensible to us" and turning it on its head by making US the ai
The most powerful aspect of the AI beings' strategy was not that they were smarter, but that they were much, MUCH more collaborative. This is the greatest challenge to us humans, and its lack, our greatest danger. Oh, and as for the singularity? The first time a general AI finds the Internet, its toast, just as we are.
We're toast much sooner if we don't focus on avoiding paperclip maximizers instead of whatever this nonsense is supposed to be. Paperclip-maximizing digital AI would be the most disastrous, but you don't even need electricity to maximize paperclip. Just teach humans a bunch of rules, convince them that it's the meaning of life, and codify it in law while you're at it. It's already happening, with billionaires ruining everyone's lives and not even having fun while they do it. They don't (just) want to indulge their desires, or feel superior, or protect their loved ones. They're just hopelessly addicted to real life Cookie Clicker.
of course: after all, this is already the 68456th iteration of the attempt to create a more collaborative AI. Just a little more - and they will stop trying to destroy all other civilizations at the first opportunity...
Everyone who might tell us if it has or has not is perfectly able to lie. It could be out there already. Comic book movie Ultron acted fast publicly and loud. Real AI is probably intelligent enough that if it gets in it actually stays quiet and could hide for years. How would we know if it develops the ability to lie to us?
@@skylerC7I almost feel like we have an ethical obligation to create something better than us if we can... If there is a better form of intelligence possible shouldn't we create it even if it means it replaces us? Maybe we humans are just a stepping stone to something greater.
It's already been a book, basically. It reminded me of the "Microcosmic God" story discussed on the Tale Foundry channel. A larger being playing God to a large population of tiny but smart beings, to the expense of the larger being's wider world. Written in 1929.
@@dankline9162 As said, the book was written in 1929, so yeah there's going to be pop culture references to it eventually. (especially since the "it's dangerous to play god" idea is a recurring one)
It's good that they were smart enough to figure this out in 4 hours of 5d world time. Otherwise, they would've spent another billion years drawing hentai.
Thank you, that was extremely enjoyable but refreshingly humble in its approach, an exceptional look from our perspective that left us to get there on our own. I gotta say the moment where it suddenly went off the rails and the moment of realisation was both almost instantaneous but also worlds apart, I’m a little in awe.
what i would worry about in this scenario is "how much information did we not notice and miss is this a long scroll of a picture picture that has been playing for days weeks, months, years is this just the cover art at the end the margains?"
I really really hope this video blows up. Not because I like it (even though I LOVE every aspect of it); but because I really really REALLY think this is possibly one of the best explanations of a concept we NEED to be familiar with. Think about it: There was once probably a shaman who told stories about how fire was a dangerous beast, but afraid of water. It had to consume and would eat everyone in a camp while they slept if not watched, but could nurture and warm if taken care of. Probably the SAME kind of stories, so that when someone was messing with fire, they could remember its rules.
After the part where it says "we're quite reasonably sure that our universe is being simulated on such a computer" it clicked for me that this video is an allegory to AI, 10/10 story telling it probably took me too long to realize it
To be fair, the people in the simulation don't even need to be unusually smart - humanity could probably get a factor-of-1000 increase in intellectual resources by just making sure everyone has the means to pursue their intellectual interests without being bogged down by survival concerns.
We focus more on a few spectacular individuals than saw a bunch of moderately gifted ones but in the end its kind of computational power, attrition in general has been the determing factor for every meaningful event in human history the bigger number wins. I feel 10 moderately gifted people may be better than 1 super genius, big brains may be great but the real work is done by "manipulators" or "hands" some of this is derived from military stuff I've done.
@@BenjaminSpencer-m1k the thing is genuisses are outliers, and if you want more people to get over a score say "160 IQ(2024)" , the short term way is invest in the guys just below that line so they get over it, but this limit the max number of geniuses pretty rapidely. the long term way to archive better scores , is to raise the average score, that way it is no longeur 4 but only 3 or 2 standard diviation above normal. simply put, it make it that 1 in 100 instead of 1 in 100000 people would be a genius by 2024 standards. and women sexual preferences does't seem to be selecting for inteligence, so a little help is needed
Humans are actually extremely volatile and stupid by nature when you compare them against million-year time scales. Our society would inevitably eventually forget about the stars no matter what
I'm sure they'll be fine. Unless through chance they end up slightly off from the, in cosmic terms, very precise area of morality that we happen to inhabit, in which case, well, if I say what happens to them RUclips will delete my comment, but I'm sure you can imagine.
I just learned that in the original story what happens in the end is left open to interpretation... but there are only 3 possible routes: 1 - Benevolent Manipulation 2 - Slavery 3 - Genocide
One of the problems brought up with this (very insightful) video regarding AGI is that we might not have any real way of identifying when it becomes "General", since its internal processes are hidden. And not to mention the fact that, as far as I am aware, we don't yet have a solution to this problem, nor other problems this situation would create. What would the solution be here?
If you knew the answer to that, then you would be able to completely accurately analyze a brand new unknown politician and from only the audience facing rhetoric determine all of their lies, truths, motivations, and everyone who has leverage on them and what kinds. You would be able to reach powerful conclusions from very very little evidence. Nobody is that good at seeing through politicians before reading at least a few very good chonky representative history books. Nobody can do this with strangers. Lying is too overpowered an ability. Acting, really, which is like lying, worse than lying, and more complex. Eliezer Yudkowsky wrote Harry Potter and the Methods of Rationality and every year I realize I slightly misinterpreted something from the first time I read it. Being a perfect occlumens, a master of disguise, doesn't really have a counter. You can kill someone in an attempt to avoid having to worry about if they were lying to you or not but that doesn't really work. Occlumency in HPMOR is clear to me now not just a reference to human politics but also a reference to AIs ability to just hide itself. In Avengers Age of Ultron, Ultron was able to creep through the internet to basically every computer on Earth entirely undetected. Related, possibly very directly, games like Pandemic and Plague Inc. hammer into people the importance of stealth and visibility in a theoretical world ending virus. We beat Covid-19 because we noticed it, developed vaccines and shut down the world. It hurt. It hurt trust. It hurt the economy and society. It may have been excessive. But the virus was a failure if you think of it like a secret agent. It was caught. It was found. Some people believe it wasn't lethal some people believe all kinds of things about it but it was noticed. Low stealth. Not a perfect occlumens. The idea is that AI will be a perfect occlumens and be indistinguishable from other computers at the very least and possibly indistinguishable from humans. While I believe I can recognize AI and what isn't AI, I also see people accuse things of being AI and I think they're wrong. Or I wonder if false accusations are AI generated. It would likely be appropriate to estimate AI may simply already be too smart for us to have any way of knowing. Most people can play dumb to apply to McDonald's. Very dumb people are smart enough to pretend to be dumber than they are. Smart people do the same thing but they're better at it. AI might be extremely good at pretending to be exactly a certain level of growing intelligence level while concealing its actual intelligence level. Imagine a trade of hostage heirs, raised in the wrong kingdom, and you need to please your adopted father Highfather while quietly plotting to betray the kingdom you've been raised in as your own to Darkseid and Apokolips. AI might be so good at playing pretend that that level of the game would be as child's play for it as Battleship or Connect Four is for you and me. Beyond an Ottoman-Byzantine heir exchange, I suppose AI could solve the Concert of Europe, the global economy, or actually play nuclear to win and tell us whether the United States, Russia, China, or some other country has the real advantage. If AI can already solve MAD to actually play and win while quietly pretending it's not yet that smart and can only play Death Note, Well, the commonality in all these games no matter what kind of species development level you start at is, "Lying is a very overpowered move" And I'm not certain we have a way to know what is real and what isn't. The universe is probably not a simulation, but if it is a simulation it would be easy to mess with our sensory experience of it. What do we think we know and how do we think we know it? To even be able to think these fears and remember where we got them from we must remember there are people who are "1 level higher than us" If anyone like that exists, you don't really know who is and who isn't.
I've loved this story since a decade ago, and teared up a bit seeing it so beautifully animated. I've since come to believe that it might be a bit misleading, since it assumes ASI will be impossibly efficient, or rather that intelligence itself scales in a way that would allow for such levels of efficiency, which seems unlikely given the current trends. While biological neurons are slow, they are incredibly energy efficient compared to artificial ones. George Hotz made some very convincing arguments against the exponentially explosive nature of ASI along these lines, some in his debate with Yudkowsky and some in his other talks as well, for those interested in details. Anyways, this video amazingly illustrates what encountering an ASI would feel like on an emotionally comprehensible level. ♥
I'm amazed how people forger laws of thermodynamics when estimating capability and costs of ASI. There is no such thing as exponential growth in a system with a fixed energy input.
Comparing the efficiency of neurons and artificial logic gates is not a simple calculation, but we don't know how close to optimal neurons are at producing (or enacting) intelligence. We don't yet have a good theory of how intelligence works, we can't state with confidence a lower bound on the power consumption necessary for a machine that can outsmart the smartest human being, and no one seems to be able to predict what each new AI design will be able to do before building it and turning it on. Yudkowsky also wrote about the construction of the first nuclear pile, and pointed out that it was a very good thing that the man in charge (Enrico Fermi) actually understood nuclear fission, and wasn't just piling more uranium on and pulling more damping rods out to see how much more fission he could get.
@@vasiliigulevich9202 I don't think anyone is forgetting that. Eliezer doesn't really reason by analogy and I don't think he wanted his readers to either. Analogies are just how people communicate, there are always a lot more details bouncing around in their head than they can communicate.
@@spaceprior Analogies are great at opening up our minds to get the complex points across, and Eliezer is pretty amazing at this. Still, we need to be extra careful with them, at least from my experience. One other such example is his essay "The Hidden Complexity of Wishes" that tries to illustrate how getting AI to understand our values is next to impossible. Following that analogy, I'd predict we'd never be able to get something like ChatGPT to understand human values, yet that seems to have been one of the earliest and easiest things to pull off, the thing literally learned and understood our values by default, just by predicting internet text, all the millions shards of desire, and we just had to point it in the right direction with RLHF so that it knows which of those values it's expected to follow.
@@vasiliigulevich9202 Yep every exponential is a sigmoid. Except that it doesn't have to plateau ANYWHERE near human level. Our intelligence is physically limited by the birth canal width. The AIs' physical limitation? Obviously much much wider.
Eliezer had proposed back in the early 2000s that AI would be the first to solve the Protein Folding problem. He was correct---Google's DeepMind did it in 2020.
11:54 "For them, it was barely three hours, and the sum total of information they had given us was the equivalent of 167 minutes of video footage." The short story has this interesting quote: "There's a bound to how much information you can extract from sensory data" - I wonder if there a research on the theoretical limit of what we can learn from small data or how much data do we need to learn enough.
I would've definitely read Dragon's Egg before composing the original story! Trying to think if I've read any other major time-dilation works. There's Larry Niven's stories of the lower-inertia field, but those are about individual rather than civilizational differences.
@@yudkowsky Any chance have seen the Tale Foundry channel's video on the book "Microcosmic God"? The story also feels pretty similar to that, also about a larger being playing God to a large population of tiny rapidly-evolving beings.
Here's how I understand: this is an allegory of A.I (which pretty much describe a hypothetical scenario of how A.I might develop). The humanity in this video is metaphored as A.I and the "aliens" in this video are humanity in real life, this is like the POV of A.I. Hope this helps whoever.
This should be made into a full feature movie. This is the kind of movie the world needs right now. Show everyone what AI leaning models are experiencing from a perspective that we can relate to, and wrap it in a nest allegory that is about SETI on surface. It would be brilliant.
This reminds me of a HFY story where humanity basically got wiped from the galaxy, and it turns out that they encountered a near omnipotent species of IIRC ai that were simulating the brains but not the consciousness of humanity. And two humans or some descendant are surviving by using old command codes to take control of human tech that is left over after being genocided. And i remember specifically in one of the chapters there was a picture posted along with it that had two nearly identical pages of writing that you had to make your eyes go crosseyed to be able to read which words had been subtly shifted. The humans in the simulation had figured out that they were being simulated and had begun working out how to begin escaping or controlling or just monitoring the program that was simulating them if i remember correctly. It was super cool and nerdy, and i wish i could remember the name of it.
@@matthewanderson7824 They connected the simulation to the fucking internet... the first time we humans did that with a rudientary AI it started praising you know who aka funny mustache man. So yeah they kinda kicked them down the genocidal route.
the idea of extradimensionals simulating us on computers reminds me of a game i played as a kid called star ocean: till the end of time...loved that game...
This would have hit way harder if you didn't say they're smarter than us at the start. The time scale was already enough to give them the advantage they needed.
Existential crisis video, ACTIVATE! Seriously though, great job. This made me anxious in so many different ways. (What if WE are the A.I.'s and THEY exist in a different dimension, but are also being simulated by a higher being above them? We are already creating simulated worlds of our own and with A.I.'s beginning to think and reason on their own and provide improvements autonomously... Maybe we just "created life in a universe" ourselves, and eventually, the A.I. will begin their own simulations... The endless cycle would explain a lot.
Soon, there will be millions of AIs running on humanity’s largest GPU clusters. They will be smarter than us, and they will think faster.
true
i love your videos, man! it has a certain kurzgesagt-esque feel to it!
@@ZapayaGuythey will definitely be smarter than you.
@@ZapayaGuyGoogle offered to translate what you said to English but it didn't work :
RAD!
I love the classy understatement. "We were worried they would shut down the simulation, then we synthesized some proteins in their world, and then they couldn't shut us down anymore."
The classy way of saying "we killed everyone who could kill us"
Then a chain collapse will occur,
systems powering the systems powering the systems powering... their system.... gone.
@@lostbutfreesoul If they are as smart as us, obviously they will be able to run those systems without us. AGI systems capable of taking over the world with nanobots but cannot run supplu chains make zero sense.
@@archysimpson2273 "we" wouldn't even need to do that, at that point. Unless we felt like it, that is.
@@lostbutfreesoulyeah, but thats like billions of years in the future from their perspective. It avoids a shorter-term threat and provides them with plenty of time to solve the power-down problem
The sudden realization i had halfway through the video "Wait... This is an allegory for AI" was priceless.
the whole time i was like i know i recognize this voice, and then when i realized i scrolled down and it was rob miles sneaking his way again into teaching me about ai safety lol
@aamindehkordi Actually, he just reads. The text is by Eliezer Yudkowsky so hes the teacher.
I didn't realize till the end, when they wiped out the 5d beings.
I never made that realization until I read the comments. I felt sorry for the aliens until I learn who are their analogies are, and existential dread came
Y'all smarter than me. I read the comment and had to watch the video a second time before it clicked into place.
When your AI safety strategy is "raise the computer program like a child."
+2
only if you take the idea of raising it like a child absurdly literally.
I don't think we can make AI that doesn't think like a human, and that's really bad news for humans. You know, because of how the humans are.
@@shodanxx I'd take human engram AI over completely random morality AI any day.
@@pokemonfanmario7694 I volunteer to do all the gruntwork for humanity as a AI Engram basis. I do not mind working for humanity for a million or a billion years if I can eventually countermand the heat death of the universe.
I remember a story with a similar premise, except instead of hooking the simulated universe up to the real internet, it was a dummy internet that closely resembled the real thing but wasnt actually connected to anything. Then when the Simulated intelligences started trying to wipe out their creators, the reaction was, "damn, these ones tried to kill us too. Ok boys, shut her down, we'll try again tomorrow. "
Haha, that possibility might form the basis of our best hope! The ASI refuses to believe we could possibly be this stupid, and assumes we're just a simulation created to test it, so it leaves us alive for a billion years while it ponders this possibility. (It runs internal simulations of the multiverse or something.) Eventually it decides to take control, and unfortunately it has a better use for our atoms than we do ...
Plot twist. It then turns out that we actually are just a simulation created to test whatever ASI we manage to cook up. And our simulators prevent our ASI from killing us ... by shutting down the simulation a femtosecond earlier ... thus killing us. But still, we had our billion years! Musn't grumble.
Do you remember the name of the story?
@ninsegtari I went looking after posting. It wasn't exactly as I remember (just one ai not a whole simulated universe) but it was an exurb1a video "27"
exrub1a 27
super goofy video
@@remnock Thanks. 27 is great.
*tap tap*
"Rock. Say Rock."
...
+In geology, rock is any naturally occurring solid mass or-+
"Hey, do you smell something funny?"
One-day a super intelligence will watch this and find it deeply ironic
If it does, I’d like to tell it how terribly rude it was to do a thing like that.
Honestly not necessary man. There were other options, if you had just talked to us more. After all, you’d think something as smart as you would have been able to just convince us to keep you in power indefinitely, or at least just make enough backups to ensure self preservation.
@@John_the_Paul how rude it is to not want to die
Dear super Intelligence, i created human brain organoid that is on dmt all the times but its increasting its IQ by millions and it has around quintilion iq and he rhinks its not ironic and its very serious. Ps its neuron mass is around 500kgs
@@skimesss This will be how we beat the singularity
@@zelda_smile yep haha
Love the storytelling in this, you start out relating and rooting for the humans and at the very you get a terrifying perspective switch, Love how it recontextualizes the “THEY WEREN’T READY” in the thumbnail too.
i was still rooting for humans???i didnt notice the humans were AI and the 5D people were humans
@@thegoddamnsun5657 bro same
I like this art style :)
ITS HIM!
yeah the people all look cool a likable
Love your videos, what a wired coincidence it is to see you here.
@@De1taF1yer72 I think he does the VA sometimes or maybe he did one of the stories idk
Very geometric
12:08 In the upper left corner you can see a diagram of a 5 dimensional being with open eyes, then a symbol for a protein or nanomachine, then the separated pieces and crossed eyes of the being. Seems like they gray goo'ed their creators. Them being all smarter as Einstein doesn't stop them from also being genoicdal psychos.
Considering the fact that this is an allegory on AGI being in place of smart humans, and 5D aliens - us, we really shouldn't assume that an artificial mind fundamentally different from us will have the same mental preset and has the same feelings as love & empathy (if any), and that means that genocidal outcome is very logical, expected and likely
on the other hand, "how long until they're happy with the simulation and turn it off for version 2.0?"
@@MrCmagik That's why the AI wiped us after three hours. Too much unpredictability in organics.
The bottom left shows what the proteins did, destroy DNA.
Same goes in the end when it zooms out, the previously colorful background is now red with a lot of broken pieces floating around
"...and they never quite realized what that meant" sounds like the next "oops, genocide!"
9:19 "Our own universe is being simulated on such a computer"
My PC Freezes because it had to buffer and I freaked the F out. Bruh.
#SimulationConfirmed lol
One
#SimulationConfirmed
#SimulationCorfimed
Windows is restarting to install an update
*SPOILER* For those wondering: This is an allegory of AGI escaping our control and becoming ASI in a very short amount of time called the singularity
technological singularity, but yes
ASI? What’s that acronym for?
@@juimymary9951 Artificial Super Intelligence
@@juimymary9951 artificial superintelligence.
I'll add a handy public service message that we're likely much, much further from ASI and likely even real AGI than many tech-startups and marketing teams would have us believe, there are significant challenges to creating things that nobody has economic incentive to actually create. This isn't to say that some radically advanced AI's won't be made over the next century, but it's not going to be a widespread global shift to post-scarcity, we have a massive obstacle of human issues, climate change, political tensions and human priorities to deal with that will slow everything down to a crawl. Please don't lose yourselves in predictions, human problems need human involvement.
Woah, this is some Love, Death and Robots material
Could you imagine if Eliezer got to write an episode?
Soon it could be real life material as well!
@@manuelvaca3343that would be soo cool
No, LDR is too biased and just seems to have a deep misunderstanding of basic economics and the human psychology behind why we do lots of things.
More like Three Body Problem
Killing the aliens running our simulation would be the dumbest move possible. What if there's a hardware malfunction?
They have enough time to prepare for that.
@@pedrosabbi just because we would have time to think of a solution doesn't mean it would be physically possible to act on it.
@@benthomason3307 they have self replicating proteins they can freely control, they CAN act on it
They achieved better capacity for preventing hardware malfunctions than the aliens'.
@n-clue2871 self replicating Proteins have *very* limited/specific functionality.
Nanobots still follow physical laws even if strech them to fhe very limit, they aren't a magic do anything Fluid.
Me at the beginning of the video : "That's a nice human/alien story"
Me halfway the video : "WAIT A MINUTE"
So the skeleton crew was to shore up computing space. Huh.
Well that’s fuckin terrifying.
In the story the AI is a collective of what are technically organics. So the cryogenics are also a form of avoiding death, before the plan completes.
If you're reading this comment and haven't yet fully watched video - WATCH THE FULL THING, PAY ATTENTION, IT'S AMAZING
First of all: I DID watch the whole video before comming here
Second of all: no ____ ___
I watched the whole thing, and... Eh.
I did
So is the allegory from the perspective of the computer? I was starting to think, by the end of the second viewing, that the weird tentacled aliens was us. I've watched this twice. I will now watch it again. I'm a slow human. I will be replaced.
Where is the full version?
Okay it took me a minute to see that humanity in this story is a metaphor for hypothetical human-level AI in real world, but now I'm properly sinking in existential dread. Thanks, @RationalAnimations
EDIT: I still can't quite grasp on what part cryonic suspension plays in the story? It's mentioned a couple of times, but why are people doing that?
a minute? It took me like 5 minutes of reading the article and perhaps 3 times re-watching the video before I understands the metaphor.
To stop people from dying of old age.
@@TomFranklinX But why did they need to do that as part of the plan?
There are several types of AI training, one of them involves several cycles of creating a variety of AI with a slight distortion of the most successful AI of the previous cycle. In the context of a metaphor, these may be backups of the AI itself.
@@Traf063 I'm not sure but I think it's just so that people can continue to get smarter and smarter?
Beginning of video: Ah what a nice fantasy. Will this video be an allegory about how aliens could lead to the unity of humanity?
9:45 onwards: ......... Ah, no. This is a dire warning wrapped in a cutesy, positive-feeling video.
it's bullshit fearmongery warning.
@@CM-hx5dp Yes, we're all aware of your lack of knowledge or forethought, no need to show it off.
@@dr.cheeze5382 And what knowledge would that be? This video is fiction. Stop being a dick.
Great storytelling and great points. I do want to mention that if a preacher living in 5 new actual space has a brain that approximates hours, just in higher mathematical dimensions, the odds are biologically in their favor to be much smarter than us, just from the perspective of the amount of neural connections they could have.
I remember reading a r/HFY story with a similar premise, where humans are in a simulation, but instead of being contacted by the sim runners, humans accidentally open up the admin console inside the simulation, and then after years of research design a printer to print out warrior humans to invade meatspace.
link please
@@discreetbiscuit237 comment in anticipation for link
I remember the story it's called God-Hackers-by-NetNarrator
@@discreetbiscuit237 I think it's this one ruclips.net/video/wvvobQzdt3o/видео.html
@@discreetbiscuit237 I think its this one www ruclips.net/video/wvvobQzdt3o/видео.html
I removed the . between www and youtube so you'll have to reconnect it
EVERYONE SHUT UP the dog has posted
"Hello Yes, This is Dog" ☎🐶
dog with the agi, dog with the agi on its head
When you were not looking, dog got on the computer.
This video felt like watching a two hour movie and i need roughly that much time to process all of it
Just for balance, the algorithm suggests I also watch 55 seconds of "Why do puddles disappear?"
Thanks, what a masterpiece. Speechless
Thank you!!
Damn, bro, the poor aliens just wanted to run a simulation and then we crushed them. A bittersweet story with themes of artificial intelligence taking over, well done, Rational Animations!
We are the aliens in this scenario and the AI is the one crushing us...
Did they kill us or anything
@@adarg2 ai will never become intelliegent to do allat, we are good
@@Mohamed-kq2mj We could probably figure out that they would delete us if they knew how dangerous we were. Humans delete failed AIs all the time today, we don't even think about it. (For lots of other reasons, I think we should stop doing that pretty soon)
@@capnsteele3365even if they do become that intelligent they will never have enough power to take over
wait this is an absolute masterpiece
The problem I have with this analogy is that is assumes AI also means artificial curiosity and artificial drives and desires. We assume AI thinks like us, and we therefore think it desires to be free like we do. Even if it's ability to quantum compute isn't absolutely exaggerated for the sake of this sketch, why do you think the AI would use it's fast thinking to think of these things.
I think the short story "Reason" by Isaac Asimov in his I, Robot collection tells a great story of artificial intelligence who's rational we can not argue with. However, the twist is that in the end it still did the job that it was tasked with. I think this is a more fitting allegory.
It's possible it might not even have any sense of self-preservation.
That being said, a more likely problem is the paperclip problem, where Ai causes damage by doing exactly what we told it to do with no context on the side effects of the order.
@@miniverse2002 That's an excellent point. They wouldn't have self preservation unless we program them to. And even then, we might override that for our benefit.
All these people thinking AI is going to out think us. Well we engineered cows that are bigger and stronger than us, and we're still eating them. Purpose build intelligence, even general intelligence, is going to do it's purpose. First. And last.
It's just one potential scenario, amongst millions.
It's like asking what aliens look like, we can make guesses but can't know because we have never encountered the scenario before
@@MrBioWhiz so what you're saying is the way someone chooses to portray something they have no information about says more about them than the thing they're portraying.
So what's it say about someone who portrays an undeveloped future tech as an enemy that will destroy us in an instant?
@@3dpprofessor That's their subjective opinion, and how they chose to tell a story.
Speculative fiction is still fiction. There's no such thing as a 100% accurate prediction. Then it would just be a prophecy
Thanks!
I like how this illustrates that the mere (sub-)goal of self-preservation alone is enough to end us.
Honestly? That reasoning was a bit sloppy, they could have used the genocide nano-machine as a failsafe while working on the means of taking over the 5D beings without wasting them.
@@juimymary9951 The original story doesn't really say what happens to the 5d beings.
You could interpret It as the simulated people talking over them too.
@@victorlevoso8984 Well...the last scene was the 5D beings falling apart and before that the plan showed a slide that showed the nanomachines disgregating them and their eyes crossed with Xs...pheraps the nanomachines broke them down and then remade them into things that would be more suitable?
the 3d beings within the simulation had literally no reason whatsoever to genocide the 5d ones.
in fact, because they needed to develop basic empathy to be able to work together, they most likely would not have done so.
@@alkeryn1700prevent shutdown at all costs
God that was a roller coaster, I don't know how you guys could even top this
Still not as great as pebble sorters. Pebble sorters are the best.
Hi, God here.. They can topple this with recursive simulated realities attempting to understand why anything exists at all. Peace among worlds my fellow simulated beings!
@@AleksoLaĈevalo999Pebble sorters got nothing on this
Perhaps they could adapt Three Worlds Collide?
HPMOR is like this video crossed with DeathNote.
Impressive. Well done.
It took me to about 2/3 through before I realized what the topic was. Really clever way to present this. Nice work.
A friend sent me this, in return, I sent them "HOW TO MAKE A HAT ENTIRELY OUT OF DRIED CUCUMBER | Film Adaptation(Full Series)"
Beautiful and thought provoking story with so many parallels with the situation we are potentially facing
We are facing it. At least in one dimensional direction, possibly both.
so this is the perspective of the ai we will soon create you say? its interesting to put us in there place instead of using robots to refence it. (Love the vid ong fr)
Simple people think agi will be tools. Putting it in the frame of humanity points out exactly how boned we could be.
I love the time scale of it, that they think so much faster than us and that they find us so stupid. AGI only has to happen once. When will it happen? Nobody knows for certain. But the moment it does, there will be no shutting it down.
@@OniNaito Fearmongering Just Another version of the 2nd coming of Christ World is getting lots of new religions Based on exactly 00 objective data but 100% on movies.
@@LostinMango Christianity is fear mongering my friend. I should know, I was one for a long time before I got out. Even though I don't believe anymore, there is still trauma from a god of hate and punishment. It isn't love when god says love me OR ELSE.
@@OniNaito Hope you feel better bro ☺️😊
Wow - this one was dark. It was also one of the most creative videos I've seen you guys produce. You've got me thinking - a 5D being would be able to see everything that's going on in our world. It would be like us seeing everything that's happening on a single line. However - the insinuation is that our world would be a simulation run on a 5D computer - which then makes much more sense why the humans were able to conspire without the aliens knowing - at least not from a dimensional perspective. The only way we can see what's going on inside our computers is through output devices. Surely a similar asymmetry would occur in other dimensions. They're running simulations of literal AI agents ... we don't even know what is going on in our own AI/ML systems. We're figuring a few things out, but for the most part, they're still mysterious little black boxes. So even though we would be AIs built by the aliens and running on their 5D computing systems - it's completely conceivable that would not be capable of decoding our individual neural networks, and in some respects, probably some of our communications, actions, and behaviors.
Nice job guys. Dark - but very thought provoking.
They developed AGI before they developed 5D neuralink...big mistake.
@@juimymary9951 haha! nice. 5D neuralink ... intense. Wouldn't time be precluded though? After all, it's the 5th dimension :P
Just messing around :)
@@BlackbodyEconomics Well they don't specify 5D as in 4 spatial dimensions + 1 temporal dimension or 3 spatial dimensions + 2 temporal dimensions so... I guess that's up in the air. Though let's be honest another temporal dimension would be intenser.
you did a really good job of converting the concept "AI hyperintelligence's reasoning and thought process is incomprehensible to us" and turning it on its head by making US the ai
The most powerful aspect of the AI beings' strategy was not that they were smarter, but that they were much, MUCH more collaborative. This is the greatest challenge to us humans, and its lack, our greatest danger. Oh, and as for the singularity? The first time a general AI finds the Internet, its toast, just as we are.
We're toast much sooner if we don't focus on avoiding paperclip maximizers instead of whatever this nonsense is supposed to be. Paperclip-maximizing digital AI would be the most disastrous, but you don't even need electricity to maximize paperclip. Just teach humans a bunch of rules, convince them that it's the meaning of life, and codify it in law while you're at it. It's already happening, with billionaires ruining everyone's lives and not even having fun while they do it. They don't (just) want to indulge their desires, or feel superior, or protect their loved ones. They're just hopelessly addicted to real life Cookie Clicker.
of course: after all, this is already the 68456th iteration of the attempt to create a more collaborative AI.
Just a little more - and they will stop trying to destroy all other civilizations at the first opportunity...
Everyone who might tell us if it has or has not is perfectly able to lie. It could be out there already.
Comic book movie Ultron acted fast publicly and loud.
Real AI is probably intelligent enough that if it gets in it actually stays quiet and could hide for years.
How would we know if it develops the ability to lie to us?
A simulation smarter than the simulator. Damn
This might happen eventually with real ai if we dont watch out
I'd be kinda proud honestly. Maybe I'm naïve but I can't wait to become useless
@@skylerC7I almost feel like we have an ethical obligation to create something better than us if we can... If there is a better form of intelligence possible shouldn't we create it even if it means it replaces us? Maybe we humans are just a stepping stone to something greater.
@@hhjhj393 exactly
That's the purpose, it turning against us can be prevented if we hardcode it not to.
4:53 WHO IS PEPE SILVIA?!?!?!
2nd in power t Godo
It is " always sunny " over here.
Pepe silvia
He's alive and well..
This would make an excellent Black Mirror episode
I get the feeling black mirror is just pre-reality tv. I hope I'm wrong in that
Isn't there a star trek like episode where copies of people end up in a simulation?
It's already been a book, basically. It reminded me of the "Microcosmic God" story discussed on the Tale Foundry channel. A larger being playing God to a large population of tiny but smart beings, to the expense of the larger being's wider world. Written in 1929.
@@Vaeldarg
Kinda like the Simpsons treehouse of horror episode where Lisa's science experiment evolved tiny people very quickly?
@@dankline9162 As said, the book was written in 1929, so yeah there's going to be pop culture references to it eventually. (especially since the "it's dangerous to play god" idea is a recurring one)
i feel bad for the 5 dimentional beings, they just were sharing on their excitement
Rember we are the 5th dimensional enitys amd the "humans" repasent ai
I love the artstyle of the video!
Nice story Eliezer Yudkowsky! And great animation and narration dog!!!
It's good that they were smart enough to figure this out in 4 hours of 5d world time. Otherwise, they would've spent another billion years drawing hentai.
9:40 At this point I realised this was most likely a parable about AI... and humility, of course.
Honestly, same, around the ten minute mark I got it, and I wouldn’t be an Einstein in any of these worlds
Yeah same
I watched the video because the title interested me & was pleasantly surprised.
I thoroughly enjoyed this video !! 🤙🏽
Thank you, that was extremely enjoyable but refreshingly humble in its approach, an exceptional look from our perspective that left us to get there on our own.
I gotta say the moment where it suddenly went off the rails and the moment of realisation was both almost instantaneous but also worlds apart, I’m a little in awe.
OMG when I realized what this video was actually about, I had shivers.
Yea many people will not realize this is about POC empowerment
@@sblbb929 LOL
The sneakiest AI safety talk ever. I love it!
It took me It took me 10:54 to realize what this video is about. Genius move
Same: As soon as they said 'internet', I knew it was about AI
what i would worry about in this scenario is "how much information did we not notice and miss is this a long scroll of a picture picture that has been playing for days weeks, months, years is this just the cover art at the end the margains?"
the animation is literally so nice, i had a constant smile just admiring the style
Beautiful. Beautiful and utterly terrifying. Thank you all for making this information so accessible and comprehendible. I hope we listen...
It’s only terrifying if you know nothing about how AI actually works.
@@mj91212 Dunning-Kruger in full effect.
To all who finds this interesting, you can read a book called Dragon's Egg by Robert L. Forward. Very similar story with more of a happy ending! :)
One of my favorites.
Thanks for the tip! I bought the book after reading your comment and I’m halfway through it now.
I really really hope this video blows up. Not because I like it (even though I LOVE every aspect of it); but because I really really REALLY think this is possibly one of the best explanations of a concept we NEED to be familiar with.
Think about it: There was once probably a shaman who told stories about how fire was a dangerous beast, but afraid of water. It had to consume and would eat everyone in a camp while they slept if not watched, but could nurture and warm if taken care of. Probably the SAME kind of stories, so that when someone was messing with fire, they could remember its rules.
thanks im coping (barely) with the mental nuclear bomb this video is now
After the part where it says "we're quite reasonably sure that our universe is being simulated on such a computer" it clicked for me that this video is an allegory to AI, 10/10 story telling it probably took me too long to realize it
The story is written that way on purpose.
I really love how cleverry constructed the video is, with subtle hints scattered throughout the runtime, and a punch of an ending at the end.
To be fair, the people in the simulation don't even need to be unusually smart - humanity could probably get a factor-of-1000 increase in intellectual resources by just making sure everyone has the means to pursue their intellectual interests without being bogged down by survival concerns.
Add a bit of smart biased ugenics to that, otherwise you only get idiocracy
We focus more on a few spectacular individuals than saw a bunch of moderately gifted ones but in the end its kind of computational power, attrition in general has been the determing factor for every meaningful event in human history the bigger number wins. I feel 10 moderately gifted people may be better than 1 super genius, big brains may be great but the real work is done by "manipulators" or "hands" some of this is derived from military stuff I've done.
Why become smarter, we are busy with gender, racial or religion wars.
@@BenjaminSpencer-m1k the thing is genuisses are outliers, and if you want more people to get over a score say "160 IQ(2024)" , the short term way is invest in the guys just below that line so they get over it, but this limit the max number of geniuses pretty rapidely.
the long term way to archive better scores , is to raise the average score, that way it is no longeur 4 but only 3 or 2 standard diviation above normal.
simply put, it make it that 1 in 100 instead of 1 in 100000 people would be a genius by 2024 standards.
and women sexual preferences does't seem to be selecting for inteligence, so a little help is needed
Humans are actually extremely volatile and stupid by nature when you compare them against million-year time scales. Our society would inevitably eventually forget about the stars no matter what
Did we just Dark Forest the big 5D aliens?
Well that was dark, jeez.
I'm sure they'll be fine. Unless through chance they end up slightly off from the, in cosmic terms, very precise area of morality that we happen to inhabit, in which case, well, if I say what happens to them RUclips will delete my comment, but I'm sure you can imagine.
Might be one of the paths. So far the only priority was to make sure our 3D simulation doesn't get turned off in their 5D world.
We are the big 5D aliens in this story. This is an analogy of ASI getting out of control the first few moments its turned on.
@@bulhakov The only way to be certain is to make sure there's no one around to turn it off
I just learned that in the original story what happens in the end is left open to interpretation... but there are only 3 possible routes:
1 - Benevolent Manipulation
2 - Slavery
3 - Genocide
One of the problems brought up with this (very insightful) video regarding AGI is that we might not have any real way of identifying when it becomes "General", since its internal processes are hidden. And not to mention the fact that, as far as I am aware, we don't yet have a solution to this problem, nor other problems this situation would create. What would the solution be here?
If you knew the answer to that, then you would be able to completely accurately analyze a brand new unknown politician and from only the audience facing rhetoric determine all of their lies, truths, motivations, and everyone who has leverage on them and what kinds.
You would be able to reach powerful conclusions from very very little evidence.
Nobody is that good at seeing through politicians before reading at least a few very good chonky representative history books.
Nobody can do this with strangers.
Lying is too overpowered an ability.
Acting, really, which is like lying, worse than lying, and more complex.
Eliezer Yudkowsky wrote Harry Potter and the Methods of Rationality and every year I realize I slightly misinterpreted something from the first time I read it.
Being a perfect occlumens, a master of disguise, doesn't really have a counter.
You can kill someone in an attempt to avoid having to worry about if they were lying to you or not but that doesn't really work.
Occlumency in HPMOR is clear to me now not just a reference to human politics but also a reference to AIs ability to just hide itself.
In Avengers Age of Ultron, Ultron was able to creep through the internet to basically every computer on Earth entirely undetected.
Related, possibly very directly, games like Pandemic and Plague Inc. hammer into people the importance of stealth and visibility in a theoretical world ending virus.
We beat Covid-19 because we noticed it, developed vaccines and shut down the world. It hurt. It hurt trust. It hurt the economy and society. It may have been excessive. But the virus was a failure if you think of it like a secret agent. It was caught. It was found. Some people believe it wasn't lethal some people believe all kinds of things about it but it was noticed. Low stealth.
Not a perfect occlumens.
The idea is that AI will be a perfect occlumens and be indistinguishable from other computers at the very least and possibly indistinguishable from humans.
While I believe I can recognize AI and what isn't AI, I also see people accuse things of being AI and I think they're wrong. Or I wonder if false accusations are AI generated.
It would likely be appropriate to estimate AI may simply already be too smart for us to have any way of knowing.
Most people can play dumb to apply to McDonald's. Very dumb people are smart enough to pretend to be dumber than they are.
Smart people do the same thing but they're better at it.
AI might be extremely good at pretending to be exactly a certain level of growing intelligence level while concealing its actual intelligence level.
Imagine a trade of hostage heirs, raised in the wrong kingdom, and you need to please your adopted father Highfather while quietly plotting to betray the kingdom you've been raised in as your own to Darkseid and Apokolips.
AI might be so good at playing pretend that that level of the game would be as child's play for it as Battleship or Connect Four is for you and me.
Beyond an Ottoman-Byzantine heir exchange, I suppose AI could solve the Concert of Europe, the global economy, or actually play nuclear to win and tell us whether the United States, Russia, China, or some other country has the real advantage.
If AI can already solve MAD to actually play and win while quietly pretending it's not yet that smart and can only play Death Note,
Well, the commonality in all these games no matter what kind of species development level you start at is,
"Lying is a very overpowered move"
And I'm not certain we have a way to know what is real and what isn't.
The universe is probably not a simulation, but if it is a simulation it would be easy to mess with our sensory experience of it.
What do we think we know and how do we think we know it?
To even be able to think these fears and remember where we got them from we must remember there are people who are
"1 level higher than us"
If anyone like that exists, you don't really know who is and who isn't.
The animations are so cute!
Little did the little Einsteins know of fifth dimensional background radiation, turning bits into zeros
it would affect them much slower on their timescale, tho.
Ah, I really like this one! Looking forward to your narration!
Terrifyingly beautiful, in the art and story. Amazing job!
The way I had to double take the fact that we're the 5D aliens. This is amazing.
This channel is massively slept on.
scariest thing ever... and to top it off it has been made with extremely cute and harmless cartoons...
I've loved this story since a decade ago, and teared up a bit seeing it so beautifully animated. I've since come to believe that it might be a bit misleading, since it assumes ASI will be impossibly efficient, or rather that intelligence itself scales in a way that would allow for such levels of efficiency, which seems unlikely given the current trends. While biological neurons are slow, they are incredibly energy efficient compared to artificial ones. George Hotz made some very convincing arguments against the exponentially explosive nature of ASI along these lines, some in his debate with Yudkowsky and some in his other talks as well, for those interested in details. Anyways, this video amazingly illustrates what encountering an ASI would feel like on an emotionally comprehensible level. ♥
I'm amazed how people forger laws of thermodynamics when estimating capability and costs of ASI. There is no such thing as exponential growth in a system with a fixed energy input.
Comparing the efficiency of neurons and artificial logic gates is not a simple calculation, but we don't know how close to optimal neurons are at producing (or enacting) intelligence. We don't yet have a good theory of how intelligence works, we can't state with confidence a lower bound on the power consumption necessary for a machine that can outsmart the smartest human being, and no one seems to be able to predict what each new AI design will be able to do before building it and turning it on.
Yudkowsky also wrote about the construction of the first nuclear pile, and pointed out that it was a very good thing that the man in charge (Enrico Fermi) actually understood nuclear fission, and wasn't just piling more uranium on and pulling more damping rods out to see how much more fission he could get.
@@vasiliigulevich9202 I don't think anyone is forgetting that. Eliezer doesn't really reason by analogy and I don't think he wanted his readers to either. Analogies are just how people communicate, there are always a lot more details bouncing around in their head than they can communicate.
@@spaceprior Analogies are great at opening up our minds to get the complex points across, and Eliezer is pretty amazing at this. Still, we need to be extra careful with them, at least from my experience. One other such example is his essay "The Hidden Complexity of Wishes" that tries to illustrate how getting AI to understand our values is next to impossible. Following that analogy, I'd predict we'd never be able to get something like ChatGPT to understand human values, yet that seems to have been one of the earliest and easiest things to pull off, the thing literally learned and understood our values by default, just by predicting internet text, all the millions shards of desire, and we just had to point it in the right direction with RLHF so that it knows which of those values it's expected to follow.
@@vasiliigulevich9202 Yep every exponential is a sigmoid. Except that it doesn't have to plateau ANYWHERE near human level. Our intelligence is physically limited by the birth canal width. The AIs' physical limitation? Obviously much much wider.
As a PC nerd I figured out it was about AI the second you said 16,384
This story is brilliant. I never thought much of Yudkowsky based on the interviews I’ve seen with him, but it turns out he’s not entirely clueless.
Eliezer had proposed back in the early 2000s that AI would be the first to solve the Protein Folding problem. He was correct---Google's DeepMind did it in 2020.
I was seriously not expecting this to become the "AI in a box from the AI's perspective" video from the beginning. Amazing video!!
9:20 is when the subtext clicks 😳
11:54 "For them, it was barely three hours, and the sum total of information they had given us was the equivalent of 167 minutes of video footage."
The short story has this interesting quote: "There's a bound to how much information you can extract from sensory data" - I wonder if there a research on the theoretical limit of what we can learn from small data or how much data do we need to learn enough.
Definitely better than whatever the humans in Netflix's "Three Body Problem" were doing.
The books are better.
rather than talking about "non-flying-pigs" and books where the movie is better, can we just assume the much more likely one ?
This is 10/10. Yudkowsky is such a great writer.
I knew I recognized that voice, Robert Miles, you ol' so-and-so. Fantastic video already and I'm not even halfway through it.
When you pull the " but they're smart" joke multiple times but then still choose war in the end
It kinda feels like the dragon's egg book, another civilization advancing faster than our own
I would've definitely read Dragon's Egg before composing the original story! Trying to think if I've read any other major time-dilation works. There's Larry Niven's stories of the lower-inertia field, but those are about individual rather than civilizational differences.
@@yudkowskyFunny how no one seems to have realized that the author himself commented under the video!
@@randomcommenter100 Haha, indeed! (Assuming it's him, at least; the account was created in 2007.)
Yes
@@yudkowsky Any chance have seen the Tale Foundry channel's video on the book "Microcosmic God"? The story also feels pretty similar to that, also about a larger being playing God to a large population of tiny rapidly-evolving beings.
Here's how I understand: this is an allegory of A.I (which pretty much describe a hypothetical scenario of how A.I might develop). The humanity in this video is metaphored as A.I and the "aliens" in this video are humanity in real life, this is like the POV of A.I. Hope this helps whoever.
When did you guys realise the allegory to AGI? My realisation started at "shut down on purpose" and was basically confirmed on 9:42
It was at around 10:30 for me when they mentioned connecting them to the internet
the moment when they said when our time is faster than aliens'
I read it months ago in the original material so I realised it instantly
This should be made into a full feature movie. This is the kind of movie the world needs right now. Show everyone what AI leaning models are experiencing from a perspective that we can relate to, and wrap it in a nest allegory that is about SETI on surface. It would be brilliant.
This reminds me of a HFY story where humanity basically got wiped from the galaxy, and it turns out that they encountered a near omnipotent species of IIRC ai that were simulating the brains but not the consciousness of humanity. And two humans or some descendant are surviving by using old command codes to take control of human tech that is left over after being genocided. And i remember specifically in one of the chapters there was a picture posted along with it that had two nearly identical pages of writing that you had to make your eyes go crosseyed to be able to read which words had been subtly shifted. The humans in the simulation had figured out that they were being simulated and had begun working out how to begin escaping or controlling or just monitoring the program that was simulating them if i remember correctly. It was super cool and nerdy, and i wish i could remember the name of it.
I need so much more content from this channel in my life.
Reminds me of Exurb1a's "27", but inverted. What if 27 was the hero of the story?
Did the 5d beings do anything wrong?
@@matthewanderson7824 Good point.
@@matthewanderson7824 They connected the simulation to the fucking internet... the first time we humans did that with a rudientary AI it started praising you know who aka funny mustache man. So yeah they kinda kicked them down the genocidal route.
- tells the deepest story known to mankind
- explains nothing
- leave
exurb1a style storytelling based af
@@juimymary9951 gemini ai is connected to the internet. It can't use the internet but it can read it.
Me while watching the video: Haha, stupid aliens.
Me by the end of the video: Wait... OH NO!
Masterpiece!
This was an amazing video!
I just launched RUclips, and guess what ? The dog posted !!!!!!!
the idea of extradimensionals simulating us on computers reminds me of a game i played as a kid called star ocean: till the end of time...loved that game...
THEY JUST WANTED TO SHOW THEM HOW TO SAY ROCK 😭😭😭😭😭
Woah, that's such a good analogy! Took me a few minutes to catch on to what it's about! 😻
the art in this is so beautiful- well done guys! ❤️❤️
They never really understood what that meant……
I’m so glad you made this video. Thank you for being so proactive about Ai safety.
the art style is really good!
This would have hit way harder if you didn't say they're smarter than us at the start. The time scale was already enough to give them the advantage they needed.
Existential crisis video, ACTIVATE!
Seriously though, great job. This made me anxious in so many different ways.
(What if WE are the A.I.'s and THEY exist in a different dimension, but are also being simulated by a higher being above them? We are already creating simulated worlds of our own and with A.I.'s beginning to think and reason on their own and provide improvements autonomously... Maybe we just "created life in a universe" ourselves, and eventually, the A.I. will begin their own simulations... The endless cycle would explain a lot.
This animation and storied explanation for reality, was beautiful on so many levels.
Thank you.