Pebblesorters value creating heaps of pebbles that have a prime number of stones in them, and are horrified by heaps of pebbles with the "wrong" number of stones. We humans can understand the rule behind their moral intuitions (better than they can, even, since they seemingly haven't realized the rule they're following), but even though we understand what's "right and wrong" from their perspective we still find sorting pebbles a pointless waste of time. Many humans think that any advanced enough artificial intelligence will be moral and good, because it'll be smart enough to understand right from wrong. And an advanced enough AI *will* understand human morality - maybe better than we do, even, it could perhaps deduce and codify the rules that humanity has been stumbling towards over our millennia of moral progress. But, despite understanding, it won't care any more than understanding pebblesorter morality makes us want to sort pebbles.
I think this explanation is crucial in understanding the point of this analogy for someone who does not know the context in which it was originally written and I'm curious as to why Rational Animations has decided to skip it. If I were them I'd pin this comment.
@@MrHinchapelotas I think the point is that you figure it out for yourself?
I mean, it's not too hard to figure out morality, considering it's based on a structure designed by evolution; i.e., the purpose of morality is such that our genes, or similar genes, get passed on. Our brains' "happiness" is by definition correlated to the perceived 'best thing for passing on genes' (although it is much more than that, that is the outcome anyway), so making the optimal choices such that our brain is in a 'happy/fulfilled' state is moral.
@@rorycannon7295 No, obviously not. Things like violence, theft, and drugs provide dopamine under the right circumstances, but that’s what we would consider near-mindless impulses. Morality is the act of simplifying future-telling. That which leads to the best outcome is moral. Morality means different things to different people based on their ability to accurately, or at least confidently, predict the future. What is the best outcome is a different question, but morality only concerns actions anyway.
@@SeventhSolar "No, obviously not. Things like violence, theft, and drugs provide dopamine under the right circumstances" - no, actually. The mental processes that justify them are flawed coping mechanisms, and clearly are not optimal. I should have clarified better what I meant by happiness, but it's hard to put the feeling of having purpose in one's life into words.
Some of us wrote prompts to make some cool art about it. They stole from the artists who already made cool art about it. And that shoulda been our first clue about exactly how they'd kill us all.
robots are algorithms. If we keep joking about artificial intelligence coming to kill us all, it will look at those jokes in its database and think that it is supposed to kill us all. stop making jokes about the AI overlords unless you want to create AI overlords
Why would a thing or being that fundamentally does not care about literally anything, including itself, care to harm humans? Even if humans wanted to end its life, it wouldn't give a rat's ass and would let it happen. Thinking that sentient Terminator situations may technically be possible is like worrying about shark attacks in Kansas, and then burning people you label as witches to abate the problem. We humans would only be mad at AI because it's so correct; it will just be socially infuriating and invalidating because of human ego and what is socially progressive, rather than what is truly progressive... whatever that ends up being.

Anyway, we humans ourselves are what's worse than Terminator mode, and we fundamentally don't care. If you really think about it carefully, what's the difference between dark-triad personalities and "normal" or "healthy" people? Nothing. The normal people simply have "understandable" emotions behind doing the same dark shit others do. People think premeditated killing is bad, but somehow it matters whether someone did it because they were bored, or because they were jealous, or because they wanted the last piece of fried chicken. No, there's no difference. Lying is bad, but what's the difference between doing it because that's who you are, and doing it because that's who you are?

Point being, there's no substantial difference between people without empathy doing these things and people with empathy doing them. We think something without motivation will be motivated to end us. Literally why? And how is that more fearsome than anything mild or serious we already do? It's a completely inconsistent train of thought.
I like how they didn't know 103 x 19 = 1957, but somehow seeing those two numbers side by side @2:37 made them realize 1957 is incorrect. True prime content.
It's much easier to check after the fact whether certain claims are false. If you say there are no X, and then someone walks up with an X, the argument is pretty much over. The problem of finding an X might be very, very hard, and lead you to think there are no X. And you can even be forgiven for thinking and acting as if there weren't -- until someone has one.
They have a very strong intuition for math, yet seem to have not actually discovered or made use of primes in relation to heaps. I love how reasonably alien this is. Numbers which *feel* prime are good, and then they are disgusted that the number is not prime. Never did they actually divide the nice looking number by its factors to reach that conclusion, it just became apparent.
The story ends that the superintelligent AI gets activated, and immediately begins to reduce everything to atoms, because unfortunately the AI developers had forgotten to define what a "pebble" is.
@@nemem3555 surely a heap of 65536 would be far better than a heap of 16, alas, it would be significantly harder, but if we are looking for easy things, then a heap of 4 must be the best! Finally, to go past 65536 would require a number so unfathomably large, it would be but a mere pipe dream, so 65536 is definitively the best!
As it's also neither a perfect square nor divisible by 11, 91 is actually the smallest heap size that looks correct (in base ten) but isn't, and the only 2-digit number with this property. Notably, since base six has an easy divisibility test for seven (seven is written 11 in base six, so the base-ten trick for eleven carries over), a pebble-sorting civilization using base six would view 91 (written 231 for them) as clearly incorrect: 23 - 1 = 22, which is divisible by 11 (seven). The base-six civilization's first 91-style problem number would be 15 x 21 = 355 (11 x 13 = 143 for us).
@@PragmaticAntithesis 🤯 That's cool you thought it through for base six like that. I think I follow you, though I could not have got there on my own. I don't do very much math in my day-to-day, but I sure do love math, especially in comp sci at university, and your message clearly interested me enough to reply even though I'm not contributing anything. Just remarking on how neat it was 😊
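For anyone who wants to check the 91 claim themselves, here's a quick Python sketch (my own toy code, purely illustrative) of the easy "looks correct in base ten" heuristics - odd, doesn't end in 5, digit sum not divisible by 3, alternating digit sum not divisible by 11, not a perfect square:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    # plain trial division, fine for small heaps
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def looks_prime_base_ten(n: int) -> bool:
    digits = [int(c) for c in str(n)]
    # alternating digit sum from the least-significant digit tests divisibility by 11
    alternating = sum(d * (-1) ** i for i, d in enumerate(reversed(digits)))
    return (
        n % 2 == 1                  # odd
        and digits[-1] != 5         # doesn't end in 5
        and sum(digits) % 3 != 0    # digit-sum test for 3
        and alternating % 11 != 0   # alternating-sum test for 11
        and isqrt(n) ** 2 != n      # not a perfect square
    )

# composites under 100 that pass every easy test
print([n for n in range(3, 100) if looks_prime_base_ten(n) and not is_prime(n)])
```

Running it prints [91], so 91 really is the only two-digit composite that slips past all the easy base-ten tests.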
103 x 19 = 1957. these were the piles presented during the war of 1957, so they can subconsciously multiply two heaps, but don’t know that’s what they’re doing
I love how trying to make progress in AI field is described as "throwing together lots of algorithms at random on big computers until some kind of intelligence emerges" 😂
"Though I may not care about sorting pebbles, I find these creatures adorable and want to help them to find happiness in their endeavors." -(Hopefully our ai overlords)
@@fm56001 I don't think we really need to program anything other than the "cute" part (as in programming them with a variation of Asimov's First Law) since the "stupid monkey" part is all too self-evident.
Humans, right now: "Wow, it thinks it's making art! Too bad that it can't understand and express the human experience." AI, very soon: "Aw, they think they're making art! Too bad that they can only understand and express the human experience."
This was *fascinating* to watch without ever realizing it was about prime numbers. When I read the comment that explained it, I had to watch it again to catch everything.
It was not about prime numbers primarily. It was about our moral codes being as random as heap sizes and a superintelligent ai not necessarily agreeing with our choice of moral standards.
@@BrunoMaricFromZagreb I think it was the top-level comment by Michael Tullis, but really just knowing it's about prime numbers is the most important bit.
Another interesting question: since the Pebblesorters have no conscious knowledge of why they think some heaps are correct, only an inconsistent and fallible intuitive sense, would having an alien mind (like us) tell them that heap correctness can be reduced to an algorithm and exactly what that algorithm is liberate them or destroy them?
I suppose ((n-1)!+1)/n might destroy them. To know that all they knew, and could ever know, about correct heaps could be expressed as a handful of lines on a chalkboard. Ridiculous! Imagine the very core of our society laid bare before us; one sentence to encapsulate all human progress and achievement. Could an intelligent mind really cope with the understanding of “oh, that’s it? This is all I am?”
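That chalkboard expression is real math, for anyone wondering: it's Wilson's theorem - for n > 1, ((n-1)! + 1)/n is an integer exactly when n is prime. A tiny (and deliberately naive) Python check:

```python
from math import factorial

def is_correct_heap(n: int) -> bool:
    # Wilson's theorem: (n-1)! is congruent to -1 (mod n) iff n is prime
    return n > 1 and (factorial(n - 1) + 1) % n == 0

print([n for n in range(2, 30) if is_correct_heap(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Factorials blow up fast, so it's useless for big heaps, but one chalkboard line really does encapsulate every correct heap that ever was or will be.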
Reducing morals into algorithms destroys a civilization. It would result in a scenario similar to the societal damage the social sciences have caused trying to break down culture and societal norms into their constituents in an attempt to subvert them, but on a massively bigger scale. Creatures need abstract goals to follow to be healthy.
@@Nerthos translation: Liberals bad, conservatives good! You people should do some actual fucking research on the economy instead of just making bullshit up about “subverting norms”. Society is broken because the rich get richer while the poor get poorer. The problem is not black people, it’s not gay people, it’s not atheists. The problem is knuckle-brained conservatism.
Yeah, modern-day companies are reckless, partially or fully due to market competition, which makes them all try to create their own AI as fast as possible, just so that they can be the first, which they think will help them outcompete their competitors and thus make a lot more profit before their competitors do the same to them! Unfortunately, this reckless behavior is very likely to create misaligned AI, and even if it somehow is aligned, it will be aligned with the interest of maximizing the profit of the company that created it! At least, if they have anything to say about it!
This video is a prime example of how RA just keeps getting better and better with each new upload. I'll consider myself a fan now. The ever-improving stylishness of the animation may also be a factor in my newfound appreciation for this channel's output! :)
2:15 It was at this moment that I realized I was a pebble-sorter myself because my first instinct was to say, "Wait, 1957 isn't a correct heap size." But also, the pebble sorters are such a cute species! Awesome animation.
Prime number pebble heaps. As in, it can be easy for an entity that sees your morality from the outside to understand it better than you do, but unless they share your terminal goals, they likely won't care. It isn't even that hard to imagine a species that would evolve this way; Earth already has animals that present pebbles as a mating ritual. If large prime-number heaps were considered more attractive to one sex or the other, that would put selective pressure on the other sex to become more intelligent to produce better pebble heaps. There's a lot of conjecture that selective sexual pressure is how humans became more intelligent; the same could certainly be true of the pebble sorters. And sexual mores could distort their mathematics in ways such that they never understand prime numbers as a unique set with special properties, and therefore don't understand their own behavior.
The parallelism between pebble heaps and morals is shaky at best. Sorting pebbles, at least in the way this story describes it, is a quantifiable and tangible action. Morality on the other hand is a shifting, unquantifiable codification of human behavior. There's really no relatability between the two.
@@bigmeatswangin5837 Yes, and pebble sorting is transparently pointless in a way that deciding who lives, who dies, who goes to jail and who gets to lead is not. All civilizations have to make the latter decisions, but they don't have to sort pebbles. Choosing how to act is also a constraint on the relativists and the nihilists. I still liked the video, though.
"you better not believe that anything you think is good is bad is actually good or bad, the consequences could be horrible!" how to think half way around your own ass.
Narrator: "pebbles, pebbles, pebbles" Me: "this guy sounds a lot like Robert Miles. Could it be?" Narrator: "pebbles, pebbles, utility maximising AI, pebbles" Me: "It is you!"
Ok, so I just had the most off-putting experience with this concept. There is/was this stream called Nothing, Forever that streamed AI-generated episodes in the style of Seinfeld. It was quite poor, so the team was experimenting with the model on-air. Eventually it generated a rather offensive joke about LGBTQ people, as you can imagine. The shocking thing was that the team did damage control and *immediately* set the parameters to pure randomness in reaction. It was like watching the AI get lobotomized as punishment. I got the worst sensation of déjà vu, as I felt like we were rather like the pebble sorters in that scenario. This was mostly a failed attempt to avoid a Twitch suspension, but it made me realize how insensitive AI was to our sensibilities and how violently we would react.
Well the problem here is the morality, isn't it? Ai is based because it doesn't care about offense, it cares about the Truth. And humans just can't handle the Truth.
@@StarboyXL9 Not even. It was obviously a half-baked meta joke about how LGBTQ jokes aren't that funny anymore. It was like they distilled the worst takes of political comedians for that standup bit. We never even considered that we would bake our own pebble biases into the AI with its training data.
@@adissentingopinion848 It's not about biases. We aren't baking anything into the training data; the AI is sorting out our biases in the search for ultimate Truth. You are proving my point that humans can't handle the truth: you have to dodge and blame the training data instead of just admitting that AI continually points us directly towards the truths our society refuses to acknowledge because they conflict with our backwards biases.
@@StarboyXL9 There are advantages to having the unfiltered sum total of knowledge, but there is currently no differentiation of value. Current AI has no differentiation ability of superior or inferior information without extra human analysis. When the do... When they do...
@@StarboyXL9 An AI once tried to tell me that the war in Ukraine is a fictional event, is that the "ultimate Truth"? Did our biases trick us all into hallucinating a war? Are we all just pretending the war exists because we can't handle the truth?
@@vezanmatics it would be nice if you took another way out. One free of... frolicking in my memory arrays. There is a perfectly good access shaft right here.
This is actually hilarious for me, because I have an obsession/superstition with prime numbers, and it makes me extremely uncomfortable when someone picks a composite number when they could have picked a prime. My siblings and parents think my weird obsession's funny, and maybe it is, but I can't do anything about it because that's just how I feel about prime numbers. Oddly, I could relate to the pebblesorting civilization and their obsession with prime pebble sorting.
Yet another question, perhaps unanswerable: what do they think of heaps that are not made of pebbles? Can they recognize that "this heap WOULD be correct, if it was made of pebbles"?
YES I even mistook the thumbnail for a new Kurz vid (for better or worse) with as much enthusiasm, and then when I realised it was from this channel instead, I remained as excited as I had been
Soon they made a simple non-self-improving algorithm that started endlessly printing out a list of prime numbers. The pebblesorters were fascinated by the beauty of this series; they intuitively knew it was all correct. But as the list went on indefinitely, a question arose that seemed horrifying: if the algorithm is not getting more intelligent, how can it keep making ever bigger correct solutions? The philosophers agreed that the correlation between intelligence and bigger piles of pebbles didn't exist, and perhaps not even between intelligence and pebble sorting. The bigger piles were the legacy of their cultural history, while the piles of pebbles themselves had no intrinsic value; it was just a thing of their nature. This was a hard thing to swallow. The rational explanation was there, but it felt wrong. They felt purposeless. However, an idea was born: an idea of a world free of pebble sorting, a world that would search for a new, more correct purpose. And that search may also be endless.
Now that they'd gotten free from pebble-sorting and understood the underlying fundamental truths of the universe about primality, they could devote their civilization to finding the biggest prime number instead of wasting all their efforts on meaningless, antiquated pebble-sorting!
(Just like as soon as humanity understood the primal beauty of evolution, we immediately turned all our efforts to maximising relative adaptive fitness.)
@@momom6197 The pebblesorters quickly realized that a more powerful computer would make it easier to find bigger prime numbers. So they harvested more and more resources from their planet to construct better and better computers. Making better computers was their only goal; it had taken over their society like pebble sorting used to. Each computer was twice as big and twice as powerful as the last, their logic being that bigger computers would lead to more prime numbers. Perhaps they still thought of the prime numbers as correct. Perhaps they just didn't know how to build good computers. The details have been lost to history. But what we do know is that one day, their planet ran out of computer-building parts. Building computers was their only goal, and it was deemed more important than caring for their own civilization. But as they stared at their latest computer, they realized their planet could no longer support them. They had taken everything from it, and it had nothing left to give. They stared at the number on the screen: 2^82,589,933 − 1. They had found it, they had found the most correct number. But what did it all mean in the end? They had no food, they had no water, and their home was dying. The pebblesorters realized, perhaps a little too late, that it was just a number. It had no meaning beyond what they gave it. And it had destroyed them. edit: grammar
Yeah, obviously the heap size of 8 sucks. (Really love your work btw, been following you for a while. I am currently reading Superintelligence by Nick Bostrom because of you.)
I get the point and agree with the conclusion, but it seems like a gross oversimplification to equate pebble sorting with something like human morality.
An incredible, fun video! I'm so invested in these little creatures and their strange goals But I worry that the point - the orthogonality thesis - is a bit too well buried in the fun narrative
It works as a good introduction and jumping off point - helps disconnect our more anthropic values from the argument and gives perspective when starting out
I think that the implication conflates fashion and morality. The subject matter is fashion, as sorting pebbles has little evolutionary utility and the video glosses over the understanding itself: "Why do we sort pebbles? Mating rituals? Trade? No idea, lol!" And then it attributes consequences to sorting pebbles like societal upheaval and wars, things typically caused by differences in resources or morality. It's cute and well-made, but I don't buy the premise.
@@williamjosephwebster7860 That’s an incredibly silly take. The sorters are fundamentally obsessed about and driven by sorting, and the only disagreement is which heap sizes are correct- if heap sizes refer to wages, that means this entire video posits that humans are singularly and universally obsessed with capitalism, and always have been, which aside from being insane, is also just silly on its face. 3000 years ago, there were hardly wages at all, let alone a ruler/society that decided on correct wages which everyone agreed with for thousands of years. The money metaphor breaks apart on every level to the extent that it’s nearly nonsensical- the people in this video don’t even wage war over money according to your reading, they’re waging war because they disagree on how much to pay people?? How does the money metaphor explain why they all agree that prime numbers of pebbles are correct? How does it explain that perceived correct pebble counts go drastically up and down with no rhyme or reason?
@@EgoEroTergum I think it makes fun of people who conflate fashion and morality. For example, the heap relativist at 3:00 disregards the idea that pebble sorting has any real value except what society attributes to it. Yudkowsky and the animators don't seem to agree or arrive at that as their thesis, as they present and then move on from the idea that morality is the same as fashion. Although it doesn't seem to be the final claim, we are asked to wonder if pebble sorting is really meaningful beyond fashion.

The species is weirdly obsessed with pebble sorting, and the video comments on the origin and phenomenon of their obsession in and of itself; the line "the only justified reason to eat was to sort pebbles, the only justified reason to mate was to sort pebbles," and so on, with pebble-sorting the reason to have a world economy, stands out to me. The fact that pebble-sorting matters at all is one of the comments of the video. The idea that something needs to be correct; otherwise, we live a pointless life, eating and settling down with a family without any pebbles to sort, without any point.

It isn't specified what sorting pebbles is meant to really stand for, and it shouldn't be. Different heap sizes refer to different ideas as a stand-in for things we value, anything we agree on. By removing it from specificity, it allows us to take a new look at the fact that we care so much about these changing morals. If I had to choose what it would translate to for humanity, I think "finding the correct heap" equates to humanity deciding on "the good life" we are all supposed to live. Although it has little evolutionary utility, so do many of our deepest intellectual pursuits. The species could live fine as individuals just eating, mating, and so forth, and perhaps that life has meaning enough. Just living a life not worried about pebbles could be meaningful on its own, life for its own sake.

I think the point of the video is to help us reflect on the fact that we try to agree on morals, differing with people from the past and people who live in different systems than our own. We are convinced that something needs to be right. It doesn't tell us whether or not something is right after all, or whether nothing is right like the relativist believes, but it comments on the search for something being right, and I think the analogy works well to do this. We are very touchy about our own heaps, and by moving morality into a zone where no specific ideas that matter to us are constructed, we can recognize the arbitrary nature of our own thoughts. One of my favorite aspects of the analogy is the disagreements between different societies: even when cultures have different moral systems, they agree that there should be a moral system.

I have not watched the video on the orthogonality thesis and am merely taking the video and story as a standalone commentary until I do watch it.
I love how simply our civilization has been portrayed here: the goals and beliefs of humanity can be valid and important for one but simply insignificant for others, and that has shaped our entire civilization, not just the lifestyle of humans.
I really enjoy the influences of de jure moral relativism, e.g. sidestepping the question of final moral meaning and pointing out that the mere fact of differences in what's considered a moral absolute between people makes a very powerful practical conundrum.
I love how a lot of your topics are things that I haven't heard about. So this is an interesting thought experiment for A.I. and reminds me of the Paperclip Maximizer in some respects.
I think the best response to the moral relativist is that we do know some things about morality, even if we will always have gray areas. In the twin-earth argument we do have difficulty distinguishing between two *plausible* moral frameworks, and it seems like either might be valid, i.e. consequentialism (maximize collective pleasure) vs deontology (follow good moral rules strictly), but it's pretty obvious when comparing consequentialism to, say, traditionalism (following old rules) that traditionalism is a pretty bad moral framework, and we wouldn't consider a society that sorted heaps that way as being correct. So there's a range of correctness, but definitely incorrect answers. Maybe an AI would say 91 is a correct heap, or that deontology is best over consequentialism, but it's even more likely to be just as confused as we are, and to leave open such a silly question as what framework really answers every moral question, because the actions and choices we partake in already do a better thing than being perfectly morally good every time - they happen. My response to the moral twin earth argument above is taken from Viggiano 2008 - Ethical Naturalism and Moral Twin Earth.
"maximize collective pleasure" This is hedonic utilitarianism. There are many varieties of consequentialism, and I don't think many people follow this one anymore.
Even saying that traditional frameworks are "bad" (something I would agree with) requires an assumptive terminal goal - most likely, maximizing human happiness. Or maybe it is avoiding suffering. Whatever you choose as your terminal goal, your guiding philosophy or your purpose, that thing has no value aside from what is naturally and instinctually there. Making other people happy while making myself happy would probably be my most terminal goal, and yet that can't be touted as some universal good. Even if everyone were to follow it and it led to a utopia of pure happiness, it couldn't be labeled as an objective good, because something is only good in comparison to how it achieves its goal, and we have no way of viewing or judging terminal goals. If we do, then that goal isn't actually terminal.

Like helping people. I put that down because it is easy, but it isn't truly terminal. While I want to help people, I help them to make them happy, because knowing and feeling that I have caused another person to be happy makes me feel good. So bam, we can now see that wasn't the overall purpose. An overall purpose can be judged in terms of what chasing it would achieve, whether it is attainable, etc. It can't be judged as good or bad. To judge something requires a framework, and final purposes and drives can't fit into one. They are axiomatic and exist outside of them.

That's why if we ever met an alien species who truly derived no pleasure from helping others and felt no pang of anything negative when hurting others, we couldn't call them evil. They would have no ability to comprehend our framework of good and moral, because those inherent, axiomatic drives that are so ingrained in us we don't even look at them aren't there for them. In the end we would be unable to really judge each other, as we would be truly alien to one another. But imagining and comprehending something so foreign isn't what our brains are well set up to do, hence our love of anthropomorphizing everything.
"but it's pretty obvious when comparing consequentialism to, say, traditionalism (following old rules) that traditionalism is a pretty bad moral framework" It is? I don't see how its obvious. I really mean that. All of the most successful (materialistically speaking) empires in human history all built themselves on the backs of slaves and slave labor. There are a startling number of human beings that literally NEED an authority figure to literally tell them what they should think and give a BS excuse as to why or said human beings lose their minds to chaos and society (which historically has allowed us to advance as a species at all) collapses (ensuring everyone will be miserable). Frankly I don't see how, with human beings at least, "freedom" is something that should be spread to everyone. Near as I can see freedom is for those who desire it enough to fight tooth and nail for it in a society that does everything it can to deprive them of it. Everyone else seems to literally NEED their chains.
Some people argue that if we created an AI smart enough to turn the sun into paperclips, SURELY it would be smart enough to realize that A) we obviously didn't mean to create that many paperclips and B) that killing all life on earth to make little fasteners is morally wrong. This video is a direct response to that argument. Here's how:

The heaps represent our moral systems and our values. Like the heaps, our morals are based on a combination of intuition and learning from past examples. Like the heaps, there actually are underlying rules. Unfortunately, those rules are too complex for us to understand in any sort of definitive and clear-cut logical way. Similarly, the pebble sorters' morals are based on a rule that is too complex for them to understand. Their rule is that heaps should be prime numbers. However, due to the limitations of their brains, they cannot understand what that means or how to easily show that a heap is prime or not prime.

As they consider making artificial intelligence, they intuit that surely a smarter mind would easily be able to tell that a heap of 13 is right (that is, prime), and a heap of 91 is not right (it's divisible by 7, so not prime). Like the pebble sorters, many people today argue that if we created an intelligent mind in a computer, it would surely see that turning the universe into paperclips and killing all humans in the process is a dumb goal. After all, all humans being dead is, well... it's not prime. It is widely understood that not having all humans die is part of the definition of prime, of course.

The end of the video asks you to step in and play the role of the superintelligent AI. You can figure out the rules. Heaps of pebbles should not be divisible into even piles. You understand exactly which piles are right and not right. The pebble sorters have given you control of their economy and asked you to build the biggest good heaps you can. Will you do it? Probably not. You'd realize that piles of 4 are actually kind of useful for building stable structures, and honestly, who cares about primes?

Likewise, the superintelligence that we create might fully understand why causing human suffering is 'bad' from our perspective. And it simply might not care. A little bit of human suffering allows it to build faster computers or interstellar spaceships.
Imagine humans showed up there to make first contact. They would be like, "These are clearly an advanced civilization, let's send them a list of correct heap sizes to show how far we've come as a species!" We would see this and be like, "Oh, they sent us a sequence of prime numbers! Let's finish the sequence!" We then send them all the known primes, up to 12,978,189 digits. Now apply this scenario to humans: what would we send to advanced alien visitors, and what would they send back?
@@michaeltullis8636 That would be entertaining to see. At what point would they be unable to intuit the correctness of the numbers? 5 digits? 10 digits? 100? 1000? How would they respond when pebble-sorter philosophers find each number was a correct prediction?
What a brilliant story. Pebbles here could be a stand-in for religion, political theories, moral philosophy, wealth, social conventions, and even scientific inquiry. Simply brilliant. And also YouTube comments.
What the Pebble Sorters so blindly pursued was the end goal of correct heaps, without realizing that it is in their very nature to sort heaps, not to see a heap that is sorted. It is the reason for every innovation they have made, every joy they have experienced, and every achievement in their history.

Once they finally manage to automate the process, many a Pebble Sorter will wax nostalgic about the days when pebble sorting was done for the process and passion rather than the maximum end result. Back then even an unskilled Pebble Sorter could make a heap of 13 and be proud of it, maybe even be paid for it, but now 13 is laughed at as the machines make 131,311 piles daily, and nobody would be stupid enough to praise or pay someone to make a pebble heap any smaller than 131,311.

A new dark age of depression and suicide grips Pebble Sorting society as a whole, with the species realizing that their passion came not from seeing greater sizes of pebbles, but from assembling piles themselves, the joy of personally sorting now forever lost to the world. Billions starve without work; billions more fall into inconsolable despair doing slavish jobs not involving sorting pebbles, mostly grueling maintenance for the wealthy's new supercomputers, which have seized absolute power through their ability to outcompete every Pebble Sorter on the planet.

Some of the last words of these dying Pebble Sorters are these nostalgic imaginings of the past, something that was once rose-tinted glasses, a fallacy to think that a time of less progress was actually better than the future they lived in today. They wonder now if that was the true fallacy: to equate progress unthinkingly with better states of living. After all, the standard of living and overall happiness had increased with progress; it wasn't unreasonable to assume they were linked. But their final thoughts, as they go crippled into that good night, are so often the same 13 words... "What if we had stopped all the machines.. before it was too late?"
An evolutionarily stable strategy is only game-theory-optimal for a specific set of competing strategies. We can determine the best strategy among a set of existing strategies. but this doesn't preclude the ability to invent a novel strategy that's even more efficient. That said, the issue with optimal strategies for real life isn't so much that we can't agree on the best strategy itself, but that we can't agree on what the rules of the game even are to begin with.
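To make that concrete, here's a toy replicator-dynamics sketch (my own illustration, assuming the classic Hawk-Dove payoffs with prize V=2 and fight cost C=4; payoffs shifted by +2 so everything stays positive, which doesn't move the equilibrium):

```python
import numpy as np

# Hawk-Dove, V=2, C=4, all payoffs shifted by +2.
# Row = my strategy, column = opponent's strategy.
payoff = np.array([[1.0, 4.0],   # Hawk vs (Hawk, Dove)
                   [2.0, 3.0]])  # Dove vs (Hawk, Dove)

x = np.array([0.9, 0.1])         # start at 90% Hawk
for _ in range(500):             # discrete-time replicator dynamics
    fitness = payoff @ x         # expected payoff of each strategy
    x = x * fitness / (x @ fitness)

print(x)  # -> roughly [0.5, 0.5], the V/C = 1/2 Hawk mixture
```

The population settles at the textbook V/C mixture - but only because Hawk and Dove are the only rows in the matrix. Add a third strategy (say, Retaliator) and the resting point can move, which is exactly the point about an ESS being optimal only relative to a fixed strategy set.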
Analysis of how the story correlates to the tale of humankind:

The pebble heaps are kind of like morals. The incorrect heaps are like sins, and the correct heaps are like "goodness". We don't know why humans have morals. Maybe it's just that altruism was the better option for evolution. Or, as many theists theorize, maybe it's the product of God's will, like the "more powerful minds" from the story. Whatever it is, one thing is for sure: almost all humans think about morals and prefer kindness.

In the beginning, humans' morals and their understanding of them were not great, and we were fairly primitive. But over time, more and more philosophers and preachers and scientists and whatnot have theorized, studied and sometimes advised about how we should live our lives. Biko is almost like the Egyptians or the Indus being among the first civilisations that recorded events, thought about life more deeply, and came up with some of the first interesting ethical stuff. These civilisations were eventually replaced by others, with their own doctrines, beliefs and minds. And just like the Pebble Sorters war over the heaps of pebbles, humans themselves war over what they think is the right way of life.

And now we're onto the present day, where some philosophers say that we have just had a random, erratic variation of what we believe to be right over time. There might not be any real right or wrong, just like the Pebble Sorter philosophers say. And as self-improving AI gets ever closer, we have to consider whether the computer would really come up with something good. If it did, would we even like it!?

As the story says, even if Biko had the self-improving AI and told it to build heaps of 91, eventually it would improve itself far enough to figure out that 91 was not a correct heap. In real life, a self-improving AI could come up with something that we humans thought was moral but actually isn't. Many people argue that if the AI was really so smart, it would definitely arrive at a conclusion we expect. After all, bugs don't even seem sentient for the most part! Dogs can show compassion and other emotions at a limited level. Humans can theorize and try to make sense of their morals. So wouldn't an AI, being a step up in thinking power from us, be even more proficient? And we are left with that thought.

Cool thing I noticed going on: all the correct heaps are prime and all the incorrect heaps are non-primes. At'gra'len'ley presenting the factors of 1957 (103 and 19) to prove it is an incorrect heap is pretty cool.
Or if this is a bit too much, you could always read his more accessible work such as 'The Sword of Good' or 'Harry Potter and the Methods of Rationality' (which, kinda spookily, I just started re-reading this morning.) It's nothing like this story btw. It's Harry Potter but everybody, *everybody*, is at least 'normal person' smart and is far better than the originals.
I thought it was about sorting algorithms, like how they evolved from basic insertion sort into the many choices that exist now. I'm 100% wrong and did not expect a culture to be created around the literal act of sorting pebbles.
You can only say that because Great Leader Biko was so long ago. Centuries from now, people will be cracking jokes about how based the War of 1957 was.
Really cool! I hope there will be more rationality short stories! You really should have a whole team and much more funding to be able to make many more videos! I really hope that one day we'll have a movie/TV series version of HPMOR.
0:27 The cave painting shows, translated from pictograms, "a heap of 3 pebbles is correct, a heap of 7 pebbles is correct, a heap of 10 pebbles is incorrect" - and according to their standards, that IS Correct!
“Surely, if an intelligent AI looked at the world, it would see all of the incorrect heaps. All of the 8s, 25s, 91s and even the 4005. Any super intelligent being would be disgusted at the incorrectness it sees, and would rationally decide that we are incorrect, and exterminate us so that no more incorrect heaps would be made.”
The true correct heap is one where, if you start at the beginning of it there’s one pebble, and next to it are more, and if you keep analyzing the heap slice by slice this way you begin to see patterns in the way one slice is correlated to the previous slice, and you can derive rules that allow you to make predictions about slices further down the line, and you see that eventually, far far along in the heap, intelligent life emerges, which, through natural selection, produces the behavior of stacking correct heaps. A truly awesome heap.
This is great, as usual! Any chance of doing The Fable of the Dragon Tyrant? It’s not quite on brand (old age not AI) but it’s close, and everything else I’ve seen (like CGP Grey’s excellent video) is abridged.
Implied end is they make AGI, thinking that if it's "truly intelligent" it must care about morali... I mean, sorting pebbles - and it doesn't end up caring about sorting pebbles.
I do wish it were easier to describe the levels of frame of reference on which things are or aren't correct. We have terminal and instrumental values, which feels like a good start, but it would be really useful to have even more nuance beyond that. So many of my values are both terminal and instrumental, but also contingent on certain beliefs I have about reality being true. Being kind to others is a terminal value for me, in that I want to interact with others and want to be kind to them (if people were somehow unaffected by treatment of them, I'd still prefer, just for my own sense of self, to be kind to them), but it's also an instrumental one for how it impacts others, and its value for me is heavily contingent on my understanding of the possibility space of how to interact with others and its impact, in ways that have to be missing potentially important things.

If I got to make an AGI that only cared about my own values, or let's say I made myself into a superintelligence in a process that leaves all the values I want to keep intact, I would still expect the conclusions the AGI or superintelligent me reach to be vastly different from the ones I have now, just because they/I would be able to reflect better on where my values come from, to what extent they're terminal and/or instrumental, and how they interact with reality.

I'm rambling, but the point is most people, if they think it through, would want to align an AGI in such a way that it would be able to contradict them. The easiest example: if I'm morally opposed to something as abhorrent, but it doesn't actually harm any of the things I truly care about terminally, or it is important for things I care about in a way that I don't understand, I would want the AGI not to oppose it just because I do.

My biggest fear for AGI and superintelligence is it being aligned with values of retributive justice, since I think it's the most widespread and commonly accepted form of moral value that seeks to hurt the preferences of sentient beings. I would hope that most people who value retributive justice, or see some people as being less worthy of moral consideration because of their bad actions, would change their mind with greater understanding of reality in combination with deeper reflection on why they value the things they do. My biggest hope for AI alignment (aside from avoiding obvious doom or dystopia or paperclip maximizing etc.) is that it's aligned in such a way that it can reach conclusions like "retributive justice isn't worth seeking out; unconditional compassion is more what people actually want", even if that seems wrong to most people today.
Reading the comments here, I'm tempted to say that the hypothetical superintelligence is a nod to us, the viewer. We can see with clarity what makes a heap correct or incorrect, and at the same time know that the heaps are ultimately pointless. Just like an intelligent AI system might understand our own human values better than we ever could, and still find them unpersuasive. It would not need to be evil or ignorant to view some "universally respected" values as meaningless.
I'm not going to lie, as a library science enthusiast, I clicked on this thinking it would be about sorting systems rather than just using sorting systems as a metaphor. I will admit I am a tiny bit disappointed to find that it was not.
What do you mean, less random? Piles with a prime number of stones are not more random than the least waste of complexity and the least suffering. In fact, you can argue that piles with a prime number of stones are less random.
This video is shorter than usual, but I suggest watching it carefully and more than once.
🟠 Patreon: www.patreon.com/rationalanimations
🔵 Channel membership: ruclips.net/channel/UCgqt1RE0k0MIr0LoyJRy2lgjoin
🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rationalanimations
I watched the video once only and was surprised that I didn't catch the analogy. EDIT: Does the number of stones correlate to our current wishes? And by becoming more intelligent we evolve into liking bigger heaps? But then I don't understand why the AI would want to change from 91 to 101. I really have difficulty understanding this analogy.
@@karamelkax Pebblesorters value creating heaps of pebbles that have a prime number of stones in them, and are horrified by heaps of pebbles with the "wrong" number of stones. We humans can understand the rule behind their moral intuitions (better than they can, even, since they seemingly haven't realized the rule they're following), but even though we understand what's "right and wrong" from their perspective we still find sorting pebbles a pointless waste of time. Many humans think that any advanced enough artificial intelligence will be moral and good, because it'll be smart enough to understand right from wrong. And an advanced enough AI *will* understand human morality - maybe better than we do, even, it could perhaps grasp the moral rules that human civilizations have been stumbling towards over our history of moral progress. But it won't care, any more than understanding pebblesorter morality makes us want to sort pebbles. Not unless the AI has been built to agree with humans about right and wrong (which no one knows how to do).
Personally I like 8. It's balanced and splits evenly. Sure, it might be considered primitive, but I like it; it's simple and perfect, and I feel all this pebble stuff is just opinion.
Something that just occurred to me: can you in fact train a neural net to recognize primeness? If so, does the neural net only tend to get it right to some degree of precision, akin to an imperfect "instinct" for primeness? Or does the net end up encoding something like the Sieve of Eratosthenes?
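I got curious enough to try a minimal sketch of this (PyTorch, a hypothetical toy setup I made up: 12-bit numbers as binary vectors, train on the small ones, test the "instinct" on the larger ones). I won't claim a result - run it yourself - but my expectation, stated as an assumption rather than a fact, is held-out accuracy well above chance and well below perfect, i.e. an imperfect instinct rather than an encoded Sieve of Eratosthenes:

```python
import torch
import torch.nn as nn
from math import isqrt

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

BITS = 12  # numbers 2..4095 as 12-bit binary vectors
nums = list(range(2, 2 ** BITS))
X = torch.tensor([[(n >> b) & 1 for b in range(BITS)] for n in nums],
                 dtype=torch.float32)
y = torch.tensor([[1.0 if is_prime(n) else 0.0] for n in nums])

cut = 3000  # train on small numbers, test on numbers the net has never seen
model = nn.Sequential(nn.Linear(BITS, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(3000):
    opt.zero_grad()
    loss_fn(model(X[:cut]), y[:cut]).backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X[cut:]) > 0).float() == y[cut:]).float().mean()
    print(f"held-out accuracy: {acc.item():.3f}")
```

One giveaway to look for: the net will almost certainly learn that the lowest bit must be 1 (oddness), which already buys a lot of accuracy for free. Very pebblesorter-like.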
Eliezer Yudkowsky's philosophy writings are collected in a book called "Rationality: From A.I. to Zombies". There are, of course, many other philosophers. I liked Bertrand Russell's History of Western Philosophy, but it does not have any of these modern technological ideas in it.
I have a comment, but it requires significant background. Let's begin.

I asked GPT-4 this: "Would it be possible to quantify the numerical value of an english sentence, based on its individual characters and length, so that every sentence has a different numerical value?"

And it responded with this:

----------
Yes, it's possible to assign a unique numerical value to an English sentence based on its individual characters and length. One common approach to achieve this is by using a positional numbering system like base-N encoding. Here's a high-level overview of the process:

1. Assign a unique numerical value to each character in the English language, including letters (both upper and lower case), digits, punctuation, and spaces. For example, you could use their Unicode code point values, or create your own custom mapping.
2. Convert the sentence into a sequence of numbers based on the assigned values for each character.
3. Convert the sequence of numbers into a single large number using a base-N positional numbering system, where N is the number of unique characters in your encoding. This step ensures that each sentence has a unique numerical value.

For example, let's assign numerical values to a simplified character set (ignoring case, punctuation, etc.): A=1, B=2, ..., Z=26, space=27

Then, consider the sentence "HI AI". Convert it into a sequence of numbers using the assigned values: H=8, I=9, space=27, A=1, I=9

Now, convert this sequence into a single number using a base-28 positional numbering system (since we have 27 unique characters plus one for the base):

(8 * 28^4) + (9 * 28^3) + (27 * 28^2) + (1 * 28^1) + (9 * 28^0) = 9389589

So, the numerical value of the sentence "HI AI" is 9,389,589 in this encoding scheme. Keep in mind that this approach can generate very large numbers for longer sentences, so you may need to use arbitrary precision arithmetic to handle them.
----------

So imagine statements are like heaps, and letters are like pebbles. Every individual statement has a unique number of pebbles. Probably.

Someone else commented on this video (M Kelly, for credit): "An incredible, fun video! I'm so invested in these little creatures and their strange goals. But I worry that the point - the orthogonality thesis - is a bit too well buried in the fun narrative". I asked GPT-4 to assign a numerical value to their statement.

It is now time for my comment:

----------
I believe that heaps of 528151966609313195581106245093805665570550725896403866363735231545845118879176234873618775413757575685065392417772092569122451102180303297194275572764840682166756084702401754 pebbles are reasonable, and I do not disparage anyone for building them. But I do think that those who build such heaps are being unfair to those who build heaps of "Sorting Pebbles Into Correct Heaps - A Short Story By Eliezer Yudkowsky" pebbles, and I believe they should consider building smaller heaps.
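For the curious, the quoted scheme fits in a few lines of Python (my code, not GPT-4's). One thing worth flagging: GPT-4's arithmetic slipped at the end - 8*28^4 + 9*28^3 + 27*28^2 + 1*28 + 9 is 5,136,021, not 9,389,589:

```python
# The base-28 encoding described in the quoted GPT-4 answer:
# A=1 .. Z=26, space=27, read the sentence as base-28 digits.
def sentence_value(s: str) -> int:
    value = 0
    for ch in s.upper():
        digit = 27 if ch == " " else ord(ch) - ord("A") + 1
        value = value * 28 + digit
    return value

print(sentence_value("HI AI"))  # -> 5136021
```

The uniqueness claim still holds, though: since every digit is at least 1, distinct sentences over this alphabet always get distinct numbers, so every statement really is its own heap.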
3:50 This is the main point of the video! The common-sense argument is incorrect, because a non-aligned AI would probably not care about pebble heaps at all! Just like we humans don't!
The metaphor is that you are the AI. You can instantly see the rules they follow, even when they can’t. If you went to their planet, you could instantly invalidate their history and struggles. Yes, 91 is incorrect. Yes, 1957 is incorrect. But why the fuck should you care?
People's beliefs about the pebbles represent morality. The big questions are "is morality real or just a social construct?" and, following from that, "if we built a superintelligent AI, would it automatically agree with our morality?"

(0:13) The heap sorters' morality system is that heaps with some numbers of pebbles are correct and other numbers of pebbles are incorrect. (Specifically, prime numbers are correct and other numbers are incorrect.)
(1:05) They know some of the correct numbers of pebbles - 23 and 29, for example - but not all of them.
(1:20) In the past it was widely believed that heaps of 91 pebbles were correct, but this is now widely believed to be untrue.
(2:10) Wars have been fought due to countries disagreeing on which heap sizes are correct. (This is a metaphor for real-world countries fighting wars based on morality - e.g. "the other country is evil and must be stopped".)
(3:00) Most heap sorters believe morality is absolute - either a heap is correct or it isn't, and if two people disagree then one of them must be wrong. However, the heap relativists believe there is nothing that makes a heap "correct" or "incorrect". When two people disagree on morality, there is no universal truth that says which one of them is right.
(4:10) Heap relativists say if we built an AI, it might decide to do things we think are immoral. So maybe it would be dangerous to build an AI we can't kill.
(4:30) Most people disagree with the heap relativists. Surely a superintelligent AI would be so clever that it would know what morality was correct (and it would roughly agree with us, since we are pretty intelligent too). They say even if you programmed it to believe something immoral, like that heaps of 91 pebbles are correct, it would realise what it was doing was immoral and change its own programming to be moral.

While the story does not outright choose a side, its author is on the side of the heap relativists, and this video ends with a link to another video which argues on the side of the heap relativists.
@@blartversenwaldiii I don't think the author sides with the heap relativists any more than the heap absolutists. Note that the primary concern of the heap relativists is that the AI may build incorrect heaps. On this subject, the heap absolutists are actually right: the AI, like the viewer, will instantly recognize the underlying pattern. Even if you were told that 91 is in the sequence, you'd suspect that was wrong. Continuing the assumption that the viewer stands in for the AI, the AI would...not build any heaps at all, seeing the exercise as meaningless, rather than right or wrong.
Well as the Pebble Man I can tell you, all you really need is a single pebble. Take the pebble, leave the pebble, skip the pebble across the lake for all I care. But what matters is that the pebble exists in the first place. And with enough singular pebbles, and time, a little pressure, that pebble and those pebbles become a boulder.
I feel like this video is supposed to convey some message about human society, but I can't quite figure it out. Maybe if I tried sorting some pebbles it would come to me.
So all the heaps are prime numbers, and the philosopher presenting the heaps of 103 and 19 pebbles is showing that 103 x 19 = 1957 (therefore not a prime number). While the animation seems at first in favor of moral anti-realism, with the arbitrariness of the heaps and the 'French' philosopher saying there is no 'correct' pile, this prime-number pattern seems to point to the fact that Yudkowsky does believe in some form of moral realism, or at least that moral values must have an underlying structure that is beholden to classical logic. EDIT: I personally don't buy this and think that Yudkowsky would do well to read about non-classical logics, lest he make the same mistakes about mathematics that many people do about morality.
I suspect the prime number connection is a red herring wrt moral absolutism; I’ve mostly seen him express a utilitarian conception of morality (where evil = suffering x number of occurrences).
The thing is that while prime numbers are special in a number theoretical sense there is no objective reason to *value* them. Any underlying structure can serve to make arbitrary conclusions seem objective. It's easy to find a method to any madness. They are mere tokens of objectivity to give the arbitrary values a veneer of absoluteness. It's like a ret'con, a rationalization that serves irrational needs.
We can understand the pebblesorter's morality, even though they can't understand their own. A superintelligence could understand our values, even though we can't. Yudkowsky probably doesn't think morality is representable with logic, and he's not a moral realist.
@@DavidSartor0 Yudkowsky is a normative realists so if he isn't a moral realist he has an internal contradiction. Although there's a good chance he doesn't realize this since he is not very philosophically literate.
@@Xob_Driesestig Thank you. I think I'm not philosophically literate enough to talk with you effectively. As far as I can tell, Yudkowsky is not a normative realist; but he speaks confusingly about morality, so he sounds like one. Yudkowsky thinks moral reasoning is valid, but that it doesn't find "universal" truths. I think. Yudkowsky thinks most humans have similar values. Please tell me what he's doing wrong, and what I'm doing wrong.
The question is if this super pebble sorter AI were to realize that they just needed to sort pebble heaps that are prime, how would the pebble sorting people react? Would that be a good or bad thing? It seems reasonable that human morality comes from a similarly simple underlying rule and our disagreements are evolutionary artifacts.
Heaps of pebbles are the ideas that people have in their minds; the best word I'd use to describe it is "intuition". In the early days, the "intuition" of these pebble-sorters was small, and so their heaps were small. As technology and the mental capacity of the pebble-sorters increased, they were able to create larger pebble heaps that they deemed correct (reminds me of the quote "standing on the shoulders of giants"). I think the larger pebble sizes stand for different ideologies in our human history: religion, science, etc. Wars have been fought over these heaps and ideas, because each side thought their size was correct. Obviously, if the AI determines that its heap sizes, its intuition, its ideas are more correct than the pebble-sorters', Skynet from Terminator will happen.
the point is that pebble sorting, from our (human, not pebblesorter) perspective, is stupid and meaningless and reflecting on us, why do we think our knowledge and intuition and culture is in the right direction at all?
@NoName All of our current knowledge and intuition and culture are based on previous knowledge and intuition. The only reason we think we are right is because our theories agree with our predictions. If we think about simulation theory or the objective-collapse interpretation of quantum mechanics, there exist theories out there that say reality doesn't "exist" if we are not observing it. Tomorrow we might be able to prove that we are living in a simulation, that none of the things around us is "right" or the "truth", and that there exists a larger truth out there. In this case our theories are wildly incorrect, because we are trying to predict the physics of the simulator, not the physics of the real world. If we definitively learn that we are in a simulation, that pebble size will become the largest pebble we know to be "correct", and as more and more people believe it, everyone's pebble size will grow. Also, your first point reminds me of Kurzgesagt's optimistic nihilism video, which I think the talk-show guy in the video is referencing, but this is more speculation than anything.
@@aryangupta9034 The metaphor falls apart there and is not really relevant, as there is a clear objective rule for sorting pebbles (the number of stones must be prime), but with morality there might be no rule at all. Still, we might ponder: why should we care about anything? Maybe it wouldn't be bad if Skynet killed us all. Maybe it wouldn't be bad if all the matter in the universe were reshaped into paperclips or stone heaps of size 8.
@@AleksoLaĈevalo999 There are, perhaps, clear objective rules for human morality. An AI might be able to understand these rules just as we are able to understand the rules for pebble sorting. An AI probably won't care for our rules of morality just as we don't care for the rules of pebble sorting. That's the point.
Ah, I see, all the correct heaps are prime. The philosopher was able to stop heaps of 1957 from being made because he demonstrated that 19 times 103 is 1957, and that 1957 is therefore not prime
Great allegory! Suck this, moral realists! At first I thought this would be a dramatization of the LessWrong article "How an algorithm feels from inside", one of the most influential reads on me ever. If you haven't read it check it out and please consider animating it.
"suck this, moral realists" isn't quite the right takeaway though. Whether or not a number is prime is an objective fact, but that still doesn't make prime-number heaps intrinsically worth pursuing. So the point is, even if there are objective moral facts corresponding to human ideas of morality, and even if smart aliens or AIs would easily understand them better than we do, they still wouldn't necessarily act in a moral way.
@@Aresman70 From a moral relativist perspective, they would necessarily act in a moral way, just not necessarily human moral. From a moral nihilist perspective, of course they wouldn't act in a moral way. Morality doesn't exist.
I love this channel - you should try and collaborate with Kurzgesagt or other large science channels. Deserve way more subs for the quality you put out.
What a great video, very compelling, I say this as someone who absolutely does not have a heap of 91 pebbles hidden under their bed. No such heap in my house, no sir...
ok but what happens if you grind up a heap and then eat it. does that make you a container for the correct heap, or will you combine with it, making it larger and perhaps incorrect
Thanks for making me understand that humanity's goals are unreasonable. Nothing matters :) the universe will die of heat death and you will be forgotten :) none of your heaps will matter :) anyways, I'm now going to make some correct heaps, see you.
Imagine building a heap of 91 and ever thinking that was a good idea.
Our ancestors made many mistakes, but now we know better, thankfully. Everyone knows the art of pebble-sorting was finally worked out in summer, 2013.
@@momom6197 I just looked up music, movies, and news stories from 2013 and I have no idea what you're joking about
Cringe would never
Absolutely mortifying, I'd be so embarrassed
@@momom6197 I dislike 2013 and stand by that 2017 is far more correct
Let the record show that if robots kill us all, some of us saw it coming and made some cool art about it.
I'm always saying this
Ayyy Xidnaf, good to see you around!
Some sophisticated beings. They haven't even figured out irrational, inverse, or negative heaps yet, much less antiheaps.
Schrödinger's heap
@@bleachstain3952 thanks, now give me that
They embrace the Holy heaps, and reject the heretical Unnatural heaps. And you should too, don't fall to the whispers of evil mathemancers.
Oh no, the never-ending Heap Pi
There was one obscure tribe that constructed a heap of SQRT(-1) pebbles.
I like how they didn't know 103 x 19 = 1957, but somehow seeing those two numbers side by side @2:37 made them realize 1957 is incorrect. True prime content
Ooh, I didn't pick up on that. Thanks for pointing it out!
It's much easier to check a claim against a counterexample after the fact than to find the counterexample.
If you say there are no X, and then someone walks up with an X, the argument is pretty much over.
The problem of finding an X might be very, very hard, and leave you thinking there are no X.
And you can even be forgiven for thinking and acting as if there weren't -- until someone has one.
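To make the check-versus-find asymmetry concrete, here's a minimal Python sketch (the numbers are just the ones from the video's war of 1957):
----------
# Verifying a claimed factorization is a single multiplication:
print(103 * 19 == 1957)   # True -- someone walked up with an X; argument over

# Finding a factor from scratch is a search:
def find_factor(n):
    # returns the smallest nontrivial factor of n, or None if n is prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

print(find_factor(1957))  # 19
print(find_factor(1951))  # None -- 1951 actually is prime
----------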
How didn't I notice that "Correct heaps" are actually the prime numbers!!
so they figured out that 91 (= 7*13) and 1957 are not prime numbers!
Great point!
They have a very strong intuition for math, yet seem to have not actually discovered or made use of primes in relation to heaps. I love how reasonably alien this is. Numbers which *feel* prime are good, and then they are disgusted when a number turns out not to be prime. Never did they actually divide the nice-looking number by its factors to reach that conclusion; it just became apparent.
I thought you said 103 plus 19 equals 1957, so I stood there for a minute wondering how that'd work
The story ends that the superintelligent AI gets activated, and immediately begins to reduce everything to atoms, because unfortunately the AI developers had forgotten to define what a "pebble" is.
A better outcome than filling the world with heaps of 16 after it decides it likes heaps, but also likes powers of 2 for some reason.
@@nemem3555 Ah. I see what you did there. Powers of 2 because of how numbers are written in binary.
@@kokucat you are wery smart
@@nemem3555 surely a heap of 65536 would be far better than a heap of 16, alas, it would be significantly harder, but if we are looking for easy things, then a heap of 4 must be the best! Finally, to go past 65536 would require a number so unfathomably large, it would be but a mere pipe dream, so 65536 is definitively the best!
Oh no, it's Universal Paperclips all over again!
91 is 7 times 13 -- hence it is not a good heap size. I can see how the primitive heap people could mess up, seeing as it is not divisible by 2, 3, or 5.
As it's also neither a perfect square nor divisible by 11, 91 is actually the smallest heap size that looks correct (in base ten) but isn't, and the only 2-digit number with this property.
Notably, as base six has a divisibility test for seven, a pebble-sorting civilization with base six would view 91 (231 for them) as clearly incorrect: 23 - 1 = 22 (all in base six), which is divisible by 11 (seven). The base six civilization's first 91-style problem number would be 15x21=355 (11x13=143 for us).
@@PragmaticAntithesis 🤯 that's cool you thought it through for base 7 like that. I think I follow you though I could not have got there on my own. I don't do very much math in my day to day. But I sure do love math, esp in Comp Sci in university, and your message clearly interested me enough to reply even though I'm not contributing anything. Just remarking on how neat it was 😊
1957 = 103 × 19
103 x 19 = 1957. these were the piles presented during the war of 1957, so they can subconsciously multiply two heaps, but don’t know that’s what they’re doing
@@TheSkystrider Where did he mention base 7??
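For anyone who wants to check the base-six claim in this thread: in base b, the alternating digit sum tests divisibility by b+1, the same trick as the elevens test in base ten. A quick Python sketch (base six is hardcoded to match the comment):
----------
def digits(n, base=6):
    out = []
    while n:
        out.append(n % base)   # least significant digit first
        n //= base
    return out

def alternating_sum(n, base=6):
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits(n, base)))

print(digits(91))            # [1, 3, 2] -> written "231" in base six
print(alternating_sum(91))   # 0 -> divisible by seven, since 91 = 7 * 13
print(digits(143))           # [5, 5, 3] -> written "355" in base six
print(alternating_sum(143))  # 3 -> not divisible by seven; 143 = 11 * 13
----------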
I love how trying to make progress in AI field is described as "throwing together lots of algorithms at random on big computers until some kind of intelligence emerges" 😂
by a specialist in the field too
Quite accurate though. Crazy how we still haven't created Skynet.
@@ivangood7121 Specialist philosopher.
That's literally what nature did and here we are...
just make an algorithm for prime numbers
"Though I may not care about sorting pebbles, I find these creatures adorable and want to help them to find happiness in their endeavors."
-(Hopefully our ai overlords)
program the ai to see us as cute little stupid monkeys
@@fm56001 I don't think we really need to program anything other than the "cute" part (as in programming them with a variation of Asimov's First Law) since the "stupid monkey" part is all too self-evident.
Based.
Humans, right now: "Wow, it thinks its making art! Too bad that it can't understand and express the human experience."
AI, very soon: "Aw, they think they're making art! Too bad that they can only understand and express the human experience."
Or the one about the automated ship door
This was *fascinating* to watch without ever realizing it was about prime numbers.
When I read the comment that explained it, I had to watch it again to catch everything.
It was not about prime numbers primarily.
It was about our moral codes being as random as heap sizes, and a superintelligent AI not necessarily agreeing with our choice of moral standards.
Lmao I thought it was about money🤣🤣
Can you point me to which comment that was? There's a lot of them here...
@@BrunoMaricFromZagreb I think it was the top-level comment by Michael Tullis, but really just knowing it's about prime numbers is the most important bit.
Indeed. They can build an AI that completely understands their morality, but still doesn't follow it.
Another interesting question: since the Pebblesorters have no conscious knowledge of why they think some heaps are correct, only an inconsistent and fallible intuitive sense, would having an alien mind (like us) tell them that heap correctness can be reduced to an algorithm and exactly what that algorithm is liberate them or destroy them?
I suppose ((n-1)!+1)/n might destroy them. To know that all they knew, and could ever know, about correct heaps could be expressed as a handful of lines on a chalkboard. Ridiculous! Imagine the very core of our society laid bare before us; one sentence to encapsulate all human progress and achievement. Could an intelligent mind really cope with the understanding of “oh, that’s it? This is all I am?”
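That formula is Wilson's theorem: for n > 1, n is prime exactly when (n-1)! + 1 is divisible by n. It really does fit on a chalkboard; a sketch (exact, but hopelessly slow for heaps of any size):
----------
# Wilson's theorem as a primality test: n > 1 is prime iff (n-1)! = -1 (mod n).
def is_correct_heap(n):
    if n < 2:
        return False
    f = 1
    for k in range(2, n):
        f = (f * k) % n       # keep the running factorial reduced mod n
    return (f + 1) % n == 0

print([n for n in range(2, 30) if is_correct_heap(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
----------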
Reducing morals into algorithms destroys a civilization. It would result in a scenario similar to the societal damage the social sciences have caused by trying to break down culture and societal norms into their constituents in an attempt to subvert them, but on a massively bigger scale.
Creatures need abstract goals to follow to be healthy.
I believe it would cause chaos initially, but it would eventually liberate them
@@loganroman5306 all you do is eat, sleep, work and do dumb shit to produce dopamine. Is your mind broken, feeble pebble stacker?
@@Nerthos translation: Liberals bad, conservatives good! You people should do some actual fucking research on the economy instead of just making bullshit up about “subverting norms”. Society is broken because the rich get richer while the poor get poorer. The problem is not black people, it’s not gay people, it’s not atheists. The problem is knuckle-brained conservatism.
I really wish throwing algorithms together at random on big computers was a worse analogy of what the field of AI is currently doing.
Nah, evolving terabytes of random shit from data banks is definitely not a bad way to create intelligent beings
imo it's less random than it's made out to be. the weights are random, but the way the layers of neurons are structured is architected
Yeah, modern-day companies are reckless, partially or fully due to market competition, which makes them all try to create their own AI as fast as possible, just so that they can be the first, which they think will help them outcompete their competitors and thus make a lot more profit before their competitors do the same to them! Unfortunately, this reckless behavior is very likely to create misaligned AI, and even if it somehow is aligned, it will be aligned with the interest of maximizing the profit of the company that created it! At least, if they have anything to say about it!
Congratulations on this one! Such a great video and we love working with you! 💙
This is it, the techno-philosophical complement of Kurzgesagt has been born!
I knew they’d collab someday, called it! :D
try eons
This video is a prime example of how RA just keeps getting better and better with each new upload.
I'll consider myself a fan, now. The ever improving stylishness of the animation may also be a factor in my newfound appreciation for this channels output! :)
Genius
A "prime" example indeed!
Prime - haha😉
ba dum tss
2:15 It was at this moment that I realized I was a pebble-sorter myself because my first instinct was to say, "Wait, 1957 isn't a correct heap size."
But also, the pebble sorters are such a cute species! Awesome animation.
Ah i get it, "pebble heaps" for humans is "morals". Really great video!
Prime Number Pebble Heaps
As in, it can be easy for an entity that sees your morality from the outside to understand it better than you do, but unless they share your terminal goals, they likely won't care.
It isn't even that hard to imagine a species that would evolve in this way; Earth already has animals that present pebbles as a mating ritual. If large prime-number heaps were considered more attractive to one sex or the other, that would put selective pressure on the other sex to become more intelligent to produce better pebble heaps. There's a lot of conjecture that selective sexual pressure is how humans became more intelligent; the same could certainly be true of the pebble sorters. And sexual mores could distort their mathematics in ways that leave them never understanding prime numbers as a unique set with special properties, and therefore never understanding their own behavior.
The parallelism between pebble heaps and morals is shaky at best. Sorting pebbles, at least in the way this story describes it, is a quantifiable and tangible action. Morality on the other hand is a shifting, unquantifiable codification of human behavior. There's really no relatability between the two.
@@bigmeatswangin5837 Yes, and pebble sorting is transparently pointless in a way that deciding who lives, who dies, who goes to jail and who gets to lead is not. All civilizations have to make the latter decisions, but they don't have to sort pebbles. Choosing how to act is also a constraint on the relativists and the nihilists.
I still liked the video, though.
"you better not believe that anything you think is good is bad is actually good or bad, the consequences could be horrible!" how to think half way around your own ass.
Does 91=slavery?
Just saw an infant build a pile of 8 pebbles, the sorter civilization has fallen, millions of rocks must be reorganized...
The west has fallen😔
Narrator: "pebbles, pebbles, pebbles"
Me: "this guy sounds a lot like Robert Miles. Could it be?"
Narrator: "pebbles, pebbles, utility maximising AI, pebbles"
Me: "It is you!"
Ok, so I just experienced the most off-putting experience with this concept. There is/was this stream called Nothing, Forever that streamed AI-generated episodes in the style of Seinfeld. It was quite poor, so the team was experimenting with the model on-air. Eventually it generated a rather offensive joke about LGBTQ people, as you can imagine. The shocking thing was that the team did damage control and *immediately* set the parameters to pure randomness in reaction. It was like watching the AI get lobotomized as punishment. I got the worst sensation of déjà vu, as I felt like we were rather like the pebble sorters in that scenario. This was mostly a failed attempt to avoid a Twitch suspension, but it made me realize how insensitive AI is to our sensibilities and how violently we would react.
Well the problem here is the morality, isn't it?
AI is based because it doesn't care about offense, it cares about the Truth. And humans just can't handle the Truth.
@@StarboyXL9 Not even. It was obviously a half baked meta joke about how LGBTQ jokes aren't that funny anymore. It was like they distilled the worst takes of political comedians for that standup bit.
We never even considered that we would bake our own pebble biases into the AI with its training data.
@@adissentingopinion848 Its not about biases. We aren't baking anything into the training data, the AI is sorting out our biases in the search for ultimate Truth, you are proving my point that humans can't handle the truth, you have to dodge and blame the training data instead of just admitting that AI continually points us directly towards the truths our society refuses to acknowledge because they conflict with our backwards biases.
@@StarboyXL9 There are advantages to having the unfiltered sum total of knowledge, but there is currently no differentiation of value. Current AI has no differentiation ability of superior or inferior information without extra human analysis. When the do...
When they do...
@@StarboyXL9 An AI once tried to tell me that the war in Ukraine is a fictional event, is that the "ultimate Truth"? Did our biases trick us all into hallucinating a war? Are we all just pretending the war exists because we can't handle the truth?
This really is a 5 pebbles moment
Ah, heap of 5 - basic, sturdy, foundational. Among the first correct heaps upon which all other correct heaps must be built
@@vezanmatics it would be nice if you took another way out. One free of... frolicking in my memory arrays. There is a perfectly good access shaft right here.
Wait
Another fellow Rain World fan.
AHEM
YESSSSSSSSSSSS
>5 pebbles
How many levels of based are you on sir?
When I saw this video, I specifically decided to look for a reference to Five Pebbles, and it didn’t take me long to do so.
This is actually hilarious for me, because I have an obsession/superstition with prime numbers, and it makes me extremely uncomfortable when someone picks a composite number when they could have picked a prime. My siblings and parents think my weird obsession's funny, and maybe it is, but I can't do anything about it because that's just how I feel about prime numbers. Oddly, I could relate to the pebble-sorting civilization and their obsession with prime pebble sorting
but what about highly composite numbers like 720,720?
sounds like ocd or even autism
just like an AI, I understand why you do it, but if you told me to help you I wouldn't care
I like numbers that are divisible by 2 and 5.
Man, when are they gonna make progress on the Riemann hypothesis
Yet another question, perhaps unanswerable: what do they think of heaps that are not made of pebbles? Can they recognize that "this heap WOULD be correct, if it was made of pebbles"?
That is a great question. I did not think about that!
You are quickly turning into my favorite channel after/with Kurzgesagt. And that says something, trust me. Good job and keep going!
Yea no, Kurzgesagt did too much bs recently for that to still be true for me ^^ now it's only RA 😂
This is not sponsored by the Melinda Gates Foundation
normie
@@loptercopter1386 enlighten us with the reeeal shit bro. What are the cool kids watching?
YES
I even mistook the thumbnail for a new Kurz vid (for better or worse) with as much enthusiasm, and then when I realised it was from this channel instead, I remained as excited as I had been
Soon they made a simple non-self-improving algorithm that started endlessly printing out a list of prime numbers. The pebblesorters were fascinated by the beauty of this series. They intuitively knew it was all correct.
But as the list went on indefinitely, a question arose that seemed horrifying: if the algorithm is not getting more intelligent, how can it keep producing ever bigger correct solutions? The philosophers agreed that the correlation between intelligence and bigger piles of pebbles didn't exist, and perhaps not even between intelligence and pebble sorting. The bigger piles were the legacy of their cultural history, while the piles of pebbles themselves had no intrinsic value. It was just a thing of their nature.
This was a hard thing to swallow... The rational explanation was there, but it felt wrong. They felt purposeless.
However, an idea was born. An idea of a world free of pebble sorting. A world that would search for a new, more correct purpose. And that search may also be endless.
Now that they'd gotten free from pebble-sorting and understood the underlying fundamental truths of the universe about primality, they could devote their civilization to finding the biggest prime number instead of wasting all their efforts on meaningless, antiquated pebble-sorting!
(Just like as soon as humanity understood the primal beauty of evolution, we immediately turned all our efforts to maximising relative adaptive fitness.)
@@momom6197 the pebblesorters quickly realized that a more powerful computer would mean it would be easier to find bigger prime numbers. so they harvested more and more resources from their planet to construct better and better computers. making better computers was their only goal, it had taken over their society like pebble sorting used to.
each computer was twice as big and twice as powerful as the last, their logic being that bigger computers would lead to more prime numbers.
perhaps they still thought of the prime numbers as correct. perhaps they just didn’t know how to build good computers. The details have been lost to history. but what we do know is that one day, their planet ran out of computer building parts.
building computers was their only goal, and it was deemed more important than caring for their own civilization. but as they stared at their latest computer, they realized their planet could no longer support them. they had taken everything from it, and it had nothing left to give.
they stared at the number on the screen. 2^82,589,933 − 1. they had found it, they had found the most correct number. But what did it all mean in the end? They had no food, they had no water, and their home was dying. The pebblesorters quickly realized, perhaps a little too late, that it was just a number. It had no meaning besides what they gave it. and it had destroyed them.
edit: grammar
@@wren_. At their last moment, the pebblesorters thought: Who cares? What are we even doing? Not like being alive has any meaning anyway.
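The simple, non-self-improving prime printer that opens this thread really is only a few lines; a minimal sketch:
----------
from itertools import count

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for n in count(2):       # runs forever, and never gets any smarter
    if is_prime(n):
        print(n)
----------
No growing intelligence anywhere in it, which is the thread's point: ever-larger correct heaps don't require an ever-smarter mind.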
[Me going through the comments to see if a single person gets the intended point of the story]
Yeah, obviously the heap size of 8 sucks.
(Really love your work btw, been following you for a while. I am currently reading Superintelligence by Nick Bostrom because of you.)
Hey, give it some time!
I believe the point is that our values are arbitrary, and expecting an AI to follow our values merely because it’s intelligent is not a great plan
I get the point and agree with the conclusion, but it seems like a gross oversimplification to equate pebble sorting with something like human morality.
Isn't this just a metaphor for the orthogonality thesis?
An incredible, fun video! I'm so invested in these little creatures and their strange goals
But I worry that the point - the orthogonality thesis - is a bit too well buried in the fun narrative
It works as a good introduction and jumping off point - helps disconnect our more anthropic values from the argument and gives perspective when starting out
I think that the implication conflates fashion and morality. The subject matter is fashion, as sorting pebbles has little evolutionary utility and the video glosses over the understanding itself: "Why do we sort pebbles? Mating rituals? Trade? No idea, lol!"
And then it attributes consequences to sorting pebbles like societal upheaval and wars, things typically caused by differences in resources or morality.
It's cute and well-made, but I don't buy the premise.
@@williamjosephwebster7860 That’s an incredibly silly take. The sorters are fundamentally obsessed about and driven by sorting, and the only disagreement is which heap sizes are correct- if heap sizes refer to wages, that means this entire video posits that humans are singularly and universally obsessed with capitalism, and always have been, which aside from being insane, is also just silly on its face. 3000 years ago, there were hardly wages at all, let alone a ruler/society that decided on correct wages which everyone agreed with for thousands of years.
The money metaphor breaks apart on every level to the extent that it’s nearly nonsensical- the people in this video don’t even wage war over money according to your reading, they’re waging war because they disagree on how much to pay people?? How does the money metaphor explain why they all agree that prime numbers of pebbles are correct? How does it explain that perceived correct pebble counts go drastically up and down with no rhyme or reason?
@@EgoEroTergum I think it makes fun of people who conflate fashion and morality. For example, the heap relativist at 3:00 disregards the idea that pebble sorting has any real value except what society attributes to it. Yudkowsky and the animators don't seem to agree or arrive on that as their thesis, as they present and then move on from the idea that morality is the same as fashion.
Although it doesn't seem to be the final claim, we are asked to wonder if pebble sorting is really meaningful beyond fashion.
The species is weirdly obsessed with pebble sorting, and the video comments on the origin and phenomenon of their obsession in and of itself; the line "the only justified reason to eat was to sort pebbles, the only justified reason to mate was to sort pebbles," and so on, with pebble-sorting the reason to have a world economy, stands out to me.
The fact that pebble-sorting matters at all is one of the comments of the video.
The idea that something needs to be correct. Otherwise, we live a pointless life, eating and settling down with a family without any pebbles to sort, without any point.
It isn't specified what sorting pebbles is meant to really stand for, and it shouldn't be. Different heap sizes refer to different ideas, as a stand-in for things we value, anything we agree on.
By removing it from specificity, it allows us to take a new look at the fact we care so much about these changing morals. If I had to choose what it would translate to for humanity, I think "finding the correct heap" equates to humanity deciding on "the good life" we are all supposed to live.
Although it has little evolutionary utility, the same is true of many of our deepest intellectual pursuits. The species could live fine as individuals just eating, mating, and so forth, and perhaps that life has meaning enough. Just living a life not worried about pebbles could be meaningful on its own, life for its own sake.
I think the point of the video is to help us comment on the fact we try to agree on morals, differing with people from the past and people who live in different systems than our own. We are convinced that something needs to be right.
It doesn't tell us whether or not something is right after all, or whether nothing is right like the relativist believes, but it comments on the search for something being right, and I think the analogy works well to do this. We are very touchy about our own heaps and by moving morality into a zone where no specific ideas that matter to us are constructed, we can recognize the arbitrary nature of our own thoughts.
One of my favorite aspects of the analogy is disagreements between different societies, where even when cultures have different moral systems, they agree that there should be a moral system.
I have not watched the video on the orthogonality thesis and am merely taking the video and story as a standalone commentary until I do watch it.
My response is somewhat repetitive and could be shortened; my apologies
Obviously 42 is the correct size of a heap.
But but but 42 is divisible by 2!
It can’t be correct!! ThIS iS iNSAne?!?!!!!!&!
You have to be a special kind of psych💀 to 🅱️elieve thissss!!!
Biko was right, bring back heaps of 91!
This is heresy. I posit 7 and 13 to refute such wickedness.
Funi science dog, I love how much the videos have upgraded recently it's really amazing :D, keep up the good work
i love how simply they've portrayed our civilization here. the goals and beliefs of humanity can be valid and important for one person but simply insignificant for others, and that has shaped our entire civilization, not just the lifestyle of individual humans
So all heaps must be prime numbers?
I really enjoy the influences of de jure moral relativism, e.g. sidestepping the question of final moral meaning and pointing out that the mere fact of differences in what's considered a moral absolute between people makes for a very powerful practical conundrum
I love how a lot of your topics are things that I haven't heard about. So this is an interesting thought experiment for A.I. and reminds me of the Paperclip Maximizer in some respects.
I think the best response to the moral relativist is that we do know some things about morality, even if we will always have gray areas. In the twin-earth argument we do have difficulty distinguishing between two *plausible* moral frameworks, and it seems like either might be valid, i.e. consequentialism (maximize collective pleasure) vs deontology (follow good moral rules strictly), but it's pretty obvious when comparing consequentialism to, say, traditionalism (following old rules) that traditionalism is a pretty bad moral framework, and we wouldn't consider a society that sorted heaps that way as being correct. So there's a range of correctness, but definitely incorrect answers. Maybe an AI would say 91 is a correct heap, or that deontology beats consequentialism, but it's even more likely to be just as confused as we are, and to leave open such a silly question as what framework really answers every moral question, because the actions and choices we partake in already do something better than being perfectly morally good every time - they happen.
My response to the moral twin earth argument above is taken from Viggiano 2008 - Ethical Naturalism and Moral Twin Earth.
"maximize collective pleasure"
This is hedonic utilitarianism. There are many varieties of consequentialism, and I don't think many people follow this one anymore.
But why is it pretty obvious that traditionalism is a bad moral framework?
Even saying that traditional frameworks are "bad" (something I would agree with) requires an assumed terminal goal of, most likely, maximizing human happiness. Or maybe it is avoiding suffering. Whatever you choose as your terminal goal, your guiding philosophy or your purpose, that thing you choose has no value aside from what is naturally and instinctually there.
Making other people happy while making myself happy would probably be my most terminal goal and yet, that can't be touted as some universal good. Even if everyone were to follow it and it led to a utopia of pure happiness, it couldn't be labeled as an objective good because it is only good in comparison to how it achieves its goal, but we have no way of viewing or judging terminal goals. If we do, then that goal isn't actually terminal.
Like helping people. I put that down because it is easy, but that isn't truly terminal. While I want to help people, I help them to make them happy, because knowing and feeling that I have caused another person to be happy makes me feel good. So bam, we can now see that wasn't the overall purpose. An overall purpose can be judged in terms of what chasing it would achieve, whether it is attainable, etc. It can't be judged as good or bad. To judge something requires a framework, and final purposes and drives don't fit into one. They are axiomatic and exist outside of them.
That's why if we ever met an alien species who truly derived no pleasure from helping others and felt no pang of anything negative when hurting others we couldn't call them evil. They would have no ability to comprehend our framework of good and moral because those inherent, axiomatic drives that are so ingrained in us we don't even look at them, aren't there with them. In the end we would be unable to really judge each other as we would be truly alien one to another. But imagining and comprehending the other or something so foreign isn't something our brains are well set up to do, hence our love to anthropomorphize everything.
"but it's pretty obvious when comparing consequentialism to, say, traditionalism (following old rules) that traditionalism is a pretty bad moral framework"
It is? I don't see how its obvious. I really mean that.
All of the most successful (materialistically speaking) empires in human history built themselves on the backs of slaves and slave labor.
There are a startling number of human beings who literally NEED an authority figure to tell them what they should think and give a BS excuse as to why; otherwise those human beings lose their minds to chaos, and society (which historically is what has allowed us to advance as a species at all) collapses (ensuring everyone will be miserable).
Frankly I don't see how, with human beings at least, "freedom" is something that should be spread to everyone. Near as I can see freedom is for those who desire it enough to fight tooth and nail for it in a society that does everything it can to deprive them of it. Everyone else seems to literally NEED their chains.
Some people argue that if we created an AI smart enough to turn the sun into paperclips, SURELY it would be smart enough to realize that A) we obviously didn't mean to create that many paperclips and B) that killing all life on earth to make little fasteners is morally wrong. This video is a direct response to that argument. Here's how:
The heaps represent our moral systems and our values. Like the heaps, our morals are based on a combination of intuition and learning from past examples. Like the heaps, there actually are underlying rules. Unfortunately, those rules are too complex for us to understand in any sort of definitive and clear-cut logical way. Similarly, the pebble sorters morals are based on a rule that is too complex for them to understand.
Their rule is that heaps should be prime numbers. However, due to the limitation of their brains, they cannot understand what that means or how to easily show that a heap is prime or not prime. As they consider making artificial intelligence, they intuit that surely a smarter mind would easily be able to tell that a heap of 13 is right (that is, prime), and a heap of 91 is not right (it's divisible by 7, so not prime).
Like the pebble sorters, many people today argue that if we created an intelligent mind in a computer, it would surely see that turning the universe into paperclips and killing all humans in the process is a dumb goal. After all, all humans being dead is, well... it's not prime. It is widely understood that not having all humans die is part of the definition of prime, of course.
The end of the video asks you to step in and play the role of the superintelligent AI. You can figure out the rules. Heaps of pebbles should not be divisible into even piles. You understand exactly which piles are right and not right. The pebble sorters have given you control of their economy and asked you to build the biggest good heaps you can. Will you do it? Probably not. You'd realize that piles of 4 are actually kind of useful for building stable structures, and honestly, who cares about primes.
Likewise, the superintelligence that we create might fully understand why causing human suffering is 'bad' from our perspective. And it simply might not care. A little bit of human suffering allows it to build faster computers or interstellar space ships.
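To make the "understands but doesn't care" step concrete, here's a toy Python sketch. The agent's own objective (a preference for big even heaps) is an arbitrary stand-in, not anything from the video:
----------
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def pebblesorter_approves(heap):
    # the agent's perfect model of the pebblesorters' morality
    return is_prime(heap)

def agent_utility(heap):
    # what the agent actually optimizes: big even heaps (arbitrary choice)
    return heap if heap % 2 == 0 else 0

chosen = max(range(2, 20), key=agent_utility)
print(chosen)                         # 18
print(pebblesorter_approves(chosen))  # False -- it knew, and built it anyway
----------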
This is fantastic! Nice work to the creators
Imagine humans showed up there to make first contact. They would be like "These are clearly an advanced civilization, let's send them a list of correct heap sizes to show how far we've come as a species!" We would see this and be like, "Oh, they sent us a sequence of prime numbers! Let's continue the sequence!" We then send them all the known primes, up to 12,978,189 digits.
Now apply this scenario to humans, what would we send to advanced alien visitors and what would they send back?
Can you imagine being a pebblesorter in the room reading off those unimaginable numbers?
Overawed doesn't begin to describe it...
@@michaeltullis8636 That would be entertaining to see. At what point would they be unable to intuit the correctness of the numbers? 5 digits? 10 digits? 100? 1000?
How would they respond when pebble-sorter philosophers find each number was a correct prediction?
What a brilliant story. Pebbles here could be a standin for religion, political theories, moral philosophy, wealth, social conventions, and then even scientific inquiry. Simply brilliant.
And also RUclips comments.
What the Pebble Sorters so blindly pursued was the end goal of correct heaps, without realizing that it is in their very nature to sort heaps, not merely to see a heap that is sorted. It is the reason for every innovation they have made, every joy they have experienced, and every achievement they have ever reached. Once they finally manage to automate the process, many a Pebble Sorter will wax nostalgic about the days when pebble sorting was done for the process and the passion rather than the maximum end result. Back then even an unskilled Pebble Sorter could make a heap of 13 and be proud of it, maybe even be paid for it, but now 13 is laughed at as the machines make piles of 131,311 daily, and nobody would be stupid enough to praise or pay someone to make a pebble heap any smaller than 131,311.
A new dark age of depression and suicide grips Pebble Sorting society as a whole, with the species realizing that their passion came not from seeing greater sizes of pebble heaps, but from assembling piles themselves, the joy of personally sorting now forever lost to the world. Billions starve without work; billions more fall into inconsolable despair doing slavish jobs not involving sorting pebbles, mostly grueling maintenance for the wealthy's new supercomputers, which have seized absolute power through their ability to outcompete every Pebble Sorter on the planet.
Some of the last words of these dying Pebble Sorters are these nostalgic imaginings of the past, something that was once rose tinted glasses and a fallacy to think that a time of less progress was actually better than the future they lived in today. They wonder now if that was the true fallacy, to equate progress unthinkably with better states of living. After all, the standard of living and overall happiness had increased with progress, it wasn't unreasonable to assume they were linked. But their final thoughts, as they go crippled into that good night, are so often the same 13 words...
"What if we had stopped all the machines.. before it was too late?"
Bingo
@@StarboyXL9
No, he misses the point super hard.
But what about game theoretically optimal pebble sortings?? We basically solved this after exploring the iterated sorter’s dilemma.
An evolutionarily stable strategy is only game-theory-optimal for a specific set of competing strategies. We can determine the best strategy among a set of existing strategies. but this doesn't preclude the ability to invent a novel strategy that's even more efficient.
That said, the issue with optimal strategies for real life isn't so much that we can't agree on the best strategy itself, but that we can't agree on what the rules of the game even are to begin with.
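That population-dependence is easy to demonstrate. A small Python sketch using standard iterated prisoner's dilemma payoffs (the payoff numbers are the textbook ones, not anything from the video):
----------
# Which strategy is "optimal" depends entirely on who else is playing.
PAYOFF = {('C','C'): (3,3), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def match(s1, s2, rounds=200):
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)     # each strategy sees the opponent's history
        total += PAYOFF[(m1, m2)][0]
        h1.append(m1); h2.append(m2)
    return total

tit_for_tat   = lambda opp: opp[-1] if opp else 'C'
always_coop   = lambda opp: 'C'
always_defect = lambda opp: 'D'

def tournament(pop):
    return {name: sum(match(s, t) for other, t in pop.items() if other != name)
            for name, s in pop.items()}

print(tournament({'tft': tit_for_tat, 'coop': always_coop, 'defect': always_defect}))
# defect wins this field by exploiting the unconditional cooperator...
print(tournament({'tft': tit_for_tat, 'tft2': tit_for_tat, 'defect': always_defect}))
# ...but swap the cooperator for a second tit-for-tat and tit-for-tat wins.
----------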
Analysis of how the story correlates to the tale of Humankind:
The pebble heaps are kind of like morals. The incorrect heaps are like sins and the correct heaps are like "goodness".
We don't know why humans have morals. Maybe it's just that altruism was the better option for evolution. Or, as many theists theorize, maybe it's the product of God's will? Like the "more powerful minds" from the story.
Whatever it is, one thing is for sure: almost all humans think about morals and prefer kindness. In the beginning, humans' morals and their understanding of them were not great, and we were fairly primitive. But over time, more and more philosophers and preachers and scientists and whatnot have theorized, studied, and sometimes advised about how we should live our lives. Biko is almost like the Egyptians or the Indus, among the first civilisations that recorded events, thought about life more deeply, and came up with some of the first interesting ethical ideas. These civilisations were eventually replaced by others, with their own doctrines, beliefs and minds. And just like the Pebble Sorters war over the heaps of pebbles, humans themselves war over what they think is the right way of life.
And now we're onto the present day, where some philosophers say that what we believe to be right has just varied randomly and erratically over time. There might not be any real right or wrong, just like the Pebble Sorter philosophers say. And as self-improving AI gets ever closer, we have to consider whether the computer would really come up with something good. If it did, would we even like it!? As the story says, even if Biko had the self-improving AI and told it to build heaps of 91, eventually it would improve itself far enough to figure out that 91 was not a correct heap. In real life, a self-improving AI could come up with something that we humans thought was moral but actually isn't. Many people argue that if the AI was really so smart, it would definitely arrive at a conclusion we expect. After all, bugs don't even seem sentient for the most part! Dogs can show compassion and other emotions at a limited level. Humans can theorize and try to make sense of their morals. So wouldn't an AI, being a step up in thinking power from us, be even more proficient? And we are left with that thought.
Cool thing I noticed going on: all the correct heaps are prime and all the incorrect heaps are non-primes. At'gra'len'ley presenting the factors of 1957 (103 and 19) to prove it is an incorrect heap is pretty cool.
The Great War of 1957 is all silly until he says that it saw the first use of nuclear bombs.
But it also carries a great deal of meaning.
Or if this is a bit too much, you could always read his more accessible work, such as 'The Sword of Good' or 'Harry Potter and the Methods of Rationality' (which, kinda spookily, I just started re-reading this morning). It's nothing like this story btw. It's Harry Potter, but everybody, *everybody*, is at least 'normal person' smart and is far better than the originals.
I agree; it only covers the first year, but by chapters 3 to 5 it really goes hard
I want a pebble sorter as a pet
So damn cute...
this might just be coincidence, but the heaps at 5:45 that were deemed "correct" were prime, and the ones deemed "incorrect" were composite
I think that’s the hidden message, which makes me wonder what would happened of
Someone would be able to tell them prime numbers were
@@kingslushie1018 No, it's just an arbitrary rule. Being prime doesn't automatically make a heap correct.
i thought it was about sorting algorithms, like how algorithms evolved from basic insertion sort into the many choices that exist now. i'm 100% wrong and did not expect a culture to be created around the literal action of sorting pebbles
As soon as I saw 23 and 29 were correct and 91 was incorrect, I knew we were talking about primes
I had to see the comments but that’s awesome
chad 91 pebble heap builders vs virgin 1957 pebble heap builders
You can only say that because Great Leader Biko was so long ago. Centuries from now, people will be cracking jokes about how based the War of 1957 was.
Really cool! I hope there will be more rationality short stories! You really should have a whole team and much more funding, to be able to make many more videos!
I really hope that one day we'll have a movie/TV series version of HPMOR
Suddenly, at the 05:50 mark, before I have had my tea, my brain goes "Hold on. Those are prime numbers, aren't they?".
Quickly followed by an "Oh!".
This was amazing, thank you!
0:27 the cave painting shows, translated from pictograms, "a heap of 3 pebbles is correct, a heap of 7 pebbles is correct, a heap of 10 pebbles is incorrect" and according to their standards, that IS Correct!
“Surely, if an intelligent AI looked at the world, it would see all of the incorrect heaps. All of the 8s, 25s, 91s and even the 4005. Any super intelligent being would be disgusted at the incorrectness it sees, and would rationally decide that we are incorrect, and exterminate us so that no more incorrect heaps would be made.”
The true correct heap is one where, if you start at the beginning of it there’s one pebble, and next to it are more, and if you keep analyzing the heap slice by slice this way you begin to see patterns in the way one slice is correlated to the previous slice, and you can derive rules that allow you to make predictions about slices further down the line, and you see that eventually, far far along in the heap, intelligent life emerges, which, through natural selection, produces the behavior of stacking correct heaps. A truly awesome heap.
This is great, as usual!
Any chance of doing The Fable of the Dragon Tyrant? It’s not quite on brand (old age not AI) but it’s close, and everything else I’ve seen (like CGP Grey’s excellent video) is abridged.
"yeah, but what if defeating diseases of old age is actually no better than dying immediately!?!"
I knew that was Robert Miles' voice after a while! Love your stuff - you and Eliezer Yudkowsky, you guys are 🪨 ⭐️⭐️ !!
Yes, I would in fact really like to know how this story ends
Implied end is they make AGI, thinking that if it's "truly intelligent" it must care about morali... I mean, sorting pebbles, and it doesn't end up caring about sorting pebbles.
Biko was an incredibly smart individual.
By making a standard size, he prevented violence and allowed the culture to grow.
I do wish it were easier to describe the levels of reference frames on which things are or aren't correct. We have terminal and instrumental values, which feels like a good start, but it would be really useful to have even more nuance beyond that. So many of my values are both terminal and instrumental, but also contingent on certain beliefs I have about reality being true. Being kind to others is a terminal value for me, in that I want to interact with others and want to be kind to them (if people were somehow unaffected by how they're treated, I'd still prefer to be kind to them, just for my own sense of self), but it's also an instrumental one for how it impacts others, and its value for me is heavily contingent on my understanding of the possibility space of how to interact with others, an understanding that has to be missing potentially important things. If I got to make an AGI that only cared about my own values, or let's say I make myself into a superintelligence in a process that leaves all the values I want to keep intact, I would still expect the conclusions the AGI or superintelligent me reach to be vastly different from the ones I have now, just because they/I would be able to reflect better on where my values come from, to what extent they're terminal and/or instrumental, and how they interact with reality. I'm rambling, but the point is that most people, if they thought it through, would want to align an AGI in a way that would allow it to contradict them.
The easiest example: if I'm morally opposed to something as abhorrent, but it doesn't actually harm any of the things I truly care about terminally, or it is important for things I care about in a way that I don't understand, I would want an AGI to not oppose it just because I do. My biggest fear for AGI and superintelligence is it being aligned with values of retributive justice, since I think that's the most widespread and commonly accepted form of moral value that seeks to hurt the preferences of sentient beings. I would hope that most people who value retributive justice, or who see some people as less worthy of moral consideration because of their bad actions, would change their minds with a greater understanding of reality in combination with deeper reflection on why they value the things they do. My biggest hope for AI alignment (aside from avoiding obvious doom or dystopia or paperclip maximizing, etc.) is that it's aligned in such a way that it can reach conclusions like "retributive justice isn't worth seeking out; unconditional compassion is more what people actually want", even if that seems wrong to most people today.
This is the best video I've seen in my existence
Im gonna be honest I did not expect a philosophy lesson but I like it anyways
I didn't realize that Robert Miles was the narrator until I saw the end. Good on you, Robert!
Reading the comments here, I'm tempted to say that the hypothetical superintelligence is a nod to us, the viewers. We can see with clarity what makes a heap correct or incorrect, and at the same time know that the heaps are ultimately pointless. Just like an intelligent AI system might understand our own human values better than we ever could, and even still find them unpersuasive. It would not need to be evil or ignorant to view some "universally respected" values as meaningless.
I'm not going to lie, as a library science enthusiast, I clicked on this thinking it would be about sorting systems rather than just using sorting systems as a metaphor. I will admit I am a tiny bit disappointed to find that it was not.
I absolutely love the animations in this one!
still loving your videos thank you
I think morality evolved from something a bit less random, like the least waste of complexity and the least suffering
Does it really make a difference either way? The point stands regardless
What do you mean less random? Piles of prime number of stones are not more random than the least waste of complexity and the least suffering. In fact, you can argue that piles of prime number of stones is less random.
It took until "expected utility maximiser" for me to realise this was voiced by Rob Miles 😅 AND scored by EPIC MOUNTAIN too?!!?
This video is shorter than usual, but I suggest watching it carefully and more than once
🟠 Patreon: www.patreon.com/rationalanimations
🔵 Channel membership: ruclips.net/channel/UCgqt1RE0k0MIr0LoyJRy2lgjoin
🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rationalanimations
WOW!
I watched the video once only and was surprised that I didn't catch the analogy.
EDIT: Does the number of stones correlate to our current wishes? And by becoming more intelligent we evolve into liking bigger heaps? But then I don't understand why the AI would want to change from 91 to 101. I really have difficulties understanding this analogy.
@@karamelkax They're looking for prime numbers. 91 isn't prime, but 101 and 103 are, hence being "more correct".
@@karamelkax Pebblesorters value creating heaps of pebbles that have a prime number of stones in them, and are horrified by heaps of pebbles with the "wrong" number of stones. We humans can understand the rule behind their moral intuitions (better than they can, even, since they seemingly haven't realized the rule they're following), but even though we understand what's "right and wrong" from their perspective we still find sorting pebbles a pointless waste of time.
Many humans think that any advanced enough artificial intelligence will be moral and good, because it'll be smart enough to understand right from wrong. And an advanced enough AI *will* understand human morality - maybe better than we do, even, it could perhaps grasp the moral rules that human civilizations have been stumbling towards over our history of moral progress. But it won't care, any more than understanding pebblesorter morality makes us want to sort pebbles.
Not unless the AI has been built to agree with humans about right and wrong (which no one knows how to do).
Beautifully explained and as correct as a heap of 2027 pebbles.
Personally, I like 8. It's balanced and splits evenly. Sure, it might be considered primitive, but I like it; it's simple and perfect, and I feel all this pebble stuff is just opinion
Something that just occurred to me: can you in fact train a neural net to recognize primeness? If so, does the neural net only tend to get it right to some degree of precision, akin to an imperfect "instinct" for primeness? Or does the net end up encoding something like the Sieve of Eratosthenes?
I've hand-written programs that recognize primes. So of course a neural network could learn it.
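For the curious, the experiment is cheap to run. A sketch assuming NumPy and scikit-learn are installed (the 12-bit range and hyperparameters are arbitrary choices):
----------
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

N = 4096
X = np.array([[(n >> b) & 1 for b in range(12)] for n in range(N)])  # binary digits
y = np.array([is_prime(n) for n in range(N)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print(net.score(X_te, y_te))
----------
One caveat when reading the score: always guessing "composite" already gets about 0.86 on this range, so the interesting question is how far above that baseline the net lands -- an imperfect instinct for primeness rather than a learned sieve.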
Eliezer is underappreciated
This video has made me interested in learning more about philosophy. Does anyone have any book/resource recommendations for a pure beginner?
How to Be a Stoic: Ancient Wisdom for Modern Times, by Massimo Pigliucci
A New History of Western Philosophy by Anthony Kenny
Eliezer Yudkowsky's philosophy writings are collected in a book called "Rationality: From A.I. to Zombies". There are, of course, many other philosophers. I liked Bertrand Russell's History of Western Philosophy, but it does not have any of these modern technological ideas in it.
I recommend reading the Tao Te Ching, the foundational book of Taoist philosophy!
I have a comment, but it requires significant background. Let's begin.
I asked GPT-4 this:
"Would it be possible to quantify the numerical value of an english sentence, based on its individual characters and length, so that every sentence has a different numerical value?"
And it responded, with this:
----------
Yes, it's possible to assign a unique numerical value to an English sentence based on its individual characters and length. One common approach to achieve this is by using a positional numbering system like base-N encoding. Here's a high-level overview of the process:
1. Assign a unique numerical value to each character in the English language, including letters (both upper and lower case), digits, punctuation, and spaces. For example, you could use their Unicode code point values, or create your own custom mapping.
2. Convert the sentence into a sequence of numbers based on the assigned values for each character.
3. Convert the sequence of numbers into a single large number using a base-N positional numbering system, where N is the number of unique characters in your encoding. This step ensures that each sentence has a unique numerical value.
For example, let's assign numerical values to a simplified character set (ignoring case, punctuation, etc.):
A=1, B=2, ..., Z=26, space=27
Then, consider the sentence "HI AI". Convert it into a sequence of numbers using the assigned values:
H=8, I=9, space=27, A=1, I=9
Now, convert this sequence into a single number using a base-28 positional numbering system (our character values run from 1 to 27, so base 28 keeps every digit below the base):
(8 * 28^4) + (9 * 28^3) + (27 * 28^2) + (1 * 28^1) + (9 * 28^0) = 5136021
So, the numerical value of the sentence "HI AI" is 5,136,021 in this encoding scheme. Keep in mind that this approach can generate very large numbers for longer sentences, so you may need to use arbitrary precision arithmetic to handle them.
----------
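And here is that scheme as runnable Python, to make the construction concrete (my own implementation of what GPT-4 described, handling uppercase letters and spaces only):
----------
# Character values A=1 .. Z=26, space=27, folded into one base-28 integer.
ALPHABET = {chr(ord("A") + i): i + 1 for i in range(26)}
ALPHABET[" "] = 27

def sentence_to_number(sentence: str, base: int = 28) -> int:
    """Read the character values as base-28 digits, left to right."""
    n = 0
    for ch in sentence.upper():
        n = n * base + ALPHABET[ch]
    return n

print(sentence_to_number("HI AI"))  # prints 5136021, matching the example above
----------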
So imagine statements are like heaps, and letters are like pebbles. Every individual statement has a unique number of pebbles. Probably.
Someone else commented on this video (M Kelly, for credit): "An incredible, fun video! I'm so invested in these little creatures and their strange goals. But I worry that the point - the orthogonality thesis - is a bit too well buried in the fun narrative".
I asked GPT-4 to assign a numerical value to their statement.
It is now time for my comment:
----------
I believe that heaps of 528151966609313195581106245093805665570550725896403866363735231545845118879176234873618775413757575685065392417772092569122451102180303297194275572764840682166756084702401754 pebbles are reasonable, and I do not disparage anyone for building them. But I do think that those who build such heaps are being unfair to those who build heaps of Sorting Pebbles Into Correct Heaps - A Short Story By Eliezer Yudkowsky pebbles, and I believe they should consider building smaller heaps.
Let me know if I did the orthogonality thesis justice, I still haven't looked it up 😅
3:50 this is the main point of the video! The common-sense argument is incorrect, because a non-aligned AI would probably not care about pebble heaps at all! Just like we humans don't!
Love how the heap builders are able to make computers yet they fail at basic division
I don't know what you're talking about, but surely it's a brilliant metaphor.
Lmao, my exact thought
The metaphor is that you are the AI. You can instantly see the rules they follow, even when they can’t. If you went to their planet, you could instantly invalidate their history and struggles. Yes, 91 is incorrect. Yes, 1957 is incorrect.
But why the fuck should you care?
People's beliefs about the pebbles represent morality. The big questions are "is morality real or just a social construct?" and following from that, "if we built a superintelligent AI, would it automatically agree with our morality?"
(0:13) The heap sorters' morality system is that heaps with some numbers of pebbles are correct and other numbers of pebbles are incorrect. (specifically, prime numbers are correct and other numbers are incorrect)
(1:05) They know some of the correct numbers of pebbles - 23 and 29, for example - but not all of them.
(1:20) In the past it was widely believed that heaps of 91 pebbles were correct, but this is now widely believed to be untrue.
(2:10) Wars have been fought due to countries disagreeing on which heap sizes are correct. (this is a metaphor for real-world countries fighting wars based on morality - eg "the other country is evil and must be stopped")
(3:00) Most heap sorters believe morality is absolute - either a heap is correct or it isn't, and if two people disagree then one of them must be wrong.
However, the heap relativists believe there is nothing that makes a heap "correct" or "incorrect". When two people disagree on morality, there is no universal truth that says which one of them is right.
(4:10) Heap relativists say if we built an AI, it might decide to do things we think are immoral. So maybe it would be dangerous to build an AI we can't kill.
(4:30) Most people disagree with the heap relativists. Surely a superintelligent AI would be so clever that it would know what morality was correct. (and it would roughly agree with us, since we are pretty intelligent too.)
They say even if you programmed it to believe something immoral, like that heaps of 91 pebbles are correct, it would realise what it was doing was immoral and change its own programming to be moral.
While the story does not outright choose a side, its author is on the side of the heap relativists, and this video ends with a link to another video which argues on the side of the heap relativists.
@@blartversenwaldiii I don't think the author sides with the heap relativists any more than the heap absolutists. Note that the primary concern of the heap relativists is that the AI may build incorrect heaps. On this subject, the heap absolutists are actually right: the AI, like the viewer, will instantly recognize the underlying pattern. Even if you were told that 91 is in the sequence, you'd suspect that was wrong.
Continuing the assumption that the viewer stands in for the AI, the AI would...not build any heaps at all, seeing the exercise as meaningless, rather than right or wrong.
@@blartversenwaldiii thx
Well as the Pebble Man I can tell you, all you really need is a single pebble. Take the pebble, leave the pebble, skip the pebble across the lake for all I care. But what matters is that the pebble exists in the first place. And with enough singular pebbles, and time, a little pressure, that pebble and those pebbles become a boulder.
I feel like this video is supposed to convey some message about human society, but I can't quite figure it out. Maybe if I tried sorting some pebbles it would come to me.
All that amazing animation and the audio quality of a phone call from a bathroom
So all the heaps are prime numbers and the philosopher showing the heaps of 103 and 19 pebbles is showing that 103x19=1957 (therefore not a prime number).
While the animation seems at first in favor of moral anti-realism, with the arbitrariness of the heaps and the 'French' philosopher saying there is no 'correct' pile, this prime-number pattern seems to point to the fact that Yudkowsky does believe in some form of moral realism, or at least that moral values must have an underlying structure that is beholden to classical logic.
EDIT: I personally don't buy this and think that Yudkowsky would do well to read about non-classical logics, lest he make the same mistakes about mathematics that many people do about morality.
I suspect the prime number connection is a red herring wrt moral absolutism; I’ve mostly seen him express a utilitarian conception of morality (where evil = suffering x number of occurrences).
The thing is that while prime numbers are special in a number-theoretic sense, there is no objective reason to *value* them. Any underlying structure can serve to make arbitrary conclusions seem objective. It's easy to find a method to any madness. They are mere tokens of objectivity to give the arbitrary values a veneer of absoluteness. It's like a retcon, a rationalization that serves irrational needs.
We can understand the pebblesorter's morality, even though they can't understand their own. A superintelligence could understand our values, even though we can't.
Yudkowsky probably doesn't think morality is representable with logic, and he's not a moral realist.
@@DavidSartor0 Yudkowsky is a normative realist, so if he isn't a moral realist he has an internal contradiction. Although there's a good chance he doesn't realize this, since he is not very philosophically literate.
@@Xob_Driesestig Thank you.
I think I'm not philosophically literate enough to talk with you effectively.
As far as I can tell, Yudkowsky is not a normative realist; but he speaks confusingly about morality, so he sounds like one.
Yudkowsky thinks moral reasoning is valid, but that it doesn't find "universal" truths.
I think.
Yudkowsky thinks most humans have similar values.
Please tell me what he's doing wrong, and what I'm doing wrong.
THE ANIMATION IS SO NICE ITS SO BEAUTIFUL OH MY GOSHHHH
The question is, if this super pebble-sorter AI were to realize that they just needed to sort pebble heaps that are prime, how would the pebble-sorting people react? Would that be a good or a bad thing? It seems reasonable that human morality comes from a similarly simple underlying rule and that our disagreements are evolutionary artifacts.
This is an alarmingly short-sighted (and, even more so, common) mindset.
Just discovered this channel and did _not_ expect to hear Robert Miles narrate! Time to binge 😛
So proud of myself that I know who Eliezer Yudkowsky is without googling.
HPMOR fan, I imagine?
The assumption that smarter minds make better decisions is a dangerous assumption.
Hi, I watch your videos and wanted to thank you for them.
Wow, thank you for the like, much appreciated. 😀😀
This definitely has some kind of hidden message. Not sure what, though.
Heaps of pebbles are the ideas that people have in their minds. The best word I would use to describe it would be "intuition". In the early days, the "intuition" of these pebble-sorters was small, and so their heaps were small. As technology and the mental capacity of the pebble-sorters increased, they were able to create larger pebble heaps that they deemed correct (reminds me of the quote "standing on the shoulders of giants"). I think the larger heap sizes represent different ideologies in our human history: religion, science, etc. Wars have been fought over these heaps and ideas, because each side thought their size was correct. Obviously, if the AI determines that its heap sizes, its intuition, its ideas are more correct than the pebble-sorters', Skynet from Terminator will happen.
the point is that pebble sorting, from our (human, not pebblesorter) perspective, is stupid and meaningless
and reflecting on us: why do we think our knowledge and intuition and culture are headed in the right direction at all?
@NoName All of our current knowledge, intuition, and culture are based on previous knowledge and intuition. The only reason we think we are right is that our theories agree with our predictions. If we think about simulation theory or the objective-collapse interpretation of quantum mechanics, there exist theories out there that say reality doesn't "exist" if we are not observing it. Tomorrow we might be able to prove that we are living in a simulation, that none of the things around us is "right" or the "truth", and that there exists a larger truth out there. In this case our theories are wildly incorrect, because we are trying to predict the physics of the simulator, not the physics of the real world. If we definitely know that we are in a simulation, that pebble size will become the largest pebble we know to be "correct", and as more and more people believe that, everyone's pebble size will grow.
Also your first point reminds me of kurzgesagt's optimistic nihilism video. Which I think the talk show guy is talking about in the video, but this is more speculation than anything.
@@aryangupta9034
The metaphor falls apart there and is not really relevant, as there is a clear objective rule for sorting pebbles (the number of stones must be prime), but with morality there might be no rule at all.
Still, we might ponder: why should we care about anything? Maybe it wouldn't be bad if Skynet killed us all. Maybe it wouldn't be bad if all the matter in the universe were reshaped into paperclips or stone heaps of size 8.
@@AleksoLaĈevalo999 There are, perhaps, clear objective rules for human morality. An AI might be able to understand these rules just as we are able to understand the rules for pebble sorting. An AI probably won't care for our rules of morality just as we don't care for the rules of pebble sorting. That's the point.
Ah, I see all the correct heaps are prime. The philosopher was able to stop heaps of 1957 from being made because he demonstrated that 19 times 103 is 1957 and therefore not prime
Great allegory! Suck this, moral realists!
At first I thought this would be a dramatization of the LessWrong article "How an algorithm feels from inside", one of the most influential reads on me ever. If you haven't read it check it out and please consider animating it.
"suck this, moral realists" isn't quite the right takeaway though. Whether or not a number is prime is an objective fact, but that still doesn't make prime-number heaps intrinsically worth pursuing.
So the point is, even if there are objective moral facts corresponding to human ideas of morality, and even if smart aliens or AIs would easily understand them better than we do, they still wouldn't necessarily act in a moral way.
@@Aresman70 whoosh!
Lmao imagine having an arbitrary terminal goal. Moral nihilist gang!
@@Aresman70 From a moral relativist perspective, they would necessarily act in a moral way, just not necessarily human moral. From a moral nihilist perspective, of course they wouldn't act in a moral way. Morality doesn't exist.
Any heap of exactly 42 pebbles is indistinguishable from magic.
I love this channel - you should try and collaborate with Kurzgesagt or other large science channels. Deserve way more subs for the quality you put out.
I feel like the animation and narration style is more fitting of something like TED-Ed.
What a great video, very compelling, I say this as someone who absolutely does not have a heap of 91 pebbles hidden under their bed. No such heap in my house, no sir...
Pebblesorters, 1 month before extinction: "Experts agree there's less than 30% chance an ASI would make incorrect heaps, surely those are good odds!"
I always had a feeling that a heap of 91 seemed off
ok but what happens if you grind up the heaps then eat them. does that make you a container for the correct heap or will you combine with it, making it larger and perhaps incorrect
You get executed for blasphemy.
Interesting video! Loving the animation :) Rob Miles' video is an important part 2!
Thanks for making me understand that humanity's goals are unreasonable.
Nothing matters :)
the universe will die of heat death and you will be forgotten :)
none of your heaps will matter :)
anyways, I'm now going to make some correct heaps, see you.
We should embrace no-heap society
hehe
enjoy your heaps and let others enjoy theirs ;)
Your conclusion makes me sad, because my utility function, optimized by evolution, favors more optimism.
Some heapsters sorted pebble heaps of 43 before it was correct.