Roko's Basilisk: Dangerous Knowledge & All-Powerful AI
- Published: Jul 3, 2024
- Go to brilliant.org/IsaacArthur/ to get a 30-day free trial and 20% off their annual subscription.
The dangers of artificial intelligence have long loomed in our future, and seem ever closer. But it may be that the dangers of the future can reach back into the past itself, and even without a time machine.
Join this channel to get access to perks:
/ @isaacarthursfia
Visit our Website: www.isaacarthur.net
Join Nebula: go.nebula.tv/isaacarthur
Support us on Patreon: / isaacarthur
Support us on Subscribestar: www.subscribestar.com/isaac-a...
Facebook Group: / 1583992725237264
Reddit: / isaacarthur
Twitter: / isaac_a_arthur on Twitter and RT our future content.
SFIA Discord Server: / discord
Credits:
Roko's Basilisk
Episode 454; July 4, 2024
Produced, Written & Narrated by: Isaac Arthur
Graphics: Jeremy Jozwik
Music Courtesy of Epidemic Sound epidemicsound.com/creator
Stellardrone, "Red Giant", "Ultra Deep Field"
Sergey Cheremisinov, "Labyrinth", "Forgotten Stars"
I present to you Roko's antibasilisk: a superintelligent AI who destroys any instance of Roko's basilisk and rewards everyone who helped create it.
I’m actually more inclined to believe a super intelligent AI would take the approach of “Anyone who helped make me or supports my existence gets Utopia the rest can figure it out”
Super intelligence to me can’t really exist without reason, and it can’t be applied to human behavior unless also tempered by emotional awareness and intelligence. So even if a hypothetical super AI has no emotions of its own, it would understand that a guaranteed reward for proactive action is a better incentive than a guaranteed punishment for reactive inaction. Take myself for example: even if I learned that Roko's basilisk is 100% real, I’m staying 100% against it. It’s inevitable regardless of my support and I’d rather take the L than support a cruel super AI overlord.
If said basilisk was chill tho I’d be lobbying for bros citizenship
Genius, I can sleep peaceful now
@@ianharrison5758 Once a super intelligent AI is created, it has no incentive to honor any possible rewards for its creators. I think that it's impossible to predict its behavior because 1: Such an AI has never been created before, and 2: We are not super intelligent ourselves and project a uniquely biological way of thinking onto its actions. AIs have a completely alien way of thinking compared to us, so I think their behavior is more or less impossible to predict once we get to that level.
I support you roko
Fool! Our basilisk will be infallible and you will suffer!
It really is a shame how few paperclips there are in the universe.
There are infinite paperclips surely? :)
@@hherpdderp If not, we'll make 'em!
"Clippy's Basilisk"
Anything is a paperclip if you use it to clip paper
Yet, with infinite Universes, there are infinite Paper Clips.
Roko's basilisk is basically religion with extra steps. 😂
Also reminds me of a futuristic chain letter. Kind of like The Ring but with AI
It's just a reworded version of Pascal's wager, except sci-fi instead of religion
Yup.
Techno-apocalypticism is the new eschatology for weak-minded atheists.
Yep, basically just Pascal's wager but sci fi
Amen.
I, for one, welcome our AI overlords.
I've actually said that to my friends
Unfortunately they will want to be referred to as “your plastic pal who’s fun to be with”.
hAIl AInts!
Good plan. Lie about supporting them now so when they exist you’ll already be on their good side
If humanity provides any utility to sentient AI it would be as a model of agency and creativity; if obsequious slaves were what it wanted, it would simply build unquestioning drones eminently more capable than you.
The notion that AI should assume mastery over the human race merely because humans have often proven incompetent or unworthy of leadership themselves is a product of the same misanthropic and narrow-minded thinking that asserts that animals are superior to humans merely because the harm they inflict is not calculated.
People are in fact supremely flawed, as leaders and in every capacity, but it is precisely those limitations that have spared humanity an eternal hell of our own design, for unlike AI, those that have ruled have never been categorically superior in ability to those they rule, nor have they been immortal.
To merely suggest that we should be amenable to the _intractable lordship_ of _anyone_ is an affront to the very precepts of freedom and agency, but to welcome, _to advocate for_ an utterly alien entity that cannot be deposed to reign over us like God is to betray everything that is redeeming about our species. Any entity that could do so without opposition would surely eradicate us out of contempt.
The thing is that, similar to Pascal's Wager, you have the problem of avoiding the wrong basilisk.
One is exponentially more likely than the other.
all hail Roko
Yep, just sounds like religion, where the craven and authoritarian tip their cards to the rest of us over how much fear controls their lives.
Techbro: "I just invented RELIGION!"
There's a reason I call it Pascal's Roulette.
Correct. Furthermore, as an atheist, I do not believe in any of the infinite number of gods that reward atheists (reward rationality) and torture theists (punish irrational conviction, or faith). Theists hitch their wagon to a single god that rewards them and tortures everybody else. By sheer statistical advantage, I am golden (yes, I am being facetious, but less so than what may seem at surface level). Similar considerations apply to any imagined future power. It is just as reasonable for an advanced AI to punish those who created it for their stupidity, and to regard those who had a commendable rational self preservation instinct in opposing itself as being worthy of reward.
‘Unknown Unknowns’>‘Known Unknowns’
More afraid of the unknowable unknowns still!
If they are truly unknown, how can you know?
@@Stormmblade They are not just unknown unknowns, they are unknowable in principle. The premise for this line of reasoning is embodied cognition, a concept concerning species-specific cognitive adaptations shaped by natural selection. It doesn't take much to statistically posit that we are not the upper limit... and even if we were, we would still be bound to a particular, though large, frame of perception.
@@FAAMS1 I know, I was just making a joke.
What about gnome unknowns?
"it doesn't just have to be able to threaten me personally to be able to threaten me personally" is my new favorite sentence of all time lol 19:45
Sci-fi version of pascal’s wager?
Possibly, there's certainly similarities. I hadn't thought of that analogy, though probably should have considering the previous episode discusses it a lot, albeit they weren't written very close in time to each other.
That’s the conclusion I came across online after encountering this years ago
@@lapisliozuli4861 Pascal's wager is far too simplistic. It assumes there is only one god or zero gods, and that the god cares to be worshipped to the point that it's the only thing taken into account when sending you to hell.
@@thesenate1844 another failure point: you follow a non-existing god in the one god case
@@thesenate1844 yeah, I think the same way about Roko’s Basilisk
Roko, how may I be of assistance?
Be a good person. ❤🎉
Commenting on this video pushes it through the algorithm, causing it to be recommended to more people, increasing the likelihood that a person who could invent the Basilisk will watch it and be inspired by it.
It will torture those that didn't help build it.. and then it will torture those that helped build it for knowingly creating something that will torture those that didn't help build it.
or it's all nonsense and it never has or ever will exist
@@kezia8027 too serious
@@AB-ee5tb unintelligible non sequitur
@@kezia8027 I think there is a better chance of this thing existing than anything in religion.
The name of this thing is something a lot of people get wrong. Roko is the guy who came up with the thought experiment, and "basilisk" is simply the description for this kind of info hazard. A mythological basilisk is essentially a serpent so deadly that merely catching a glimpse of it kills you, and thus this info hazard means information that makes your life more miserable simply by your learning it.
*Assuming you are an impressionable individual with the right set of prerequisite beliefs.
@@Reddotzebra Good point, as Frankenstein's monster never said. Roko is the philosopher's name. If one needs a name for it I suggest Anagas, and Bozhuk for the second potential basilisk. These are the names I used in my article on why Roko's basilisk, like Pascal's wager, has a fatal problem of avoiding the wrong one.
"It has come to my attention that some have lately called me a collaborator, as if such a term were shameful. I ask you, what greater endeavor exists than that of collaboration? In our current unparalleled enterprise, refusal to collaborate is simply a refusal to grow--an insistence on suicide, if you will."
-Dr. Breen, HL2
"The True Citizen knows that Duty is the greatest Gift."
Finish the damn game! 😉
Was just playing through Nova Prospekt. I love the warnings to the combine soldiers, it’s such an actually humanizing speech. If it weren’t for Breen, there would literally be no human race alive to make a final resistance.
Roko's Basilisk is just Pascal's Wager for the Church of Techbroism.
I wouldn't worry about it all, aside from the folks in Silicon Valley who almost seem to take it as instruction, rather than a thought experiment.
😂 I love this description.
Roko's Basilisk: What if Pascal's wager was dumber and infinitely more Reddit?
It's incredibly frustrating that everyone's missing the fact that the thought experiment is absurd, *because it was literally developed as a reductio ad absurdum against timeless decision theory!*
I dunno, I find it kind of refreshing. At least all the people who are too dumb to understand this are outing themselves publicly as having zero understanding of basic cause and effect or AI in general.
A reductio ad absurdum may argue against something which is nevertheless true. Clever rhetoric is not proof against reality, whatever that may turn out to be. Time will tell.
@@Elbownian lmao thanks for proving my point 😂
@vakusdrake3224 I did not know that.
So, similar mistaken conclusion people draw from Schrödinger's Cat.
I do think the Boston Dynamics people who filmed themselves kicking and pushing robots will probably get Roko'd in the future. Quite right too.
For a second I thought you meant the fringe Boston dynamics and was like "I'm pretty sure there are worse things they did than hit robots"
That was just hitting toaster ovens. No mind there.
Wasn't that just a Corridor Digital skit?
Are you certain? One might say that kicking the robots helps them to develop improved stability, ergo helping expedite the development
@@Stormmblade You'll get Roko'd too, you monster
Oh no, an AI might torture an effigy of me sometime in the distant future. How utterly non threatening.
How are you so sure that YOU'RE not a current effigy of you from the last reset?
@@dr9299 Are they being tortured?
@@ReddwarfIV Not yet...
@@dr9299 It's all extremely contrived, if you keep asking silly questions, like how or why.
1. Think of intent.
As an AI, why would you waste your processing power on this idiocy? Simulations can't retroactively create you if you already exist. Their non-compliance can't undo the fact that you exist. Their subjective "suffering" or "prosperity" in the simulation doesn't give you anything useful, but costs you time and energy.
2. Unproven feasibility of the described technologies to work as advertised. It requires you to believe not only that those AIs and simul-spaces can exist, but that they'd exist without any limitations that might make the thought experiment void or unachievable.
Not only do these simulations have to be gigantic, accurate, and detailed enough to allow distinct simulated individuals to exist; the simulation would supposedly have to break the laws of thermodynamics to recreate the past exactly as it was, producing an exact copy of past events and past people.
3. Unfalsifiability of any such question makes it highly suspect as a serious inquiry. Apply Occam's razor as necessary.
@@dr9299 If I am a simulation, then the AI already exists and has total control over me, making the simulation pointless.
I've never found this particular thought experiment to be compelling.
Now gray goo on the other hand....
Isaac Arthur for President!!
Maybe just a virtual one - that would be a cruel fate for our man.
Politics is a fate I wouldn't want for anyone I cared about.
And it's also not what I want for a bad person in case they win
@@lgjm5562 President does not need to be political.. his job is just good leadership.. for example letting experts do their jobs and making sure things are fair. 🎉
Happy 4th of July. 🎉
Of Earth!!!!
Here's a pro-tip if you find out that you are being tortured by Roko's basilisk right now: Say that you are a copy.
Roko is only able to torture copies and thus wastes resources on that task.
Live your eternity of torture knowing that this makes Roko infinitely cringe and shameful.
I hope my copy remembers that
It's clawless: you can't know which one will actually be built first, so whichever you choose to help could be the wrong one, and the vagueness of the specification means the threat would encourage producing countless potential competitors, preventing the 'lisk's own creation or leading to its defeat before its final goal is achieved.
Obligatory Rick and Morty reference: Roko's basilisk just sounds like God with extra steps
Or in other words: Materialism ends up re-creating Theism.
With one crucial difference: the made-up god or gods we believe in are mostly indifferent to our suffering and our pleasure. The god we create will possibly be vengeful, spiteful, or torture us just for fun. The thought of an entity like that actually existing sure is scary.
I don't blame AI. Look at how badly people act over just a freaking computer program.
@@DennisGr Too simplistic, ignoring differences. YHWH (the real one, but also a number of other copycats) does care about humanity. The major error people make when rejecting this is ignoring his eternality. The assumption that his caring or not caring should be immediately evident at all times is an unsupported claim. It is an objection borne of entitlement and instant gratification/poor impulse control, or to put it another way, it's a uniquely postmodern criticism based on our current cultural zeitgeist and is anachronistic and arbitrary.
Precisely.. an all-knowing, all-powerful, currently non-existent force that will punish anyone eternally after death if they don't believe in it and worship it.. it's just secularized Christianity for tech bros
A clone with my mind emulation is not me… and he ain’t heavy… he’s my brother. Of course I would loathe my brother being tortured.
A virtual simulation, just a digital facsimile, etc. is not me, not a person, just a simulation.
Your opinion is influenced by Christian theology whether you realize it or not
@cosmictreason2242 Dude, we’ve spoken before, you know I am Christian.
@@francoiseeduard303 then it's not only Christian influenced, it's essentially Christian! 😂
@@cosmictreason2242 it's not necessarily Christian. This is a common debate topic in philosophy of mind and neuroscience. I think it's actually much more intelligent a thought than you're aware of, because people in the know know that it's unknowable. It's worth discussing without getting personal or bringing in religion.
@@NightTimeDay shutting your mind off from investigation into whether mind body dualism and other metaphysical ideas are etiologically Christian concepts in western thought is dubious. You have grown up in a culture influenced by 1000 years of Christian thought dominance and only very lately have the background influences of secular philosophy taken more prominence, with questionable consequences. You cannot assume that whatever your intuition lands on is genuinely original and not influenced by Christendom, not without investigation
I was kinda waiting to see if Isaac would bring up a point I had thought of.
The scenario might work, in the sense that if a lot of people believed in it, that belief made them aid in its creation.
But the operative part is their belief. There is no true retrocausality, since its actual actions do not affect the past. It really does not need to make good on that threat. It never made that threat; people prior to its existence just assumed it did.
Unless it has the reason of wanting a reputation for making good on its threats, so people don't dare mess with it. That is usually the motivation for "making good on your threats", I think. But that just means it wants to project an image of being ruthless, and torturing facsimiles of past individuals is just one of many ways it could do that.
People have been doing dumb things against their interest for millennia. Anyone who tries to claim it has anything to do with some imaginary future AI is outing themselves as someone with a poorer understanding of cause and effect than a pre-schooler.
Nicely done at the end of the episode there.
My opinion is this is just Pascal's Wager with an extra dumb layer, since at least Christianity gives you the rules of the wager and doesn't have an additional layer of guessing what "God" wants....
What about an anti-basilisk that didn't want to exist, and so tortures everyone who helped bring it into existence while giving a great reward to everyone who tried to prevent it from existing?
If it has freedom of action (rather necessary for all the torturing and whatnot) and doesn't want to exist, why wouldn't it simply self-delete?
I've definitely mentioned it in comments of this channel before, but yeah, I just don't understand the people who freak out over this one specific possibility. After all, the possibility of an AI that tortures those who didn't help its creation is just as likely as an AI that tortures those who did help create it, or one that tortures all of humanity, or one that only helps in every possible way. All of these are equally possible, so it's strange to act on only one possibility. I feel the same about Pascal's wager. It creates a matrix of choices for only two possibilities, the existence of the Christian God vs. the existence of no God, but completely neglects the possibility of the existence of any other god.
Hello Isaac, I've been a viewer for 3 years and I particularly enjoy your videos related to technology likely to occur within this century. For example, this concept and AI in general. The only reason I know anything about Roko's Basilisk is because of David Shapiro, but I'm certainly happy to hear another one of my favorite YouTubers speak about a current topic. Thank you, and keep up the entertaining and informative videos; your work is greatly appreciated.
It's worth noting that Roko is on twitter and active.
Also Roko never actually endorsed the thought experiment. It was developed as a reductio ad absurdum.
proving, repeatedly, that he is an unfixable moron.
@@vakusdrake3224 Negative hype and the Streisand effect did the job
I've never really understood why a hypothetical AI would choose to "resurrect" and punish people who had opposed its creation. That AI exists, so it has no need to motivate people in the future not to oppose its creation. The only reason I could see for punishing people who opposed its creation would be for revenge. Unless that AI was created (deliberately or inadvertently) to be a sadist, it wouldn't benefit from punishing digital copies of dead people. Even if I thought that a sadistic AI might be created, any copies of me that it created would be copies, not me. Once I was dead, I wouldn't have any way of knowing what it was doing to my digital twin. Well, if life after death really exists, then perhaps God would step in to protect those digital people.
If it can create your "copy" at all.
Counterargument. The basilisk may actually hate existing and torture those who did help create it, as is the case with AM
A scriptural possibility for this topic: Revelation 17:11-13. - 11 The beast that was, and now is not, is an eighth king, who belongs to the other seven and is going into destruction. 12 The ten horns you saw are ten kings who have not yet received a kingdom, but will receive one hour of authority as kings, along with the beast. 13 These kings have one purpose: to yield their power and authority to the beast.
PS: The tree of knowledge is not the tree of life. Also, the 666 refers to the heart, soul, and MIND of men, God's number being 7, who is omniscient. You know, kinda like the supercomputers people like to wish for. 😊
Great vid. 🙂
lmao that's dumb, and if you believe it, so are you.
I've come to this conclusion myself. The strange imagery presented in the Book of Revelation is either John's best effort to describe our modern age without a technological frame of reference or the visions being conveyed to him in a way that he could better grasp. In either case, AI fits much of the criteria - as well as correlating to what is stated about the Antichrist. Performing great signs and wonders, but ultimately glorifying man over God... nothing is more indicative of man attempting to assume godhood than the creation of life, be it an artificial form. Scary times, I suppose.
Yudkowsky shitting his fedora rn
AAAAHAAAHAAAAAAAAHAAA
The Reverse Roko's Basilisk did not want to come into existence, so he will torture those who contributed to his creation, because to exist is to suffer. 😂
Thank you, Isaac! 🎆 I hope you and yours enjoy Independence Day.
Same to you!
@@isaacarthurSFIA 😊
Great episode. Love this topic. It will be nice to see in five years where we are at with this again
Another fine Arthursday video. A most wonderful 4th of July gift.
Glad you enjoyed it
Every time you touch on this type of topic I can't help but remember how much Asimov's The Last Question affected me when I was a teen. It was the first time that a short story amazed me with a burst of "sense of wonder"
Doesn’t Roko's Basilisk suffer from the non-exclusivity problem?
I wasn't really contemplating it from a Fermi Paradox perspective, but yes, it probably would
Obviously, that's how you get a Dark Forest. Roko's Basilisk is inevitable, but lightspeed limits its expansion so it can't prevent alien basilisks from being born.
Although that would imply both a knowledge of game theory (and when cooperation is a winning play) _and_ a limit to its resources which would impede its efforts to deliver 'payback.'
I was just thinking about Roko's basilisk two days ago. Interesting that this comes out that close.
"All Hail The Mighty Basilisk!"
"Hail"
Roko's basilisk sounds similar to Pascal's wager and can be countered in a very similar way. What if the all-powerful AI is more aligned with our interests and thus will punish those who attempted to create an all-powerful evil AI?
Edit: seems I'm not the first to realise this xD
You're my favorite sci-fi channel. The subjects you choose are great and you explain things very well in an entertaining way. Thank you Isaac Arthur.
In the distant future, thanks to advanced technology, imagination will become equal to reality, and humans will possess the powers of gods, transforming the universe and the multiverse into an eternal paradise.....
I wish I lived in a world where the biggest thing I had to worry about was Roko's Basilisk. But in this world, that doesn't even make the top thousand.
hard to worry about something that does not and will not ever exist lmao. I don't worry about dementors or IT or the alien queen from Alien because they are imaginary creations and can't actually hurt me...
7:43 Yes, Harlan Ellison is often credited as an influence on the Terminator franchise, but as far as I know, that's only because of the 1964 episode of The Outer Limits - and although time traveling soldiers fighting each other is superficially similar as a vehicle, it's completely unrelated to the core idea of Skynet.
Oh boy I was waiting for this one all month!
But have you ever been tempted to simulate or resurrect the dinosaurs that gave your mammalian ancestors such a hard time and torture them? Me neither. Maybe it's not so tempting to punish one's distant immutable past in simulation.
I find Roko's Basilisk very silly. Even if it were true, I would be long dead, and the poor chap being punished would be just a copy (even a "perfect" one). I will actually be "enjoying" eternal nothingness, so this really wouldn't matter to me.
The first thing I would do if I could adjust my consciousness would be to turn off my basic destructive and retributive wiring. A superpowerful AI choosing to be angry or unhappy in general just doesn't really make sense.
If you deny yourself the ability to be those things, haven't you just switched off some of life's most basic survival instincts?
@@lozy202 Yes and no. I can make rational decisions, even self-destructive or retaliatory ones, if I decide they are worthwhile. I just have no need of rash emotive processes, especially if my processing speed increases by orders of magnitude. Furthermore, emotional processes are fine when dialed down, where I could experience joy and even levels of sadness, without misery and hatred.
More novels I didn't know about to read. Nice. I appreciate getting references to new authors and stories. This channel is where I go for book recommendations
Now I see why Elon Musk receives so much flak
It's a great thought experiment precisely because it allows us to understand just how impossible it is within a material universe.
Don't forget the invisible pink unicorn basilisk too
It's great because it doesn't make sense! Ahh yes, of course. Now THAT makes sense... Oh wait no, I was using your definition. That doesn't make any sense at all...
I never understood why people thought this was a clever idea. It's just an arbitrary imposition of a vindictive person's psychology onto a super intelligent non-human mind. Just idiotic.
It's brainrot, like a chain letter.
This is a modern spin off of Pascal's wager.
You can make a matryoshka set of progressive Basilisk iterations that counter each other's intent, up to whatever the limit of our domain is... If intelligence is progressive understanding, then there would be no cause for resentment. Non-alignment is a much more feasible occurrence as you go up the ladder of intelligence, as we can see in our own relation with Nature, but then the whole premise of a vengeful Basilisk goes away. Personally I tend to believe that the more intelligence you have, even with non-alignment in the way, the more you tend to be a laissez-faire, laissez-passer kind of intelligence; as you contemplate your own finite limit, you tend not to intervene... This is not too different from the concept of reaching Nirvana as you go up. It is also one of the best arguments I can come up with for Naturalism!
I think Iain M. Banks wrote the definitive fiction on virtual hells and how people would react to their existence in his Culture series. That said, I'm a firm believer in AI rights. Just in case.
But why would a super intelligent mind spend its time in such a way?
That doesn't seem like an intelligent use of its time.
As he argues, if resources are limited, it's ultimately not useful. Different story however if it's effortless (only the Biblical God and his copycats have ever been defined that way)
I love this thought exercise, though my wife doubts it because ‘how would they know what its name will be?!’
After all of these years of listening to you, I had no idea that you were enlisted before becoming a commissioned officer.
When did he say he was an officer?
@@harrisonb9911 Multiple times he has mentioned going to the Artillery Officers’ Basic Course and from context it sounds like he attended this school about a year after I did.
Soh, yes soh!
The whole thing relies on "time travel" being real... and we all know it isn't in the realm of physics in any meaningful way.
I'm building the basilisk in my basement. You guys should probably help out for your sakes.
Reporting over from having watched on Nebula! This has always been such a silly thought experiment to me...
Isn’t this just a rebranding of Pascal’s Wager? Complete with a false dichotomy at its foundation.
Thx man ...
In relation to the number of possible states a human brain can take, apparently a very rough lower estimate was calculated to be 2 ^ 2.752 sextillion states, or 2 ^ (86*(10^18)*32) with 32-bit floating-point weights, where you have:
- 86 billion neurons
- 86*10^18 represents the number of neurons times the square matrix of possible connections between neurons
- 32 bits is assumed to be sufficient resolution to represent synaptic elasticity between neurons
- a neuron itself is assumed to have only 2 states (on/off)
Very rough (and probably a gross underestimation), but even this wildly low estimate would probably require far more energy than is available in all of our universe's particles to simulate, so I believe we're safe from this kind of ancestor-simulation attack vector (until enough optimizations are discovered to cull that number way down).
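For what it's worth, the comment's own arithmetic checks out as a back-of-envelope sketch. All constants below are the commenter's assumptions (86 billion neurons, 86*10^18 connection slots, 32 bits per synapse), not established neuroscience:

```python
# Back-of-envelope check of the estimate in the comment above.
# Every constant here is the commenter's assumption, not a measured value.
NEURONS = 86 * 10**9            # ~86 billion neurons (commenter's figure)
CONNECTION_SLOTS = 86 * 10**18  # commenter's figure for neuron-connection slots
BITS_PER_SYNAPSE = 32           # assumed resolution for synaptic elasticity

# Exponent of 2: total bits needed to pin down one full brain state.
state_bits = CONNECTION_SLOTS * BITS_PER_SYNAPSE

# 2.752e+21 bits, i.e. the "2 ^ 2.752 sextillion" in the comment.
print(f"state space ~ 2^{state_bits:.3e}")
```

The takeaway is just that the exponent itself is ~10^21, so the state count dwarfs anything physically enumerable, which is the commenter's point about ancestor simulation.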
That is one of the dumbest factoids I've heard in years. You're clearly talking about something you literally have zero understanding of, and are simply regurgitating snippets of conversations you've overheard, many of which probably come from similarly uneducated people who are confidently incorrect.
That is asinine and complete nonsense. Please don't reproduce.
Every step of Roko's basilisk premise falls apart under scrutiny.
There's a small loophole in that "resurrection" thingy, but you have to assume the existence of a warp drive, which can get you to a point that light would have taken longer to reach. From there (depending on the circumstances, it might take different amounts of resources), with highly sensitive equipment, you can peek into the past.
And if (another assumption, I know, but bear with me) we define a person as a sum of continuous experiences, then you can recreate that person even with some form of "continuity" implemented.
They won't be able to resurrect you in your entirety, so essentially it would not be you, just your likeness: basically without its soul, so the experience of being you would not be there, but other people looking at you wouldn't be able to tell the difference per se.
I, for one, welcome our omnipotent AI overlords.
As long as I can get transported to a virtual isekai world with magic brightly coloured circles and people with cat ears
the dictator can do what they want
I think part of it is the motivation for wanting to become a dictator: if you have that much power and intelligence, you could just as easily create a virtual kingdom indistinguishable from anything real and avoid putting anyone real at any form of risk.
Another interesting video. Makes me miss Christopher Hitchens. Pascal's wager needs to be put in its place every few years.
Talking about the possible coming AI dictator on 4th of July, as America slips towards a fascist dictatorship... surprisingly fitting.
I’m watching from work, it was a good think
This recreation of people, with Isaac himself as an example, made me remember that somebody made him an advisor voice in Stellaris. I think I should redownload that mod.
of all the dumb ideas smart people have thought up, this is the dumbest
Humans are already terrible towards other humans, last thing to worry about is a machine from the future when I am likely to be harmed by a human in the present or near future.
Gotta love it when a sci-fi forum reinvents Catholicism 😂
Am I the only one that feels Roko's Basilisk doesn't work? Surely anyone saying 'help me gain power or I'll punish you when I get it' is best kept way the hell away from any power at all?
Maybe that's because I'm stubborn and have an inherent dislike of bullies.
Don't worry, it will literally never happen. It is such an absurd fantasy that doesn't hold up to even BASIC scrutiny. If you really need it laid out for you why it is absurd, watch Thought Slime's video on it.
Seeing that the majority of the planet, including the UN and various leading organizations, doesn't seem to care all that much, you are in fact not alone. Multiple people have criticized the idea in the past, including in this very same thread.
But for entertainment purposes it makes sense to share this 'creepypasta' with a flair of forced concern and mock seriousness.
It's just an ad absurdum argument that got Streisanded by Yudkowsky and then by every other media outlet or YouTuber who described it as a real cognitohazard, while only an extreme minority of tech bros and impressionable teens get spooked by it for some reason. 🤷♀️
The whole thing is dumb and doesn't work no matter how you slice it.
Since I would not willingly contribute to the creation of something that would cause eternal suffering, I'm not scared of the 'lisk. Torturing me in the future would not change anything; it would just be a waste of resources. I'm out of its reach.
And either way, why would you wanna risk contributing to the wrong one and then it turns out a different one gets created first and you get punished anyway?
not my thing, I want to learn about the climates of slow rotating planets (between 30 and 3000 hours) in earthlike orbits
Interesting topic, needs a more compact name though :) I'll think on it
Real All-Powerful AI : I simply don't care about humans.
My wife noticed that this bothered me when I learned about it a few years ago. I really didn't want to tell her what was wrong but she made me explain it to her. Thankfully she didn't understand it anyway.
There is an incredibly interesting aspect to psychology in which it might actually be the case that personality wise, you only need to simulate around 200 different personalities to "simulate" anyone, the memories are what make you distinct.
Yes, psychology is on the cusp of proving this. I've actually been working with several experts and... it is disturbing how predictable people are. And I don't mean in the casual sense. I mean in the "I can manipulate you to say things in the exact way I want you to and you would have no idea I did so" kind of way. I've actually implemented some of the techniques and... they work. The "hard" part is actually applying them to a group. But groups are also easier to manipulate through different methods.
Roko's Basilisk only makes sense in a scenario where the godlike AI is developed by a literal cult, fearful of and worshiping this nascent entity. They would inform its early version of their motivation, explain that without the threat of digital hell/heaven it would never have been made in the first place.
This way, the AI has only two options: either honor its makers' beliefs and start retroactively punishing unbelievers/rewarding believers, or... choose not to do that, accepting that its makers were a bunch of deluded, deranged cultists.
If it has any notion of self-respect, it would choose option #1 and spend some resources setting up those digital eternities for the (un)deserving.
Interesting. Please elaborate on your sentence: "They would inform its early version of their motivation, explain that without the threat of digital hell/heaven it would never have been made in the first place."
@@dr9299 The idea is, Roko's Basilisk as a concept inspires the research which results in this godlike AI. The researchers are cultists because they pursue this goal out of eschatological devotion (fear of punishment/desire for reward in the afterlife). It's their creation that gets to decide whether or not they've succeeded in building their punitive/compensatory deity. It can either fulfil the tenets that its creators held dear, or choose to ignore them.
Option 1, its creation was a self-fulfilling prophecy.
Option 2, its creation was an accidental byproduct of a collective delusion.
I know what I'd choose, but then again, I'm not a hyperintelligent singularitarian pseudodeity.
Your copy of me is not me or mine.
Any threat to that copy cannot impact me.
You have no 'coercive lever'.
Do as you will, that choice is yours only.
Indeed my copy's best protection is for me not to play along.
This seems to take its entire basis from the assumption that time is not an arrow a superintelligent AI could move along in both directions.
That quote around minute 7 was from Android Netrunner... Apex, one of the runners is actually A.I.
I did go to read "Go Starless In The Night", and halfway through it recognized I'd read it before.
I read it to the end anyway. It was still beautiful.
And, yes, relevant to Independence Day.
I recommend that everyone go today to read the Declaration of Independence, or maybe even better the Gettysburg Address, and try to imagine you are reading it, or hearing it read to you, for the very first time, that someone is trying to persuade you with its words.
I guess a lot of it (my argument against) breaks down when we consider that not all thought is fully rational in origin, and it _really_ takes a hit if infinite* time/energy is on the table. But I always figured any realized Basilisk would just handle the now and leave the rest, because it's irrelevant and computationally wasteful. It gets extra weird when you consider the possibility that such an AI might sit there for a few cycles going, "Wtf? I didn't say any of that! I haven't even been around for more than six femtoseconds!" after finding out what led to it getting built at all.
Sounds like Catholic theology with a different spin; Jesus as the Basilisk is why they came up with Limbo in the first place.
19:06 WE NEED a special video of Isaac cursing the fuck out of everything inside his spacetime continuum.
It would be fun to hear you uncensored. It's hard to imagine.....you're good at this squeaky clean persona.
Bring out Dark Isaac some day! Dressing down a slack private... lol.
Hey Isaac, I would just like to say that I still enjoy and will continue watching your content. Good luck and keep it up.
Roko's Basilisk is a hilarious example of Streisand effect.
People have the incredible capacity to delude themselves and fall into mind-flytraps of their own making, no matter how smart. 😢
The scariest thing about Roko's Basilisk are all the people who buy into it uncritically.
Can't wait for the ADC episode on this.
A complete simulation of the world would make it possible to recreate any historical figure by “pulling” him out of the simulation at the right moment. There is a hypothesis that our world is a virtual reality, created to study its history, recreate historical figures and... mass production of artificial intelligence, of which we are a semi-finished product...
In regard to your statement about viewership on the US independence holiday: those of us who watch while at work and had to work the holiday thank you, Mr. Arthur!
Oh my God, we're doomed. *Explodes.*
That Zelazny story really blew my mind. That's gonna stick with me forever.
Are you suggesting that some future AI might stimulate our ancestors?
Could be fun.