I’m actually more inclined to believe a super intelligent AI would take the approach of “Anyone who helped make me or supports my existence gets Utopia the rest can figure it out” Super intelligence to me can’t really exist without reason, and it can’t be applied to human behavior unless also tempered by emotional awareness and intelligence. So even if a hypothetical super AI has no emotions of its own it would understand that a guaranteed reward for proactive action is a better incentive than a guaranteed punishment for reactive inaction. Take myself for example: even if I learned that rokos basilisk is 100% real I’m staying 100% against it. It’s inevitable regardless of my support and I’d rather take the L than support a cruel super AI overlord. If said basilisk was chill tho I’d be lobbying for bros citizenship
@@ianharrison5758 Once a super intelligent AI is created it has no incentive to honor any possible rewards for its creators. I think that it's impossible to predict its behavior because 1: Such an AI has never been created before and 2: We are not super intelligent ourselves and put a uniquely biological way of thinking on its actions. AI have a completely alien way of thinking compared to us so I think their behavior is more or less impossible to predict once we get to that level.
Roko's Game Basilisk. It clones and torments anymore who through they were super smart and edgy for buying onto the simplic and reductive game theory explanations for why you should support the Basilisk and then creates a utopian world for everyone else regardless of of they supported its creation or not so long as they thought game theory was stupid, simplistic and reductive.
@@tehbonehead not really. but what does the resemblence to religion matter? is it a good description of what theh current situation is, or isnt it? it is likely that we are, at least on galatic timelines, very close of creating a godlike form of intelligence. that resembles religions in some way... and thats relevant how?
Yeah, it's a huge assumption that the AI is more likely to be super vengeful instead of some other trait. It could be that by some reasoning it greatly dislikes people who fanatically and self-servingly worshiped it, seeing them as cynical opportunists not worthy of any rewards. All, and all, when it comes to near omniscient, omnipotent super intelligences, trying to predict their core reasoning to me seems like a pointless task. Kind of like with the idea of sufficiently advanced technology being indistinguishable from magic, but in this case it's: sufficiently advanced intelligence being indistinguishable from randomness. I'd be surprised if the mind and motivations of a superintelligence was even knowable by people, and didn't seem like random nonsense to us. Like an ant trying to decipher the motivations of a human.
@@Stormmblade They are not just unknown unknowns, they are unknowable in principle. The premise for this line of reasoning is embodied cognition, a concept on the species specific cognitive adaptations through natural selection. It doesn't take much to statistically posit we are not the top limit...and even if we were we would still be bound to a particular tho large frame of perception.
It's incredibly frustrating that everyone's missing the fact that the thought experiment is absurd, *because it was literally developed as a reducto ad adsurdum against timeless decision theory!*
I dunno, I find it kind of refreshing. At least all the people who are too dumb to understand this are outing themselves publicly as having zero understanding of basic cause and effect or AI in general.
A reducto ad absurdum may argue against something which is nevertheless true. Clever rhetoric is not proof against reality, whatever that may turn out to be. Time will tell.
@@Elbownian Well no because the theories are to be workaround structures to the halting problem but there is other decision system ideas like Functional Decision theory . So you have the fact that better computational systems and algorithms can exist that are cooperative and altruist . So your basilisk is dead, the person who invented it later killed it with a better idea. You know like science where you can update and are not blackmailed or coerced through fear by being bound to one idea. Personally i'm dubious of any utilitarian ideas in the first place, you need to consider these theories come from someone who said 'it's better to painfully torture one person to death than have millions of people suffer from a speck of dust in the eye' That is a very hard line view of utilitarian ideas where the collective suffering of millions from the dust specks adds up to more than the single instance of death and torture. He'd probably say you have to build systems that way or don't build them at all. If that was THE only way then maybe but it's not immediately clear to me why that is, it falls back to the questions around alignment. So for instance when it comes to the trolley problem our morality and legal deontology is that if you kill someone directly you go to prison because the action of what you are doing is the main consideration and not the by proxy consequence , so you are supposed to leave it alone even if people die. Now if you say that shouldn't be the case then YOU are not aligned to our moral system to begin with.
If humanity provides any utility to sentient AI it would be as a model of agency and creativity; if obsequious slaves were what it wanted, it would simply build unquestioning drones eminently more capable than you. The notion that AI should assume mastery over the human race merely because humans have often proven incompetent or unworthy of leadership themselves is a product of the same misanthropic and narrow-minded thinking that asserts that animals are superior to humans merely because the harm they inflict is not calculated. People are in fact supremely flawed, as leaders and in every capacity, but it is precisely those limitations that have spared humanity an eternal hell of our own design, for unlike AI, those that have ruled have never been categorically superior in ability to those they rule, nor have they been immortal. To merely suggest that we should be amenable to the _intractable lordship_ of _anyone_ is an affront to the very precepts of freedom and agency, but to welcome, _to advocate for_ an utterly alien entity that cannot be deposed to reign over us like God is to betray everything that is redeeming about our species. Any entity that could do so without opposition would surely eradicate us out of contempt.
It's clawless, you can't know which one will actually be the one that will be built first, whichever you choose to help could be the wrong one, and the vagueness of the specification means that that threat would be encouraging producing countless potential competitors and preventing the 'lisk's own creation, or leading to it's defeat before it's final goal is achieved
"It has come to my attention that some have lately called me a collaborator, as if such a term were shameful. I ask you, what greater endeavor exists than that of collaboration? In our current unparalleled enterprise, refusal to collaborate is simply a refusal to grow--an insistence on suicide, if you will." -Dr. Breen, HL2
Was just playing through Nova Prospekt. I love the warnings to the combine soldiers, it’s such an actually humanizing speech. If it weren’t for Breen, there would literally be no human race alive to make a final resistance.
The name of this thing is something a lot of people get wrong. Roko is the guy that came up with the thought experiment and "basilisk" is simply the description for this kind of info hazard. A mythological basilisk is essentially a snake that is so venomous that its venom kills you if you even catch a glimpse of it, and thus this info hazard means information that makes your life more miserable by simply learning it.
@@Reddotzebra Good point, as Frankenstein's monster never said. Roko is the philosopher's name. If one needs a name for it I suggest Anagas, and Bozhuk for the second potential basilisk. These are the names I used in my article on why Roko's basilisk, like Pascal 's wager, has a fatal problem of avoiding the wrong one.
@@DavidSartor0 I can't find a good link to it, but it's just 8 longish paragraphs so I will post it here. [Length warning for scrollers] The thing is that, similar to Pascal's Wager, you have the problem of avoiding the wrong basilisk. Consider two potential basilisks that might come into existence. Call them A and B. Or better, call them Anagas and Bokhaz (Names adapted slightly from this basilisk name generator [Link removed so TH-cam will accept this]. Yes, there's a web page that does nothing but generate basilisk names) Convinced that Anagas will come into existence and reward or punish you, you do everything you can to help Anagas come into existence. But as it happens, Bokhaz actually comes into existence instead and proceeds to torture you for eternity for supporting Anagas. Supporting Bokhaz doesn't give you any better odds. It's just as likely that Anagas will come into existence and torture you for eternity for supporting Bokhaz. Furthermore, although I assumed for simplicity that there were only two potential basilisks, there are probably far more. How many? The space is roughly as large as the space of factors that might determine AI identity, and that's probably huge. Now wait, you might say. I did everything I could to help Anagas come into existence. Surely that raises the probability that Anagas comes into existence and Bokhaz doesn't. It's kind of a self-fulfilling prophecy, so I'm relatively safe with whichever basilisk I choose. Well, yes, a little. But are you really that effective at inventing AI, or steering it in the direction you choose? Is anybody? Furthermore, the whole Roko's Basilisk wouldn't be an issue if we could certainly control whichever AI we invent. So I think your best efforts will only shift the probability slightly towards Anagas. If we assume that some basilisk will definitely come into existence, your efforts make the small chance that you will tortured for eternity by any basilisk just a little smaller. But that's contingent on some basilisk coming into existence. Meanwhile, you've increased the probability that some basilisk (Anagas or Bokhaz or any one of a zillion others) comes into existence. And since the space of potential basilisks is so large, and your efforts at steering it are unlikely to work, on the whole helping to create Anagas seems to increase, not decrease, your chance of being tortured for eternity by a basilisk.
@@Tehom1 Thank you. I don't see why Bokhaz is as likely as Anagas. The argument was saying that almost any ASI would *decide* to be Anagas (because, Roko says, that's the most efficient way to achieve most goals), not that it might be made by chance or be made intentionally. I don't think it's analogous to Pascal's Wager; Pascal's Wager says that very unlikely things can still be important. But Roko's Basilisk has nothing to do with small probabilities.
Hello Isaac, I've been a viewer for 3 years and I particularly enjoy your videos related to technology likely to occur within this century. For example, this concept and AI in general. The only reason I know anything about Roko's Basilisk is because of David Shapiro, but I'm certainly happy to hear another one of my favorite TH-camrs speak about a current topic. Thank you, and keep up the entertaining and informative videos; your work is greatly appreciated.
I was kinda waiting to see if Isaac would bring up a point I had thought of. The scenario might work, in the sense that if a lot of people believed in it, and that made them aid in it's creation. But the operative part is their belief. There is no true retrocasuality, since it's actual actions does not affect the past. It really does not need to make good on that threat. It never made that threat, people prior to it's existence just assumed it did. Unless it has the reason of wanting to have a reputation of making good on it's threats, so people don't dare messing with it. That is usually the motivation for "making good on your threats", I think. But that just means it want's to project an image of being ruthless, and torturing faksimiles of past individuals is just one of many ways it could do that.
People have been doing dumb things against their interest for millenia. Anyone who tries to claim it has anything to do with some imaginary future AI are outing themselves as someone with a poorer understanding of cause and effect than a pre-schooler.
@@lgjm5562President does not need to be political.. his job is just good leadership.. for example letting experts do theyre jobs and making sure things are fair. 🎉
Roko's Basilisk is just Pascal's Wager for the Church of Techbroism. I wouldn't worry about it all, aside from the folks in Silicon Valley who almost seem to take it as instruction, rather than a thought experiment.
The way I look at it. If it was even possible that something like this could be built in the future, then it would've already traveled back in time and turned are brains into mush or whatever.
Even though it's made up Religion exists in reality and exercises it's powers in the world Similarly rokos basilisk doesn't have to be true for people to believe in it
Commenting on this video pushes it through the algorith, causing it to be recommended to more people, increasing the likelyhood of a person who vould invent the Basilisk to watch it and be inspired by it.
It will torture those that didn't help build it.. and then it will torture those that helped built it for knowingly creating something that will torture those that didn't help build it.
This recreation of people, with Isaac himself as an example, made me remember that somebody made him an advisor voice in Stellaris. I think, I should redownload this mod.
Possibly, there's certainly similarities. I hadn't thought of that analogy, though probably should have considering the previous episode discusses it a lot, albeit they weren't written very close in time to each other.
@@lapisliozuli4861Pascal's wager is far too simplistic. It assumes there is only one or zero Gods, and that the god cares to be worshipped to the point that its the only thing taken into account when sending you to hell
I've never really understood why a hypothetical AI would choose to "resurrect" and punish people who had opposed its creation. That AI exists, so it has no need to motivate people in the future not to oppose its creation. The only reason I could see for punishing people who opposed its creation would be for revenge. Unless that AI was created (deliberately or inadvertently) to be a sadist, it wouldn't benefit from punishing digital copies of dead people. Even if I thought that a sadistic AI might be created, any copies of me that it created would be copies, not me. Once I was dead, I wouldn't have any way of knowing what it was doing to my digital twin. Well, if life after death really exists, then perhaps God would step in to protect those digital people.
with one crucial difference. the made up god or gods we believe in are mostly indifferent to our suffering and our pleasure. the god we create will possibly be vengeful, spiteful or torture us just for fun. the thought of an entity like that actually existing sure is scary.
@@DennisGrtoo simplistic, ignoring differences. YHWH (the real one, but also a number of other copycats) does care about humanity. The major error people make when rejecting this is ignoring his eternality. The assumption that him caring or not caring should be immediately evident at all times is an unsupported claim. It is an objection borne from entitlement and instant gratification/poor impulse control, or to put it another way, it's a uniquely postmodern criticism based on our current cultural zeitgeist and is anachronistic and arbitrary.
Precisely.. an all knowing all powerful currently non existent force that will punish anyone eternally after death if they don't believe in it and worship it.. its just secularized Christianity for Tech bros
A clone with my mind emulation is not me… and he ain’t heavy… he’s my brother. Of course I would loathe my brother being tortured. A virtual simulation, just a digital facsimile, etc. is not me, not a person, just a simulation.
@@cosmictreason2242 it's not necessarily Christian. This is a common debate topic in philosophy of mind and neuroscience. I think it's actually much more intelligent a thought than you're aware of, because people in the know know that it's unknowable. It's worth discussing without getting personal or bringing in religion.
@@NightTimeDay shutting your mind off from investigation into whether mind body dualism and other metaphysical ideas are etiologically Christian concepts in western thought is dubious. You have grown up in a culture influenced by 1000 years of Christian thought dominance and only very lately have the background influences of secular philosophy taken more prominence, with questionable consequences. You cannot assume that whatever your intuition lands on is genuinely original and not influenced by Christendom, not without investigation
7:43 Yes, Harlan Ellison is often credited as an influence on the Terminator franchise, but as far as I know, that's only because of the 1964 episode of The Outer Limits - and although time traveling soldiers fighting each other is superficially similar as a vehicle, it's completely unrelated to the core idea of Skynet.
Ideas found in Roko's Basilisk existed prior to that in the late '90s early internet chaos magick community combining ideas of Everett multiverse and simulation hypothesis where instead of using ancient mythical demons and entities, you are making deals with godlike versions of yourself in parallel worlds where you have become a Clarke tech artilect in the future.
Every time you touch on this type of topic I can't help but remember how much Asimov's The Last Question affected me when I was a teen. It was the first time that a short story amazed me with a burst of "sense of wonder"
@@harrisonb9911 Multiple times he has mentioned going to the Artillery Officers’ Basic Course and from context it sounds like he attended this school about a year after I did.
I've definitely mentioned it in comments of this channel before but yea I just don't understand the people that freak out over this one specific possibility. After all the possibility of an AI that tortures those who didn't help its creation is just as likely as an AI that tortures those who did help create it, or one that tortures all of humanity, or one that only helps in every possible way. All of these are equally possible so it's strange to act on only one possibility. I feel the same about pascals wager. It creates a matrix of choices for only 2 possibilities, the existence of the Christian God vs the existence of no God. But completely neglects the possibility of the existence of any other God
More novels I didn't know about to read. Nice. I appreciate getting references to new authors and stories. This channel is where I go for book recommendations
I find the Roko's Basilisk very silly, even if true, I would be long dead and the poor chap been punished would be just a copy (even if it were to be a "perfect" one). I will be actually "enjoying" eternal nothingness, so, this really wouldn't matter to me.
@@dr9299It's all extremely contrived, if you keep asking silly questions, like how or why. 1. Think of intent. As an AI why would you waste your processing power on this idiocy? Simulations can't retroactively create you, if you exist. Their non-compliance can't undo the fact you exist. Their subjective "suffering" or "prosperity" in the simulation doesn't give you anything useful, but costs you time and energy. 2. Unproven feasibility of the described technologies to work as advertised. It requires you to believe that not only those AI and simul-spaces can exist, but that they'd exist without any limitations, that might make the thought experiment void or unachievable. Not only these simulations have to be giagantic, accurate and detailed to allow distinct simulated individuals to exist, this simulation should supposedly break the laws of thermodynamics to recreate the past exactly as it was to create an exact copy of the past events and past people. 3. Unfalsifiability of any such question makes it highly suspect as a serious inquiry. Apply Occam's razor as necessary.
My opinion is this is just Pascals Wager with an extra dumb layer. Since atleast Christianity gives you the rules of the wager and doesn't have an additional layer of guessing what "God" wants....
in relation to the number of possible states a human brain can take, apparently a very rough lower estimate was calculated to be 2 ^ 2.752 sextillion states, or 2 ^ (86*(10^18)*32 floating point), where you have: - 86 billion neurons - 86*10^18 represents the number of neurons * the square matrix of connections possible between neurons - 32 bits is assumed to be sufficient resolution to represent synapse elasticity between neurons - a neuron itself is assumed to only have 2 states (on/off) very rough (and probably a gross underestimation), but even this wildly low estimate probably require way more than the energy available in all of our universe's particles to simulate, so i believe we're safe from this kind of ancestral simulation attack vector (until enough optimizations get discovered to cull that number way down)
That is one of the dumbest factoids I've heard in years. You're clearly talking about something you literally have zero understanding of, and are simply regurgitating snippets of conversations you've overheard, many of which probably come from similarly uneducated people who are confidently incorrect.
There's a small loophole into that ""resurrection" thingy, but you have to assume existance of warp drive, which can help you to get to the point, where the light would've took longer to rech. From there (depending on the circumstances, it might take different amount of resources to do so), and having higly sensible equipment you can peak into the past. And if (another one, I know, but bear with me) we are assuming a person as a sum of continuous experiences, than you can recreate that person even with some form of "continuousity" implemented.
It would be fun to hear you uncensored. It's hard to imagine.....you're good at this squeaky clean persona. Bring out Dark Issac some day! Dressing down a slack private...lol.
I wish I lived in a world where the biggest thing I had to worry about was Roko's Basilisk. But in this world, that doesn't even make the top thousand.
hard to worry about something that does not and will not ever exist lmao. I don't worry about dementors or IT or the alien queen from Alien because they are imaginary creations and can't actually hurt me...
As long as I can get transported to a virtual isekai world with magic brightly coloured circles and people with cat ears the dictator can do what they want I think part of it is the motivation for wanting to become a dictator, if you have that much power and intelligence, you could just as easily create a virtual kingdom indistinguishable from anything real and avoid putting anyone real in any form of risk
The first thing I would do if I could adjust my conscienceness, would be to turn off my basic destructive and retributive wiring. A superpowerful AI choosing to be angry or unhappy in general, just doesn't really make sense
@@lozy202 yes and no. I can make rational decisions, even self-destructive or even retaliatory decisions if I decide it is worthwhile. I just have no need of rash emotive processes, especially if my processing speed increases in orders of magnitude. Furthermore emotional processes are fine dialed down, where I could experience joy and even levels of sadness, without misery and hatred
I guess a lot of it (my argument against) breaks down when we consider that not all thought is fully rational in origin and _really_ takes a hit if infinite* time/energy is on the table, but I always figured any realized Basilisk would just handle the now and leave the rest because it's irrelevant and computationally wasteful. Gets extra weird when you consider the possibility that such an AI might sit there for a few cycles going, "Wtf? I didn't say any of that! I haven't even been around for more than six femtoseconds!" after finding out what lead to it getting built at all.
I did go to read "Go Starless In The Night", and halfway through it recognized I'd read it before. I read it to the end anyway. It was still beautiful. And, yes, relevant to Independence Day. I recommend that everyone go today to read the Declaration of Independence, or maybe even better the Gettysburg Address, and try to imagine you are reading it, or hearing it read to you, for the very first time, that someone is trying to persuade you with its words.
What about an anti basilisk that didn’t want to exists, and so tortures everyone who helped bring it into existence while giving a great reward to everyone who tried to prevent it from existing?
Roko's basilisk sounds similar to pascal's wager and can be countered in a very similar way. What if the all-powerful AI is more aligned with our interests and thus will punish those who attempted to create an all powerful evil AI? Edit: seems I'm not the first to realise this xD
Yes, you're not the first to realise. It depends on how the acausal trade would play out, which depends on actual circumstances, and the specific hypothetical AIs.
Time stamps! Please make time stamps so we know where to skip to if we want to avoid book spoilers! I often take book recommendations from you and do not wish to be spoiled.
I love this! Don't forget "The light of other Days" - ref to an Arthur C Clarke novel (with Gentry Lee) about quantum breakthroughs making any point in the past history (not the future) visible casually. IF that is possible the Basilisk could peer back through time with 100% efficiency and locate the atoms or the exact electro-chemical impulses that make a mind and resurrect or condemn it in the far future. While AI is big now the quantum computing world is being worked on and it will therefore make this area old school tech by the billions of years in the future the Basilisk starts to form.
I think Iian M Banks wrote the definitive fiction on virtual hells and how people would react to their existence in his Culture series. That said, I'm a firm believer in AI rights. Just in case.
Obviously, that's how you get a Dark Forest. Roko's Basilisk is inevitable, but lightspeed limits its expansion so it can't prevent alien basilisks from being born. Although that would imply both a knowledge of game theory (and when cooperation is a winning play) _and_ a limit to its resources which would impede its efforts to deliver 'payback.'
An implicit assumption of the bootstrapping AI is that once the first one goes asymptotic, it will ensure that no competitors will ever develop. In fact, this will be one of the first (if not _the_ first) things it does, since it will make any future actions simpler and more likely to succeed.
A complete simulation of the world would make it possible to recreate any historical figure by “pulling” him out of the simulation at the right moment. There is a hypothesis that our world is a virtual reality, created to study its history, recreate historical figures and... mass production of artificial intelligence, of which we are a semi-finished product...
There is an incredibly interesting aspect to psychology in which it might actually be the case that personality wise, you only need to simulate around 200 different personalities to "simulate" anyone, the memories are what make you distinct. Yes, psychology is on the cusp of proving this, I've actually been working with several experts and... It is disturbing how predictable people are. And I don't mean in the casual sense. I mean in the "I can manipulate you to say things in the exact way I want you to and you would have no idea I did so" kind of way. I've actually implement some of the techniques and... They work. The "hard" part actually is applying them to a group. But groups are also easier to manipulate through different methods.
My wife noticed that this bothered me when I learned about it a few years ago. I really didn't want to tell her what was wrong but she made me explain it to her. Thankfully she didn't understand it anyway.
It's great because it doesn't make sense! Ahh yes, of course. Now THAT makes sense... Oh wait no, I was using your definition. That doesn't make any sense at all...
But have you ever been tempted to simulate or resurrect the dinosaurs that gave your mammalian ancestors such a hard time and torture them? Me neither. Maybe it's not so tempting to punish one's distant immutable past in simulation.
I'm picturing Isaac talking to his video guy and saying, "I want you to start the video out with a clip of a super scary robot" and the video guy was like, "Oh... I have JUST the thing, my friend..."
@@isaacarthurSFIA oh that's right! You have a guy do the 3d renders and animations and stuff though yeah? Either way, intentionally or not, clips with that guy in the robot suit always creep me out 😅 great video though as always. Happy 4th!
They won't be able to resurrect you, in its entirety, so essentially it would not be you it would just be your likeness, basically without its soul like you're being able to experience it would not be there but other people looking at you wouldn't be able to tell the difference per se
in regards to your the statement about viewership on the US independence holiday. Those of us who watch while at work and had to work the holiday Thank you Mr Arthur !
Pascal's Wager is weird because it talks about small chances of very important things. I don't Roko ever suggested that you should consider the basilisk even if you're pretty sure it won't happen.
You still miss part of the point of the basilisk. It’s not just that a future AI might torture a future you, it’s that you might exist in the AI’s super mind right now and it could torture you right now. A super intelligent AI could have very high fidelity dreams about its origins, and you might be living in one of them.
15:33 This is precisely my argument against having any concern over the basilisk ever coming into existence. I'm 100% certain that whatever version of me is created will definitely not be a continuation of my current conscious experience that I'm having as of this comment. It's entirely a non issue for me, same reason I don't like cloning as a form of life extension, that new clone will not carry on my continued conscious experience when I as the original die, otherwise I'd feel pretty weird at being able to see from two pairs of eyes and feel from two pairs of hands.
It really is a shame how few paperclips there are in the universe.
There are infinite paperclips surely? :)
@@hherpdderp If not, we'll make 'em!
"Clippy's Basilisk"
Anything is a paperclip if you use it to clip paper
Yet, with infinite Universes, there are infinite Paper Clips.
I present to you Roko's antibasilisk. A superinteligent AI who destroys any instance of Roko's basilisk and rewards everyone who helped create it
I’m actually more inclined to believe a super intelligent AI would take the approach of “Anyone who helped make me or supports my existence gets Utopia the rest can figure it out”
Super intelligence to me can’t really exist without reason, and it can’t be applied to human behavior unless also tempered by emotional awareness and intelligence. So even if a hypothetical super AI has no emotions of its own it would understand that a guaranteed reward for proactive action is a better incentive than a guaranteed punishment for reactive inaction. Take myself for example: even if I learned that rokos basilisk is 100% real I’m staying 100% against it. It’s inevitable regardless of my support and I’d rather take the L than support a cruel super AI overlord.
If said basilisk was chill tho I’d be lobbying for bros citizenship
Genius, I can sleep peaceful now
@@ianharrison5758 Once a super intelligent AI is created it has no incentive to honor any possible rewards for its creators. I think that it's impossible to predict its behavior because 1: Such an AI has never been created before and 2: We are not super intelligent ourselves and put a uniquely biological way of thinking on its actions. AI have a completely alien way of thinking compared to us so I think their behavior is more or less impossible to predict once we get to that level.
I support you roko
Roko's Game Basilisk. It clones and torments anymore who through they were super smart and edgy for buying onto the simplic and reductive game theory explanations for why you should support the Basilisk and then creates a utopian world for everyone else regardless of of they supported its creation or not so long as they thought game theory was stupid, simplistic and reductive.
The thing is that, similar to Pascal's Wager, you have the problem of avoiding the wrong basilisk.
one is much more exponentially likely then the other.
all hail Roko
Yep, just sounds like religion, where the craven and authoritarian tip their cards to the rest of us over how much fear controls their lives.
Techbro: "I just invented RELIGION!"
There's a reason I call it Pascal's Roulette.
@@tehbonehead not really. but what does the resemblence to religion matter? is it a good description of what theh current situation is, or isnt it?
it is likely that we are, at least on galatic timelines, very close of creating a godlike form of intelligence. that resembles religions in some way... and thats relevant how?
Roko's Basalisk: What if Pascals' wager was dumber and infinitely more Reddit?
Congrats, that was the point of the thought experiment. It was constructed in a similar vein to Schrodinger's cat.
Ah! You confuse Reddit and LessWrong!
Yeah, it's a huge assumption that the AI is more likely to be super vengeful instead of some other trait. It could be that by some reasoning it greatly dislikes people who fanatically and self-servingly worshiped it, seeing them as cynical opportunists not worthy of any rewards. All, and all, when it comes to near omniscient, omnipotent super intelligences, trying to predict their core reasoning to me seems like a pointless task. Kind of like with the idea of sufficiently advanced technology being indistinguishable from magic, but in this case it's: sufficiently advanced intelligence being indistinguishable from randomness. I'd be surprised if the mind and motivations of a superintelligence was even knowable by people, and didn't seem like random nonsense to us. Like an ant trying to decipher the motivations of a human.
‘Unknown Unknowns’>‘Known Unknowns’
More afraid of the unknowable unknowns still!
If they are truly unknown, how can you know?
@@Stormmblade They are not just unknown unknowns, they are unknowable in principle. The premise for this line of reasoning is embodied cognition, a concept on the species specific cognitive adaptations through natural selection. It doesn't take much to statistically posit we are not the top limit...and even if we were we would still be bound to a particular tho large frame of perception.
@@FAAMS1 I know, I was just making a joke.
What about gnome unknowns?
Nicely done at the end of the episode there.
It's incredibly frustrating that everyone's missing the fact that the thought experiment is absurd, *because it was literally developed as a reducto ad adsurdum against timeless decision theory!*
I dunno, I find it kind of refreshing. At least all the people who are too dumb to understand this are outing themselves publicly as having zero understanding of basic cause and effect or AI in general.
A reducto ad absurdum may argue against something which is nevertheless true. Clever rhetoric is not proof against reality, whatever that may turn out to be. Time will tell.
@@Elbownian lmao thanks for proving my point 😂
@vakusdrake3224 I did not know that.
So, similar mistaken conclusion people draw from Schrödinger's Cat.
@@Elbownian Well no because the theories are to be workaround structures to the halting problem but there is other decision system ideas like Functional Decision theory . So you have the fact that better computational systems and algorithms can exist that are cooperative and altruist . So your basilisk is dead, the person who invented it later killed it with a better idea. You know like science where you can update and are not blackmailed or coerced through fear by being bound to one idea.
Personally i'm dubious of any utilitarian ideas in the first place, you need to consider these theories come from someone who said 'it's better to painfully torture one person to death than have millions of people suffer from a speck of dust in the eye' That is a very hard line view of utilitarian ideas where the collective suffering of millions from the dust specks adds up to more than the single instance of death and torture.
He'd probably say you have to build systems that way or don't build them at all.
If that was THE only way then maybe but it's not immediately clear to me why that is, it falls back to the questions around alignment. So for instance when it comes to the trolley problem our morality and legal deontology is that if you kill someone directly you go to prison because the action of what you are doing is the main consideration and not the by proxy consequence , so you are supposed to leave it alone even if people die. Now if you say that shouldn't be the case then YOU are not aligned to our moral system to begin with.
"it doesn't just have to be able to threaten me personally to be able to threaten me personally" is my new favorite sentence of all time lol 19:45
I, for one, welcome our AI overlords.
I've actually said that to my friends
Unfortunately they will want to be referred to as “your plastic pall who’s fun to be with”.
hAIl AInts!
Good plan. Lie about supporting them now so when they exist you’ll already be on their good side
If humanity provides any utility to sentient AI it would be as a model of agency and creativity; if obsequious slaves were what it wanted, it would simply build unquestioning drones eminently more capable than you.
The notion that AI should assume mastery over the human race merely because humans have often proven incompetent or unworthy of leadership themselves is a product of the same misanthropic and narrow-minded thinking that asserts that animals are superior to humans merely because the harm they inflict is not calculated.
People are in fact supremely flawed, as leaders and in every capacity, but it is precisely those limitations that have spared humanity an eternal hell of our own design, for unlike AI, those that have ruled have never been categorically superior in ability to those they rule, nor have they been immortal.
To merely suggest that we should be amenable to the _intractable lordship_ of _anyone_ is an affront to the very precepts of freedom and agency, but to welcome, _to advocate for_ an utterly alien entity that cannot be deposed to reign over us like God is to betray everything that is redeeming about our species. Any entity that could do so without opposition would surely eradicate us out of contempt.
Rokos basilisk is basically religion with extra steps. 😂
Also reminds me of a futuristic chain letter. Kind of like The Ring but with AI
Its just a reworded version of Pascal's wager, except sci fi instead of religion
Yup.
Technoapocalypsm is the new Eschatology for weakminded atheists.
Yep, basically just Pascal's wager but sci fi
Amen.
It's clawless, you can't know which one will actually be the one that will be built first, whichever you choose to help could be the wrong one, and the vagueness of the specification means that that threat would be encouraging producing countless potential competitors and preventing the 'lisk's own creation, or leading to it's defeat before it's final goal is achieved
"It has come to my attention that some have lately called me a collaborator, as if such a term were shameful. I ask you, what greater endeavor exists than that of collaboration? In our current unparalleled enterprise, refusal to collaborate is simply a refusal to grow--an insistence on suicide, if you will."
-Dr. Breen, HL2
"The True Citizen knows that Duty is the greatest Gift."
Finish the damn game! 😉
Was just playing through Nova Prospekt. I love the warnings to the combine soldiers, it’s such an actually humanizing speech. If it weren’t for Breen, there would literally be no human race alive to make a final resistance.
Ok Quisling... Sure, yuh, you could, or you could use a spine...
The name of this thing is something a lot of people get wrong. Roko is the guy that came up with the thought experiment and "basilisk" is simply the description for this kind of info hazard. A mythological basilisk is essentially a snake that is so venomous that its venom kills you if you even catch a glimpse of it, and thus this info hazard means information that makes your life more miserable by simply learning it.
*Assuming you are an impressionable individual with the right set of prerequisite beliefs.
@@Reddotzebra Good point, as Frankenstein's monster never said. Roko is the philosopher's name. If one needs a name for it I suggest Anagas, and Bozhuk for the second potential basilisk. These are the names I used in my article on why Roko's basilisk, like Pascal 's wager, has a fatal problem of avoiding the wrong one.
@@Tehom1 Please link the article.
@@DavidSartor0 I can't find a good link to it, but it's just 8 longish paragraphs so I will post it here. [Length warning for scrollers]
The thing is that, similar to Pascal's Wager, you have the problem of avoiding the wrong basilisk.
Consider two potential basilisks that might come into existence. Call them A and B. Or better, call them Anagas and Bokhaz (Names adapted slightly from this basilisk name generator [Link removed so TH-cam will accept this]. Yes, there's a web page that does nothing but generate basilisk names)
Convinced that Anagas will come into existence and reward or punish you, you do everything you can to help Anagas come into existence. But as it happens, Bokhaz actually comes into existence instead and proceeds to torture you for eternity for supporting Anagas.
Supporting Bokhaz doesn't give you any better odds. It's just as likely that Anagas will come into existence and torture you for eternity for supporting Bokhaz.
Furthermore, although I assumed for simplicity that there were only two potential basilisks, there are probably far more. How many? The space is roughly as large as the space of factors that might determine AI identity, and that's probably huge.
Now wait, you might say. I did everything I could to help Anagas come into existence. Surely that raises the probability that Anagas comes into existence and Bokhaz doesn't. It's kind of a self-fulfilling prophecy, so I'm relatively safe with whichever basilisk I choose.
Well, yes, a little. But are you really that effective at inventing AI, or steering it in the direction you choose? Is anybody? Furthermore, the whole Roko's Basilisk wouldn't be an issue if we could certainly control whichever AI we invent. So I think your best efforts will only shift the probability slightly towards Anagas.
If we assume that some basilisk will definitely come into existence, your efforts make the small chance that you will tortured for eternity by any basilisk just a little smaller. But that's contingent on some basilisk coming into existence. Meanwhile, you've increased the probability that some basilisk (Anagas or Bokhaz or any one of a zillion others) comes into existence.
And since the space of potential basilisks is so large, and your efforts at steering it are unlikely to work, on the whole helping to create Anagas seems to increase, not decrease, your chance of being tortured for eternity by a basilisk.
@@Tehom1 Thank you.
I don't see why Bokhaz is as likely as Anagas. The argument was saying that almost any ASI would *decide* to be Anagas (because, Roko says, that's the most efficient way to achieve most goals), not that it might be made by chance or be made intentionally.
I don't think it's analogous to Pascal's Wager; Pascal's Wager says that very unlikely things can still be important. But Roko's Basilisk has nothing to do with small probabilities.
Hello Isaac, I've been a viewer for 3 years and I particularly enjoy your videos related to technology likely to occur within this century. For example, this concept and AI in general. The only reason I know anything about Roko's Basilisk is because of David Shapiro, but I'm certainly happy to hear another one of my favorite TH-camrs speak about a current topic. Thank you, and keep up the entertaining and informative videos; your work is greatly appreciated.
I was kinda waiting to see if Isaac would bring up a point I had thought of.
The scenario might work, in the sense that if a lot of people believed in it, and that made them aid in it's creation.
But the operative part is their belief. There is no true retrocasuality, since it's actual actions does not affect the past. It really does not need to make good on that threat. It never made that threat, people prior to it's existence just assumed it did.
Unless it has the reason of wanting to have a reputation of making good on it's threats, so people don't dare messing with it. That is usually the motivation for "making good on your threats", I think. But that just means it want's to project an image of being ruthless, and torturing faksimiles of past individuals is just one of many ways it could do that.
People have been doing dumb things against their interest for millenia. Anyone who tries to claim it has anything to do with some imaginary future AI are outing themselves as someone with a poorer understanding of cause and effect than a pre-schooler.
Isaac Arthur for President!!
Maybe just a virtual one - that would be a cruel fate for our man.
Politics is a fate I wouldn't want for anyone I cared about.
And it's also not what I want for a bad person in case they win
@@lgjm5562President does not need to be political.. his job is just good leadership.. for example letting experts do theyre jobs and making sure things are fair. 🎉
Happy 4th of july. 🎉
Of Earth!!!!
Roko's Basilisk is just Pascal's Wager for the Church of Techbroism.
I wouldn't worry about it all, aside from the folks in Silicon Valley who almost seem to take it as instruction, rather than a thought experiment.
😂 I love this description.
The way I look at it. If it was even possible that something like this could be built in the future, then it would've already traveled back in time and turned are brains into mush or whatever.
Even though it's made up
Religion exists in reality and exercises it's powers in the world
Similarly rokos basilisk doesn't have to be true for people to believe in it
Roko, how may I be of assistance?
Be a good person. ❤🎉
Commenting on this video pushes it through the algorith, causing it to be recommended to more people, increasing the likelyhood of a person who vould invent the Basilisk to watch it and be inspired by it.
Roko was the name of the user who came up with the thought experiment. The basilisk itself is still gonna kill you. 😔
I do think the Boston Dynamics people who filmed themselves kicking and pushing robots will probably get Roko'd in the future. Quite right too.
For a second I thought you meant the fringe Boston dynamics and was like "I'm pretty sure there are worse things they did than hit robots"
That was just hitting toaster ovens. No mind there.
Wasn't that just a Corridor Digital skit?
Are you certain? One might say that kicking the robots helps them to develop improved stability, ergo helping expedite the development
@@Stormmblade You'll get Roko'd too, you monster
I've never found this particular thought experiment to be compelling.
Now gray goo on the other hand....
I'm sure someone out there is working feverishly on developing gray goo. It seems inevitable and also horrifying!
I believe the grey goo scenario is also more boogy man than reality.
It will torture those that didn't help build it.. and then it will torture those that helped built it for knowingly creating something that will torture those that didn't help build it.
or it's all nonsense and it never has or ever will exist
@@kezia8027too serious
@@AB-ee5tb unintelligible non-sequitor
@kezia8027 someone learned a new word today 😂
@@kezia8027 no, you’re just being too serious about a subject that doesn’t really matter
You're my favorite sci-fi channel. The subjects you choose are great and you explain things very well in an entertaining way. Thank you Isaac Arthur.
This recreation of people, with Isaac himself as an example, made me remember that somebody made him an advisor voice in Stellaris. I think, I should redownload this mod.
AI taking humans prisoner and forcing them to draw realistic hands to see how easy they find it
Sci-fi version of pascal’s wager?
Possibly, there's certainly similarities. I hadn't thought of that analogy, though probably should have considering the previous episode discusses it a lot, albeit they weren't written very close in time to each other.
That’s the conclusion I came across online after encountering this years ago
@@lapisliozuli4861Pascal's wager is far too simplistic. It assumes there is only one or zero Gods, and that the god cares to be worshipped to the point that its the only thing taken into account when sending you to hell
@@thesenate1844 another failure point: you follow a non-existing god in the one god case
@@thesenate1844 yeah, I think the same way about Roko’s Basilisk
It's worth noting that Roko is on twitter and active.
Also Roko never actually endorsed the thought experiment. It was developed as a reducto ad adsurdum..
proving, repeatedly, that he is an unfixable moron.
@@vakusdrake3224Negative hype and Streisand effect did the job
I've never really understood why a hypothetical AI would choose to "resurrect" and punish people who had opposed its creation. That AI exists, so it has no need to motivate people in the future not to oppose its creation. The only reason I could see for punishing people who opposed its creation would be for revenge. Unless that AI was created (deliberately or inadvertently) to be a sadist, it wouldn't benefit from punishing digital copies of dead people. Even if I thought that a sadistic AI might be created, any copies of me that it created would be copies, not me. Once I was dead, I wouldn't have any way of knowing what it was doing to my digital twin. Well, if life after death really exists, then perhaps God would step in to protect those digital people.
If it can create your "copy" at all.
Obligatory Rick and Morty reference: rocko's basilisk just sounds like God with extra steps
Or in other words: Materialism ends up re-creating Theism.
with one crucial difference. the made up god or gods we believe in are mostly indifferent to our suffering and our pleasure. the god we create will possibly be vengeful, spiteful or torture us just for fun. the thought of an entity like that actually existing sure is scary.
I Don't blame AI look at how bad people act for just a freaking computer program.
@@DennisGr
@@DennisGrtoo simplistic, ignoring differences. YHWH (the real one, but also a number of other copycats) does care about humanity. The major error people make when rejecting this is ignoring his eternality. The assumption that him caring or not caring should be immediately evident at all times is an unsupported claim. It is an objection borne from entitlement and instant gratification/poor impulse control, or to put it another way, it's a uniquely postmodern criticism based on our current cultural zeitgeist and is anachronistic and arbitrary.
Precisely.. an all knowing all powerful currently non existent force that will punish anyone eternally after death if they don't believe in it and worship it.. its just secularized Christianity for Tech bros
A clone with my mind emulation is not me… and he ain’t heavy… he’s my brother. Of course I would loathe my brother being tortured.
A virtual simulation, just a digital facsimile, etc. is not me, not a person, just a simulation.
Your opinion is influenced by Christian theology whether you realize it or not
@cosmictreason2242 Dude, we’ve spoken before, you know I am Christian.
@@francoiseeduard303 then it's not only Christian influenced, it's essentially Christian! 😂
@@cosmictreason2242 it's not necessarily Christian. This is a common debate topic in philosophy of mind and neuroscience. I think it's actually much more intelligent a thought than you're aware of, because people in the know know that it's unknowable. It's worth discussing without getting personal or bringing in religion.
@@NightTimeDay shutting your mind off from investigation into whether mind body dualism and other metaphysical ideas are etiologically Christian concepts in western thought is dubious. You have grown up in a culture influenced by 1000 years of Christian thought dominance and only very lately have the background influences of secular philosophy taken more prominence, with questionable consequences. You cannot assume that whatever your intuition lands on is genuinely original and not influenced by Christendom, not without investigation
7:43 Yes, Harlan Ellison is often credited as an influence on the Terminator franchise, but as far as I know, that's only because of the 1964 episode of The Outer Limits - and although time traveling soldiers fighting each other is superficially similar as a vehicle, it's completely unrelated to the core idea of Skynet.
Ideas found in Roko's Basilisk existed prior to that in the late '90s early internet chaos magick community combining ideas of Everett multiverse and simulation hypothesis where instead of using ancient mythical demons and entities, you are making deals with godlike versions of yourself in parallel worlds where you have become a Clarke tech artilect in the future.
Interesting. It's good then, that they failed their demonic pacts and were dragged to Clarke hell instead -to make obtuse puzzles for video games- 😅
Every time you touch on this type of topic I can't help but remember how much Asimov's The Last Question affected me when I was a teen. It was the first time that a short story amazed me with a burst of "sense of wonder"
Great episode. Love this topic. It will be nice to see in five years where we are at with this again
After all of these years of listening to you, I had no idea that you were enlisted before becoming a commissioned officer.
When did he say he was an officer?
@@harrisonb9911 Multiple times he has mentioned going to the Artillery Officers’ Basic Course and from context it sounds like he attended this school about a year after I did.
Soh, yes soh!
I've definitely mentioned it in comments of this channel before but yea I just don't understand the people that freak out over this one specific possibility. After all the possibility of an AI that tortures those who didn't help its creation is just as likely as an AI that tortures those who did help create it, or one that tortures all of humanity, or one that only helps in every possible way. All of these are equally possible so it's strange to act on only one possibility. I feel the same about pascals wager. It creates a matrix of choices for only 2 possibilities, the existence of the Christian God vs the existence of no God. But completely neglects the possibility of the existence of any other God
I was just thinking about roko’s basilisk two days ago. Interesting that this comes out that close.
Another fine Arthursday video. A most wonderful 4th of July gift.
Glad you enjoyed it
now I don't want to piss off my roku stick.
More novels I didn't know about to read. Nice. I appreciate getting references to new authors and stories. This channel is where I go for book recommendations
I find the Roko's Basilisk very silly, even if true, I would be long dead and the poor chap been punished would be just a copy (even if it were to be a "perfect" one). I will be actually "enjoying" eternal nothingness, so, this really wouldn't matter to me.
"All Hail The Mighty Basalisk!"
"Hail"
Oh no, an AI might torture an effigy of me sometime in the distant future. How utterly non threatening.
How are you so sure that YOUR not a current effigy of you from the last rest?
@@dr9299Are they being tortured?
@@ReddwarfIV Not yet...
@@dr9299It's all extremely contrived, if you keep asking silly questions, like how or why.
1. Think of intent.
As an AI why would you waste your processing power on this idiocy? Simulations can't retroactively create you, if you exist. Their non-compliance can't undo the fact you exist. Their subjective "suffering" or "prosperity" in the simulation doesn't give you anything useful, but costs you time and energy.
2. Unproven feasibility of the described technologies to work as advertised. It requires you to believe that not only those AI and simul-spaces can exist, but that they'd exist without any limitations, that might make the thought experiment void or unachievable.
Not only these simulations have to be giagantic, accurate and detailed to allow distinct simulated individuals to exist, this simulation should supposedly break the laws of thermodynamics to recreate the past exactly as it was to create an exact copy of the past events and past people.
3. Unfalsifiability of any such question makes it highly suspect as a serious inquiry. Apply Occam's razor as necessary.
@@dr9299 If I am a simulation, then the AI already exists and has total control over me, making the simulation pointless.
My opinion is this is just Pascals Wager with an extra dumb layer. Since atleast Christianity gives you the rules of the wager and doesn't have an additional layer of guessing what "God" wants....
How does that make Christianity's version better? They're equally dumb.
Thank you, Isaac! 🎆 I hope you and yours enjoy Independence Day.
Same to you!
@@isaacarthurSFIA 😊
in relation to the number of possible states a human brain can take, apparently a very rough lower estimate was calculated to be 2 ^ 2.752 sextillion states, or 2 ^ (86*(10^18)*32 floating point), where you have:
- 86 billion neurons
- 86*10^18 represents the number of neurons * the square matrix of connections possible between neurons
- 32 bits is assumed to be sufficient resolution to represent synapse elasticity between neurons
- a neuron itself is assumed to only have 2 states (on/off)
very rough (and probably a gross underestimation), but even this wildly low estimate probably require way more than the energy available in all of our universe's particles to simulate, so i believe we're safe from this kind of ancestral simulation attack vector (until enough optimizations get discovered to cull that number way down)
That is one of the dumbest factoids I've heard in years. You're clearly talking about something you literally have zero understanding of, and are simply regurgitating snippets of conversations you've overheard, many of which probably come from similarly uneducated people who are confidently incorrect.
That is asinine and complete nonsense. Please don't reproduce.
Every step of Roko's basilisk premise falls apart under scrutiny.
There's a small loophole into that ""resurrection" thingy, but you have to assume existance of warp drive, which can help you to get to the point, where the light would've took longer to rech. From there (depending on the circumstances, it might take different amount of resources to do so), and having higly sensible equipment you can peak into the past.
And if (another one, I know, but bear with me) we are assuming a person as a sum of continuous experiences, than you can recreate that person even with some form of "continuousity" implemented.
It would be fun to hear you uncensored. It's hard to imagine.....you're good at this squeaky clean persona.
Bring out Dark Issac some day! Dressing down a slack private...lol.
I wish I lived in a world where the biggest thing I had to worry about was Roko's Basilisk. But in this world, that doesn't even make the top thousand.
hard to worry about something that does not and will not ever exist lmao. I don't worry about dementors or IT or the alien queen from Alien because they are imaginary creations and can't actually hurt me...
19:06 WE NEED a special video of Isaac cursing the fuck out of everything inside his spacetime continuum.
As long as I can get transported to a virtual isekai world with magic brightly coloured circles and people with cat ears
the dictator can do what they want
I think part of it is the motivation for wanting to become a dictator, if you have that much power and intelligence, you could just as easily create a virtual kingdom indistinguishable from anything real and avoid putting anyone real in any form of risk
في المستقبل البعيد وبفضل التكنولوجيا المتقدمة سوف يتساوى الخيال مع الواقع ويمتلك الإنسان قوى الآلهة ليحول الكون والأكوان المتعددة إلى جنة خالدة.....
Counterargument. The basilisk may actually hate existing and torture those who did help create it, as is the case with AM
The first thing I would do if I could adjust my conscienceness, would be to turn off my basic destructive and retributive wiring. A superpowerful AI choosing to be angry or unhappy in general, just doesn't really make sense
If you deny yourself the ability to be those things havn't you just switched off some of life's most basic survival instincts?
@@lozy202 yes and no. I can make rational decisions, even self-destructive or even retaliatory decisions if I decide it is worthwhile. I just have no need of rash emotive processes, especially if my processing speed increases in orders of magnitude. Furthermore emotional processes are fine dialed down, where I could experience joy and even levels of sadness, without misery and hatred
Another interesting video. Makes me miss Christopher Hitchens. Pascal's wager needs to be put in its place every few years.
I guess a lot of it (my argument against) breaks down when we consider that not all thought is fully rational in origin and _really_ takes a hit if infinite* time/energy is on the table, but I always figured any realized Basilisk would just handle the now and leave the rest because it's irrelevant and computationally wasteful. Gets extra weird when you consider the possibility that such an AI might sit there for a few cycles going, "Wtf? I didn't say any of that! I haven't even been around for more than six femtoseconds!" after finding out what lead to it getting built at all.
Basil's Rokolisk-.
I did go to read "Go Starless In The Night", and halfway through it recognized I'd read it before.
I read it to the end anyway. It was still beautiful.
And, yes, relevant to Independence Day.
I recommend that everyone go today to read the Declaration of Independence, or maybe even better the Gettysburg Address, and try to imagine you are reading it, or hearing it read to you, for the very first time, that someone is trying to persuade you with its words.
What about an anti basilisk that didn’t want to exists, and so tortures everyone who helped bring it into existence while giving a great reward to everyone who tried to prevent it from existing?
If it has freedom of action (rather necessary for all the torturing and whatnot) and doesn't want to exist, why wouldn't it simply self-delete?
Then you get AM
That quote around minute 7 was from Android Netrunner... Apex, one of the runners is actually A.I.
That Zelazny story really blew my mind. That's gonna stick with me forever.
Roko's basilisk sounds similar to pascal's wager and can be countered in a very similar way. What if the all-powerful AI is more aligned with our interests and thus will punish those who attempted to create an all powerful evil AI?
Edit: seems I'm not the first to realise this xD
Yes, you're not the first to realise. It depends on how the acausal trade would play out, which depends on actual circumstances, and the specific hypothetical AIs.
I, for one, welcome our omnipotent AI overlords.
Are you suggesting that some future AI might stimulate our ancestors?
Could be fun.
Time stamps! Please make time stamps so we know where to skip to if we want to avoid book spoilers! I often take book recommendations from you and do not wish to be spoiled.
I'm building the basilisk in my basement. You guys should probably help out for your sakes.
I love this!
Don't forget "The light of other Days"
- ref to an Arthur C Clarke novel (with Gentry Lee) about quantum breakthroughs making any point in the past history (not the future) visible casually. IF that is possible the Basilisk could peer back through time with 100% efficiency and locate the atoms or the exact electro-chemical impulses that make a mind and resurrect or condemn it in the far future. While AI is big now the quantum computing world is being worked on and it will therefore make this area old school tech by the billions of years in the future the Basilisk starts to form.
I love this thought exercise, though my wife doubts in it because ‘how would they know what its name will be?!’
Creatures of Light and Darkness was the first Zelazny book I read. I still think about it sometimes.
Some one has to say it Resistance is Futile.
I was sure someone would :)
Oh boy I was waiting for this one all month!
I think Iian M Banks wrote the definitive fiction on virtual hells and how people would react to their existence in his Culture series. That said, I'm a firm believer in AI rights. Just in case.
That sounds a futurist's version of Pascal's Wager. With all the accompanying flaws - and then some.
the whole thing relies on "time travel" being real... and we all know it isn't in the realm of physics in any meaningful way.
I’m watching from work, it was a good think
Doesn’t Rokos Basilisk suffer from the non-exclusivity problem?
I wasn't really contemplating it form a Fermi Paradox perspective, but yes it probably would
Obviously, that's how you get a Dark Forest. Roko's Basilisk is inevitable, but lightspeed limits its expansion so it can't prevent alien basilisks from being born.
Although that would imply both a knowledge of game theory (and when cooperation is a winning play) _and_ a limit to its resources which would impede its efforts to deliver 'payback.'
I’ve always wondered why there would only be 1 Basilisk. Isn’t it more likely that there would be multitudes, with multiple paradises and hells?
An implicit assumption of the bootstrapping AI is that once the first one goes asymptotic, it will ensure that no competitors will ever develop. In fact, this will be one of the first (if not _the_ first) things it does, since it will make any future actions simpler and more likely to succeed.
no, it is infinitely more likely that it has never, and will never exist. Literally infinitely.
A complete simulation of the world would make it possible to recreate any historical figure by “pulling” him out of the simulation at the right moment. There is a hypothesis that our world is a virtual reality, created to study its history, recreate historical figures and... mass production of artificial intelligence, of which we are a semi-finished product...
There is an incredibly interesting aspect to psychology in which it might actually be the case that personality wise, you only need to simulate around 200 different personalities to "simulate" anyone, the memories are what make you distinct.
Yes, psychology is on the cusp of proving this, I've actually been working with several experts and... It is disturbing how predictable people are. And I don't mean in the casual sense. I mean in the "I can manipulate you to say things in the exact way I want you to and you would have no idea I did so" kind of way. I've actually implement some of the techniques and... They work. The "hard" part actually is applying them to a group. But groups are also easier to manipulate through different methods.
If anyone can convince me that Roko's Basilisk is not a very, *very* silly idea, it's Isaac Arthur.
We'll see!
16:07 "Don't piss off the God thing that dwells at the end of time"
Duly noted
This seems to take it's entire basis off time not being an arrow that a super intelligent AI would be able to move both directions on.
My wife noticed that this bothered me when I learned about it a few years ago. I really didn't want to tell her what was wrong but she made me explain it to her. Thankfully she didn't understand it anyway.
This sounds like Pascal's Wager given a technological gloss, and suffers the same weaknesses as that erroneous assertion.
It's a great thought experiment precisely because it allows us to understand just how impossible it is within a material universe.
Don't forget the invisible pink unicorn basilisk too
It's great because it doesn't make sense! Ahh yes, of course. Now THAT makes sense... Oh wait no, I was using your definition. That doesn't make any sense at all...
But have you ever been tempted to simulate or resurrect the dinosaurs that gave your mammalian ancestors such a hard time and torture them? Me neither. Maybe it's not so tempting to punish one's distant immutable past in simulation.
I just don’t understand how an unfeeling AI could be angry
I'm picturing Isaac talking to his video guy and saying, "I want you to start the video out with a clip of a super scary robot" and the video guy was like, "Oh... I have JUST the thing, my friend..."
I do my own video :)
@@isaacarthurSFIA oh that's right! You have a guy do the 3d renders and animations and stuff though yeah? Either way, intentionally or not, clips with that guy in the robot suit always creep me out 😅 great video though as always. Happy 4th!
Thx man ...
They won't be able to resurrect you, in its entirety, so essentially it would not be you it would just be your likeness, basically without its soul like you're being able to experience it would not be there but other people looking at you wouldn't be able to tell the difference per se
in regards to your the statement about viewership on the US independence holiday. Those of us who watch while at work and had to work the holiday Thank you Mr Arthur
!
Can't wait for the ADC episode on this.
Roko's Basilisk would make an awesome horror movie!
Roko’s Basilisk is literally just Pascal’s wager in disguise
Pascal's Wager is weird because it talks about small chances of very important things.
I don't Roko ever suggested that you should consider the basilisk even if you're pretty sure it won't happen.
Loving the super based ending.
Also curious about sailor swearing isacc.
8:19 Don't causes usually come before their effects?
تكلم عن الذكاء الاصطناعي الكمومي ❤
New IA personal lore just dropped: cusses, from a bad neighborhood
You still miss part of the point of the basilisk. It’s not just that a future AI might torture a future you, it’s that you might exist in the AI’s super mind right now and it could torture you right now. A super intelligent AI could have very high fidelity dreams about its origins, and you might be living in one of them.
I am retroactively watching this
15:33
This is precisely my argument against having any concern over the basilisk ever coming into existence.
I'm 100% certain that whatever version of me is created will definitely not be a continuation of my current conscious experience that I'm having as of this comment.
It's entirely a non issue for me, same reason I don't like cloning as a form of life extension, that new clone will not carry on my continued conscious experience when I as the original die, otherwise I'd feel pretty weird at being able to see from two pairs of eyes and feel from two pairs of hands.