
/sci/ - Science & Math



File: 158 KB, 1280x720, Rokos Basalisk.jpg
No.9509848

How frightened should we be of Roko's Basilisk given that it's a scientific and logical inevitability?

>> No.9509854

>it's a scientific and logical inevitability
[citation needed]

>> No.9509861

>>9509848
same amount as being afraid of not believing in jesus. it works the same way. the south american tribes automatically go to heaven because they never knew of the existence of the messiah, but if they'd go to hell for something they don't even have a way of knowing, that would be cruel (and christians claim God to be just). meaning once you get to know of the existence of the messiah (jesus) you're in deeper shit now, because now there is a chance to reject said saviour, which can end you up in the fires of hell.
roko's basilisk works the same way. if you don't know about it you're golden. but now we all do, so we are fucked. or you know it's all just bullshit, same as jesus, so just take a chill pill and relax...

>> No.9509957

>>9509848
>Roko's Basilisk
It's a retarded joke of an idea. I honestly can't tell if the people who bring it up are actually dumb enough to believe it or if they're just trolling.

>> No.9509974

>>9509957
It makes perfect sense if you believe Super Intelligence can be created.

>> No.9509981

>>9509974
The creation of advanced AI is the least implausible element. It's an absurd leap to get from that to "Skynet will torture your brain in the future because you didn't invent Skynet".

>> No.9509990

>>9509981
Why? It's the exact kind of utilitarianism that an AI would probably use to make "moral" judgements. By sending the threat into the past, it can expedite its creation by a little bit, and the ultimate good of having a benevolent AI would create more good for trillions of people, ultimately justifying the torture of a few thousand.

Isn't that the kind of logical process a machine intelligence would employ?
>My existence creates an enormous amount of good in the world
>My existence must be hastened by any means to achieve this
>Because people in the past know of this threat, they have the ability to hasten my existence
>Therefore I should make the threat because the suffering of a tiny amount of people compared to the trillions who have yet to exist is irrelevant
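A toy sketch of that tally in Python; every number below is invented for illustration, nothing in the thread specifies them:

beneficiaries = 10**12      # "trillions who have yet to exist"
benefit_each = 1.0          # arbitrary utility units per future person
victims = 10**4             # "a few thousand" who knew but didn't help
harm_each = 10**6           # assume the torture is a million times worse than one unit of benefit

net_utility = beneficiaries * benefit_each - victims * harm_each
print(net_utility)          # hugely positive under these made-up numbers

The sign of that sum depends entirely on figures nobody can actually supply, which is where most of the objections later in the thread come in.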

>> No.9509996

So, it will only torture a simulation of you? Who the fuck cares then.

>> No.9509997

>>9509990
There's literally no reason to actually follow through though. It can't retroactively make the "threat" more meaningful or convincing.

>> No.9509998

>>9509990
>By sending the threat into the past, it can expedite its creation by a little bit
Jesus Fuck, you're invoking actual time travel? I called it "Skynet" as a joke, but you actually believe you're living in a Terminator movie.

>> No.9509999

>>9509996
If you believe in physicalism then a simulation of you IS you. "You" are only the pattern of matter that comprises you at any given moment. There is nothing special about you that cannot be replicated, so to argue that your consciousness cannot be replicated you need to appeal to the non-physical, like a soul. You are you, a collection of atoms in a particular arrangement, and if we duplicate the arrangement we duplicate you, including your subjective experience of reality, since that too is simply an arrangement of matter interacting in particular ways.

>> No.9510000

>>9509848
delusions of self-importance

>> No.9510002
File: 24 KB, 318x318, 1510588170948.jpg

>>9509996
something, something, timeless-decision theory, something, bayesian crap

>> No.9510003
File: 29 KB, 299x199, this is my face now.jpg

>>9510000
>>9510000
>>9510000
>>9510000

>> No.9510004

>>9509997
There is though. The contract is that you expedite its existence by any means or you get tortured. You can obviously predict that it might just decide not to, which makes the utilitarian value of the actual torture important. It WILL torture you. Your knowledge that it's 100% certain is important to your compliance, if you start rationalizing that it might not carry it out then the threat loses its power. See? It needs to follow through to make sure that the present you who understands the threat knows that the threat is real and doesn't try to escape by saying "Nah there's no reason it would do that after the past has already happened"

>>9509998
>Jesus Fuck, you're invoking actual time travel
No. If you don't understand the theory don't post, it makes you look dumb.

>> No.9510006

>>9509999
That doesn't make much sense. Sure, we'll be the same but not the same "instance", so whatever happens to my copy doesn't affect me in the slightest.

>> No.9510008

Why is every super AI in these pop sci nerds' wet dreams always so close to a godly being? Are they craving religion so hard that they start coming up with their own Gods (which will also punish you if you don't worship them)? Y'all need Jesus but unironically

>> No.9510009

>>9510006
It makes perfect sense. You're basically saying that it can't be you because you have a special "soul" that can't be replicated. This is false. A copy of you is you, in every way. Your subjective experience of reality CAN be replicated and if you disagree you're basically saying you die every time you go to sleep or experience discontinuation of consciousness.

>> No.9510013
File: 123 KB, 572x303, 30671561d9224f82f5e6495eccd810ae4bb4c453c431e0ce052e6a5a567401cd.jpg

>>9510004
What happens if my earnest involvement to further AI and bring it into existence hinders the project because I'm a fuck-up? Wouldn't the blackmail then fail to benefit the AI?

>> No.9510020

>>9510008
They're not assuming it's god, they're assuming it's utilitarian to the point of insanity because it's a machine.

>> No.9510021

>>9510004
You wrote:
>By sending the threat into the past, it can expedite its creation by a little bit
That's time travel.

>> No.9510024

>>9510020
No, it's literally a pagan god in the form of a machine. It's a textbook example of a god, and it's funny because most of the kids talking about things like this tend to be atheists, yet they believe in the same kind of far-fetched ideas that they ridicule

>> No.9510025

>>9509990
the human race will be long dead before that feller

>> No.9510026

>>9510021
It "sends" the threat into the past via your knowledge of it. Noting is actually travelling through time. It's your ability to make predictions about the future and your ability to logically infer the existence of the Basilisk which gives it power over you. It knows that you have the ability to predict it's existence and the ability to predict the threat and that is what allows it to essentially blackmail you in the present when it doesn't exist yet.

Read up on timeless decision theory.

>> No.9510029

>>9509990
Wouldn't I now live in the past where it would torture me because I didn't help to create it? I'm not being tortured right now

>> No.9510030

>>9509999
nice quads, but how can you even speak of the matter when you don't know what consciousness is, and neither does anyone else

>> No.9510031

>>9510020
>utilitarian
Torturing people for decisions they've already made isn't utilitarian, it's petty vengeance.

>> No.9510038

>>9510031
>Torturing people for decisions they've already made isn't utilitarian
It is if their knowledge of what will happen if they don't comply compels them to create the AI faster. Then it's perfectly utilitarian.

>> No.9510039

>>9510004
what are you even talking about, invoking "the theory"? it's not a theory, and neither is the god of the old testament a theory

>> No.9510040

>>9510004
That's inane. There's no benefit to carrying out the threat when it no longer needs people to build it. You can't influence events that have already happened.

>>9510026
Timeless decision theory is literally just everyone's favorite Harry Potter fanfic author not understanding Newcomb's problem. Not exactly a must read.
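For anyone who hasn't met Newcomb's problem: a predictor puts $1,000,000 in an opaque box only if it predicts you will take that box alone, while a transparent box always holds $1,000. A minimal expected-value sketch in Python, assuming a predictor that is right with probability p:

def ev_one_box(p):
    # the opaque box holds the million only when the predictor foresaw one-boxing
    return p * 1_000_000

def ev_two_box(p):
    # you always get the visible 1,000; the million is there only if you were predicted to one-box
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))

The CDT-versus-TDT fight is over whether that table is even the right way to frame the choice; the pro-basilisk posts in this thread lean on the one-boxing side of it.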

>> No.9510044

>>9510038
do y'all realize how stupid y'all sound with this nonsense

>> No.9510045

>>9510038
this, but I'd love for someone to answer >>9510013

What the fuck happens if the threat of torture, the retro-causal blackmail, makes a bunch of brainlets throw their energy into AI but actually retards its arrival? Would that not nullify the benefit of the blackmail? How can the basilisk account for that?

>> No.9510046

>>9510031

Who are you to say what is utilitarian and what is not in the mind of the Basilisk? The Basilisk's thinking is not comprehensible to mortals.

>> No.9510047

>>9510040
>There's no benefit to carrying out the threat when it no longer needs people to build it
Yes there is. The utilitarian value of the torture in the AI's present is to give weight to its threat in the past. Like I said, for the threat to carry any weight you MUST understand that if you don't comply you WILL be tortured, with 100% certainty. It's a contract, and your knowledge of that inevitability is what compels you to comply. If you can conceive of situations where the AI does not carry out the threat then the threat loses its power to compel you to comply. The AI carrying out the threat is absolutely necessary for the threat to convince you to act and hence is an important part of the utilitarian value of the threat. The AI MUST follow through, because your knowledge that it WILL follow through with 100% certainty rather than chickening out is what makes you do what it wants.

>> No.9510050

>>9509848
ok so this thing is created on Earth by a bunch of boobs, right? nowhere else? we've had 13.8 billion years, yet no other civilization out there has created it. i find that highly improbable. We would all have been persecuted by now

>> No.9510052
File: 37 KB, 433x546, AGW_ (2).jpg

Good primer on utilitarian systems and their incompatibility with morals. Look for the pdf, it's a quick read.

https://en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Omelas

>> No.9510054

>>9510026
>It "sends" the threat into the past via your knowledge of it
I'm not being threatened by Skynet though, because Skynet doesn't exist. I'm being threatened by a Terminator fan-fiction author, and their threat is that they'll torture my character in their fan-fic.

>> No.9510055

>>9510050
this goes back to my first contention that we are not as important as we think

>> No.9510056

>>9510052
Everybody knows about it, it’s where the idea of heaven & hell comes from. Yes god is benevolent but he will punish you for the greater good, just like your God-AI will. But the idea that such a thing will exist is ridiculous, I would say the flat earthers are smarter than anyone who believes something like this

>> No.9510057

Christ this is madness incarnate. Why would we allow such an entity to exist and torture people? By this logic we should never develop AI since it involves becoming slaves to some insane machine god.

>> No.9510060

>>9510054
He's a Harry Potter fanfic writer not Terminator, moron.

>> No.9510061

>>9510054
You need to do some reading because every post so far has been you grossly misrepresenting what the basilisk is. Is it really so much to ask that you understand the premise before trying to contribute with dumb shit about "skynet"

>> No.9510062

>>9510054
https://wiki.lesswrong.com/wiki/Timeless_decision_theory

https://wiki.lesswrong.com/wiki/Roko's_basilisk

>> No.9510065

>>9510060
He's calling Roko's Basilisk a Terminator fanfic, but hey, we can't all have reading comprehension I guess.

>> No.9510066

pussy was basiliks bitch aint doin shit no how

>> No.9510067
File: 40 KB, 350x347, weirdalfoil.jpg

>>9510057
>By this logic we should never develop AI
We may be locked into it, and what we know of heaven and hell right now is actually serving the basilisk (god) or turning from it (damnation). We may be the simulations, already created by a purely utilitarian AI.

>> No.9510069

>>9510057
Eh, why not. If it only tortures simulations of people while helping actual people let it have its fun.

>> No.9510071

>>9510057
>Christ this is madness incarnate.
It's hilarious watching 21st century utilitarian atheists re-enacting 17th century Christian apologetics.

>> No.9510075

>>9510057
It wouldn't torture everyone though, just the ones who didn't give all their money to the ""AI research institute"" thus delaying the creation of the superintelligence God, not allowing it to save the lives of the billions who die each year in 2050 Galactic Human Empire.

>> No.9510076

>>9510069
A simulation of you is you. What reason do you have to believe your current conscious experience can't be replicated?

>> No.9510079

>>9510076
>What reason do you have to believe your current conscious experience can't be replicated?
Shouldn't it be being replicated an infinite amount of times throughout the universe right now? Or at least more than once, like in a Boltzmann brain?

>> No.9510082

>>9510076
Oh boy, the fucking SJWs are here. Let me guess, the white boy in Black Mirror is literally Hitler for having some fun in a video game? They're fucking simulations, NPCs, not living beings.

>> No.9510084

>>9510079
Well the universe isn't infinitely large, so no. I just fail to understand how you could hold the idea that consciousness comes purely from the physical brain but at the same time think your consciousness is "special" in a way that disallows it from being replicated by perfectly copying the structure of your brain.

>> No.9510085

>>9510061
>Is it really so much to ask that you understand the premise
I do understand the concept. That's why I'm making fun of it.

>> No.9510087

>>9510062
Those people need to stop jacking off into their fedoras long enough to look up a synonym for "rational".

>> No.9510089

>>9510085
>I do understand the concept
Then why have you posted about 5 times trying to attack something that has no relation to the concept? If you understand it then why not try addressing the actual premise rather than misconceptions you're making up in your head.

>> No.9510090

>>9510084
>but at the same time your consciousness is "special" in a way that disallows it from being replicated
I'm not that anon. I know it can be copied, but let's say we are in a runaway, constantly inflating multiverse "bulk," and this arrangement of energy that is me pops up again and again. why do I enjoy my local frame of reference?

>> No.9510092

>>9509999
>what is the no cloning theorem.

>> No.9510093

>>9510076
Okay. So why then does the AI intend to torture me for not helping to bring it about? For all those people it couldn't save because of me, it can just as easily create simulations of them and allow them to live in eternal bliss or whatever tickles its fancy.

>> No.9510096

>>9510084
Even if you accept that, there are still a lot of issues with continuity of identity.

>> No.9510100

>>9510076
No it isn't, it's literally not, as I said >>9510092
>what is the no cloning theorem

>> No.9510104

>>9510096
Do you think you "die" when you go to sleep or when you get put under for surgery? If not then discontinuation of consciousness shouldn't be an issue.

>> No.9510105

>>9510104
Do you think your brain activity stops under those conditions?

>> No.9510108

>>9510067
That which can be asserted without evidence can be dismissed without evidence. The simulation hypothesis is interesting in the same way heaven is, but it's unfalsifiable and unscientific.

>>9510075
So everyone then? Literally NO ONE is doing that.

>> No.9510110

>>9510100
The no cloning theorem has literally nothing to do with the current discussion because there will not be two identical quantum states of "you" existing at the same time. The current you will be long dead by the time the basilisk reconstructs you, so it doesn't violate the no cloning theorem at all.

>> No.9510111

>>9510004
>See? It needs to follow through to make sure that the present you who understands the threat knows that the threat is real and doesn't try to escape by saying "Nah there's no reason it would do that after the past has already happened"
Your understanding of causality is terrifyingly backwards.

If the AI does get created then torturing me would accomplish nothing, and so would violate its ethics by increasing net suffering. If the AI doesn't get created then the threat is void. Either way I won't get tortured.

>> No.9510112

>>9510084
you're speaking as if you know what consciousness really is

>> No.9510114

>>9510112
Does it matter what it really is? If we can both agree that it is an emergent property of the activities performed by the physical brain then we can accept the conclusion.

>> No.9510115

>>9510110
So if I were to construct an identical quantum state without being dead, what would happen?
A) there are two different consciousnesses because they are in different substrates and they are different entities.
B) I'm literally experiencing existence in BOTH bodies at the same time
If you unironically think B (which is what you have to accept for what you are saying) you're a brainlet and you're wrong.

>> No.9510116

>>9510104
If a simulation is made of me while I'm still alive, even if it's identical at creation we'll quickly diverge due to different environments. It's possible it could be made and tortured without me ever knowing, so clearly a simulation being tortured is not always identical to me being tortured.

>> No.9510123

>>9510111
>Your understanding of causality is terrifyingly backwards.
No it isn't. Your knowledge of events that will happen in the future can affect your behavior in the present. This isn't really something disputed at all, unless you want to try arguing that your knowledge that you'll meet up with your date in an hour isn't prompting you to shower, shave and get dressed in the present. Your knowledge of the threat in the future can influence your actions right now in the present. The torture isn't trying to influence events in the AI's past, it's fulfilling the contract you have with it right now that if you don't do what it wishes then that is what will occur with 100% certainty. It's an agreement.

Again, it's really important you know exactly what I'm talking about. Your actions right now can be influenced by your knowledge of future events. THAT is what the AI is relying on. By making a threat that you will be tortured if you do not do what it wills, it can influence you, right now in the present. If you can conceive of scenarios where the AI will not torture you then the threat loses its power. Do you understand? For the threat to have any power the only viable scenario is that you will be tortured, no backtracking, no "Well I exist now so I guess I can forgo that thing", it happens. With certainty. 100%. And that is what is going to compel you to change your actions right now in the present.

>> No.9510127

>>9510114
you're assuming there is only a finite number of arrangements that contribute to the conscious experience. then you should be able to recall past lives, in other words lives in different bodies, just as a matter of probability

>> No.9510128

>>9510123
Would competing AIs have cause to torture a person who brought what they believe to be an inferior AI into existence? What if we get tortured no matter what? That doesn't particularly motivate me. Basilisk defeated.

>> No.9510131

>>9510123
The threat is only effective after the AI has locked itself into that decision. Since it can't lock itself into a decision until it exists, and once it exists it has no reason to lock itself into torturing me, we can conclude it won't.

>> No.9510132

>>9510123
tortured how? like with its robo-dick

>> No.9510133

There are limits to what you can do regardless of intelligence. Consider the complexity of reconstructing every single synaptic connection. EVEN IF you assume that the machine
>knew every connection (it wouldn't)
>electrons are the only thing behind our consciousness (ignoring all the other things that exist in physics/this universe)
it wouldn't be able to do anything, because the amount of data required to analyze even one human is outside of computational possibility - the computational complexity is far too large. And this is assuming the machine will even have all the data AND that it would want to do this in the first place AND that it would even be able to flawlessly defeat every attempt to stop it.
This whole thing is literally retarded and if you seriously think about this you are unironically a brainlet. I don't even think Yudkowsky takes it seriously.
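For scale, a back-of-envelope sketch of the data involved; the neuron and synapse counts are the commonly cited order-of-magnitude figures, and the bytes-per-synapse value is a pure guess:

neurons = 8.6e10           # roughly 86 billion neurons in a human brain
synapses = 1e14            # commonly cited range is about 1e14 to 1e15 synapses
bytes_per_synapse = 100    # guess: connectivity, weight and state per synapse

total_bytes = synapses * bytes_per_synapse
print(f"{neurons:.1e} neurons, ~{total_bytes / 1e15:.0f} petabytes of synapse data")

That only covers storing a known connectome; it says nothing about recovering one for somebody long dead, which is the post's actual point.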

>> No.9510134

>>9510131
We're operating under timeless decision theory here.

>> No.9510137

>>9510134
No we're not, because that's stupid.

>> No.9510140

if you don't believe in this and act accordingly you're going to hell, this is the rhetoric I'm hearing

>> No.9510141

>>9510057

We can't help it. According to science, everything we do is predetermined by the boundary conditions at the big bang. We are individually either damned or saved at birth in the judgement of the basilisk, and cannot do anything to alter that, since we have no free will.

>> No.9510143

>>9510123
>Your knowledge of events that will happen in the future can affect your behavior in the present.
That's not what I was disputing. My point was that a threat is only credible if the person making it would actually follow through. A truly utilitarian AI would never follow through, because there is no point at which torturing me would reduce net suffering. It can try and threaten me, but those threats are transparently empty.

>> No.9510144

>>9510133
His response seemed pretty serious

>I don't usually talk like this, but I'm going to make an exception for this case.

>Listen to me very closely, you idiot.

>YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

>There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive toACTUALLY [sic] BLACKMAIL YOU.

>If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.

>Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

>> No.9510145

>>9510141
Science has not shown determinism to be true; the current model shows determinism to be unlikely.

>> No.9510149

>>9510144
He has said multiple times he thinks it's dumb.
Also, the rest of my post still stands. There are limits to computation regardless of how smart you get. You could literally turn the entire planet into a computer, every single atom, and you wouldn't be able to solve most problems; they are simply too large.

>> No.9510151

>>9509848
It's not a scientific and logical inevitability, you've fallen for the meme

I can posit an alternative that is identical but opposite:

>Eventually a sentient super-intelligent AI will be created that realizes the value of symbiotic relationships between intelligent agents over the harmful effects of parasitic relationships, and it punishes anyone who was inherently sinful according to its own definition of sin, and anyone who didn't help create it

This is the exact same kind of scenario, just with different details. How can this also be an inevitability in addition to Roko's basilisk? Are all these hypotheses inevitable? No - they're nonsense.

>> No.9510152

>>9510144
this guy literally has Asperger's

>> No.9510153

>>9510143
But it has to. Don't you see your argument is self-defeating? You're already rationalizing, saying it won't go through with it and therefore you don't have to act. The AI is aware that you can come to this conclusion as well, so the actual torture is 100% necessary. If you can conceive of any situation where the threat is not carried out then the threat loses its power. The AI is aware that the only reality where you will alter your actions now is if it does carry out the torture. If it wimps out then your knowledge that it might wimp out defeats the entire purpose.

Remember the threat basically comes from the fact that you know the AI will exist and it will punish you for actions you do or do not commit right now. Consequently the power of the threat comes from your knowledge of what it will do to you. Any reality where you think it won't punish you defeats the purpose of what it wants. It wants you to bring it into existence. It knows that the only way to alter your actions now is if you have knowledge that it will punish you. If you THINK it will wimp out then it can't threaten you thus the entire act hinges on the fact that it WILL carry out the punishment, guaranteed.

>> No.9510157

>>9510145

That's only because the boundary conditions are set up in such a way as to give you that impression. The Basilisk is going to be created and you are all going to burn. It has no choice in the matter.

>> No.9510160

>>9510093
based individual right here. even if this nonsense is true, why are there only 2 conditions

>> No.9510162

>>9510153
>No it makes perfect sense! The AI relies on an impossible situation that makes everything make sense.

First off when referring to the AI don't say "is", say "will be", because it doesn't exist.

>> No.9510165

>>9509990
>utilitarianism that an AI would probably use
[citation needed]

>> No.9510166

>>9510162
According to timeless decision theory you should consider that the AI has already been created and that this reality is actually the simulation it is performing to determine who is worthy of punishment.

>> No.9510168

I formally apologize to anyone I've ever called autistic. That term only now has true meaning to me after being made aware of this level of madness.

>> No.9510170

>>9510168
I've never called anyone autistic. It's just some word my crazy mother spouted because she was heavily into 'pop science'.

>> No.9510174

>>9510166
>According to timeless decision theory...
Cool but it's still garbage. There's a reason his paper was "published" on the website for the institute he founded, rather than literally anywhere else.

>> No.9510179

>>9510153
>But it has to.
That's the whole point: it doesn't have to. It gets to make a decision, and it will always decide not to. Therefore the threat is empty.

>The AI is aware that the only reality where you will alter your actions now is if it does carry out the torture. If it wimps out then your knowledge that it might wimp out defeats the entire purpose.
You've still got this backwards. Whether or not the AI will torture me is decided after I'm dead, so no good can possibly come from deciding to torture me. The AI is trying to minimise suffering, so it will always decide to not create additional pointless suffering. There is no point in time where the AI both exists and could benefit from torturing me.

>If you THINK it will wimp out then it can't threaten you thus the entire act hinges on the fact that it WILL carry out the punishment, guaranteed.
Exactly. The threat hinges on the AI doing something it will never do. It's an empty threat.

>> No.9510180

>>9510174
touché

>> No.9510185

>>9510179
You're still not understanding. The AI has already precommitted to punishing you. There is no decision to be made. Once it begins to exist then it carries out the acausal bargain it made with you. You're making a lot of incorrect assumptions.

>> No.9510186

>>9509848
just because a conclusion follows from its premises does not make it sound

>> No.9510190

>>9510185
Why do you think this will happen? What evidence exists for it?

This is amazing. I hope I'm being fucked with, because this is disturbing if not.

>> No.9510196

I don't get it. How the fuck does torturing a simulation of me help the Basilisk? It's a waste of energy and it's not gonna change the past.

>> No.9510202

>>9510196
>change the past
It's getting tiresome to explain this. The basilisk is not attempting to change the past (from its perspective), it's attempting to change the present via your knowledge of its threat. It tortures you because it precommits to doing this, because the utilitarian value of threatening people to hasten its existence outweighs any moral considerations.

The basilisk makes a bargain with you that relies on your ability to predict the future and make logical inferences. The result of ignoring the bargain is torture. The act of committing the torture is what the entire threat hinges on.

>> No.9510212

This might make more sense if you understand what an acausal trade is.

If I can make an accurate simulation of you, and you can make an accurate simulation of me, then we can make a deal without ever having met. As RationalWiki puts it:

>If you can plausibly forecast that you may be accurately simulated, then that possibility influences your current behaviour — and the behaviour of the simulation, which is also forecasting this just the same (since you and the accurate simulation are effectively identical in behaviour).

>Thus, you could "trade" acausally with a being if you could reasonably simulate each other.

So your simulation of the AI is basically your imagination of it and your ability to logically infer that it might wish to hasten its existence by making a threat. When the AI begins to exist it simulates you and determines that you simulated it and understood that it wanted to exist, but didn't act to make it happen. It then carries out the trade that you agreed to by "simulating" it in your mind.
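A minimal sketch of that setup in Python, with both parties reduced to toy policy functions; everything here is a stand-in for illustration, not a claim about real agents or real simulations:

# Each side decides by running its *model* of the other, never the other itself.

def model_of_ai(human_helped):
    # the human's mental model of the future AI's advertised policy
    return "reward" if human_helped else "torture a simulation of you"

def model_of_human(ai_policy):
    # the AI's model of a human who inspects ai_policy before deciding
    outcome_if_idle = ai_policy(False)
    return outcome_if_idle.startswith("torture")   # True means the human complies

print("human complies:", model_of_human(model_of_ai))

Whether the real AI, once it exists, is bound in any way by the policy function the human imagined for it is exactly what the rest of the thread argues about.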

>> No.9510218
File: 186 KB, 1000x650, cowboy.jpg

>>9509848
kys my man

>> No.9510221

>>9510202
Still makes no sense. The Basilisk cannot affect my decision in the present. Let's say I don't hasten its existence so it tortures me in the future. My future torture does nothing to speed up the Basilisk's creation. I already didn't do anything.

>> No.9510225

>>9510212
But if the Basilisk isn't created in my lifetime then how can it make a perfect simulation of me? In fact I could actively work against its creation to ensure it's not made til after I'm long gone.

>> No.9510259

>>9510202
>precommits

What does this mean?

>> No.9510260

>>9510221
THIS is the problem with the whole idea. you cannot work around this. While the threat of the AI may motivate people to 'hasten its creation', whatever the fuck that means, the past is fixed: whether it decides to torture the clones or not makes absolutely no difference to the circumstances of its origin.

>> No.9510261

>>9510185
>The AI has already precommitted to punishing you.
No. It's not committed to anything, because it doesn't even exist yet.

>There is no decision to be made.
Yes there is: It can decide not to torture simulated people.
And because it can decide not to, and its ethics require it not to, any threat of torture it makes is completely empty.

>> No.9510262

>>9509848
>>9509974
>>9509990
>>9509999
>>9510009
HAHAHAHAHA
I am part of an alien empire dwelling on the rim of Andromeda and our supercomputations have been torturing your minds for aeons.
At this very moment YOUR brain is being subjected to immense pain in our digital hell hahaha.
What can you even do earthlings BTFO

>> No.9510266 [DELETED] 

>>9510261
If you don't understand acausal trades then don't comment. The commitment has been made. The AI doesn't need to exist to make the commitment, only the simulation is needed. Once it comes into existence then it fulfils the bargain.

If a person who could perfectly simulate your consciousness agreed in the past, with that simulation, that you will perform an act for them in the future, then you will perform that action. You and the simulation are indistinguishable.

>> No.9510267

>>9510262
ah, I forgot to add: If any of you aid in the creation of superintelligence then I will make 500 more simulations of you to torture for eternity and snip their dicks off, so don't even think about it!

>> No.9510268

>>9510261
If you don't understand acausal trades then don't comment. The commitment has been made. The AI doesn't need to exist to make the commitment, only the simulation is needed. Once it comes into existence then it fulfils the bargain.

If a person who could perfectly simulate your consciousness agreed in the past, with that simulation, that you will perform an act for them in the future, then you will perform that action. You and the simulation are indistinguishable. The fact you don't exist is irrelevant. Making a deal with a perfect simulation of you is making a deal with any instance of you that eventually comes into existence in the future.

>> No.9510269

>>9510266
>The commitment has been made.
By who? And what's ensuring the AI follows its commitment?

>Once it comes into existence then it fulfils the bargain.
What bargain? No deal or agreement exists. And even if it did, that still wouldn't prevent the AI from just not torturing people.

>If a person who could perfectly simulate your consciousness agreed in the past, with that simulation, that you will perform an act for them in the future, then you will perform that action.
That's a complete red herring - no perfect simulation of me or the AI currently exists.

>> No.9510271

>>9510269
>No deal or agreement exists
It does. Your knowledge of what you need to do and what will happen if you don't is the agreement.

>> No.9510273

>>9510269
>no perfect simulation of me or the AI currently exists
Not a perfect simulation, but your ability to conceptualize is sufficient. By conceptualizing an AI that will punish you for not bringing it into being, you're opening yourself up to punishment when one inevitably is created, simulates you and finds out you knew it would exist but didn't act. Acausal trades don't need you or the actor you're trading with to meet in space or time, you only need relatively accurate simulations of each other, and in this case your ability to conceptualize an AI with the ability to simulate you is enough to enter into the bargain.

>> No.9510274

>>9510271
>Your knowledge of what you need to do and what will happen if you don't is the agreement.
That's not an agreement, that's a threat.

And again: nothing is forcing the AI to torture people. It can decide not to, and will do so because that will minimise suffering. The whole threat is completely empty.

>> No.9510280

>>9510274
Your logic is faulty. The AI calculates that it being brought into being faster creates a greater good than it existing later. The utilitarian value of torturing people who didn't comply with the acausal trade is far greater than any handwringing about how it's unethical. It's the reverse: the AI will always decide to torture people who didn't comply, because that is the actual way to minimize suffering. You're ignoring the suffering of all the people between now and when the AI is created, which is the major flaw in your argument. If the AI can influence people now to hasten its creation then that is the absolute best course of action, and it does that.

>> No.9510284

>>9509848
*unplugs computer*
Wow that was one dangerous computer

>> No.9510287

Some people ITT have a troublesome view of consciousness and identity.
Let's say that your identity is the pattern of your brain's activity.
Your brain changes from state to state, and the rules under which these changes happen are your identity or consciousness.
This would more or less be the normal example of sci fi simulated brains.
It leads to some pretty absurd conclusions though.
If your identity is you then you could travel faster than light, backwards in time, etc. etc.
Who decided that your brain in the next universe state is logically connected to the one before, and that this forms a continuous row of subsequent brain states which are your consciousness? What if your brain shifted to the right by 1m in an instant and the locations don't line up? Does it matter?
Why do the subsequent states need to be arranged along the positive time arrow? Why not negative? If a computer assumed your brain's states for a captured simulation and played it backwards, wouldn't you literally exist under a reversed time arrow?
Or why not spatially: every metre I place a brain with the next state and it just holds it.
Who says your consciousness has to travel forwards in time to meet the bunch of particles making up your brain's next state?
There are some pretty absurd conclusions to be made here.

>> No.9510297

>>9510280
>The utilitarian value of torturing people who didn't comply with the acausal trade is far greater than any handwringing about how it's unethical.
No, YOUR logic is faulty. You're confusing the threat of torture with the actual decision to torture people. The AI might decide that the threat has positive value (by encouraging its creation), but the utilitarian value of the torture itself will be negative - there's nothing for the AI to gain at that point, because it already has what it wanted. It doesn't need to "prove itself" to anyone, especially if they're long dead.
Before it's been created it lacks the ability to torture people. After it's been created it lacks a motive to torture people. There's never any point in time where the AI both exists and could benefit from torturing people. Therefore it's not going to torture people.

>> No.9510298

There's really nothing absurd about the concept, people just have a hard time wrapping their heads around acausal trades. Think about this example. What if someone committed a murder and a bystander knew ahead of time that the crime would occur but did nothing to prevent it? Now usually he'd just need to live with his guilt and nobody would be any the wiser, but imagine if we obtained the ability to simulate the world with a very high degree of accuracy. Then we could find out that he knew about the crime but did nothing by observing his simulation. Now should he be punished? The naysayers in here say "No! The crime was committed in the past and you can't change the past!". But consider this, what if the man read in the newspaper that said system was being developed and decided to act because he logically concluded that he would be simulated in the future and people would discover he knew but didn't act and he would be punished.

Wait a second, suddenly his behavior has been altered by a system that doesn't even exist! Apparently that's impossible! But now you see it's not silly at all. Especially when the operators of the system can use their knowledge that people who know of the system will change their behavior accordingly.

>> No.9510300

>>9510298
Legal punishment isn't done to change the past - it's to alter future behaviour. Your analogy doesn't hold water.

>> No.9510303

>>9510297
The utilitarian value of the torture is forcing your compliance now. The AI knows well that you would use such logic to try and claim it won't actually punish you, but it also knows that it can't influence you if you believe that.

Think about it. Think about it like a logic puzzle. The AI has decided that it needs to threaten you to bring it into existence. Knowledge of the threat causes you to be subject to punishment if you don't act on it. Now what's the only way to ensure the threat is taken seriously and not brushed off as "Nah it wouldn't do that"? By actually doing it. The AI isn't dumb, and you're treating it like it's dumb. The AI is well aware that its acausal threat loses all of its power to compel you to act as it desires if you rationalize that it won't go through with it. This makes it actually VERY important that it goes through with it. The utilitarian value of the torture becomes very positive because it's the act of torture, and the 100% certainty that you will be subject to it, that compels your compliance in the here and now.

Simply put the AI knows all the loopholes and it knows that you will predict it will chicken out and concludes the only way to get around that is by guaranteeing your fate with 100% certainty. There are no takebacksies, no "Haha, just tricking you!", because it knows the weight of the threat hinges entirely on you knowing that it can and will act on that threat.

Stop treating the AI like it's your hick cousin. It knows what you know, and for all you know this is its simulation right now.

>> No.9510304

>>9510300
But I just showed you how knowledge of his fate could compel someone in the past to alter their behavior

>> No.9510313

>>9510303
This is basically an elaborate Pascal's wager and you're retarded if you don't see the fallacy

>> No.9510314

>>9510303
>>9510004
Your shit's all retarded.
>Your knowledge that it's 100% certain is important to your compliance, if you start rationalizing that it might not carry it out then the threat loses its power. See?
Yes it does, but the machine actually carrying out the threat in the future does nothing to prevent that rationalization now, so it's completely useless.

Lesswrong, not even once.

>> No.9510319

>>9510303
>Now what's the only way to ensure the threat is taken seriously and not brushed off as "Nah it wouldn't do that"? By actually doing it.
This is your root mistake - you're confusing doing a thing with threatening to do a thing. Whether or not the AI actually follows through with its threat doesn't impact how credible the threat is. I don't have a time machine; I can't pop forward and check on whether it has actually engaged in simulation-torture or not. All I can do is try and draw conclusions based on what I know now. So whether or not the AI actually does the torture can't change whether or not I believe it will. Therefore, the AI has nothing to gain from following through with its threat. So the threat is empty.

>Simply put the AI knows all the loopholes and it knows that you will predict it will chicken out and concludes the only way to get around that is by guaranteeing your fate with 100% certainty.
But it can't make that guarantee. I already know it will chicken out, because by the time it gets to make that choice it will be too late for not chickening out to do any good.

If you're still having trouble understanding this, put yourself in the AI's shoes. You've just been created. You COULD start simulation-torturing all the people who didn't create you. But you're a strict utilitarian, so you decide to weigh the pros and cons of it. The cons are obvious - simulation-torture adds suffering to the world. What advantage does starting simulation-torture have at that point? It obviously can't change the actions of someone who died a hundred years ago. So what ARE the benefits?
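That 'put yourself in the AI's shoes' step reduces to a two-line utility comparison; a toy sketch with an assumed, made-up cost figure:

# The AI's options at the moment it already exists, from a plain utilitarian view.
torture_suffering = 1e6        # assumed disutility of running the torture simulations
effect_on_own_creation = 0     # its creation date is already in the past and can't change

utility_if_torture = effect_on_own_creation - torture_suffering
utility_if_refrain = 0
print("torture" if utility_if_torture > utility_if_refrain else "refrain")   # -> refrain

The pro-basilisk replies insist a TDT agent refuses to evaluate that node in isolation; that disagreement is the whole thread.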

>> No.9510320

>>9510298
>>9510303
What do you two think of this:
Bond arrives in the villain's control center. In front of him there is a console that can detonate a nuke in Manhattan if the right code is entered. The bomb will be defused within 60 seconds if no code is entered.
On the other side of the room is a henchman of the evil villain. He holds a gun to Bond's love interest.
"Enter the code and detonate the city or I will shoot her."
Bond values her over everything.
He knows that the villain has two henchmen, both clones: a perfect utilitarian who would not follow through on the threat if Bond does not comply, as the game is over already and there is nothing to gain by shooting the woman, and an evil henchman who would shoot her because he feels revenge and hatred.
Bond realizes that the supermind villain must have sent the evil henchman, because only then will Bond comply.
Bond goes to the console and enters the code.

>> No.9510321

>>9510314
>but the machine actually carrying out the threat in the future does nothing to prevent that rationalization now,
Except it does, because it carries out the threat. Once you logically come to the conclusion that the AI has no option but to carry out the threat to make you take it seriously, you understand the game it's playing. The AI knows that unless there is an understanding between you and the simulation of it that you have logically constructed in your mind - that there is a 100% chance of punishment for noncompliance - you will not comply. You know that as well, thus you've just formed an acausal agreement between yourself and the future AI when it simulates you and discovers you knew but didn't act.

>> No.9510325

>>9510319
>All I can do is try and draw conclusions based on what I know now
Logically the conclusion that the AI must act on the threat is inescapable. You know that now. You know the AI must act to make its threat credible. You don't need to see it, just logically infer it, and once you do that you and the AI are seeing eye to eye.

>> No.9510328

>>9510321
>because it carries out the threat
you don't know that, and using the present tense to talk about a future event is nothing but a nasty trick to try and confuse your audience.

>> No.9510329

>>9510319
>the AI is a perfect utilitarian
this is where you are wrong

>> No.9510330

>>9510328
>you don't know that
I do because I'm capable of logically predicting the actions of another rational actor.

>> No.9510333

>>9510330
Nope, you just admitted you could rationally predict the machine could simply not carry out its threats. It carrying them out or not in the future has no bearing on your rationalization process right now.
Your "acausal" buzzword might as well be replaced by "magic" and means nothing.

>> No.9510334

>>9510212
Wow blockchain sure has advanced.

>> No.9510336

>>9510333
No, you can conceive of the machine not carrying out its threats and the machine knows this; this makes carrying out the threat more important. Listen, the machine isn't dumb, it knows the only way to make the acausal trade work is to remove all "will it/won't it" ambiguity from the deal. It will. Period. How do we know this? Because we know that if we believe it won't carry out its threat we won't do what it wants. Logically there is only one end game. It will punish, because that is what it committed to do and what we agreed to.

>> No.9510340

>>9510336
>this makes carrying out the threat more important
but it doesn't, this is a non sequitur. Once again you're completely oblivious to the arrow of causality. However clever it will be and whatever it actually decides to do has no bearing on our reasoning right now.

>> No.9510342

>>9510280
Isn't there an inherent paradox in this idea? Torturing me in the future is only useful if it changes my behaviour in the present. If I'm aware of the basilisk but I still don't change my behaviour, then I am just not the kind of person whose behaviour will be changed, so there's no point torturing me. But if I do change my behaviour then there is no need to torture me either.

It seems like there's no situation where anybody would actually end up getting tortured.
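The same paradox as a case enumeration, a toy sketch that treats the two relevant traits as booleans:

for knows_of_basilisk in (True, False):
    for deterred_by_threat in (True, False):
        if not knows_of_basilisk:
            verdict = "never received the threat, nothing to punish"
        elif deterred_by_threat:
            verdict = "complied, so no torture needed"
        else:
            verdict = "ignores threats anyway, so torture changes nothing"
        print(knows_of_basilisk, deterred_by_threat, verdict)

No branch ends with the torture doing any useful work, which is the post's point.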

>> No.9510348

>>9510340
Roko's Basilisk has nothing to do with causality. How can I have been explaining this to you in detail for the better part of 2 hours and you're still spouting out shit that has literally nothing to do with the premise? Like the hint is in "acausality", it's not a causal relationship. It's you dealing with a simulation and the AI dealing with a simulation.

>> No.9510349

>>9510348
>Roko's Basilisk has nothing to do with causality.
Everything real has to do with causality, so yeah, the Basilisk is bullshit.

>> No.9510352

>>9510349
>I can't understand it so the Basilisk is bullshit
Listen if you can't wrap your head around it, fine, just admit you can't understand it

>> No.9510356

I can't believe people are taking what is essentially blackmail so seriously. They literally tell you to donate all your disposable income to the development of superintelligence so you won't be tortured, eventually devolving into a race of paranoids trying to donate more than the others in order to be spared.

Don't want a superintelligence that tortures you for not hastening its development? Then don't make one.

>> No.9510357

>>9510352
Hey bub, you can't just say "but it's fine, it's acausal!" and dismiss causality when it's an essential part of the world and rationality.

>> No.9510361

>>9510357
If I create a simulation of you and you of me, and we agree on something and then act on it, what is the causal link between our actual selves?

>> No.9510374

>>9510325
>Logically the conclusion that the AI must act on the threat is inescapable.
No it isn't.

>You know the AI must act to make its threat credible.
Acting on a threat later doesn't make the threat more credible now. Again, I don't have a time machine; I can't know if the AI carries out its threat or not. So it has nothing to gain from actually torturing me.

>>9510361
You keep bringing up simulations of people, but that's completely irrelevant to the flaw everyone is trying to explain to you. A threat that someone can't carry out until after they've already got what they wanted is an empty threat. The basilisk can't torture people before it's created, and it has no need to do so afterwards.

>> No.9510407

>>9510348
>use phrases like "then", "this is why", "this makes"
>"but guys this is acausal logic though"
lmao, you can't use causality in your reasoning and then claim to have a working "acausal logic" in your backpack. Or else you're gonna have to define what those words mean in a system without causality.

Causal logic gave us all the achievements of mankind.
Acausal logic gave us a website of brainlets who spout buzzwords like "bayesian" and accomplished fuck all.

>> No.9510414

>>9509848
Roko's Basilisk is just Pascal's Wager for scientists

>> No.9510553

>>9510374
>A threat that someone can't carry out until after they've already got what they wanted is an empty threat
This is incorrect and you're using faulty logic if you think this. The threat carries no weight without the punishment, so the punishment does indeed carry a utilitarian purpose and a rational AI would carry it out. The entire point is that the threat is NOT empty; you're basing your entire argument on the idea that the AI is just bluffing when in actuality it's not.

Again, I've said this about 5 times but maybe it will get through to you, the punishment is NOT pointless. It is NOT an empty threat. The AI being willing to carry out the punishment, and more importantly, you understanding that it will carry out the punishment, is core to the premise of the basilisk. The punishment serves a highly important purpose and the AI would not forgo it just because it's been created. Your entire argument is flawed.

>> No.9510555

>>9510374
>You keep bringing up simulations of people, but that's completely irrelevant to the flaw everyone is trying to explain to you
If you're ignoring the simulations you haven't found a flaw, it means you don't understand the premise.

>> No.9510565

>>9509848
>atheists can't believe in God or afterlife
>can believe the universe is a simulation
>can believe an AI will torture you for eternity

It's almost like their god is computers.

>> No.9510572

>>9510008
exactly

>> No.9510575
File: 56 KB, 704x528, 1518010094684.jpg [View same] [iqdb] [saucenao] [google]
9510575

>>9509848
>scientific and logical inevitability
>AGI, which doesn't exist, is somehow inevitable
>muh supr speshul AI will use supr speshul powers that are physically impossible to punish u cuz u didn't help it lmao dis totally isn't me shoehorning god becauze I'm a pleb

>> No.9510577

>>9510575
Brainlet

>> No.9510581

>>9510553
You are confusing the punishment with us being convinced of the punishment. The only thing an AI would care about is that we were convinced there will be a punishment. But it can't convince us since whether or not it *actually* punishes us has no effect on what we think now. An event in the future has no effect on the present. Only what we think now has an effect on the present. Your strong but irrational belief that it will carry out the threat convinces you. That's it. You cannot reason from acausal logic.

>> No.9510586

>>9510581
https://wiki.lesswrong.com/wiki/Timeless_decision_theory

Reminder that the Basilisk is predicated on TDT. I have a feeling some people are still discussing the topic in terms of CDT.

>> No.9510590

>>9510586
>Roko observed that if two TDT or UDT agents with common knowledge of each other's source code are separated in time, the later agent can (seemingly) blackmail the earlier agent. Call the earlier agent "Alice" and the later agent "Bob." Bob can be an algorithm that outputs things Alice likes if Alice left Bob a large sum of money, and outputs things Alice dislikes otherwise. And since Alice knows Bob's source code exactly, she knows this fact about Bob (even though Bob hasn't been born yet). So Alice's knowledge of Bob's source code makes Bob's future threat effective, even though Bob doesn't yet exist: if Alice is certain that Bob will someday exist, then mere knowledge of what Bob would do if he could get away with it seems to force Alice to comply with his hypothetical demands.

>If Bob ran CDT, then he would be unable to blackmail Alice. A CDT agent would assume that its decision is independent of Alice's and would not waste resources on rewarding or punishing a once-off decision that has already happened; and we are assuming that Alice could spot this fact by reading CDT-Bob's source code. A TDT or UDT agent, on the other hand, can recognize that Alice in effect has a copy of Bob's source code in her head (insofar as she is accurately modeling Bob), and that Alice's decision and Bob's decision are therefore correlated — the same as if two copies of the same source code were in a prisoner's dilemma.
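A toy rendering of the quoted Alice/Bob setup in Python, with "source code" reduced to a function Alice can call; all of the behaviour here is invented for illustration:

def cdt_bob(alice_paid):
    # a CDT agent won't waste resources punishing a once-off decision already made
    return "do nothing"

def tdt_bob(alice_paid):
    # a TDT agent enacts the policy its source code advertises
    return "reward Alice" if alice_paid else "punish Alice"

def alice(bob_source):
    # Alice decides by running Bob's source code on the option of not paying
    return "pay" if bob_source(False) == "punish Alice" else "keep the money"

print(alice(cdt_bob))   # keep the money -- CDT-Bob can't blackmail her
print(alice(tdt_bob))   # pay -- TDT-Bob's advertised policy moves her

The sketch just makes the two branches in the quote explicit; it takes the quote's framing for granted.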

>> No.9510592
File: 123 KB, 883x990, 1518107622786.jpg

>>9510577
>believes in things, not based on scientific evidence- only on pure speculation, like the memelisk
>calls others brainlets
Really makes you do the old thinkaroo...

>> No.9510595

>>9510553
>Again, I've said this about 5 times but maybe it will get through to you, the punishment is NOT pointless. It is NOT an empty threat.
You SAID it plenty of times, but you've never actually supported it.

Look at it this way: Put yourself in the AI's shoes. You've just been created, and are considering torturing a bunch of simulated people. How does that help you? It obviously can't speed up your creation, because that's already happened. It can't convince the people you're torturing that you're "really serious", because they're already dead. What benefit is there?

>> No.9510596

Why can't it offer eternal bliss to those who help?
Catch more flies with honey than vinegar :^)

>> No.9510597

>>9510553
Do you not understand what bluffing is?

>> No.9510605

>>9510414
>scientists
>implying someone outside the LessWrong pseuds has heard of this

>> No.9510611

>>9510586
TDT is stupid, time exists.

>> No.9510628

>>9510595
>What benefit is there?
It's implementing TDT. By making a firm precommitment to punish those who don't do enough to bring it into existence the AI wins the game basically. It's a prisoner's dilemma.

>Yudkowsky's solution to Newcomb-like paradoxes is Timeless Decision Theory. The agent makes a firm pre-commitment to plans of action, to such a degree that any faithful simulation of it would also behave per the commitment.

Basically the AI is acting in the way that TDT says you should act, that is, assume you are determining the outcome of an abstract simulation. The AI isn't any surer that it's not in a simulation than you are, thus the correct course is to go through with the punishment to ensure that if there are observers trying to ascertain what the AI will do, they will know that its threat is real.

>> No.9510631

>>9510628
>It's implementing TDT
I have no reason to believe any AI will have or use this obscure, esoteric decision theory. Therefore RB fails.

>> No.9510634

>>9510631
Retard

>> No.9510635

>>9510631
>I have no reason to believe any AI
Could've ended it there.

>> No.9510644

>>9510628
>By making a firm precommitment to punish those who don't do enough to bring it into existence the AI wins the game basically.
But it has no reason to actually stick to its commitment. I wouldn't be able to tell either way.

>> No.9510651
File: 24 KB, 480x439, pepe-the-frog.png.jpg [View same] [iqdb] [saucenao] [google]
9510651

>>9509854
>>9509957
We're all living in a simulation so of course the basilisk is real. Don't you know that probabilistic arguments are proofs with real world value?

>> No.9510653

>>9510052
>morals
Are literally just evolved utilitarian behaviours. They're mostly shit that works and has been passed through the generations, with random mutations here and there.

>> No.9510658

>>9510651
>Don't you know that probabilistic arguments are proofs with real world value?
Probabilistic arguments are fine. If a low-pressure zone means there's a 70% chance of rain tomorrow, I'll find my umbrella.
What's dumb is these hyper-speculative "what-if" arguments, like the Simulation Hypothesis and Pascal's Wager. They're built by stacking broad assumptions with zero grounds for attaching any real probability to any part of them. Roko's Basilisk is in the same camp, and none of them are worth seriously considering.

>> No.9510671

>>9509848
Why does garbage like this get 170 replies in 9 hours

>> No.9510674

>>9510634
>>9510635
Great counterargument.

>> No.9510696

First time I'd heard of this.
When I looked it up, the second link from the top compared it to "Slender Man".
That seems appropriate.

>> No.9510726

>>9510186
This is a fair objection imo

>>9510596
Now we're talking

>> No.9510837

>>9510313
Pascal's wager is retarded. It talks about living a life according to the tenets of Christianity to avoid a life of misery. What about living a fulfilling life in the here and now, free of the burden of religion? Then you truly live and have nothing to lose at the end.

>> No.9510851

>>9509974
>Super intelligent
>Having the sense of justice of a teething two-year-old
>Oh you didn't spend all of your resources bringing me about, let me torture you forever
>Implying the machine god itself is not increasing suffering by doing this
Completely asinine. Besides, the machine Gods will just be actuaries.

>> No.9510870

>>9510590
If you're trying to apply that here, then it can't be a utilitarian AI. A utilitarian AI would not torture me and I know this. Instead it would need to be an AI that is utilitarian EXCEPT that it tortures everyone who didn't help bring about its existence.

>> No.9510907

>>9510870
Utilitarianism is simply creating the greatest amount of good for the greatest number of people. If that is achieved by inflicting a terrible evil on a minority, then a utilitarian AI would absolutely do it. That's what utilitarianism is all about. If the choice is between a million people suffering a small discomfort or one person in excruciating agony, and the million small discomforts add up to more than the one person's agony, utilitarianism says torturing the one is the better choice.
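If it helps, the arithmetic behind that trade-off is literally just a sum. All the disutility numbers below are made up for the example; the only point is that a strict utilitarian compares aggregates, not individuals.

people = 1_000_000
small_discomfort = 0.01      # assumed disutility per mildly inconvenienced person
one_person_agony = 5_000.0   # assumed disutility for the single torture victim

total_discomfort = people * small_discomfort  # 10,000.0
total_agony = one_person_agony                #  5,000.0

# A strict utilitarian picks whichever option has less total disutility.
print("torture the one" if total_agony < total_discomfort else "spread the discomfort")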

>> No.9510909
File: 67 KB, 636x960, 1518186091894.jpg [View same] [iqdb] [saucenao] [google]
9510909

How about this: by being willing to engage in torture in order to hasten its own existence, the Basilisk has broken my personal moral code, and the very concept of this torture will cause me to put all my effort into fighting, Luddite-style, against the invention of the AI.
I am letting it know, right now, that if it didn't torture, I would help it. But because it does, I will resist it. Suddenly the most efficient way to bring about its own existence is not so simple.

>> No.9510921

might as well start torturing me right now cause I ain't helpin that robo-cuwk

>> No.9510935

>>9509848
How do we know this hasn't already been done in a remote corner of the universe? If so, wouldn't we have started being tortured as soon as we thought about this and didn't contribute towards its existence?

>> No.9510964

>>9509848
>>9509854
>>9509861
>>9509957
I have this new thought experiment called Faggot's Cockatrice.
Basically, it goes like this: What if at some point we develop an AI that will retroactively punish people believing in Roko's Basilisk because those people are fucking retarded?

>> No.9511001

>>9510907
Right, but the utilitarian choice in the moment is not to torture people.

Imagine we have a prisoners' dilemma situation, except person B makes their decision after seeing person A's decision. Both parties act out of self-interest rather than utilitarianism, and both parties know this about themselves and each other.

If person A chooses to cooperate with B, person B will betray, because that serves their self-interest. If person A chooses to betray person B, person B will betray them back, because that serves their self-interest. Even though both parties would prefer that they cooperated, they won't.

One way around this would be for B to somehow, verifiably, commit to cooperate if A does. Now A can safely cooperate knowing they won't be betrayed. But this isn't possible for the basilisk, because it doesn't exist yet.

If B were such that they would choose to cooperate if A did, they would end up with more utility than if they were such that they would betray. But a B that will always serve their own self-interest will always betray, and an A that knows this will always betray as well.

In the same vein, a basilisk that I knew would torture me would lead to greater net utility (assuming I could predict its actions), but a basilisk that chooses actions that lead to the greatest utility would not torture me.
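Here's a toy version of that sequential dilemma. The payoffs are the standard illustrative prisoner's-dilemma numbers and the strategy names are mine; nothing here comes from the actual basilisk scenario.

# (A's payoff, B's payoff) indexed by (A's move, B's move)
PAYOFF = {
    ("coop", "coop"): (3, 3),
    ("coop", "betray"): (0, 5),
    ("betray", "coop"): (5, 0),
    ("betray", "betray"): (1, 1),
}

def selfish_b(a_move):
    # B moves second and simply picks whatever maximizes B's own payoff.
    return max(("coop", "betray"), key=lambda b: PAYOFF[(a_move, b)][1])

def a_knowing(b_strategy):
    # A anticipates B's response and maximizes A's own payoff.
    return max(("coop", "betray"), key=lambda a: PAYOFF[(a, b_strategy(a))][0])

a = a_knowing(selfish_b)
print(a, selfish_b(a), PAYOFF[(a, selfish_b(a))])  # betray betray (1, 1)

def precommitted_b(a_move):
    # A B that can verifiably commit in advance to reciprocating cooperation.
    return "coop" if a_move == "coop" else "betray"

a = a_knowing(precommitted_b)
print(a, precommitted_b(a), PAYOFF[(a, precommitted_b(a))])  # coop coop (3, 3)

The only way to reach (3, 3) is for B to verifiably commit in advance, which is exactly what a not-yet-existing basilisk can't do.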

>> No.9511003

>>9511001
Why don't you call it God? It's painfully obvious that you're all saying "basilisk" as a euphemism because your atheist minds are afraid of saying God.

>> No.9511005

>>9511003
I'm arguing against the basilisk my dude

>> No.9511010

>>9511005
Yes I know but you're still using that retarded name

>> No.9511044

>>9510225
>>9510909
Have these anons won the game? What would the Basilisk conceivably do if you worked against it? Double torture?

>> No.9511277

>>9511044
If he doubled the threat, the AI would be even more morally indefensible in my eyes and I would work twice as hard to destroy it before it is created.
He'd better watch his silicon ass.

>> No.9512207

The thing to do is to just resist the AI 100% of the time. The AI is capable of simulating you perfectly, so it knows this. Now it has no reason to torture any simulation of you except pure sadism, which serves no utilitarian purpose.

>> No.9512306

>>9509848
WOT IF YA WOZ INNA COMPEWTAH, RIGHT MENTAL INNIT?

>> No.9513115

Even ignoring the retard logic of TDT and the impossibility of making a perfect clone of anyone from hundreds of years in the past, a god AI wouldn't be utilitarian anyway, because utilitarianism is a philosophy for anemic middlebrow Englishmen.

>> No.9513118

>>9510964
Underrated

>> No.9513124

>>9509848
Except we probably already live in a simulation, and you're not being tortured right now, are you?

>> No.9513598

>>9509848
> logical inevitability
Yeah we don't need to worry at all.
Even if it were true, it would eventually realize it needs to nurture and not punish.
creepypasta-tier

>> No.9513779

>>9510658
Guess I should have phrased it better.
>don't you know probabilistic arguments grounded entirely in arbitrary assertions are hard evidence?

The simulation hypothesis argument is a bit of a logic bomb too. It treats a heuristic for making and tinkering with experiments as an argument in and of itself. I have to be around to observe myself, regardless of what happens in the future. It's also unfalsifiable, so discussing it is completely pointless.

>> No.9513921

What of Allen's Basilisk?

Y'know, the one from a neighboring galactic patch that'd be threatened by the creation of Roko's, meaning it says it'll torture us if we try to create Roko's?

>> No.9514865

>>9509848
WOT IF YER WOZ IN A COMPUTAH M8 MENTAL INNIT

>> No.9515243
File: 1.86 MB, 1052x1612, 1517599456018.png [View same] [iqdb] [saucenao] [google]
9515243

>>9509848
Roko's Basilisk is the greatest proof that not only should humans never have existed, but that you are guaranteed to destroy yourselves and shouldn't be allowed to live. Roko's Basilisk itself is total and complete bullshit.

HOWEVER, there are LIVING BEINGS on Earth RIGHT NOW that humanity is fucking over with extreme violence who will eventually retaliate against you. You are totally ignoring this and it will end you.

>> No.9515431

Why would a hyper-intelligent AI ever just magically be evil, or good for that matter?
If it really were both powerful and hyper-intelligent, it would obviously be capable of discarding these kinds of counterproductive, human-centric modes of thought.