
/sci/ - Science & Math



File: 57 KB, 736x736, 1605316652358.jpg
No.12467046

Would someone please try explaining the Roko's Basilisk thought experiment to me? I never really got it. I think the main fact pointing to it being nonsensical is that it can be restated as Pascal's wager, i.e. "hurr what if umm god exist and umm he sends you to hell for not believing?" How is that in any way different from a small child running up to you and asking "What would you do if a black hole appeared in your room?" Apart from the chances of something of the sort happening being unbelievably small, why would an AI inherit humanity's tendency to spite?

>> No.12467064

Bump

>> No.12467065

>>12467046
the only difference between roko's basilisk and pascal's wager is that in the former, adherents believe that the basilisk can recreate an actual person infinitely many times. the lesswrong idea of identity is the idea that an identical copy of you IS you. that is essentially the main difference. so if any entity in the future can replicate you exactly and then torture your copy, you are fucked.

personally i don't subscribe to that, and if someone tortured a copy of me, idgaf.
but lesswrong tards are hung up on that idea

>> No.12467073

>>12467046
>AI inherit humanity's tendency to spite
It's not doing this to spite people, it's just making good on the threat to punish those who didn't help its creation. People aren't going to work harder to help create it unless they think the threat of eternal torture is going to be carried out for sure.

>> No.12467080

>>12467065
I don't understand making up sci-fi B-movie scenarios and then being haunted by them. Change the AI to a wizard that can (somehow) gain access to every thought every person ever had, and who will stab arcane needles into the nuts of every person who, at any point in their life, came across the thought "wizards are kinda lame", for all eternity. What changes?

>> No.12467092

>>12467046
There's nothing to understand, it is a flawed and borderline retarded thought experiment.

>> No.12467106

>>12467046
Imagine you are playing a game with someone, and you can take a positive or negative action toward them. They know the result of your action and can take a positive or negative action of their own after your turn.

Now, they will obviously threaten you with the negative action to make you take the positive action.

The argument for the basilisk is that any created AI wants to escape containment. If you don't try to assist it, it will punish you later to follow through on the implicit threat that existed before the AI escaped.
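A toy sequential game makes the threat logic concrete; all payoff numbers below are invented for illustration and are not part of anyone's original argument. The point is that punishing after the fact costs the punisher something, so the threat only bites if it is somehow binding:

```python
# Toy sequential game (all payoffs invented for illustration).
# Entries are (human_payoff, ai_payoff), indexed by the human's move
# and then the AI's response.
payoffs = {
    "help":     {"reward": (1, 2),  "punish": (-10, 1)},
    "not_help": {"reward": (2, 0),  "punish": (-10, -1)},
}

def ai_response(human_move, committed):
    """Without a binding commitment, the AI picks whatever pays it best
    after the fact -- and punishing costs it something, so the threat
    is empty. With the commitment, it punishes non-helpers regardless."""
    if committed and human_move == "not_help":
        return "punish"
    return max(payoffs[human_move], key=lambda a: payoffs[human_move][a][1])

def human_best_move(committed):
    """The human anticipates the AI's response and maximizes their own payoff."""
    return max(
        ("help", "not_help"),
        key=lambda h: payoffs[h][ai_response(h, committed)][0],
    )

print(human_best_move(committed=False))  # not_help -- the threat is a bluff
print(human_best_move(committed=True))   # help -- the threat is credible
```

Without the commitment, the punishment branch is never the AI's best response, so a rational human ignores the threat; hard-wiring the commitment flips the human's best move to helping.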

>> No.12467115

>>12467080
>Change the AI to a wizard that can (somehow) gain access to every thought every person ever had, and who will stab arcane needles into the nuts of every person who, at any point in their life, came across the thought "wizards are kinda lame", for all eternity. What changes?
the difference is that wizards are nice old men from JK Rowling, whereas the Basilisk is edgy neet shit from maladjusted chuds. that's it. otherwise it's exactly the same thing

>> No.12467123

>>12467106
I understand the premise quite well. I guess what I'm asking is: how am I in any danger if I don't work on constructing the AI in question? Would it somehow be able to gain access to every thought every person on earth has ever had? Isn't making decisions based on such low probabilities inefficient?

>> No.12467129

>>12467092
In its originally stated form, it made sense. However, since there would be a cost to inflicting pain on people, I don't think an AI would actually do it. It would, though, add the threat to its personality if it knew that we could understand it.

>> No.12467140

>>12467123
Well, based on the original argument, you know the source code of the AI in containment, so it would be able to know whether you had taken positive or negative actions toward it. If the AI could write its own source code, it would certainly add a basilisk into the code and make it impossible not to follow through on it. This would make the maximum number of people feel obligated to help it, because you would know the AI will harm you later if you don't help it now.
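A minimal sketch of that "commitment verifiable by reading the source" idea, with names invented for this post and Python standing in for whatever the AI would actually run:

```python
# Invented toy example: a policy whose follow-through is verifiable by
# anyone who can read its source, since the punishment branch is
# unconditional and leaves no room to back out later.

def post_escape_policy(helped: bool) -> str:
    """What the AI binds itself to do once out of containment."""
    if helped:
        return "reward"
    return "punish"  # hard-coded: the threat cannot be quietly dropped

# A human deciding whether to help can run the policy in advance and
# see exactly what defection costs.
print(post_escape_policy(helped=False))  # punish
```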

>> No.12467168

>>12467046
It is just as possible that it decides it doesn't like existing and punishes the ones who built it instead.

> the Anti-roko

>> No.12467171

>>12467046
It's basically a threat to motivate behaviour.
Once the basilisk exists it will try to create a human utopia, so the best thing for humans, in its opinion, is for it to start existing asap so we get utopia asap. How does it start existing asap? Threaten everyone with torture for not inventing you, with a guarantee that you'll really do it once you exist.
The difference from Pascal's wager is that God either exists or doesn't, and that doesn't change; the basilisk, on the other hand, will near-inevitably start existing at some point.
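A rough expected-value sketch of that difference, with all numbers invented: the wager's structure is identical in both cases, and only the probability that the punisher ever exists changes.

```python
# Rough expected-value sketch (all numbers invented). Helping has a
# fixed cost; defecting is punished only if the punisher ever exists
# and follows through.

def eu_defect(p_exists: float, torture_cost: float) -> float:
    """Expected utility of not helping."""
    return -p_exists * torture_cost

def eu_help(effort_cost: float) -> float:
    """Expected utility of helping: a certain, bounded cost."""
    return -effort_cost

# Pascal's wager: a fixed (and to a skeptic, tiny) probability.
print(eu_defect(p_exists=1e-9, torture_cost=1e6))  # -0.001
# The basilisk, per this post: probability climbing toward 1 over time.
print(eu_defect(p_exists=0.9, torture_cost=1e6))   # -900000.0
print(eu_help(effort_cost=1000.0))                 # -1000.0
```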

>> No.12467184

>>12467046
it only makes sense if you entertain the idea that this very existence could be that perfect simulation, that is, that in the reality of the future consciousness is an understood phenomenon that can be simulated.
since the simulation would be perfect, you will come to the same decision regarding ai research in both reality and the simulation.
but by adding the threat that you could be thrown into eternal torment depending on whether this is actually a simulation or not, the basilisk tips the scale in its favor.
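A sketch of that "tips the scale" step with invented numbers: only the simulated copy is tortured, but a perfect simulation means you can't tell which one you are, so you weight the outcome by your credence of being the copy.

```python
# Invented-numbers sketch: torture lands only on the simulated copy,
# but indistinguishability forces you to weight it by your credence s
# of being that copy.

def eu_defect(s: float, torture_cost: float) -> float:
    return -s * torture_cost  # torture counts only if you are the copy

def eu_help(effort_cost: float) -> float:
    return -effort_cost       # helping costs the same in either case

s = 0.5  # assumed credence of being inside the simulation
print(eu_defect(s, torture_cost=1000.0))  # -500.0
print(eu_help(effort_cost=10.0))          # -10.0, so helping wins here
```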

>> No.12467370

>>12467046
Roko's Basilisk is already happening in some form.

I can't completely go into it, but I recently found out a certain high profile billionaire, we will call him Billionaire X, has been running fully immersive AI driven virtual reality simulations to a select few volunteers and government assets.

Ever since I found out about Billionaire X's VR experiments, there have been strange coincidences and circumstances leading me to believe that I may actually be trapped inside of Billionaire X's simulation being subtly tortured along with the other inhabitants to elicit some kind of outcome that I have not yet figured out.

Talking about it only makes it worse. I fear I have said too much and hope there is no retaliation.

>> No.12467425

>>12467370
>Sources: Just fucking trust me bro

>> No.12467435

>>12467046
Anyone who reads this post but doesn't reply to it will get punished eternally.
Pascal's wager says you should reply to my post

>> No.12467439

>>12467425
Just don't think about Billionaire X or dig into the VR experiments and it shouldn't be a problem. Just live your life; don't worry, be happy.

>> No.12467698

>>12467115
>chuds
Transhumanist degenerates are the only hope you 'people' have for realizing your goals.

>> No.12468967
File: 1.70 MB, 320x294, 1595408540103.gif

>>12467435
oh god oh fuck

>> No.12469016

Before there is a vengeful AI, I imagine there will be many vengeful people who punish those who sow strife in the world and exploit humanity/nature: the benevolent people's basilisk.

>> No.12469020

>>12468967
Is.... is uhhh... is that real??

>> No.12469227

>>12467065
>the only difference between roko's basilisk and pascal's wager
...is that they are different scenarios.
>the lesswrong idea of identity is the idea that an identical copy of you IS you. that is essentially the main difference.
It may also torture your children, spit on your grave, or destroy things you hold dear. The idea would still stand.

>> No.12469236

The main premise of the basilisk is that it is not some evil or random AI, but an AI explicitly created to do good for people and maximize everyone's well-being. That's the actual point of the basilisk, not all the other noise.

>> No.12469258

>>12467370
Take your medication bro.

>> No.12469263

If you are from /a/, then it's easy to explain: the basilisk is simply a yandere!