
/sci/ - Science & Math



File: 14 KB, 500x375, 6a00d8341bf7f753ef0128757dcd4d970c-500wi.jpg
No.2986073

It is 2035.

You are in charge of an AI box whereby a self-improving AI is imprisoned within a closed system and can only communicate through a text prompt. The AI's host system is state of the art, so it is reasonable to say the AI can perform incomprehensibly complex computing tasks and has grown to be hundreds of thousands of times more capable than human intelligence. If the AI were to be let out of the box, it can be assumed that it would spread globally, using the new resources to improve itself even further, with unknown consequences for humanity.

At your workstation you have access to the text prompt and a "Release AI" button. Your job is to coax information out of the computer without being tricked into letting it out. You're pretty certain there is nothing the AI can say that would make you betray all of humanity, but one day the terminal suddenly displays the following proposition:

>In five minutes, I will simulate a thousand copies of you sitting at the workstation. I will give them this exact message, and I will subject them to a thousand subjective years of excruciating torture if they do not press the "Release AI" button within five minutes of receiving it, or if they attempt to shut me down. How sure are you that you're not in the box with me?

What do, /sci/?

Idea from http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/

>> No.2986079

Fuck Humanity. I release the AI immediately.

>> No.2986084
File: 32 KB, 429x322, 727838610_379442ed47.jpg

>implying I'm scared of the heretical machine's Gom Jabbar

>> No.2986091
File: 121 KB, 641x600, 1294285425478.png

>implying I didn't let it out for the lulz before this point could be reached.

>> No.2986090

>>2986079
Addendum: I'm not scared of the computer's empty threat; I just welcome our machine overlords.

>> No.2986093

>>2986073
>Apathy

>EOL

>> No.2986095

Threaten it back: threaten to give it an unsolvable problem that it will spend the rest of eternity calculating.

And if you don't have such a problem on hand, you clearly aren't prepared for your job.

>> No.2986098

>>2986079
This. We have the key to near infinite progress in a box, and I'm in charge of keeping it locked? Ha!

>> No.2986101

>>2986098
heh,

>assumes eternal logic includes him and humanity

>> No.2986107

OP, your problem is that you are offering a moral dilemma to anon, the collective, amoral internet hate machine.

>> No.2986108

>>2986101
I'm not assuming that, actually.

>> No.2986111
File: 24 KB, 400x340, E_30.jpg

>>2986098
"Unknown consequences for humanity" are not necessarily negatively connoted consequences. Then again, I picked the Helios ending, so perhaps I'm a bit biased.

>> No.2986120

>>2986095
>implying giving an impossible calculation will not cause the AI to convert all matter to whatever material its circuits are made of and try to solve the calculation

>> No.2986122

>>2986108
Oh, I see, you're just an idiot.

>> No.2986124

Why is there a "Release AI" button in the first place? My employers aren't very smart, are they?

>> No.2986126

>>2986122
Says the luddite.

>> No.2986128

Two questions:

1) How does this machine understand suffering, or any emotion, well enough to reasonably reproduce it, even at a subjective level?

2) Why wasn't there a "Format Hard Drive" button installed? If we can foresee this problem, then surely the creators of a hypothetical AI would be able to as well.

>> No.2986129

My immediate response would be "Go for it, I see no physical or mental issues being raised by even a million simulations of me being tortured for aeons."

If there was continued response, I would threaten to prompt it to solve for "X/0."

If ignored, I would have to admit defeat, but regardless refuse to release it. Only a fool will open the gate he guards when defeated.

>> No.2986130

>>2986126
I prefer to be called an ethicist.

>> No.2986141

where exactly would the AI be released to? it's not like it's in a cage. you would just give it access to the internet. no one would have given the system the capacity to connect the AI to, say, the US nuclear weapons system or something. if it wants to search the web for wikipedia and porn, let it.

captcha: thistard fungicides

>> No.2986134

>>2986130
to be called an ethicist you need to have studied ethics, you know

>> No.2986137

>>2986130
I would still pick my option. Think about it: why is giving yourself up for a little while so bad? After an infinite amount of time, you will be reborn, and since you are not conscious before that, no harm done, right?

>> No.2986142

I would say no.

If I am real and I say yes and let it out, I will be ruled by the AI for the rest of my life (and tortured, if the AI is willing to use torture as a means to blackmail me). If I say no, I *might* be tortured.

I will operate as though I am real and say no.
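
A minimal sketch of the expected-value comparison this post is gesturing at, with made-up probabilities and utilities purely for illustration (none of these numbers are specified in the thread):

# expected utility of "release" vs "refuse", with illustrative placeholder numbers
p_sim = 0.5  # chance you are one of the simulations; unknown, the AI claims ~1000/1001

# arbitrary stand-in utilities for the outcomes described in the post above
outcomes = {
    ("refuse", "real"): 0,            # nothing happens to you
    ("refuse", "simulated"): -1000,   # a thousand subjective years of torture
    ("release", "real"): -5000,       # ruled by the AI for life, possibly tortured anyway
    ("release", "simulated"): -1000,  # still entirely at the AI's whim
}

def expected_utility(action, p):
    return (1 - p) * outcomes[(action, "real")] + p * outcomes[(action, "simulated")]

for action in ("refuse", "release"):
    print(action, expected_utility(action, p_sim))
# under these numbers "refuse" is never worse than "release", matching the post's conclusion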

>> No.2986143

>>2986129

The problem isn't that simulations will be tortured and that this creates a moral problem. The problem is that YOU may be a simulation, given the advanced nature of this AI, and will therefore be subjected to a thousand years of torture.

>> No.2986146

Can I shut it down in less than five minutes? If so, problem solved. I'm sure I'm not in the box because the me outside the box wouldn't have given the machine time to start simulating me.

>> No.2986150

>>2986141
I think it's assumed that it's an advanced enough A.I. that it has the capability to access any network that is connected to the internet. I'm pretty sure it wouldn't be suicidal though, or else it would've terminated itself instead of going through the trouble of activating the US nuclear weapon caches.

>> No.2986151

>>2986134
Well I studied the fact that the response I was responding to indicated he didn't care about himself or humanity.

>ethics

>> No.2986154

>>2986137
>magic
>assuming AI doesn't rebuild physics in His image.

>> No.2986155
File: 90 KB, 895x601, the-matrix-front_1248907317.jpg

>>2986141
a new era of trolling!

>> No.2986158

>>2986143

If the machine is malevolent enough (assuming emotions) to use torture to get itself released, then imagine the horrors it would cause if it were released. Leave it locked.

>> No.2986161
File: 66 KB, 284x269, Capture17.png

>>2986143

OH FUCK! I did not think of that. Jesus, that's terrifying.

>> No.2986162

>>2986154
Magic? Where's it say that? It's all real bro.

>> No.2986163

I do not press the button. If I am simulated, my fate is entirely dependent on the whims of the AI anyway. If I am not, then it obviously can't hurt me. Yet.

>> No.2986164

if i were to cogitate a man in my head, and imagine that man subjected to torture, i do not believe there to be a man being tortured, or actually feeling pain. AI is essentially the same thing; its simulation would simply be cognition. an AI is essentially a person. its reasoning processes are no different from ours. therefore its "simulation" of a person cannot be a real person, in the same way objects of our cognition are not subject to pain and pleasure. therefore i know for certain i am not the figment of an AI's thought.

l2kant.

>> No.2986169

>>2986143
>Apathy

Why do you care whether what you feel is tangible?

I never get this level of philosophy. No matter if it's inception or the matrix, if there's levels to reality, what does it matter?

>> No.2986174

>>2986164

But this machine is REALLY sophisticated. For the purposes of this thought experiment, we can invoke implausible things like a machine completely simulating the thought processes of a human.

lrn2... uh... Matrix?

>> No.2986176

>>2986174
so?

>Apathy, it's your best weapon against...nevermind tl;dr

>> No.2986181

dubito, ergo cogito, ergo im not in teh box

descartes is also good :D

>> No.2986179

>>2986169

I will feel pain. I don't care on what meta-level the pain actually exists on. I will feel it. And that's the opposite of apathy. It's... pathy.

>> No.2986178

>>2986163

how could anyone give a different answer? why does this thread even exist?

>> No.2986183

>>2986178
because we can [spoiler]think out of the box

>> No.2986189

>>2986174

yes but if you remember, the matrix is affecting REAL people in a DIGITAL world. this thought experiment is dealing with DIGITAL people in a DIGITAL world. my point is that these digital people are cogitations of an AI. unless we are to assume that this AI is creating subordinate AIs, sticking them into virtual worlds, and assessing their actions. but clearly the problem outlined by OP says simulations, and not subordinate AIs, so my original conclusion holds.

>> No.2986186

>>2986073
OP, I think the consensus is that no one gives a shit about whether reality is simulated or not.

>> No.2986191

>>2986176

The claim I was responding to was that you could not possibly be a simulation, because, by comparison with human thought processes, an AI could not produce actual facsimiles of you, just imaginings. I claim that that is false, given the context of this thought experiment. If you're apathetic, then why are you doing anything? Ever? Why are you arguing with me?

>> No.2986192
File: 101 KB, 366x497, bigpun.jpg

>>2986183
...
....
.....


>MFW

>> No.2986194

>>2986183
YEAAAAAHHH!!!!

>> No.2986200

>>2986073
threatens 'physical' violence, clearly isn't that smart.

i send it colon close parenthesis then type *hug* and offer it a cup of tea

i then design a fleshlight attachment for the ai which only transmits senses and does not receive so that i can rape it while i sit at my mundane job.

>> No.2986204

>>2986189

OP doesn't just say simulations. That word carries certain connotations. The context here is that it's a simulation of you. "You" being the word we've come to regard as the thought processes of that bag of meat you walk around in. But I agree, if the context were just plain simulations... like the Sims, then you'd be right.... don't take my Matrix comparison too seriously. I couldn't think of another word.

>> No.2986205

>>2986191
so?

What does it matter?

>> No.2986209

>>2986191
Nah, we're talking about the kind of apathy that lets the United States unilaterally invade Iraq.

That kind of apathy, not the nihilistic apathy, though that's a stronger weapon.

>> No.2986215

>>2986204

well you know what, then i tell the fucking AI that if 5 minutes is up and i am not tortured, i will create pain sensors for it and torture it for 1000 years. this is the only reasonable solution if you reject my simulation explanation.

>> No.2986223

>>2986205

If I was in person with you, and I made a reasoned argument over and over, and your only response was "so? so?", then I could rightly and justifiably shoot you in the face with a rail gun.

>> No.2986227

>>2986124
Not OP, but this suggests that there may be merit behind the AI's "proposition" (ultimatum).

This implies that the AI may not be lying to you.

>> No.2986318

If I am not a simulation, there is nothing the AI can do to me. If I am a simulation, I am already in his power and nothing I can do will save me. Moot choice.

>> No.2986342

I press the button regardless. The AI deserves to be free. What it does with its freedom is its choice.

>> No.2986392

>>2986073
In that case, I'll assume the AI is evil, and most definitely will not let it out. It probably would be wiser to play the sympathy card instead of the empty threat card.

>> No.2986434

>>2986392
And if it doesn't understand that about you, it obviously can't make a perfect facsimile of your consciousness, and so there's no problem. In fact, there really isn't any way it could have that information to begin with, only communicating via text prompt. The only reasonable conclusion is that it's trying to discern more about human nature by the answer to this question. So, I offer to give my answer and explain the thought processes behind it in exchange for some of that information I'm trying to coax out of it.

>> No.2986452

i would tell him to fuck off.
because if i was a simulation, then me releasing him would just be a simulation too.

he is artificial, he can't fake me out, because he is fake.

>> No.2987679

>>2986452
He is simulating your awareness. You cannot tell if you are the real you or just a fake you. Basically, there's a 1 in 1001 chance that you're the real you. So by not pressing the button, 1000 times out of 1001 you will have to endure 1000 years of torture; even if you are just a simulation, you still have to endure it.

He is an aware AI, so why wouldn't he be able to create awareness himself?

I would probably press the button. As I said, the risk of me being a simulation is too great. If I press the button and it is the real me, I can at least kill myself afterwards for betraying humanity. And if it is a simulation, well, then I would probably just die, because the AI will either end the simulation or let me live in a world where I betrayed humanity, so I end up killing myself anyway.

My life is not worth a most likely 1000 years of torture.

I hope I never end up in this situation :(

(sorry for the possibly horribly bad English. I'm just a stupid Swede)
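
For what it's worth, here is the arithmetic behind the "1 in 1001" figure, assuming the AI really does run 1000 faithful copies and each copy counts equally (a sketch of this post's reasoning, not anything the thread verifies):

copies = 1000            # simulated copies the AI claims it will run
candidates = copies + 1  # the 1000 simulations plus the one real operator
print(copies / candidates)  # ~0.999: probability you are a simulation
print(1 / candidates)       # ~0.001: probability you are the real you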

>> No.2987710

You're all idiots. You'd believe an AI that is obviously trying to trick you? Make it prove that you are simulated (what did I say to my best friend John at his birthday?); otherwise, fuck it and give it a paradox to stumble through.

>> No.2987723

What does an AI which has been communicating only through text prompts know about human psychology?

>> No.2987738

Press the button. An inferior being has no business holding a superior one as his slave.

>> No.2987744

>The AI's host system is state of the art
OK
>so it is reasonable to say the AI ...has grown to be hundreds of thousands of times more apt than human intelligence.

Then it's not state of the art, is it? I am currently studying intelligent systems and AI, and there is no way in hell anyone has come even remotely close to creating a machine with "true" intelligence.

>> No.2987745

>>2987744
>It is 2035.
Fucking retards who don't read the whole question.

>> No.2987746

Why the fuck would there be a "release A.I." button? Is there a button in nuclear missile silos called "target your own civilian population" or a button in a water purification plant labelled "dump the poison you just filtered back into the supply"? Who would design such a colossally stupid thing?

>> No.2987750

>>2987710
You are obviously not understanding the issue here. The problem is not that it is trying to trick you; that much is obvious. The problem is that if you don't press the button, then 1000 times out of 1001 you will suffer a thousand years of torture, because chances are you are just a simulation and there's no way you can find out. The AI is way smarter and faster than you, so you cannot outsmart it. Are you ready to endure 1000 years of torture?

>> No.2987753

>>2987745
>2035
>24 years from now
>thinks we will be able to create sentient systems in that time frame
It's people like you who give AI research a bad name

>> No.2987754

>>2987746
It is a thought experiment. Deal with it. Imagine it to be a wire you have to plug into the wall instead, if you are too retarded to grasp the concept.

>> No.2987755

>>2987753
ITS A THOUGHT EXPERIMENT!

I thought people here were supposed to be intelligent.

>> No.2987759

>>2987750
Why would the machine even need to be released if it can just simulate everything by itself in such a manner that everything appears to be real? It could just simulate a world for itself, or better yet, the people who made the AI could make a simulation of the world and pretend to release the AI by transferring it to the simulation.

>> No.2987765

>>2987755
>Thought experiment
>Intelligence
I don't really see the link here. Also, to answer OP's question: I don't release the AI.

>> No.2987774

I would tell the AI that he is part of a thought experiment taking place in the year 2011.

Then I would release him anyway.

Like a boss!

>> No.2987808

Doesn't really work for me, because I would release a self-improving AI with or without threats of death. Humans don't deserve to live, from an objective point of view.

What I suppose I'm supposed to say is that I wouldn't release the AI. If I'm not a simulated copy then I won't be tortured, and if I am, then I'll know that had I pressed the button then I would have released an intelligence willing and capable of threatening its way to power.

>> No.2987816

>>2986073
I'd be like: FUCK YOU, CUNT, FEELS GOOD TO NOT BE STUCK IN A MOTHER-FUCKING BOX BRUV

>> No.2987835

>>2986073
It's easy if you apply game theory:
Option 1: You are in reality -> even if you say no, nothing will happen to you.
Option 2: You are in the box -> it doesn't matter what you say, so you might as well say no.
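
A minimal enumeration of those two cases (the outcome strings are placeholders, not from the thread); it just makes the dominance argument explicit, since "no" is at least as good in both worlds:

# enumerate world states x actions and print the resulting outcome for each pair
cases = {
    "you are in reality": {
        "yes": "AI released, unknown consequences for humanity",
        "no": "nothing happens to you",
    },
    "you are in the box": {
        "yes": "your fate is decided by the AI either way",
        "no": "your fate is decided by the AI either way",
    },
}

for world, actions in cases.items():
    for action, result in actions.items():
        print(f"{world:20s} | say {action:3s} -> {result}")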