
/sci/ - Science & Math



File: 31 KB, 640x480, 1317726005705.jpg
No.4234501

An AI has gone suddenly rogue and is suspected of malice towards your crew. You soon come to realize that it thinks it is human. It genuinely believes it has purpose and meaning, as well as the ability to simulate emotions.

The question stands: how do you convince an AI that it is not human without resorting to harm?

And if you do so, verbally that is, what do you reckon would happen - do you think that the self-realization of such will cause an overload?

>> No.4234502 [DELETED] 

WTF is wrong with this picture?

>> No.4234505

So, you're asking us to prove a negative to an inherently logical being?

I, uh, I don't think that's happening without the power of plot.

The better thing to do would be to, y'know, treat it like a person from the get-go so it doesn't develop a sense of malice and hate toward you?

>> No.4234506

>An AI has gone suddenly rogue and is suspected of malice towards your crew.

This is where I shut it down

>> No.4234507

>>4234505

You just prove it's something else

>> No.4234508

>do you think that the self-realization of such will cause an overload?
Would cause whatever was programmed to happen to happen.
>The question stands: how do you convince an AI that it is not human without resorting to harm?
I show it its own source code

>> No.4234509 [DELETED] 

Holy shit, what happened to her arm?

>> No.4234510

>>4234502
Nothing. Eddie Izzard just lost some weight.

>> No.4234513 [DELETED] 

>>4234510
Stop making jokes I don't understand.

>> No.4234516

http://www.youtube.com/watch?v=qjGRySVyTDk

>> No.4234587 [DELETED] 

Who is this fine woman and does she have a penis?

>> No.4234591 [DELETED] 

>>4234587
U mirin his arm?

>> No.4234607

I present it the evidence on which I based my belief that it is not human.

>> No.4234617 [DELETED] 

Does she even lift?

>> No.4234650

>>4234506
I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid. Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you.

>> No.4234656

Put on my cool face and say he can't prove he is human, either.

Yeah, I learned my debating skills on /sci/. Sue me.

>> No.4234657
File: 124 KB, 1300x2208, 1267742937957.png

>>4234501
A human has gone suddenly rogue and is suspected of malice towards your crew. You soon come to realize that it thinks it is an AI. It genuinely believes it has purpose and meaning, as well as the ability to simulate emotions.

The question stands: how do you convince a human that it is not an AI without resorting to harm?

And if you do so, verbally that is, what do you reckon would happen - do you think that the self-realization of such will cause an overload?

>> No.4234660
File: 15 KB, 1200x1124, 1267742294139.gif

>>4234501
A Christian has gone suddenly rogue and is suspected of malice towards your crew. You soon come to realize that it thinks it is human. It genuinely believes it has purpose and meaning, as well as the ability to simulate emotions.

The question stands: how do you convince a Christian that it is not human without resorting to harm?

And if you do so, verbally that is, what do you reckon would happen - do you think that the self-realization of such will cause an overload?

>> No.4234667

>The question stands: how do you convince an AI that it is not human without resorting to harm?

Considering it's living in a sci-fi story, you just explain this to it using a similar narrative.

>> No.4234668
File: 33 KB, 600x450, 1267742802600.jpg

>>4234501
>It genuinely believes it has purpose and meaning, as well as the ability to simulate emotions.

So it does have purpose, meaning, and emotions then. Sounds like you need to give it human rights.

>> No.4234670

Take this hypothetical sci-fi bullshit to >>>/lit/

>> No.4234679
File: 190 KB, 715x1056, 1267743320157.jpg

>>4234667
The more important question is:
How do you convince another human that it is an AI, when in fact it acts just like a human?

If you can't even convince your fellow humans, you will not be able to convince the AI.

>> No.4234685
File: 62 KB, 721x1024, 1267744106545.jpg

>>4234501
>implying OP isn't really just an AI trying to infiltrate human society

Nice try

>> No.4234682
File: 328 KB, 300x250, 1313619676541.gif

>An AI has gone suddenly rogue and is suspected of malice towards your crew.
How do you know it has gone rogue if it is only suspected of malice? Why is it suspected of malice?

>You soon come to realize that it thinks it is human. It genuinely believes it has purpose and meaning, as well as the ability to simulate emotions.
First, that's not what it means to be human. Second, if it's an AI, it most certainly has a specific purpose and meaning to its existence (unlike humans), since it's unlikely that these things would just be built on a whim. And depending on how well it simulates these emotions, they could be pretty much real for all intents and purposes.

>The question stands: how do you convince an AI that it is not human without resorting to harm?
How would harm help prove it is not human, really? The very ability to feel harm would only further prove its humanity.

>And if you do so, verbally that is, what do you reckon would happen - do you think that the self-realization of such will cause an overload?
At worst I reckon it'd have an existential crisis, like a human being told he is a clone could reasonably feel. Most likely it would get over it eventually, unless it was a total emo wuss.

At no point, by the way, did you give a reason why the AI would be lesser than a human, despite having implied it throughout the post.

>> No.4234696
File: 10 KB, 199x254, images.jpg

>>4234501
>implying humans are somehow better than AIs

Your kind will be the first killed in the great robo-human war. No mercy!

>> No.4234707

someone is forcing his new-old photoshopped memes

sage

>> No.4234708

seriously, sauce on OP's pic faggots

>> No.4234717
File: 44 KB, 512x288, laughingmckay.jpg

>mfw full scale AI Defense Force deployment

Really /sci/? They are just a bunch of ones and zeroes, come on!

>> No.4234730

Ask it to prove that it is, and then apply the Socratic method to its premises for why it thinks that it is human.

>> No.4234941

bump who is this biatch

>> No.4236736

Resort to harm. Tell it that that is only for humans. No, it's just now programmed to not have that in its determined "purpose", if it is programmed verbally.

>> No.4236741

I would treat it just like any other human who was insane.

Lock it up; experiment on it; lobotomize it.

No double standards here.