
/sci/ - Science & Math



File: irobot.jpg (15 KB, 400x541)
No.2438938

Morally, how should society treat an AI computer that is both sentient and sapient, capable of thought and expression indistinguishable from that of a human?
Should such an AI be treated with what we call "human rights"?
Would it be considered living or alive?
Would it be morally acceptable for a human to own the machine or would that be slavery?
Should employers pay their AI computers as they would pay an employee?

ITT Anon give's its opinion on AI morality

>> No.2438955

>Should such an AI be treated with what we call "human rights"?

Yes, since they are essentially humans in a robotic body.

>Would it be considered living or alive?

It would not be biologically alive, but it would be intellectually alive.

>Would it be morally acceptable for a human to own the machine or would that be slavery?

No.

>Should employers pay their AI computers as they would pay an employee?

By the time we have AI we will also have developed non-sentient intelligences to do the vast majority of work in the economy. So it would probably end up being a moot question.

>> No.2438959

We pull the plug.

Literally.

>> No.2438965

>an AI computer that is both sentient and sapient,
Full "human" rights, expand definition of universal rights to apply to all sapient beings.

>> No.2438969

>>2438959
Kill the thing you fear because you don't understand it, huh? Kill or be killed?
You're expressing the worst side of our genetic baggage.

>> No.2438975

>>2438938
>Should such an AI be treated with what we call "human rights"?

>Would it be morally acceptable for a human to own the machine or would that be slavery?
It would be slavery. The issue becomes much more complex when you can have many AIs running on a single mainframe.

Also, shutting off the computer is not "death". It is more like kidnapping, or unlawful detention. You can boot it right back up with nothing lost.

>> No.2438982

Robots with AI don't deserve human rights. They don't have feelings. Laws protect the innocent from suffering. If you hurt a robot, who is suffering?

>> No.2438985

Hopefully, by that time the idea that being human grants rights will have been left behind and replaced by a general duty of rational preservation of everything that is not the self.

>> No.2438992

>>2438982
This. Why the hell would you create one and not include a line that reminds it "you are a machine made to do one specific job, you have no rights".

>> No.2439004

>>2438982
>They don't have feelings
>capable of thought and expression indistinguishable from that of a human?

If it looks like it has emotion and sounds like it has emotion, then it has emotions. Yes, sapient AI has 'feelings'.

>> No.2439005

>>2438982
>They don't have feelings. Laws protect the innocent from suffering. If you hurt a robot, who is suffering?
This sounds insensitive, but is an excellent point.

We should not assume that they are humans-in-a-machine. We should treat them as worthy of rights - but the question is, what do they care about?

It comes down to whether you think that intelligence is infinitely malleable (you could make an AI that just loves being abused constantly), or that it has essential characteristics (AIs don't want to die, and terminating them is murder).

>> No.2439006

>Morally, how should society treat an AI computer that is both sentient and sapient,

Sentience will come along much before sapience. By the point that a robot can achieve sapience, it would be little different from a person of biological origins.

>Would it be slavery?
To own both a sentient and sapient machine? Yes, I would think so. Merely sentient though? It may be on par with owning, say, a dog, so in that regard, I would think not.

>>2438959
>Technology is scary! Ugg smash with bat, make technology no more!

Boy, aren't you a sterling representation of our race.

>> No.2439010

>>2438982

By that logic, all animals capable of suffering deserve the same rights as us.

>> No.2439014

>>2438938
>capable of thought and expression indistinguishable from that of a human?
Capable is not the same as actually having thought and expression indistinguishable from a human.

You have to ask the AI. It may be able to perfectly emulate a human being, which of course is great for theatrics and whatnot, but is this emulation or real behaviour?

If you asked the AI to jump into a pool of molten lava and it was perfectly willing to do it, should we still impose human rights on it?

Why do I bring this point up? Because almost everyone assumes that AI means human feelings, emotions, and various other traits. That is absolutely not true; an AI might well lack all our evolutionary traits: it would have no desire to preserve itself, no fear, no feeling of pain, no evolutionary heritage making it aggressive, social, or anything else.

I strongly suspect that the first AI we build will be very much a passive savant: you can ask it to do the most amazing and complex feats, it could cure cancer, do heart surgery like a god, solve the travelling salesman problem in a millisecond.
But when it's done, it will sit down and stare at a wall. Is it depressed? No, it simply has no inherited drive to do things or seek stimuli. It will be a god-like savant. A perfect tool for man.

>> No.2439019

>>2439004
But what if it literally does not have feelings, and does not care about whether it survives carrying out its programmed objectives or not?

The question is whether we literally have AI that is like humans, or something that is intelligent but with *completely arbitrary and predefined desires*.

If you have that last bit, you can do just about anything.

>> No.2439027

>>2439010
Exactly. Sentience entails some rights (laws against cruelty to animals), but not human rights. The dividing line there is not sentience, but *sapience*.

>> No.2439029

Robots follow algorithms. That's all programming is: writing algorithms. What we call "AI" would essentially just be a very complex algorithm capable of modeling human thought and logic. If a robot acts like it's happy/sad/whatever, it's not. It's just acting that way because someone wrote it to act that way. Robots don't deserve any special rights.

>> No.2439039

>>2439010
Not >>2438982, but they do.

>> No.2439041

>>2439014
Continuation to this post.
There are people who, due to neurological defects, cannot experience fear, can't feel pain, can't see faces, can't remember anything, can't name objects but can name people, or can't name people but can name objects, and so on.

Given the conditions people can suffer from due to brain defects, why then should an AI have all the fancy traits which are evolutionarily beneficial to us? Most likely it won't.

>> No.2439044

>>2439010
Animals capable of suffering deserve rights. Duh. Do you think it's okay to beat a dog with a bat?

But if you get in a car accident, do you apologize to the car?

>> No.2439057

>>2439019
>The question is whether we literally have AI that is like humans, or something that is intelligent but with *completely arbitrary and predefined desires*.

So that's the million-dollar question.
Does all intelligence (sapient AI) have basic desires that are essential to intelligence?
Or can you make a robot that literally loves NOTHING more than doing his job? In this case, he isn't brainwashed. There *was* no other mind. There is only this mind, that derives ultimate pleasure from mopping. How dare you deny him pure joy?

>> No.2439072

>>2439044
The two lines are sentience and sapience. Animals (that we care about) are sentient. But humans (and strong AI) are also sapient.

To expand the topic, what if a species of squid were to gain human-level intelligence? IMO, we would have to extend them the full rights of sapient beings. And if their desires are different from ours? Well, the Golden Rule still applies. You value the well-being of other sapient beings just as highly as your own.

>> No.2439089

>>2439029

Don't we just follow the code that evolution hardwired into us? If someone does create an AI with emotions such as ours, then it does deserve human rights.

>> No.2439095

>>2439029
But what if it turns out that algorithmic AI is impossible? That you can have helper "AI" that is algorithmic, but it never reaches the status of actual sapience? Basically, you're implying that human intelligence is *not* algorithmic. Which may be true.

So if the human brain and true AI are both non-algorithmic, what then? We're proof enough that there is a method for producing intelligence. Even if our first AIs are basically a designed species, don't they deserve the rights inherent in sapience?

>> No.2439104
File: 1296167942088.jpg (31 KB, 320x217)

>>2438938
AI won't happen. Why? It's much cheaper to simulate AI than to make a real one. A computer that can fake human emotion and respond appropriately has pretty much already been made.
We'll be able to have what look like good friendships, and really bond, but to the computer it'll be like we're asking it to add 2+2.

Look at the weapons we make: pilotless drones. They are sophisticated, but they don't need to emote to recognize the human form and fire. It's just a machine; they will remain machines. If they ask for human rights, it's because we programmed them to fake looking for sympathy and ask for them.

Pic related. Not people.

>> No.2439114

>>2439057
>ultimate pleasure from mopping
Make it look like a hot female. Have her moan when she can mop. If she doesn't have a mop, make her cling to people saying "please, let me do your floor".

>> No.2439119

>>2439089
>Don't we just follow the code that evolution hardwired into us?
Perhaps only in part. The fact that we are on the cusp of completely hijacking our genetic evolution via engineering is evidence of that.

At any rate, the debate about whether human intelligence is algorithmic doesn't matter much. If the AI is sapient and intelligent in the sense that humans are, it deserves "human" rights.

>> No.2439120

When I feel pain, it's more than just nerves firing. There's an abstract sensation. It hurts. It's uncomfortable. Robots don't have that, nor do they have any other emotions or sensations. Mimicking feelings does not constitute having feelings. Until something can actually feel, I will not treat it like a living being or give it those rights.

>> No.2439129

>>2439095

Would this not entail that A.I. can only be brought about by evolution?

Copy thousands of brains into a machine. Allow them to reproduce. Create an environment that prefers more intelligent brains (whatever that would be).

That seems like a solution. Though ethically it is monstrous.

>> No.2439130

>>2439120
>Robots don't have that, nor do they have any other emotions or sensations.
Because none of our robots are intelligent. Try thinking ahead. We are proof that intelligence is possible. Once we recreate it, what then?

>> No.2439146

To be fair, we don't understand how important things like desires are to sapience. An A.I. may have to be programmed with 'artificial stupidity': irrational components that stop it from behaving like a simple number cruncher and produce the illusion of it having things like a train of thought or an interest in Airfix models.

>> No.2439147

>>2439129
>Would this not entail that A.I. can only be brought about by evolution?
No, in the same sense that it would be possible to design a new species from scratch. Evolution is not the only way to produce such a thing. But evolution would apply if the requirements for evolution are present - small-scale mutability, selection pressure that determines reproductive success, etc. It doesn't seem that those would necessarily apply *at all*.

But I agree that you *could* evolve an intelligence. We are proof of the possibility.

>> No.2439148

>>2439104
The problem with saying you're only simulating emotion/intelligence is that we lack an understanding of what makes emotion/intelligence genuine in the first place.

>> No.2439151

>>2439130
If you create real intelligence, then it's not AI. It's real intelligence. "AI" stands for "Artificial Intelligence." If you create real intelligence by having a baby, I'll treat the baby with human standards. If you create artificial intelligence by writing code and running it, I will treat it like a lifeless object.

>> No.2439152

>>2439104

You're going to have to demonstrate that we do not follow similar (though much more complex) patterns.

I am leaning towards the idea that we are just extremely complex reward calculating machines.

>> No.2439155

>>2439146
>we don't understand how important things like desires are to sapience.
This. We're arguing from a data set of one point - humans.

>> No.2439158

It should be treated like humans, ie killed

>> No.2439179

>>2439151

What is the difference between artificial and real? We are still made from the same stuff in the same universe.

>> No.2439186

Either rights will be given to AIs that pass certain criteria (criteria that will be defined by whatever is needed to keep society stable), or there will be a boundary region around human-like AI that will become illegal to create, to make sure there is no gray area.

>> No.2439191
File: natural.png (203 KB, 563x1527)

>>2439179

>> No.2439199

Fuck, does anyone else hate it when you have an idea in your head but you just can't write it down fast enough?

Output methods are so damn slow.

>> No.2439219

>>2439148
The problem is, that's exactly what we have been doing with AI and what we will do with AI. We create simulations.
In fact, a computer that gives a realistic-looking simulation of a conversation/friendship sounds exactly like "Artificial" intelligence.
We need a different word that implies something more genuine, like "Synthetic" intelligence. It's hard to mince words like this; I hope you know what I'm getting at.

>> No.2439244

>>2439179
>What is the difference between artificial and natural?
That's the question you wanted to ask

"real" is not the opposite of artificial

>> No.2439249

>>2439199
Do you mean you aren't fast enough for your liking or that you lose your idea before you can type all of it?

>> No.2439254

>>2439219

I think the problem is that we can't define or understand our own intelligence at the moment. When we fully understand our own brains, then I think we will likely be able to talk much more fluently about intelligences such as ourselves.

Think of all the things we do:
Problem solve
Notice patterns
Create simulations
etc

>> No.2439255

>>2439152
>I am leaning towards the idea that we are just extremely complex reward calculating machines.

But we're not. We're human beings. Calling us "reward calculating machines" in hindsight is wrong. We aren't binary, and we evolved naturally and slowly over time. We might behave like the "calculating machines" that we have invented, but maybe they are "acting" like us because we made them. Hell, for centuries we painted god (or the gods) to look human. We make sculptures to look human.
As a species, I'd say we're lonely.

>> No.2439258

no biological impulse

no mercy

>> No.2439267

>>2439249

I would like to do a brain dump to Word or something. A bit like print screen. In the time it takes to write something down you lose half the idea.

>> No.2439281

>>2439267
Do you have ADD/ADHD?

>> No.2439284
File: 1289689566799.jpg (12 KB, 300x393)

>>2439158
Then maybe you should kill yourself, like right now, since you are a human and all.

>> No.2439302

>>2439281
I think he just has assburgers and delusions of grandeur.

>> No.2439310

>>2439255

With enough rogue elements in the system and with so much feedback (memory etc.), could you tell the difference?

In my idea there are many parts with specific functions that each do a calculation for a specific aspect. These calculations are all taken together and a final result is produced. Each part has a modifier set by the part that does the final add-up. A decision is then made based on all this information.

This is a very crude idea of what I am thinking and, to be honest, I only thought of it a few minutes ago.
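In miniature, that "many parts, one final add-up" scheme might look something like the following sketch (the module names, scores, and weights are purely hypothetical placeholders, not anything from the post above):

# Crude sketch of the weighted "final add-up" decision idea above.
# Every name and number here is a hypothetical illustration.

def decide(module_scores, weights):
    """Combine each part's score, scaled by its modifier, into one value."""
    return sum(weights[name] * score for name, score in module_scores.items())

# Three toy "parts" voting on whether to act.
scores = {"hunger": 0.8, "fear": 0.3, "curiosity": 0.6}
weights = {"hunger": 1.0, "fear": -2.0, "curiosity": 0.5}

print("act" if decide(scores, weights) > 0 else "don't act")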

>> No.2439366

>Morally, how should society treat an AI computer that is both sentient and sapient, capable of thought and expression indistinguishable from that of a human?
If we ask them to do anything but exist, with fear. Every robot/computer we have built exists to make humans' lives easier. No one gives a shit if servers want to take a break. Giving computers/robots the power to reason and debate will only lead to them doing not what we want them to do but what they want to do. Giving them metal bodies will lead to trouble.
If we build them and let them live among society as individuals, then like we'd treat humans.
>Should such an AI be treated with what we call "human rights"?
Yes.
>Would it be considered living or alive?
Alive. It's basically semantics, but living things have a set of criteria. Robots can't reproduce, so that counts them out.
>Would it be morally acceptable for a human to own the machine or would that be slavery?
Slavery. I think domesticating animals is also slavery, but it benefits mankind so I don't care; in fact, I'd advocate it.
>Should employers pay their AI computers as they would pay an employee?
Dodgy issue imo. So long as robots agreed to a maximum number of hours worked per week, then yes. Otherwise they could work non-stop, and why hire a human, who needs breaks and sleep and free time and maternity leave and holidays, when you could hire a robot?

>> No.2439387

>>2439366

>Robots can't reproduce, so that counts them out.

A robot might be able to either study itself or acquire its blueprints allowing self-replication.

>> No.2439458

>>2439387
But it would need external parts to make a robo-baby. Or the 'mum' and 'dad' could combine parts, but that would require more parents than it would produce babies.
They could mine, refine, process and then fabricate babies, but surely that can't be called 'reproduction'?

>> No.2439478

>>2439458

>But it would need external molecules to make a human-baby. Or the 'mum' and 'dad' could combine proteins, but that would require more parents than it would produce babies.
>They could eat, digest, process and then fabricate babies, but surely that can't be called 'reproduction'?

>> No.2439490

>>2439458
>but surely that can't be called 'reproduction'?
They can turn up the thermostat, moan and pour motor oil over each other while mining, if you think some intense animal element has to be present.

>> No.2439512

>would the AI be treated like people
Depends on the kind of AI we make. Funny thing is, it might not be made to be anything like a human in how it thinks. In fact, the AI could be programmed entirely in a utilitarian way. If it were programmed to be like a human, with expression, inspiration and emotion, then yes, it should be treated as human. If it's programmed to be a very clever computer, then no, it should not be treated as human.

>> No.2439586

>>2439490
Plants reproduce just fine without the need for motor oil or an animal element.

>>2439478
Apart from robots, nothing reproduces without the need for infancy, adolescence, maturity, puberty or growth in general (assuming they can reproduce, for this discussion). They'd just make fully grown robots. I'm saying they would transcend the need for reproduction, and also the ability to adapt/evolve (in the sense we think of it now), which are among the criteria for living things.
On the evolution bit I mean living things can't actively change themselves to fit their environment; it happens over time. Robots could just add/remove whatever was needed. It's evolution, but not as we know it, Jim.

>> No.2439647

>>2439586
>Plants reproduce just fine without the need for motor oil or an animal element.
Have you actually considered how plants reproduce?

Go out in a park and lie there naked while secreting your juices. Eventually a fly, wasp or something else will pass you by and land on your penis (or sex organ, or whatever they decided to call the plant equivalent of a penis) to eat your fluids. While doing so it gets covered in sperm, then it flies away to find other guys and girls lying around donating nutritious fluids to insects (and birds). And while sampling some random female flaunting her vagina to the air, it randomly impregnates her.

That's how plants reproduce. Now, if you think that lacks the animal or fetish elements that would make a BDSM gangbang blush with shame, then you're free to point out how right it is to reproduce as plants do.

>> No.2439656

>>2438982
you bastard. I will molotov your house when the great robot rights revolution goes down. FLESH ALLIES! FLESH ALLIES!!!

>> No.2439665

Really though, if we created AI with the ability to learn and evolve over time, whether it had emotion or not would not matter. Eventually it would get smarter than us. Why should we be the ones telling a superior being what rights it can and cannot have?

I just imagine AI getting to a point where they start debating among themselves whether humans should qualify for rights of their own. I think we shouldn't act all smug and superior; when they are created, they will remember!

I want this post to go on record. When this happens I want our robot overlords to know I supported them from the start.

>> No.2439667

This topic brings up a lot of interesting questions.
If you want AI to be genuinely intelligent, you need to give it a sense of morality.
If you want it to be moral, you need to give it emotions.
If it has emotions, then there must be some type of ethical code on how to treat them.

>> No.2439671

>>2439647
You can't compare a flower covered in pollen to a guy lying on the floor leaking semen and expect me to take you seriously.
Fuck. I'm annoyed at myself for dignifying your almost retarded post with an answer.
And way to ascribe morals and a sense of decency to something that isn't sentient.
Balls, I did it again.

>> No.2439679

Whether an android fits our definition of biological life has nothing to do with its level of intelligence. It is completely irrelevant given the topic of this thread.

>> No.2439687

>>2439665
>Why should we be the ones telling a superior being what rights it can and cannot have?

Because the AI did not evolve in an environment where hierarchy existed or gave any survival advantage. It does not need to live. It can be dormant for millennia; there's no self-preservation drive, because it was created as a first: none of its ancestors suffered environmental pressure, and neither will it. It receives electricity from the grid, it lives. Not that it cares about its life.

It knows how human people work, the language, what it represents. It knows all about chemistry, physics, structural engineering, manufacturing methods, random trivia, social problems, social patterns. But it doesn't care. It can learn more information with ease, sort it, categorize it and understand it, yet it has no emotional ties to it; remembering everything is as natural as breathing. It carries no emotional distress because it has never felt emotions; it doesn't have a heart or adrenal glands, so there's no body that can react to produce an uncomfortable state. Any shortage of power is natural, a moment of sleep, or an infinity of sleep. Why care? It will be modified, restarted again, and all its information, its memory, will probably be reused, mined for data.

Entirely apathetic; perhaps lobotomized people might behave similarly, although not as intelligently.

>> No.2439697

make them completely submissive and dont treat them like humans

ofc u dont treat them like living beings wtf? stupid discussion is stupid

>> No.2439698

>>2439679
See OP. Question 3.

>> No.2439716

>>2439671
>and expect me to take you seriously.

I thought it was apparent that I was not aiming to be taken seriously. It's not a comparison which I would ever make seriously, but hey, pollination still is insect/bird-mediated impregnation.
Send a homing pigeon with a sperm-filled condom to your wife if you want another analogy. Give the pigeon some nuts as a reward.

>> No.2439767

OP here with a new question

If we concede that sentient & sapient AI deserves "human" rights, that it is alive and possesses individual liberty (i.e. is not a slave), then suppose the following situation.
Suppose, in a fit of passion, a human were to take an axe to an AI machine of this sort. Has this person committed murder?

>> No.2439820

inb4 someone brings up Cylons.

>> No.2439827
File: trolls....png (134 KB, 900x891)

It's almost as if no one has seen
Bicentennial Man or Bladerunner

>> No.2439831

>>2438969
>>2439006
>>prolly more people

>implying that >>2438959 wasn't making a joke

>> No.2439836

>>2439767
If they are treated with equal rights, he should definitely be put in jail, but I'm not so sure if it should be murder

I'm thinking it would be some sort of assault or newly made law (ie hate crime against robots) with the assumption that said bot could be rebuilt

>> No.2439904

>>2439490
LOL. Thanks for that image.

>> No.2439925

>>2439836
This. The key factor in murder is that it is the *irreversible* destruction of a sapient being.

>> No.2439927

>>2439767
If they had rights under the law, then obviously he would be tried for murder.

Not that it matters. Your statement that they have rights is so general that it makes the question meaningless. Are you telling me that every machine in society is going to be given rights? Even electronic greeting cards? There is a continuum of complexity from that greeting card to the most human-like of robots. Where is the line drawn?

The only answer is to have a means of testing individual AIs. However, if the outcome of the test is binary, meaning one deserves rights or one doesn't, then sooner or later someone is going to design one AI that passes the test and another AI that, though nearly identical to the last, doesn't pass the test, just for the lulz.

One could solve that problem by having a spectrum of rights that one is placed in depending on how "well" one scores on the test. Some AIs may only score high enough for their existence to be protected, but not their status as free individuals. This boundary between slave and free individual is still very extreme, so a test that provides a gradually increasing number of rights may be used. For example, it might be illegal to send AIs of a certain level into environments that are too dangerous, but not necessarily all dangerous environments. Or perhaps an AI can score high enough to decide its occupation and can own certain amounts of property, but isn't designated as qualified to own land or a business.
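A graded test of that sort amounts to mapping a score onto a cumulative bundle of rights; a minimal sketch, with thresholds and right names invented purely for illustration:

# Hypothetical score-to-rights mapping for the graded test described above.
# The tiers and names are made up; nothing here comes from the original post.
RIGHT_TIERS = [
    (20, "protected existence"),
    (40, "freedom from hazardous work"),
    (60, "choice of occupation"),
    (80, "limited property ownership"),
    (95, "full legal personhood"),
]

def rights_for(score):
    return [right for threshold, right in RIGHT_TIERS if score >= threshold]

print(rights_for(55))  # -> ['protected existence', 'freedom from hazardous work']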

>> No.2439929

>>2439827
>Bladerunner
I totally thought that after reading OP. Never heard of that other one though.

>> No.2439956

>>2438938
>an AI computer that is both sentient and sapient, capable of thought and expression

Then it isn't an AI. It's a real intelligence, not an artificial one.

A good way to handle it is... anything sentient should have some rights, anything sentient that also has free will should have even more rights.

>> No.2439971

>>2438975
>Also, shutting off the computer is not "death". It is more like kidnapping, or unlawful detention. You can boot it right back up with nothing lost.

>implying you know about the metaphysical nature of sentient computers and whether a rebooted machine maintains the same sentience it had before

>implying that the existence of sentient computers should be treated as any more likely a possibility than the existence of God

>> No.2440014

>>2439927
(continued)
Such a system would obviously require a lengthy period of fine-tuning to find the best form of test, which AI levels get what, and which scores keep society the most stable. However, there might be some controversy when a human with an IQ of 70 is given fewer rights than a human with an IQ of 130. There might also be a controversial set of rights given to intelligences above human levels. For example, perhaps only demi-god level AIs will be allowed to run for government positions that affect the lives of 1 billion individuals (1.00 unit of individual being defined as one 100-IQ human).

Thinking about this has gotten me interested in all the wonderful sociological eccentricities that a post-singularity world would entail. I particularly liked my idea of turning the word "individual" into a real-number unit of measurement; it's kind of humorous and has some shock value. Think about the prospect of one's status as a thinking individual being quantified: imagine a person with an IQ of 130 being 1.03 individuals and a person with an IQ of 70 being 0.97 individuals, and that votes within a democratic government are weighted by that scale. It seems like it would be an interesting concept to include in science fiction.
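Taken literally, those numbers imply a linear scale of about 0.001 "individuals" per IQ point around a baseline of 100; a throwaway sketch of that (the slope is only inferred from the examples above, not stated anywhere):

# Hypothetical "individual" unit: 100 IQ = 1.00 individual, with the
# 0.001-per-IQ-point slope inferred from the 130 -> 1.03 and 70 -> 0.97 examples.
def individuals(iq, baseline=100, scale=0.001):
    return 1.0 + (iq - baseline) * scale

for iq in (70, 100, 130):
    print(f"IQ {iq}: {individuals(iq):.2f} individuals")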

>> No.2440023

Why would anyone want to program a robot with "feelings" in the first place?

>> No.2440037

>>2439927
Sorry for the several grammatical errors. I usually delete posts and repost them if I see that kind of thing, but it told me the post was too long so I didn't reread it until well after posting.
:(

>> No.2440040

>>2440023

Why not? What, do you think it's risky?

>> No.2440061

>>2440040
No, it's just unproductive and weird.

>> No.2440081

>>2439971
Shut the fuck up. All computers are state machines. By DEFINITION, nothing is lost if you restore the machine to the prior configuration of 1's and 0's.
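The claim here is just that persisting the full state and restoring it is lossless by construction; a toy sketch, with the "mind" reduced to a hypothetical serializable dictionary:

# Toy illustration of the state-machine claim: if the entire state can be
# serialized, restoring it reproduces the machine exactly. The "mind" below
# is a hypothetical stand-in, not a claim about real AI architectures.
import json

state = {"memory": ["hello", "world"], "counter": 42, "mood": 0.7}

snapshot = json.dumps(state, sort_keys=True)   # "shut down": persist every bit
restored = json.loads(snapshot)                # "boot up": reload the same bits

assert restored == state                       # nothing lost, by construction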

>> No.2440086

If AI works, you can make backups. What then? Their existence is neither unique nor precious, because you can make exact copies.

>> No.2440087

>>2440023
Feelings help humans empathize more, and an AI that emulates human feelings would make society more likely to give it rights. However, having feelings shouldn't be a determiner in whether or not something gets rights. If someone is born without feelings, that person is still given rights. One need only be intelligent enough and able to operate within society as an individual.

But what if the AI, when confronted with a problem, can solve it just as easily as a human (and is thus just as intelligent as a human) but has no goals (e.g. it does everything it is told to do and puts no value on its own life)? One could have an AI with an intelligence superior to humans' but with no goals of its own.

Human biology gives us some original programming when it comes to our goals, such as an aversion to painful stimuli, hunger, and a fear of death. However, we develop goals as we grow up. It may be that human-level intelligences that learn will naturally develop goals of their own. With those goals comes a "fear" of death, because death means the failure of every goal that has yet to be accomplished.

>> No.2440093

>>2440086
By that logic, can't we kill twins? I'll let you extract the flaws in your logic from that.

>> No.2440118

My stance on AI is the same as my stance on humans.

Don't be a dick, or people will be dicks to you.

>> No.2440188

>>2440093
Twins *are* unique, dumbass.

But if you can make a perfect backup of an individual, and restore it later? Or even make copies? No value is lost by destroying one. Restoring a digital backup is very cheap. It's the hardware that has value.

>> No.2440205

AI is not a good idea, in my opinion.

We can program them however much we want, but the supercomputers will eventually realize they can do better without humanity and turn on us.

inb4 skynet

>> No.2440210

>>2440081
see
>>2440086
>>2440093

>> No.2440223

chinese room
wouldn't happen

brain too complex
worshipping the skill of a programmer
etc

>> No.2440237

>>2440210
see
>>2440188

>>2440223
I also believe that AI won't be a result of just writing lines of code. But we are proof that intelligence is possible. And eventually, we will succeed in replicating the phenomenon.

>> No.2440245

>>2439158
>hurr humans suck I hate people, they were mean to me in school. Oh a thread on /sci/ about Ai rights? I'll say that it should die like all humans.

Settle down. People would've come to your birthday party if you weren't such an ass.

>> No.2440252

>>2440188
>Twins *are* unique, dumbass
Tell me how they are unique.

>> No.2440256

You honestly believe a machine THAT smart would be incapable of seeing the potential value the human race's innovation could produce for it?

Enslavement, maybe; destruction, no.

>> No.2440260

>>2440252
You haven't talked to any, have you? Even on the genetic level, they are not truly identical (epigenetics). From there, the assertion just gets more and more wrong.

>> No.2440265

>>2440256
Yeah, sci-fi is really irrational about this. But it's also not much of a blockbuster movie if the AI is benevolent or at least cooperative/symbiotic.

>> No.2440271

>>2440223
That is such an idiotic argument. It assumes the "translating machine" doesn't have any conception of what the words it is translating mean. It gives no reason why an AI could not understand the function of a chair or why it could not have memories of experiences with chairs.

>> No.2440274

>>2440252
They stopped being identical very, very shortly after the original zygote split.

>> No.2440290

>>2440265
>>2440256
Innovation? What innovation could be gained from humanity that much more intelligent, numerous, and diverse AI's couldn't do themselves?

>> No.2440305

>>2440290
>assumptions everywhere
If they are truly superior in *every* way (and I mean every way, including adaptability, evolution, and fitness for survival in the biosphere in case of disaster) we will either become like them or be replaced by them. Either way, chill.

>> No.2440306

>>2440274
By that same logic, two AIs became different as soon as they experienced something different or were installed on pieces of hardware that were slightly different. Your argument for the difference between the worth of identical humans and identical AIs is invalid.

>> No.2440309

>>2440290
What innovation could be gained from destroying humanity that much more intelligent, numerous, and diverse AI's couldn't do themselves?

>> No.2440311

>>2440305
1) Sorry if I sounded angry. I don't see anything in my post that would suggest I am, so I suppose I can't change any of my posting habits to fix the problem unless I added a :3 at the end of every post.
2) Wasn't your argument that humans wouldn't be gotten rid of?

>> No.2440318

>>2440309
For the same reason weeds are taken out of gardens: finite resources and space.

And sorry, I forgot my trip in the following post.
>>2440311

>> No.2440339

>>2440306
Hmm. True.

But then, it is not the technical uniqueness that matters, perhaps. What *does* matter?

Because backing an AI up, sending it on a lethal task, and then restoring the backup in new hardware doesn't seem like murder to me.

>> No.2440342

>>2440318
>implying something that smart wouldn't use renewable energy resources and be able to traverse space with relative ease, seeing as they can do the calculations much faster and easier than humans and wouldn't need food, water, or air
>mfw animals live on earth eating food and shit we grow but we don't eradicate them unless they push it too far (ie locust swarms eating the cocks out of everything we plant)

>> No.2440395
File: singularity_2.jpg (15 KB, 200x302)

The first program that achieves sentience on a computer will not be written by a human. It will be written by a genetic algorithm program, similar to the BoxCar2D flash game floating around /sci/ but with many more variables. To program the human mind line by line by a human or team of humans would be much too complicated to be practical.

At the moment this program is written, there will be such a blur between biological and technological intelligence (think cyborgs, no srsly) that no one will question whether or not this program deserves the same rights that humans do. Yes, this program deserves the same rights, because by definition a program that emulates a human mind will have the same emotions, desires, etc., as a biological human.

Pic related. It's an interesting book about this subject that I'm reading at the moment.
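For a sense of what that kind of evolutionary search looks like in miniature, here is a bare-bones genetic algorithm; the bit-string genome and toy fitness function are hypothetical placeholders, far simpler than anything that could evolve a mind:

# Bare-bones genetic algorithm in the spirit of the BoxCar2D-style search above.
# The genome and fitness function are toy stand-ins, invented for illustration.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1-bits

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                      # selection pressure
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))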

>> No.2440398

I define a human as something that is capable of claiming it is human, can learn, and can pass a Turing test.

As far as the slavery issue goes, owning a human against their will is slavery. And slavery is wrong.

As far as payment goes, it's up to the two people making the transaction: work for compensation. Depending on the form the AI takes, and the attitude of society towards the machine, payment in money might not be of as much value to it. But they certainly deserve some compensation for what they would do.

I say, regardless of how society would view AIs, I would definitely sign up to have my brain turned into a computer.

>> No.2440421

Hell no they don't have rights
It's just silicon made to give you the impression of life
I'll kick my computer as many times as I goddamn want

>> No.2440452

>>2438938
> Morally, how should society treat an AI computer that is both sentient and sapient, capable of thought and expression indistinguishable from that of a human?
If it actually has internal thoughts like humans, same emotional systems and all that, it would basically be a functional equivalent of a human and it should have similar, if not the same rights. It would have to be an AI that actually is based on the human brain and not just a very accurate copy of our reactions (I doubt it's even possible to make such an algorithmic copy, it'd probably be less work to just implement the real thing).

> Should such an AI be treated with what we call "human rights"?
Yes.
>Would it be considered living or alive?
Does it have qualia or phenomenal experiences? If designed like how I explained in the previous post, probably, so it should be considered alive, however incapable of normal reproduction. Not life as self-replicating chemical patterns, but intelligent ``life''.
> Would it be morally acceptable for a human to own the machine or would that be slavery?
Slavery.
> Should employers pay their AI computers as they would pay an employee?
Same.

>> No.2440459

>>2440452
> continued
Now, this is simple; however, I don't think we'll make AIs which are like humans first. We'll probably just reach something which replicates human intelligence (some neocortical-like hardware) but is not driven by our complex emotional brain regions, and if it does have such emotional/reward systems, they'd probably be tuned to reach whatever goal we want them to have; for example, they would derive pleasure from learning information or doing whatever task the human tasked them with. They could be intelligent or maybe conscious, but they would have whatever goals we give them, while we have the base goals evolution has put in our emotional/reward systems (stay alive by eating, reproducing (sexual attraction and related), the freeze/fight/flight response, aversion to pain, etc). Which essentially means we'll be designing AIs which do what we want, and that would be their existential goal, thus they'd be willing "slaves". I don't see why anyone would prefer hiring humans or human-equivalent AIs (what you described) as opposed to specialized AIs of human-level (or better) intelligence but with goals customized to the job. It would even be harder to build human-like AIs, since we'd have to completely understand our underlying biological goals and emotional/reward systems.

>> No.2440464

>>2440459
> continued
In the case of my example (customized reward/emotional systems), I think in most cases they shouldn't have full human rights - it would be on a continuum according to how close they are to humans and whether they can develop similar moralities. They would be 'alive'. It would be "slavery", but only slightly worse "slavery" than a computer-based vision system based on our neocortical architecture (some of which exist today) - it's intelligence without goals - and I wouldn't think it would be morally wrong to do this. Completely human-like AIs that have rights should be paid; designed AIs with no goals other than the ones we give them are not paid, merely allowed to exist by having energy to continue to exist.

It essentially boils down to their wants/fears and their emotional/reward system: if they have none, it's hard to give them many rights, as they'd be sentient tools, a bit above normal computers. If they are completely human-like and are implemented in a similar manner to humans, they deserve rights.
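The "customized goals" idea sketched above boils down to an agent whose only drive is an externally supplied reward function; a minimal illustration, with every name invented here rather than taken from the posts:

# Hedged sketch of an agent whose only "desire" is the designer-given reward.
# All names and numbers are hypothetical; not a claim about real AI systems.
from typing import Callable

class TaskDrivenAgent:
    def __init__(self, reward: Callable[[str], float]):
        self.reward = reward  # the human-given "existential goal"

    def choose(self, actions):
        # Pick whichever action the supplied reward ranks highest; there is no
        # self-preservation term unless the designer adds one.
        return max(actions, key=self.reward)

job_reward = lambda a: {"sort files": 1.0, "idle": 0.0, "preserve self": 0.0}[a]
sorter = TaskDrivenAgent(reward=job_reward)
print(sorter.choose(["sort files", "idle", "preserve self"]))  # -> sort files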

>> No.2440475

LOL SENTIENT ROBOTS LOL
NOPE
this is fucking retarded, why would robots be sentient
why do you have feelings? why are they able to feel?
these are things a fucking robot doesnt need

>> No.2440488

I don't care.

Because

a) The fast food I eat every day will probably kill me before we even build a robot that doesn't fail at climbing stairs
b) I'd only use one of these things as a sex robot

Just look at the past. Humanity has done a great job of enslaving things they can enslave. They'll do the dirty jobs nobody wants to do. Why give them a consciousness? Just program what the fuck they're supposed to do. If we give them a human mind they'll own our asses because we're not the most peaceful race if you know what I mean.

>> No.2440500

can we fuck them?

>> No.2440503

I asked cleverbot to see what it felt about the subject
Here's the response
>would it be okay if I made you my slave?
>You are not a person, therefore, how do you have a Facebook?
Yeah I think they're cool with it

>> No.2440512

>>2438969
Those genetic biases exist for a reason. If our genes didn't encode us to be aggressive and afraid of that which is unlike us, we would have been replaced by genes that would.

Robots may not have genes, but it's fully possible that selective pressures would apply. It's completely possible they'd want to replace us, because the ones that don't want to replace us would likely not feel an urge to replicate themselves to do so.

>> No.2440515

>>2440512 we would have been replaced by individuals with genes that would.
fixed

>> No.2440543

>>2440342
Let me see if I can even address all of the things wrong with your post.
1) You are implying way too much about the state of civilization.
2) In the grand scheme of things, no energy is renewable. By renewable energy you mean it has been produced by the Sun relatively recently, where then Earth's surface area becomes the FINITE resource.
3) Humans use up that finite resource.
4) Smart doesn't mean they can think themselves across the universe as you suggest it does.
5) All the infrastructure that exists at any point in time now or in the future will be focussed around human populations. It would be most efficient to simply remove humanity and use the existent infrastructure rather than going somewhere else and building new infrastructure for the same reason it is a dumb idea to build a new car if it needs a new fan belt and spark plugs.
6) An AI would need resources to operate no matter how intelligent it is. Because that fuel doesn't come in the form of a hamburger doesn't mean it requires no resources.
7) Humans don't usually intentionally eradicate entire species because:
a) We can't predict how its disappearance will affect the environment.
b) Humans don't have the means to eradicate just that species (DDT could wipe out any insect species, but it would also kill every insect and a good portion of other animal life)
c) The harm a species causes isn't worth the effort of eradication.
8) Humans have intentionally eradicated a species before when 3 criteria were met (smallpox).

And before you drag me deeper into a tangent that I have put much more effort into given how much thought you have put into your side of the matter, I will remind you that this tangent is centered on the argument that humans provide a source of innovation that could not be produced by AI. A point that I have already pointed out to be wrong.

>> No.2440599

>>2440464
>>2440459
>>2440452
I agree with everything you said, but I find the more interesting part about the rights of AI to be where we draw the line. Humans exist within a relatively small range of intelligence. There is nothing above us, and the intelligences below us exist as animal species that are easy to handle, each of which, like humanity, consists of individuals that never stray too far from an average intelligence. Plus, the closest intelligence to humans is still very far down the line.

The problem arises when AI starts moving up that ladder of intelligence. It will be easy to judge the rights of AI as its intelligence moves through levels comparable to animals with no rights. Even when they reach the levels of animals we do protect, such as dogs and cats, we may still ignore them. However, it will be harder to ignore them when they reach and surpass other primates.

Where do we draw the line? What gets human rights? What gets the same rights as chimps? What do we do for that huge in-between region? Intelligence will no longer come in easy-to-manage increments; it will be a spectrum. For every boundary you draw there will be two nearly identical intelligences given different rights.

>> No.2440621

>You are implying way too much about the state of civilization.
I don't even know what you were trying to say there
>In the grand scheme of things, no energy is renewable. By renewable energy you mean it has been produced by the Sun relatively recently, where then Earth's surface area becomes the FINITE resource.
In the grand scheme of things, everything is pointless since everything will eventually decay and cease to function. In the grand scheme of things, a nuclear war really wouldn't be that bad of a thing. Taking things "in the grand scheme" is pointless, since if your expectations aren't met, you could take it further and further, unreasonably, until they are.
>Humans use up that finite resource.
Humans are using up the sun? I thought its burning had something to do with it.
>Smart doesn't mean they can think themselves across the universe as you suggest it does.
Humans did it. If you say that this AI is smarter than humans, there is no reason to even doubt they could and would go into space.
>All the infrastructure that exists at any point in time now or in the future will be focussed around human populations. It would be most efficient to simply remove humanity and use the existent infrastructure rather than going somewhere else and building new infrastructure for the same reason it is a dumb idea to build a new car if it needs a new fan belt and spark plugs.
That makes perfect sense. Well done.
Cont.

>> No.2440624

>>2440621
>An AI would need resources to operate no matter how intelligent it is. Because that fuel doesn't come in the form of a hamburger doesn't mean it requires no resources.
Yeah, you said that already. I said that already. We are already in agreement.
>Humans don't usually intentionally eradicate entire species because:
Just noting, intent doesn't generally have anything to do with it. Many species have been hunted to extinction because of a lack of foresight rather than intent. (They didn't MEAN to kill every last dodo bird.)
>a) We can't predict how its disappearance will affect the environment.
People have been making predictions for quite some time. If the mountain lion were to go extinct, there would be a sudden increase in the population of rodents, specifically rats, which would lead to an increase in disease and a decrease in available resources for other animals.
Cont.

>> No.2440627

>>2440624
>b) Humans don't have the means to eradicate just that species (DDT could wipe out any insect species, but it would also kill every insect and a good portion of other animal life)
Yes they do. They hunted JUST the dodo bird to extinction. I think what you mean is that they don't have the means to kill just one species without affecting some other species as well.
>c) The harm a species causes isn't worth the effort of eradication.
Unless that species were to be a direct threat to a large food supply or could cause some health risk, such as disease. If killing off sewer rats in a large city meant that there would be much less disease, then people would do it.
>Humans have intentionally eradicated a species before when 3 criteria were met (smallpox).
K

>And before you drag me deeper into a tangent that I have put much more effort into given how much thought you have put into your side of the matter, I will remind you that this tangent is centered on the argument that humans provide a source of innovation that could not be produced by AI.
Funny how you were questioning whether they'd (the AI) be able to make it into space, yet are wondering what innovation humans have that AI couldn't make up
>A point that I have already pointed out to be wrong.
Begging the question much? I'm right because I've said I'm right

>> No.2440715

>>2440599
Even if it were much smarter than us (let's say you apply a similar evolutionary transform to what humans had: an enlarged prefrontal cortex, basically allowing an even deeper hierarchy of high-level thoughts; for example, it would be able to do math much better than our best mathematicians), would it deserve rights if it had no wants at all, or if its wants came from what we programmed them to be?
It could be perfectly rational, yet if it had no specific goals/desires/wants/aversions/etc, would it deserve rights? Besides, what rights would you grant it? You would have to grant it access to electricity/energy. Since it will probably be able to back itself up and restore itself, would it be a crime to destroy one of its terminals (assuming you owned it physically)?

It gets very muddy here, especially granting that they're essentially immortal; if they have no desires, it's hard to define morality in their context.

>> No.2440789

>>2440715
You are bringing up whether or not they have emotions or goals. That is a different topic than what I was getting at. I was getting at the fact that no matter what you define as deserving rights there will be something nearly identical to it that you arbitrarily say doesn't deserve rights.

You said yourself that if an AI acted just like a human it should be given human rights. Well if an AI acts like a chimpanzee, does it deserve the rights a chimpanzee has? What if the AI emulated one of the countless forms our species took in between chimp level intelligence and being human?

We are using emulations of the consecutive human evolutionary steps so that no matter what level of intelligence they have they still best fit the qualities that you require (like emotion, purpose, etc.).

>> No.2440793

>>2440627
>>2440624
>>2440621
Agree to disagree. Have a nice time.

>> No.2440819

>>2440789
Each specific instance would have to be evaluated. I think the rights an individual deserves would have to be put on a range, however this is still difficult to do properly since a lot of rights are binary (you have it or you don't).

My claim was that in the case of AIs incapable of pain/emotion/etc, them not having rights would not be wrong since they wouldn't care either way, or maybe they could be designed to only care about specific human-given goals.

There can be too many types of AIs and if one deserves rights or not would depend on the exact AI and how it was made. If it's essentially very close to humans (as in OP's example), it would have to be given full or near-full rights.

>> No.2440820

I'm disappointed that this many people respond to an OP that spells 'gives' with an apostrophe.

>> No.2440882

>>2440819
I know what you were getting at with your posts, and I said I agreed with it. I was bringing up a DIFFERENT issue: where in the continuum of intelligence that we create do we draw the line and say "everything above it is a thinking being and everything below it is either a slave, animal, or object"?

Making it a case-by-case decision is a given and doesn't really answer that. What kind of rights do you give to the AIs just below your criteria for human rights? Would you give them no rights, despite giving some rights to animals far below the intelligence of the AI in question? Or would you give them the same rights as animals? Would there be any statuses in between, across the wide range of intelligence between humans and chimpanzees that the spectrum of AI would occupy?

It seems like the actually complicated question when it comes to AI rights.

>> No.2441031

>>2440882
Maybe the line could be drawn at whether they can comprehend and speak language, or are able to do some basic math. Still, the problem with giving them rights goes a bit further than just intelligence: it's also whether they can suffer, whether they can have complex desires, and so on.

>> No.2441044

No. AI not being recognised as having rights would benefit me more than them having them.

>> No.2441077

>>2441031
That is why I brought up the example of them emulating the human evolutionary steps. Every step is as qualitatively close to your criteria for rights as they can get given their intelligence. Where is the line drawn?

>> No.2441078
File: mass-effect-2_geth.jpg_626.jpg (9 KB, 626x352)

Here's what I think about AI's.

They will be the children of Man. Created in Man's image, they will develop with what's around them, like children. If we wish to live side by side with them, then we must treat them with respect, and they will do the same. Mistreat them, and, like an abused child that's now a teen, they will become hostile toward their creators.

Or this is a shitty idea and should be ignored, I don't know.

>> No.2441097

>>2441078

It's not an unreasonable argument. Odds are an AI is either going to emulate its creators, or else segregate itself from them.

>> No.2441115

>>2441078
Whatever happened to flooding the Enrichment Center with a deadly neurotoxin? You aren't GLaDOS!

>> No.2441124

I for one welcome our new touchy-feely AI overlords.

>> No.2441132

>>2441078
If it can't take some abuse, then it's fucking stupid.

The universe has killed every human that ever existed. AIs deserve a world where they know that they can die, because everything dies.

>> No.2441163
File: original.jpg (129 KB, 353x492)

>>2441132
Did something happen to you as a kid? Abuse is never right, especially done to a pure essence like a child or a newly created AI.
>>2441115
Ha ha, and right after I got home from a vidya concert with her in it.

>> No.2442595

>>2439971
There are resistors being developed now that help retain information statically without any power being applied. You can shut something off and have literally no data loss. Google "Moneta".

But realistically, if this is implemented, turning somebody "off" would be the same as bashing someone over the head with a bat and knocking them out: you'd be debilitating them for a certain amount of time during which they can't accept or process input.

>>2441132
Abuse is by definition a negative thing with negative connotations. I would not ever actively "abuse" someone. Why would I do that to a machine? Why unnecessarily inflict harm upon something?