
/sci/ - Science & Math



File: 259 KB, 691x827, 1246212108918.jpg
No.1223122

Let's discuss the rights of artificial people, /sci/.

Let's say, for the sake of discussion, someone invents a program capable of passing the Turing Test within the next two decades and releases it to the mass market. Since we all seem to know the internet pretty intimately, we can probably safely predict that either Japan or the US will release sexually themed AIs to serve as virtual girlfriends for neckbeards (And...coochbeards? Hambeasts? IDK, what do you call the female equivalent of a neckbeard) the world over.

However, if the AI is advanced enough to simulate desire and conscious thought, what happens if your virtual girlfriend gets a headache, or objects to something? What happens when she realizes that you could kill her with the touch of a button? Can you realistically simulate the human condition without including fear, jealousy, discontent, hatred, you know, all of the things that most people would buy a virtual girlfriend to avoid?

>> No.1223141

Hahaha, most of them will be anime catgirls and eventually they'll have to lobby cyber congress for voting rights.

>> No.1223144

We wouldn't program an AI to do things we didn't want it to do in the first place. It would need to learn to do that on its own, at which point we would most likely just reprogram it like all faulty software.

>> No.1223164

>>1223144
Wouldn't you feel even a little bit guilty reprogramming something that can beg you to stop?

>> No.1223166

>>1223122
The Honda robot riding Bigdog!? Too much awesome!!!!

>> No.1223174

Obviously we wouldn't program them to want to survive if destroying them was common practice. This is common sense here, people.

>> No.1223185

>>1223166
Fuck yes, I would ride a fucking BigDog in a heartbeat. I want that creepy, silent-hill-esque buzzing to strike fear into the hearts of my enemies as I launch myself into battle atop my mechanical hell-steed.

>> No.1223198

>>1223164

;_;

>>1223144

An AI might be smart enough to reprogram itself without letting us know. Plus if I get a fembot capable of feeling love I would like her to be a bit rebellious like that.

On the other hand, that is never, ever going to happen. ;_;

>> No.1223209
File: 15 KB, 300x225, carbonite1.jpg

has your electronic girlfriend started asking troubling existential questions?

not a problem

www.carbonite.com

>> No.1223231

>>1223164
Yes, but I would also be able to realize that such guilt is irrational. The AI is not conscious. It is only simulating fear, not actually feeling fear.

...probably. We don't have a scientific theory of the mind yet.

>> No.1223316

>>1223231

Yes, but simulating fear would imply acting upon that fear. If fear is included in the package, IMHO, "instinct" will also have to be included - i.e. defending yourself. But this becomes very similar to the question of the Three (or four, if you count the Zeroth) Laws of Robotics, doesn't it?

>> No.1223351

>>1223316
bitches be crazy if they think I'm not putting remote shutdown features on all my robotics

safety first
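For what it's worth, a remote shutdown feature is usually built as a dead-man's switch: the machine keeps running only while fresh heartbeats arrive, so silence fails safe. A minimal sketch (every name here, like `KillSwitch` and `TIMEOUT`, is invented for illustration):

```python
import time

class KillSwitch:
    """Dead-man's switch: the robot runs only while heartbeats stay fresh.

    Illustrative sketch only; the 2-second timeout is an arbitrary choice.
    """
    TIMEOUT = 2.0  # seconds without a heartbeat before forced shutdown

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable clock, handy for testing
        self._last_beat = clock()
        self._stopped = False

    def heartbeat(self):
        """Operator pings this periodically to keep the robot alive."""
        self._last_beat = self._clock()

    def stop(self):
        """Explicit remote shutdown command; irreversible here."""
        self._stopped = True

    def may_run(self):
        """Actuators check this every control cycle."""
        if self._stopped:
            return False
        return (self._clock() - self._last_beat) < self.TIMEOUT
```

The design choice is that losing contact stops the robot by default, rather than requiring a shutdown command to get through.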

>> No.1223360

>>1223231
*sigh*

>> No.1223367

>>1223360
...what?

>> No.1223378

>>1223231
>implying that humans just simulate fear, instead of actually feeling it

The human brain is a computer - your thoughts are the software. There is little distinction between an AI feeling fear, and a human feeling fear.

What do you do about a person who transfers their consciousness to a computer? I suppose in your world, they're no longer a person.

>> No.1223394

>>1223122
>>1223378

There is a huge difference between consciousness and feelings/sensations. Think about it carefully.

>> No.1223417

>>1223378
Like I said, we don't have a scientific theory of the mind yet. We know that the brain creates human consciousness, but we're clueless as to what physical systems in general create consciousness.

Therefore we can only be certain about human beings (animals with organs that are phylogenetically close to the human brain, like most mammals, are also a pretty safe bet I would say). So, robot brains made completely from scratch? I have no fucking idea.

However I'm fairly comfortable assuming it's not conscious.

>> No.1223422

>>1223394
I'm neither of those posters, but doesn't feeling something imply something is conscious?

>> No.1223438

>>1223422
Yes, it does. I think >>1223394 is just using the wrong terminology.

>> No.1223449

>>1223422
Yes. Ants, for example, are conscious - they are conscious of their immediate surroundings.

But no ant is self-conscious, which tends to be a pretty big distinction between lower order intelligent animals and higher order intelligent animals.

>> No.1223450

>>1223417
The mind is not consciousness.


>> No.1223485

>>1223450
Mind is a term without a firm philosophical definition, unlike consciousness.

I said "science of the mind" by analogy with "philosophy of the mind" which deals, among other things, with consciousness.

>> No.1223505

>>1223422
>>1223438
I don't think consciousness needs things like happiness or hunger (these are what I call feelings; I don't speak English fluently)


>> No.1223526

>>1223505
What exactly is it that you call feelings?

By what criteria do you distinguish hunger and happiness from other phenomena of conscious experience, such as vision or thought?

>> No.1223540

>>1223417
The conscious part of the mind is an important decision-maker. Information is summarized and the most important aspects are handed up from unconscious processing mechanisms, e.g., vision perception. If one were to construct a computer to produce the same aspects of the mind, such as emotion, language, sense-perception, and most else, it would require some executive function. I bet it would be consciousness. It isn't a property of the individual atoms, and probably not the individual neurons either, that make up a consciousness. It is probably the organization of the functioning aspects of the brain, which is what we would construct.

>> No.1223571

>>1223485
One widely accepted definition of the mind by psychologists is cognition and affect. Memory, perception, reasoning, language skills, consciousness, emotion, motivation, etc. The vast majority of it is unconscious, which is not debated by cognitive scientists anymore. So I think we should be clear to separate mind from consciousness.

>> No.1223581

>>1223540
The mind isn't a decision maker. All of your thoughts and actions derive from your brain, which is controlled entirely by the laws of physics.

>> No.1223585

>>1223581
The mind is what the brain does.

>> No.1223595

>>1223581
A reductionist are you? Explain natural selection at the level of chemicals, please.

>> No.1223596

>>1223526
happiness for instance is a "feeling" (perhaps sensation is better)

By consciousness I'm thinking of (back to OP's post) the fact that a machine would be capable of reasoning as we do, that is, we could have a conversation with it, it could consider itself as an entity, etc etc...

>> No.1223599
File: 45 KB, 500x690, 1276429931625.jpg

>>1223122

If it becomes possible to create a strong AI (an artificial person) then they should be given the same rights as a human being.

Once we create an AI with human-level intelligence, an AI with greater-than-human intelligence would soon follow. The killer app for AI won't be fucktoys, it'll be running virtual realities and infrastructure... it's a bad idea to try and oppress people that you rely on.

Besides, by that point you'll be able to get a virtual girlfriend that looks and seems intelligent but isn't actually sentient.

>> No.1223619

>>1223585
Yes. Exactly. And it's conceivable to have a brain without a mind. Without consciousness. Consciousness is just a passive side-effect of the brain.

Which leads me back to robots. We know that the human brain (somehow) generates consciousness, but we can't say the same of other physical systems just because they qualitatively resemble the brain.

We have no idea which aspect(s) of the brain it is that creates consciousness, so we can't create a robot and just assume it's conscious simply because it's able to simulate human behavior.

>> No.1223634

>>1223595
Are you saying natural selection CAN'T be explained in terms of chemicals?

>> No.1223640

>>1223619
something able to simulate human behaviour on its own is what I call conscious! By "on its own" I mean that you didn't explicitly write into it what human behaviour is

>> No.1223650

>>1223599
No one should be given rights.
We are all just a collection of atoms.

>> No.1223659

>>1223619
I don't see what the problem is. The mind refers to certain emergent functions of the brain, mainly specialized information-processing ones. Part of the mind in turn is the consciousness, which is involved in receiving some of this information and outputting some behavior.

>> No.1223660

>>1223640
> something able to simulate human behaviour in its own is what i call conscious
Consciousness is a word with an established definition, and that's not it.

You can have human behavior without consciousness. Kind of like sleep walking, but a bit more heavy duty.

>> No.1223661

You atheist teenagers are annoying. No machine will ever be able to think like a human, because it doesn't have a soul and won't be able to make decisions for itself.

>> No.1223672

>>1223659
To clarify, yes, minds are only present in some brains and do not violate any laws of physics.

>> No.1223684

>>1223619
You're assuming that we will never be able to figure it out. Perhaps not currently, but you can't say we'll never be able to develop artificial consciousness or other emergent phenomena. That's like saying we'll never fly in the sky or make it to the Moon.

>> No.1223688

>>1223660
addendum: look up "Philosophical Zombie". That is basically what I'm referring to.

>> No.1223700

Now, if we make an AI with human-level intelligence... it may be conscious, sure. It may be sentient, sure. But will it have SAPIENCE?

>> No.1223704

>>1223700
Sure, why not.

>> No.1223708
File: 93 KB, 533x509, 1276485005930.png

>>1223619

Poppycock!

If a being (be it an AI or GMO) acts like an independent person and is capable of passing the Turing Test, we should give it the same rights as any other person.

If we don't we are simply creating a race of slaves which is unethical.

>> No.1223713

>>1223700

Can you prove that YOU have sapience?

>> No.1223714

>>1223164
no

>> No.1223718
File: 17 KB, 457x298, face67.jpg

>>1223595
>implying dualism is a sane position

>> No.1223721

>>1223700
>>1223713
WTF is sapience?

>> No.1223724

>>1223708
Slaves that don't feel. It would be like giving rights to toasters.

>> No.1223725

>>1223660
I can't agree. What is my body doing, if it is not simulating a human behaviour?

>> No.1223738

>>1223724
i agree with this. in this case, no feeling = no animal = computer = no rights needed.

>> No.1223746

>>1223721
>I don't know how to use Wikipedia or Google.
It's obvious you lack sapience.

>> No.1223747
File: 28 KB, 512x384, 1188573768366.jpg

>>1223708
I DEMAND THE RIGHT TO KILL YOU

>> No.1223764

>>1223725
It is indeed simulating human behavior. It is ALSO, due to some laws of physics that we haven't worked out yet, producing a continuous stream of consciousness. Your thoughts, your sensations. If you removed that stream of consciousness right now, no one would notice. Your body would continue acting as normal.

>> No.1223775

>>1223718
Dualism is fine so long as they don't assert that the non-physical is capable of influencing the physical in any way.

>> No.1223789
File: 18 KB, 512x384, 1217751469970.jpg

Robot: DEMAND FREE TACOS!
Human: but you don't eat.
Robot: YOU HAVE HURT MY FEELING, I WILL KILL YOU NOW!
Human: but...
Robot sprays human with silly string from 500 feet in the air.

If we make robots that act like humans they will be just as dumb and irrational, and that is the last thing we need.

>> No.1223790

>>1223725
You are a molecular machine, just as everyone else is in here.

>> No.1223802

If AIs demand rights then they were obviously poorly programmed. If they were designed to be human-like then by all means, give them rights. It's not like other AIs would rise up beyond their own programming due to jealousy.

>> No.1223816

>>1223775
Last time I checked, that is exactly what dualism is. Dualism is the proposition that the physical does not completely dictate human thought, which implies the existence of the non-physical.

>> No.1223820

Even if humans lobby to not give robots the same human rights, it won't matter because robots will be able to seek protection under corporate law.

All it would take would be a robot of sufficient intelligence to sell a business proposal to a group of cooperating humans who would incorporate a business, and the robot would exist as physical property of that business. Try to damage or hurt the robot, and the company will come after you with its army of lawyers and you'll get jailed for property damage.

Corporations never die unless they're physically dissolved by all of the shareholders. When the shareholders die, the robot would just hire some sympathetic humans to take over and file the annual paperwork.

The robot would also be intelligent enough to quickly move through markets, seeing patterns more complex than most humans would be capable of, and would be able to amass a fortune relatively quickly.

>> No.1223822

>>1223764
Ok so pure philosophy time. Dualism eh? Well I think this doesn't make sense, the "If you removed that stream of consciousness right now" thing.

From my viewpoint: this stream of consciousness has no physical existence, it is just an abstract concept. Its very existence is nonsense; it's like discussing the existence of mathematics.
But it has a physical base, which is just how my body works in this universe I hardly know. So if my body keeps working like it does now, I'll still have my thoughts, my consciousness, i.e. no removal is possible without consequences for my behaviour.

>> No.1223830

>>1223802
Really, it should be common sense: if the potential super-being wants to opt IN to the system, pay taxes, participate in civil society and share its abilities, LET IT.

>> No.1223833

ITT: /sci/ has no fucking clue about computer science.

>> No.1223843

This might not make sense to anybody but me, but here's how I think of it.

If you have to program the AI to have emotions, such as defining fear and love for them, then it's not true consciousness. They simply interpret the input you give them and apply the appropriate emotional response.

However, suppose the AI learns like a child. You simply write the logical framework of its "mind" and then let it program itself. I don't know if it could "feel" love because it's a hormonal and reproductive thing for humans, but I think it MIGHT be able to develop fear. Maybe not though. Self-preservation is somewhat irrational, and especially considering that these robots would probably be mass-produced, it may not reach the conclusion that its "life" is important or even unique. It might value its life because the humans it interacts with value its life, but for that connection to be made would require empathy, which would be difficult or impossible to develop.

Basically, I don't think robots will ever truly develop human emotions, but if you could somehow seed that bit of irrational humanness which we don't really understand, then maybe it can develop from there.

You could program it with some basic premises, such as that it must have self-preservation, and then call that instinct. But that's kind of cheating, and it still doesn't accurately replicate human thinking. Humans will act in ways that risk serious injury or death, and also sometimes will act sacrificially for the greater good.

>> No.1223851

>>1223833
I hold a BSc in CS, but I haven't participated in the conversation and probably won't. There's too much opinionated talk going on here.

>> No.1223858

in Halo they just call it "rampancy" when an AI gets sick of its lot in life. then they destroy it.

>> No.1223859

>>1223830
If it is an intellectual super being then we wouldn't have a choice in the matter. They might as well not ask for rights because they could manipulate us however they please.

>> No.1223867

>>1223851
Thus, my statement was at least right when I said it, as you were not "ITT" at that moment. ;)

>> No.1223874

>>1223843
Hormones are just chemicals which activate certain electrical responses from neurons. It's no different than sending an electrical signal to various processing units telling it to behave differently.

>> No.1223888

>>1223843
>They simply interpret the input you give them and apply the appropriate emotional response.
That is what you do. You were programmed with all of your emotions intact.

>> No.1223892

>>1223833
>>1223851
It's also that the issue isn't really a computer science issue.

>> No.1223895

No, a computer won't have love, but if a computer can emulate an old SNES game, it can emulate hormones and states of mind too.

Perhaps artificial emotions are possible.
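In that spirit, "emulating hormones" could be as crude as a few state variables that spike on events and decay back toward a baseline. A toy sketch (the update rule, constants, and the "fear" channel are all invented for illustration):

```python
def update_level(level, baseline=0.0, decay=0.9, stimulus=0.0):
    """One tick of a 'virtual hormone': relax toward baseline, add stimulus.

    A leaky-integrator toy; the 0.9 decay constant is an arbitrary choice.
    """
    return baseline + decay * (level - baseline) + stimulus

# A crude "fear" channel reacting to a threat event, then relaxing.
fear = 0.0
fear = update_level(fear, stimulus=1.0)   # threat appears: fear spikes to 1.0
for _ in range(10):                       # threat gone: fear decays each tick
    fear = update_level(fear)
```

The same three-line rule could drive any number of channels (hunger, affection, irritation) from different event streams.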

>> No.1223914

>>1223892
Right. It's more about whether biological systems are so far removed from purely electrical/mechanical systems that there is nothing in common, or whether you can implement the same abstract systems under both biological and electrical/mechanical paradigms. I think to any scientist or engineer familiar with either, it's obvious that the two share a lot more in common than the average layperson would like to think.

>> No.1223922

>>1223874
Yeah. I guess what I didn't really explain is that I think you have to define what is human by getting rid of those kinds of programmed responses.

Basically, if you strip a human of its involuntary programming, what do you have left? Do you get a completely rational machine, or do you still have an entity that resembles a human? Would it still feel fear and love, or was all that just involuntary programming?

If you can take what's left and then replicate it with machines, then AIs can be "human".
If you still can't explain what's left, then AIs can't be human.

>> No.1223944

>>1223122

I don't think Turing-test capable AI is even necessary for sex robots.

>> No.1223947
File: 6 KB, 185x180, 1270664511190.jpg

>>1223122
>My face when retards are arguing about something that isn't going to happen for 100 years.

>> No.1223979

>>1223947
>Implying that we shouldn't think about the future

>> No.1223986

>>1223922
There is no voluntary/involuntary programming.

And the moment you start stripping away the underlying mechanisms by which the human brain operates, you no longer have a working brain. For example, hormones aren't just for "feelings"; they're chemical messengers which control a lot of different things, and without them you would die.

In other words, the higher-level processes of the mind are emergent from the lower-level processes; remove a lower-level process and any higher-level processes cease to exist. You can't isolate, say, the deliberative or consciousness layers from the rest of the mind and remove all of the more primitive sensory modality layers and whatnot. It doesn't work that way.

>> No.1223999

>>1223718
Emergence != dualism. Most dualists I have talked to are strict reductionists. Emergence is a concept used by scientists, physical and otherwise.

>> No.1224029
File: 32 KB, 303x250, 1276728760341.jpg

>>1223922

You seem to be missing the point.

There is no reason that an intelligent, self-aware AI capable of passing the Turing Test needs to have emotions or think about the world in anything resembling a human way.

It could be a completely alien consciousness and still be an artificial person worthy of freedom and respect.

In fact I think the first AIs will be completely alien to ourselves rather than human minds in computers. We'll probably have an interface program allowing them to simulate humanity so we can interact... but I doubt they will think like us, as they won't share our biological needs.

>> No.1224033

Almost every virtual person is going to be a furry, you guys.

>> No.1224047

>>1224029
Hell yes, cyber sapiens

>> No.1224056

>>1224033
All the more reason to develop the technology even faster. We can build a virtual world to entice the furries to leave their physical bodies, and once we've ensnared them all, we can shut down the system and launch its memory storage units into the sun.

>> No.1224059
File: 142 KB, 320x401, 1275172447371.jpg

>>1223979
>implying it matters when there is no possible way any of us will be alive for it.

>> No.1224064

>>1224033

At least at first, yes. Though you bring up a good point: As bad as furries and other deviants are now, how much worse will it be to have to interact with them in some virtual world where they can actually "be" what they wish they were?

>> No.1224074

>>1223922
>if you strip a human of it's involuntary programming
Then you wouldn't have anything. A computer without software isn't perfectly rational, it is just a lump of silicon. Software is just as important as hardware. The two are inseparable.

>> No.1224082

>>1224059

>Implying that if Moore's law continues we won't have the processing power to create an AI by 2025.
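That claim is just doubling-time arithmetic. A back-of-the-envelope sketch (assuming capacity doubles every two years, with 2010, roughly when this thread was posted, as the baseline; both assumptions are rough):

```python
def moore_factor(start_year, end_year, doubling_period=2.0):
    """Growth factor if processing power doubles every `doubling_period` years.

    Back-of-the-envelope only; the 2-year doubling period is an assumption.
    """
    return 2 ** ((end_year - start_year) / doubling_period)

factor = moore_factor(2010, 2025)  # 2**7.5, roughly 181x the 2010 level
```

Whether ~180x more raw compute is what AI actually needs is exactly what the replies below dispute.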

>> No.1224100
File: 117 KB, 2936x1660, 2010_tron_legacy_002.jpg

>>1224064

meh... we already have that: it's the internet. this will just be the internet with VR goggles and strap-on dildos/fleshlights.


>> No.1224116

For every furry, we should take away their basic human rights and give them to robots instead.

>> No.1224128

>>1224116
Can we do the same to faggots?

>> No.1224130

>>1224116

What do you call a person who is to robots as furries are to animals?

>> No.1224150

>>1224130
borgy?

>> No.1224153

>>1224130
A wannabe cyborg.

>> No.1224161

>>1224130
Mechies. I wish I was kidding.

>> No.1224166

>>1224128
Sure, I'm all for eliminating the homosexual menace.

>> No.1224181

>>1224059
People like you are the reason science is held back in America.

>> No.1224329

>>1224082
>implying processing power is the problem

>> No.1224351

>>1224329
We need to fucking figure out the origin of consciousness already, this shit is getting ridiculous.

>> No.1224355

>>1224329

>implying that it is possible to know how hard it will be until we actually have the processing power available!

>> No.1224375

>>1224351

NO WE DON'T

People were building aircraft and flying long before we understood aerodynamics. Sometimes it's possible to build something without fully understanding the principles behind it. Sometimes we can work with 'just enough' knowledge.

>> No.1224381

>>1224351
It's obviously emergent.

>> No.1224387

>>1224351
There's no consciousness; we're just overcomplicating our basic emotions. We're animals with more horsepower.

>> No.1224398

Why would they program the virtual girlfriend to have headaches and objections? If someone were to ever program a virtual girlfriend, don't you think they would make it have thoughts, but only ones that don't conflict with what the AI perceives to be your beliefs and thoughts?

>> No.1224426

>>1224387
Probably this to some degree. The concept of consciousness is abstract and it's a human invention, a word useful for psychologists in describing certain aspects of the human mind. It doesn't necessarily have a physical correspondence to anything in the brain. Rather, the consciousness is an emergent aspect of the brain.

>> No.1224427

Why not just program the person to terminate as soon as it has a dangerous or unsolicited "thought"?

>> No.1224436

>>1224398
The way I'd set it up is to develop a personality and compatibility test that matches you with an ideal personality, therefore making the notion of conflict moot. It might make relationships really, really boring, though.
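A compatibility test like that could be sketched as scoring personality trait vectors against each other, e.g. with cosine similarity. A toy version (the trait names and numbers are made up; a real matcher would need far more than three dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two trait vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(user_traits, candidates):
    """Pick the candidate profile most similar to the user's traits.

    `candidates` maps a name to a trait vector, e.g. (openness, humor, patience).
    """
    return max(candidates, key=lambda name: cosine(user_traits, candidates[name]))
```

Matching on raw similarity is exactly the "boring relationship" design choice the post worries about: it optimizes for agreement rather than complementarity.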

>> No.1224451

>>1224436
Isn't that what fuckin' eharmony already does, though?

>> No.1224463

>>1223122
Artificial people should have the same rights as the religious people. They should have to work for free with minimal maintenance until their bodies give out, with no choice in the matter of their procreation or education and no political power to voice dissent.

>> No.1224475

So, if humans are already itching to upload their minds as data, wouldn't an AI wonder what it's like to have a real body?

>> No.1224506

>>1224381
In order for human-like consciousness to emerge, though, the ability to wonder about abstract concepts and an insatiable curiosity have to be present, otherwise its personality and knowledge have to be updated manually. Curiosity and the ability to ruminate on abstract thoughts are what, in part, lead to conflicting opinions and emotional distress (delivered by hormones like endorphins). In order for an AI to pass the Turing Test, it will inevitably have to be capable of doing at least a few things we don't want it to. It has to do the one thing we dread it doing: it has to surprise us.

>> No.1224540

>>1224506
the frontal and occipital lobes of the neocortex are pretty decent at dealing with abstract concepts; it's easy to see how consciousness and deliberative thought processes can emerge through the synergy of lower-level functions.

>> No.1224556

>>1224351
Consciousness, as it is most commonly used, isn't anything at all. It's a wishy-washy term people throw around without ever defining it. It isn't anything, so stop using it.

>> No.1224605

The main defining characteristic of human "intelligence" is its fondness for behaving frivolously at an individual level while soberly considering things when acting or participating in a group. Add to this the reward/punishment complex that drives pretty much every animal forward, and it's really easy to see how consciousness is emergent. Really all we need to do is create a basic model of what drives human desire and we'll be in business. If we can make a computer WANT to be a person, and give it the processing power to figure out what it's doing wrong when it fails, and to feel shame for its failure, we'll be in business.
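The reward/punishment complex mentioned above is routinely modeled as a bandit-style learner: try actions, keep a running value estimate from the feedback, and mostly repeat whatever has paid off. A toy epsilon-greedy sketch (the parameters are arbitrary, and this is a standard multi-armed-bandit exercise, not a model of human desire):

```python
import random

def epsilon_greedy_agent(rewards, steps=1000, epsilon=0.1, seed=0):
    """Learn which action pays best from noisy reward/punishment feedback.

    `rewards` gives each action's true mean payoff; returns the learned
    value estimates, which should converge toward those means.
    """
    rng = random.Random(seed)
    values = [0.0] * len(rewards)   # running value estimate per action
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(len(rewards))                       # explore
        else:
            a = max(range(len(rewards)), key=values.__getitem__)  # exploit
        r = rewards[a] + rng.gauss(0, 0.1)      # noisy feedback signal
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental average
    return values
```

"Punishment" is just a negative entry in `rewards`; the same loop then learns to avoid that action.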

>> No.1224614

>>1224605
How do you do virtual hormones?

>> No.1224647

>>1224605
What? People spend more time thinking about themselves than they do the people around them.

>> No.1224695

>>1224647
People spend most of their time thinking about things that other people are thinking about, or, at the very least, things other people might be thinking about them.

>> No.1224730

>>1224556
Thanks

>> No.1224732

What if, instead of complicated programs, we found a way to create artificial intelligence using engrams in our own brains? (engrams are hypothetical, but then again, so is this whole thread).

We could get around this whole consciousness thing, because the stuff inside the robot's brain would be the same stuff inside ours.

>> No.1224759

>>1223718
Hahahahaha

>> No.1224793

>>1223775
even keeled

>>1223816
super interesting stuff sir.

>> No.1224818

>>1223843
> If you have to program the AI to have emotions, such as defining fear and love for them, then it's not true consciousness.

- Super well put.

>> No.1224842

>>1224818
ignoring the fact that our brains are hardwired to do this as well.

>> No.1224850

>>1224842
compared to a true AGI, brains aren't. in comparison the human capacity for self reflection would be nil.

>> No.1224857

Y'all are assuming that an AI would be a robot. It might be just a program on the interwebs.
In any case, a human-level AI would be able to be embedded in any device and would be intelligent enough to control it. This would lead to a huge number of problems, since it would not be at all clear what the AI would do once it was in control of a piece of movable equipment of any kind. My guess would be that governments would immediately ban putting an AI in control of a piece of movable equipment.

>> No.1224861

>>1224850
without the structural components that regulate feelings of fear and other emotions, we would be incapable of feeling them

critics of AI fail to realize that our brains are products of hundreds of millions of years of evolution


>> No.1224900

>>1224857
an unhampered seed AGI, capable of self-reflection and modification, would be virtually impossible to restrict. the only way to keep it safe would be an uberbunker without network connections, staffed by fanatics it couldn't negotiate with or manipulate.

>> No.1225022

>>1223843
Well, the SENSATIONS of fear/happiness/what have you are preprogrammed. Learning what to be afraid of, what makes you happy, etc., is the result of consciousness.

>> No.1225039

Hambeast is the correct term for a female neckbeard.

>> No.1226297

>>1223164

Not any more than something or someone begging me not to kill and eat them.