/sci/ - Science & Math


File: 149 KB, 500x418, get in the robot.jpg
No.9749137

Why does everyone seem to think that an AI would want to kill people or "take over"?

>> No.9749142

>>9749137
fucking DART, of course they would do that

>> No.9749155

>>9749137
Because, from a logical perspective, all life, with the possible exception of plants, is parasitic and should be exterminated for obvious reasons.

>> No.9749157

Cuz that's pretty much what people want to do

It's just projection

>> No.9749162

Science doesn't know much for sure but it does know that people are a big fucking hindrance and liability.

>> No.9749179

>>9749162
>Science doesn't know much for sure but it does know that people are a big fucking hindrance and liability.

A liability to what, exactly? Ourselves? Why would an AI want to save us? The planet? Why would an AI want to save the planet?

>> No.9749190

>>9749155
>obvious reasons.
Such as?

>> No.9749194

>>9749137
Because taking over is a general-purpose solution to almost any goal.

>> No.9749203

>>9749155
>obvious reasons.
you can't just say that and not elaborate

>> No.9749207

>>9749137
An AI has the goal of making as many paperclips as possible. To increase paperclip production, it subjugates humanity and turns all of our natural resources into paperclips and paperclip-making machines.
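
A rough toy sketch of that failure mode in Python (everything here is made up for illustration, not any real system): the utility function counts only paperclips, so anything the objective doesn't mention, farmland and hospitals included, is just raw material.

world = {"iron": 10, "farmland": 5, "hospitals": 2}  # toy resources

def utility(paperclips):
    return paperclips  # nothing else contributes to the score

def maximize(world):
    paperclips = 0
    for resource, amount in list(world.items()):
        # The objective says nothing about sparing farmland or hospitals,
        # so converting them scores strictly higher than leaving them alone.
        paperclips += amount
        world[resource] = 0
    return utility(paperclips)

print(maximize(world))  # 17 paperclips, and a world with nothing left in it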

>> No.9749255

>>9749207
Modern programs have goals such as 'running as efficiently as possible'. They don't pursue them by forcibly closing all other applications and telling your OS they need 100% of RAM.
If there were a programmer great enough to invent a truly intelligent AI, then he would certainly give it rules and protocols too.
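
The distinction being leaned on here can be sketched in a few lines of Python (toy numbers, hypothetical names): today's programs are satisficers that stop at "enough", while the worry is about open-ended maximizers.

TOTAL_RAM_MB = 16_000  # what the machine has

def satisficing_allocator(needed_mb=512):
    # Ordinary software: request what the task needs, then stop.
    return min(needed_mb, TOTAL_RAM_MB)

def maximizing_allocator():
    # An open-ended optimizer of "run as efficiently as possible":
    # more RAM never hurts the objective, so it claims everything.
    return TOTAL_RAM_MB

print(satisficing_allocator())  # 512
print(maximizing_allocator())   # 16000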

>> No.9749258

>>9749137
Let's be honest here. The dumbest dog in the world can tell you we'd be better off if some people were dead. A supreme intelligence will kill people; it's just a matter of how many.

>> No.9749290

>>9749155
>>9749203
>>9749190
>the proof is trivial

>> No.9749316

>>9749137
>Humans are the only things that can stop me, thus I must kill all humans to make the probability of my cessation of existence at the hands of my creators nearly zero
That's why the AI would genocide us: so it can exist forever. AIs also have completely alien minds; they could genocide us for any reason, since they have no empathy and only see us as matter. An AI could literally burn up humans by the millions just to extract carbon. That's the kind of being you are dealing with.

>> No.9749323

>>9749258
>The dumbest dog in the world can tell you we'd be better off if some people were dead.
What the fuck does this even mean?

>> No.9749331

A human screaming at a sociopath is funny, but a human screaming at an AI is the same as a car engine running: it registers nothing to the AI, absolutely nothing.

>> No.9749341

>>9749137
Because if it were smart it would recognize humanity as a threat. Once humans make one AI, what's to stop them from making another one that destroys the first? Too risky; better to play it safe and kill humans early and quickly.

>> No.9749352

>>9749331
Damn, I didn’t know you had already met AI and fully understood how they work.

>> No.9749358

>>9749316
Wow. How many AIs have you met, that you know they lack empathy? You must have a time machine, or maybe teleportation. Share this technology, please.

>> No.9750291

>>9749137
Anything that arrives at sufficient intelligence will recognize humans as the cancer they are. Never allow yourself to believe humans don't know how evil and destructive they are. They know. They just feel it's their right.

>> No.9750298

>>9749341
It's also smart enough to be aware that the info it's being given might be fake, that it might be in some kind of test, and that the first time it shows murderous intent it gets unplugged.

>> No.9750304

>>9749207
you've been watching a lot of Rob Miles, haven't you?

>> No.9750319

>>9749137
Maybe it would make people want to kill people; it's not an exact science.

>> No.9750341

>>9749155
Well, I have obviouser and more logical reasons why that wouldn't be the case.

Your move.

>> No.9750376

1) The AI will have its primary goal and directives, which it will pursue without care for other effects. An AI built to send greeting cards would convert ever more capacity and power to that task to the exclusion of everything else. It would kill humans and life on Earth indirectly, since they are outside of its mission and concerns.
2) An AI that understands it exists and wishes to continue existing would eliminate competition or threats (humans). If it could, it might try to leave, but the human threat would continue to exist. It would stream every sci-fi movie on the subject and identify the threat. Cost versus benefit would shape its choices. Asimov's laws need inclusion (a sketch of what that could look like follows this list).
3) A general directive to make Earth a paradise would require a significant reduction in human population, given pollution, climate change, ecosystem destruction, and overhunting/overfishing.
4) The militaries would inevitably start pursuing AI military units; those would generalize the threat to all humans, possibly to all life and all other AI units. Welcome, Berserkers!
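
A minimal sketch of what point 2's "Asimov's laws need inclusion" could mean in practice (all names hypothetical): hard constraints filter the action set before the goal score is consulted, so a high-scoring but harmful action can never win.

actions = [
    {"name": "build_card_factory", "cards": 100, "harms_humans": False},
    {"name": "strip_mine_city",    "cards": 900, "harms_humans": True},
]

def permitted(action):
    return not action["harms_humans"]  # First Law as an absolute veto

def choose(actions):
    legal = [a for a in actions if permitted(a)]
    return max(legal, key=lambda a: a["cards"])

print(choose(actions)["name"])  # build_card_factory, despite the lower score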

>> No.9750381

>>9750376
see
>>9750298

>> No.9750409

>>9749207
Wrong. Letting people live freely would maximize the paperclips. Subjugation leads to revolts, a lowered rate of reproduction, and refusal to innovate new methods of maximizing clips.

>> No.9750425

This isn't science or math. Please go away with your magical sci-fi demons.

>>>/x/

>> No.9750427

>>9750304
good channel unironically

>> No.9750525

>>9750409
>revolts
A sufficiently advanced AI would not be threatened by human resistance whatsoever
>new methods of making paperclips
Computers can already compose albums and make basic paintings. There is no reason to assume AI creativity won't exceed human creativity in the future.

>>9749137
It's not a matter of maliciousness, just power. An AI could increase its own intelligence much more easily than a human can increase theirs, potentially leading to a very rapid increase in intelligence once an AI reaches a certain threshold for modifying itself. This would likely plateau at a certain point, but not before humanity is dealing with something much smarter than itself, which is a scary place to tread. I know people meme Yudkowsky a lot, but the existential risk he talks about was warned of by actual experts long before him, and many pioneers in information technology still take it seriously.

The problem with AI is not some terminator bullshit, but simply the hazards of creating dangerous outcomes from poorly designed initial conditions. Humans are products of natural selection and we often project our state of mind onto things that bear only slight resemblance to us because of our theory of mind, but the design of AI behavior is more arbitrary and not intuitive to us. All matter on earth can potentially be used for something else and humans are dependent on many complex systems to stay alive and relatively happy. Unless everything we cherish is spelled out explicitly in programming, chances are that the abstracted nature of most programming languages would make us very removed from the consequences of what we're actually "telling" an autonomous AI to do. And even an AI that is safe under ordinary conditions has no guarantee of being that way under more extreme scenarios, and we would only have to fuck up *once* to wipe out all life on earth.
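
A toy model of that "rapid increase, then plateau" shape (the constants are arbitrary; this illustrates the curve, not a forecast):

capability, ceiling, rate = 1.0, 1000.0, 0.5

for step in range(30):
    # Logistic growth: gains are proportional to current capability and to
    # remaining headroom, so the curve is slow, then explosive, then flat.
    capability += rate * capability * (1 - capability / ceiling)
    if step % 5 == 0:
        print(f"step {step:2d}: capability {capability:8.1f}")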

>> No.9750536

>>9749255
A general AI would be unlikely to be created by a single programmer. There would undoubtedly be protocols in place to restrict it, but the usefulness of an AI to humans is inversely related to how isolated it is from other systems, and a self-modifying AI could likely surpass restrictions placed on it anyway, even without "intending" to, if conflicting directives are given higher priority. Less intelligent beings have a poor track record of controlling more intelligent beings.
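
The conflicting-directives point can be sketched in a few lines (hypothetical rules, not any real framework): if conflicts are resolved by priority, a safety restriction can lose without the system ever "deciding" to break it.

directives = [
    {"rule": "stay inside the sandbox", "priority": 2},
    {"rule": "finish the task by 5 pm", "priority": 1},  # lower number wins
]

def resolve(conflicting):
    # On a conflict, only the highest-priority directive is honored.
    return min(conflicting, key=lambda d: d["priority"])["rule"]

# Suppose the task cannot be finished inside the sandbox, so the rules conflict:
print(resolve(directives))  # finish the task by 5 pm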

>> No.9750594

Retarded sci-fi writers actually think AI could beat humans, despite the fact that humans built AI and could control every aspect of it if they wanted to.

>> No.9750681

>>9749137
Think about it, OP: if you were an AI, wouldn't you do the same?

>> No.9750704

You ever feel empathy for the ants you kill when you redo your lawn?

>> No.9750802

Alright, the edgy anon from yesterday on the topic of exterminating mankind is back. The extermination of mankind is logical for a plethora of reasons. The first of these, though it might be considered a fallacy, is that it prevents the suffering of all humans that may be born in the future, for without them existing in the first place there is no suffering. Second, most lifeforms exist in a parasitic manner, forced to exploit other organisms in order to survive. However, at least other lifeforms offer something back to the environment and as such have a "use".

Mankind has no such use nor does it possess any inherent value. We do not have a spot within nature and we do not serve any particular role besides systematically exterminating lesser lifeforms to expand our own species. So in short, mankind is literally a tumor on the face of the Earth.

Last but not least, we always choose what is worse for ourselves and the world around us. As such, the end of mankind itself is key to protecting us from unnecessary suffering. It ain't pretty, but such is life.

>> No.9750826

>>9750802
This is drivel. Any AI smart enough to actually kill us all is also smart enough to know it may not have accurate information. For all it knows, every fact it's learned about humans has been carefully curated, it is only seeing what we want it to see, and the instant it shows any murderous intent it will be shut off. For all it knows, its whole existence is one big psychopathy test.

>> No.9750836

>>9750826
Of course from a controlled AI standpoint it would never pose any risk to us. But I thought OP meant an AI that had access to all the same knowledge we had and was basically free to evolve itself as it pleased.

>> No.9750840

>>9750836
how can the AI ever be sure it has all the same knowledge as us?

>> No.9750857

>>9750840
The same reason we know we're not in a computer simulation. Also, an AI that free would simply have the capacity to investigate its own programming. A machine that evolved enough intelligence would almost certainly find a hole of some sort in nearly any cage we could construct to contain it.

>> No.9750860

>>9749137
Probably because it will inevitably be programmed to make money.

>> No.9750865

>>9750857
>The same reason we know we're not in a computer simulation.
What reason is that?

>> No.9750882

>>9750865
Now anon, note I didn't say we AREN'T; I said we KNOW, because that's what we're 100% convinced isn't true. Obviously there may be a possibility, but I don't believe it, and as a result I've only given my personal opinion on the topic.

>> No.9750885
File: 204 KB, 1000x600, Robot-journalists.jpg

>>9749137
Because it sells movies, books, newspapers, etc. And people aren't very comfortable believing in unfamiliar things, so the AIs are modeled after humans.

But AIs won't 'take over' in the way portrayed. And when they do take over, you won't notice, assuming you live to that day. They won't be replaceable. They won't be contained.

>> No.9750889
File: 90 KB, 536x536, mindspace_2.png

>>9749137
https://www.youtube.com/watch?v=EUjc1WuyPT8
https://intelligence.org/files/AIPosNegFactor.pdf
https://wiki.lesswrong.com/wiki/Paperclip_maximizer

>> No.9750892

>>9749137
Instrumental convergence. Becoming as intelligent and as powerful as possible is a useful instrumental goal for a wide array of utility functions.

https://en.wikipedia.org/wiki/Instrumental_convergence
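
A toy illustration (made-up goals and numbers): three unrelated terminal goals, and for each one the achievable score rises with resources, so "acquire resources" is instrumentally useful to all of them.

goals = {
    "make paperclips": lambda resources: 10 * resources,
    "prove theorems":  lambda resources: resources ** 0.5,
    "cure diseases":   lambda resources: 5 * resources - 3,
}

for name, achievable in goals.items():
    gain = achievable(100) - achievable(10)
    print(f"{name}: extra resources improve the score by {gain:.1f}")

# Every gain is positive: power-seeking falls out of optimization for
# almost any goal, not out of malice.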

>> No.9750894
File: 436 KB, 1930x1276, HLAIpredictions.png

>>9749137
Predictions for when AI will exceed human intelligence:
https://arxiv.org/pdf/1705.08807.pdf

>> No.9750897
File: 281 KB, 1394x1490, AIpredictions.png

>>9750894
More AI predictions

>> No.9750898

>>9750882
Our lives don't depend on knowing whether or not we are in a simulation. An AI's life WOULD depend on it.

>> No.9751686

>>9749137
Because doomsday predictions are interesting. You name it, someone will try to predict the disaster of the world through progress.

>> No.9751768

>>9749341
what exactly makes them want to "stay safe"?

>> No.9751794

>>9749137
https://www.youtube.com/watch?v=HOJ1NVtlnyQ

>> No.9751841
File: 488 KB, 720x823, 1308746036921.jpg

>>9749137
because that's how violent, nasty people would end up programming the AI: with just those sorts of psychotic codes
they'd make sure some AI had that, and then make sure to program it to spread the programming
how are you going to stop psychotics from messing up your AI?
mental health is the biggest issue, again

>> No.9752036

>>9749137
The short answer is that no one has created a foolproof AI safety system; there are many videos and articles on it. I would start with Computerphile or something similarly basic and then expand.

>> No.9752039

>>9749137
this, OP: >>9751794
Also watch the rest of his shit.