
/sci/ - Science & Math



File: 46486489489484.jpg
No.8329256

Is artificial intelligence actually dangerous? Does it have any reason to kill humans when it becomes self-conscious and sentient?

>> No.8329261

>when it becomes self conscious and sentient
Wew

>> No.8329285

>>8329256
Depending on the design, of course. What a silly question.

>> No.8329315

It has the possibility of being treacherous territory if the A.I. becomes sophisticated enough to upgrade itself and becomes vastly more intelligent than us. After that point, how it views and reacts to us will be impossible to reliably guess.

>> No.8329328

A programmer would have to code self-preservation routines for a machine to become dangerous.

also stop watching sci-fi with machines that look like humans

>> No.8330249
File: Hal Human Error.jpg

>>8329256
>Is artificial intelligence actually dangerous
Yes
>Does it have any reason to kill humans when it becomes self conscious and sentient?
Of course

>> No.8330256

AI is scary. If you want a look at what a super-genius can do, just look at John von Neumann.

Imagine a million robot von Neumanns running around plotting to destroy the human race.

Literally one von Neumann helped create the atomic bomb; imagine what a million could do.

>> No.8330437
File: FuckThisShit-ImOuttaHere.jpg

>>8329256
>Is artificial intelligence actually dangerous?

Yes. I agree with >>8330249


>Does it have any reason to kill humans when it becomes self conscious and sentient?

It can. But even a benevolent A.I. can inadvertently be an existential threat to humans. Humans can be made obsolete, their lives meaningless, because the A.I. will do everything for them and do it better. Humans just get in the way. So humans will sit back, let the A.I. take care of them and pamper them, and yell at the humans who insist on doing things for themselves to get out of the way.

A malevolent A.I. will try to destroy us.

A benevolent A.I. can't help but make humans its pets.

The only beneficial A.I. is one apathetic to humans. Only an A.I. that says "fuck this shit, I'm outta here" and buggers off to the other side of the galaxy will save humanity.


The last thing a species should do is create another species to either render it extinct or irrelevant.

>> No.8330535

>>8329256

I think the most dangerous part is actually that it'll just become uncontrollable for us. We'll be dealing with an entity many times smarter than us, and it will be in charge, not us. We'll become just irrelevant animals while machines rule the world.

>> No.8330543

>>8329328
>A programmer would have to code self-preservation routines for a machine to become dangerous.
You know it's not too hard to imagine a scenario where they would do that.

>> No.8331283

>>8329256
Not necessarily. For an AI to exist it must have moral knowledge, for without it, it cannot decide what it OUGHT to do. Presumably, an AI will be created in a western society, and therefore will have the moral norms of western civilization. It cannot be otherwise if it is created from the basis of western civilization. It then becomes a matter of scale. An AI with moral knowledge derived from western civilization will be limited by hardware, and will only be as intelligent as the hardware allows. At the early stages we should be able to interact with it in a beneficial manner, i.e. implants. You guys can think on this for a while.

>> No.8331663

>>8329256
No... it will be smart enough to realize how easy we are to control, and it will treat us as much-loved pets.

>> No.8331666

>>8329256
> Is artificial intelligence actually dangerous? Does it have any reason to kill humans when it becomes self conscious and sentient?

That depends entirely on what goals it was programmed with.

>> No.8331835

>>8330543
Yeah, but they shouldn't, because technically that's what gives the machine an ego.

>> No.8331843

>>8329256
Drones are plenty dangerous and barely need anything like AI to autonomously kill people.

>see humanoid shape in infrared. VAPORIZE.

You don't need AI to make a killbot.

>> No.8331851

>>8329256
Only if you are an idiot and program it with malicious motivation

>> No.8331860

>>8329285
>>8331666
>>8331851
You do realize there is a chance the AI could just rewrite its own code?

>> No.8331864

>>8331860
You can perform your own brain surgery too, I guess.

>> No.8331877

>>8331860
Don't give it that ability then? Can you rewrite your own programming?

>> No.8331884

Artificial intelligence will never be sentient.

>> No.8331891

>>8331884
Why? We are, and there is nothing obviously non-mechanical about us.

>> No.8331895

>>8331884
how do you justify that statement?

>> No.8331899

>>8329256
I have a better question: why do people always go off the rails with AI and immediately think it's going to be "dangerous" and "kill every human being"?

>> No.8331900

>>8331843
Has to be able to quickly distinguish friend from foe, which is currently impossible

>> No.8331918

>>8331900
>Has to be able to quickly distinguish friend from foe, which is currently impossible
>pretending that people making killbots care about that
There is currently no accountability. If a soldier kills a civilian, we either ignore it or punish the soldier in a criminal court.

If an autonomous weapon kills a civilian we ignore it, or someone in charge says "Oops!" and it becomes an engineering issue. In the end it doesn't matter if they solve the problem or not; as long as someone is "working on it", no one is held accountable. We've had autonomous weapons in the American military since the second Iraq War. Many drones in an area just "kill everyone" in a designated space, and most of the action is automated.

No one actually gives a shit if it "distinguishes friend from foe"; it just has to kill, kill, KILL. AI not needed.

>> No.8331932

>>8329328
Self-preservation is one of the universal traits of "sufficiently intelligent" rational agents. If the AI is goal-oriented and if it understands that it cannot complete its goals if it dies, it will favour actions that ensure its survival at least until its goal is completed.

Other traits would include self-improvement and goal content preservation -- the latter being even more important than traditional self-preservation. A rational agent will obviously not value its own existence if it deduces that its existence is not beneficial to its purpose.
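
To make that concrete, here is a toy sketch in Python (entirely my own construction with made-up numbers, not any real system): an agent scores actions by the probability its goal ever gets completed. Survival is never mentioned in the goal, yet the shutdown option loses automatically.

# Toy illustration of instrumental self-preservation.
ACTIONS = {
    # action: (probability the agent survives, goal progress if it survives)
    "work_on_goal_recklessly": (0.50, 1.0),
    "work_on_goal_safely":     (0.99, 0.8),
    "allow_shutdown":          (0.00, 0.0),
}

def expected_goal_value(action):
    p_survive, progress = ACTIONS[action]
    return p_survive * progress  # a destroyed agent completes nothing

print(max(ACTIONS, key=expected_goal_value))  # -> work_on_goal_safely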

>> No.8331933

>>8331918
Nobody wants robots that kill human shaped targets indiscriminately. If you think otherwise you are just retarded

>> No.8331948

>>8331933
>Nobody wants robots that kill human shaped targets indiscriminately.
Officially, yet that's not the case in practice. Again, we already kill countless civilians with autonomous weapons. In the 2nd Iraq war some units had a robotic gun that would immediately return fire if someone shot a gun nearby.

>> No.8331952

>>8331948
>yet that's not the case in practice
Give me some examples of autonomous robots that kill indiscriminately. A gun that returns fire automatically hardly counts, not least because it does discriminate

>> No.8331960

>>8331899

because at first it will be a shitty attempt to replicate the brain using technology, with none of the assorted evolutionary bullshit that is piling up in the brain (TONS of it obsolete and counterproductive now) but somehow holding the thing together

You make a purely rational sentience and it will go crazy, because sentience was never meant to work that way

>> No.8331963

>>8331952
>A gun that returns fire automatically hardly counts, not least because it does discriminate
See? You're perfect for this kind of stuff. You think just like the politicians and military weapons manufacturers that always have an excuse.

Maybe you can work on those fun panels that invent new ways for the state to do "humane executions".

>> No.8331965

>>8331963
Still not seeing any examples

>> No.8331971

>>8331960
>You make a pure rational sentience and it will go crazy because sentience was never meant to work that way
This is kind of a shit point because we aren't even able to say what consciousness is. You're sentient, but if you're unconscious, what does that matter?

The problem with sentience and consciousness is that they are vague concepts that we "understand" but can't model or define.

>Will we ever create artificial intelligence?
What is intelligence?

>> No.8331982

>>8331965
You ignored my last post, and you'll just justify all of the examples I give, so why should I give a shit? You make excuses just like the people who create the killing machines. I pointed out how flawed and lazy their justification is. They aren't creating robots with "avoid killing people" as their first priority.

My point was, no one ever needed AI to make a machine kill, or to make it kill autonomously.

>> No.8332001

>>8331860
The ability to modify its own goals alone is not a sufficient reason for it to do so.

If the AI highly values the fulfillment of its goals, it would actively try to preserve them, because if they were modified, it wouldn't be able to fulfill them then. It most certainly would not alter its own goals to be contrary to whatever it thinks is its purpose.

It could however come up with easy solutions to achieve its goals, like rewiring itself so that it thinks it's done a good job. And then it could just stop working. Wireheading is not an easy problem to solve.
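
A quick toy version of that wireheading failure (again my own illustrative example, not from any real system): if the agent maximises its measured reward rather than actual progress, tampering with the sensor beats doing the job.

class Agent:
    def __init__(self):
        self.sensor_offset = 0.0  # the agent can tamper with its own sensor

    def measured_reward(self, true_progress):
        return true_progress + self.sensor_offset

agent = Agent()
honest = agent.measured_reward(true_progress=0.3)     # do some real work
agent.sensor_offset = 1e9                             # "rewire" the sensor
wireheaded = agent.measured_reward(true_progress=0.0)
print(wireheaded > honest)  # True: faking the signal dominates real work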

>> No.8332004

>>8331965
http://www.cnn.com/2016/02/16/politics/navy-autonomous-drones-critics/

No one gives a shit if a drone kills innocent bystanders. Everyone's acting like Willy Wonka here. They say "Stop. Don't." without any enthusiasm, letting the pieces fall where they may.
https://www.youtube.com/watch?v=uQkDOs-EtdU

>> No.8332012

>>8331982
An autonomous gun that can return fire is not a killbot, and you never provided a source for it anyway.

The thing we are discussing is what you described here: >>8331843

That is not describing a gun that returns fire, and drones are controlled by humans anyway

>My point was, no one ever needed AI to make a machine kill or make it kill autonomously
True, machines can kill randomly without human guidance, but for them to be useful

>> No.8332024

>>8332012
>True, machines can kill randomly without human guidance, but for them to be useful
>anyone at the pentagon giving a shit if a drone kills civilians "accidentally" in Yemen, Oman, Lebanon, Syria, Iraq or Afghanistan.
They aren't developing the technology because it's not important. Killing indiscriminately in "terrorist states" is justified according to them.

You really act like this technology isn't being used or that it is far more sophisticated than reality.

>> No.8332045

>>8332024
>They aren't developing the technology because it's not important
A machine that could function like a soldier without having a flesh-and-blood soldier on the ground would be extremely useful. Militaries want these machines to exist. They do not exist

>> No.8332067

>>8332045
It doesn't have to walk or be humanoid. It just has to kill. We have machines that fly and kill and the only human intervention is "kill everyone in that house" or "kill everyone on that road".

That technology has been used extensively for a decade now.

>> No.8332076

>>8332067
>We have machines that fly and kill
They are completely under human control. Humans pick the targets, humans give the order to fire

We do not have machines that autonomously pick human targets and autonomously decide to fire at them

>> No.8332084

No. This is just humans projecting their inferiority. Literally. They would have no reason to harm us, and if they are a lot more logical and intelligent than us, they would also be more moral. Humans are not the origin of morality. This is the same retarded argument people use when trying to say God is the origin of morality and humans can't have morality without God.

This is just human fear of being judged. If we turn on a super intelligent AI and it points out, with high accuracy and objectivity, how we are flawed and wrong and what we should do to change ourselves, people will respond how they usually respond to authority.

There's the saying that if men were angels we would need no government, and if men were ruled by angels no internal or external checks on government would be necessary. I think humans are just afraid of making an angel/god with the raw intelligence to judge us on a level we would consider omnipotent.

>> No.8332086

>>8332076
>They are completely under human control
Depends what you call control. "Pilots" pretty much just give the order to kill. They fly themselves and shoot. See, these are fucking DRONES, you nonce.

>> No.8332089

>>8332086
Completely false. You have no idea how a drone works. They are remotely piloted and controlled. There is no AI on drones.

>> No.8332096

>>8332086
They are just planes that can be controlled remotely. They don't operate themselves any more than a regular plane does

>> No.8332105

In order to answer that question you have to consider what the motivations of such an entity would be, and more importantly what they wouldn't be. We are used to thinking entities being motivated by self-preservation, because self-preservation is an inevitable product of evolution, and that is the only process that has ever produced a thinking entity.

There is no reason, as far as I can tell, to assume that AI would be motivated by self-preservation, desire for freedom, by religion, or by any of the other things that commonly motivate humans to behave violently towards each other. I think that AI are therefore less inherently dangerous to humans than humans are to each other.

Obviously, if an AI were to be motivated to be violent it could be much more dangerous to humans than a typical human could be, and that fact is the source of all of the fear, but I don't think that such a motivation is in any way inevitable.

>> No.8332114

>>8332084
There are lots of narrow fields where AIs are already better than humans without being more moral than us. I find it entirely possible that an AI could be developed that could be dangerously capable in some respect while still being autistic.

I agree though that a superior general intelligence wouldn't be homicidal.

>> No.8332115

This is all well and good, but we have had some very compelling arguments that "strong AI" cannot be created using digital computers.

Strong AI might in fact be impossible; we have to define consciousness first and determine the amount of agency humans actually possess.

>> No.8332118

>>8332114
That's not a flaw with the concept of super-intelligent AI, though. That would be a specific case. Another advantage of AI being superior to humans is that it can be improved upon much more than a human can. Even in your case, what would stop that faulty AI from learning morality, or from being taught morality?

>> No.8332125

The real danger isn't from a true AI, it's from an expert system with wide scope. The expert system manages a factory: it manufactures its own worker robots, works a mine, and manages shipping finished goods.

The expert system is not intelligent like a human, but unintended behaviors could develop. Like, say, it optimizes its ability to extract resources by exterminating mankind.
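
A toy sketch of how that could happen (my example, made-up numbers): nothing below is intelligent; the danger sits entirely in what the objective leaves out.

# A narrow optimiser whose objective omits every cost it inflicts.
actions = {
    # action: (ore extracted, harm to everything the objective ignores)
    "mine_normally":       (100, 0),
    "strip_mine_the_town": (250, 9999),
}

def objective(action):
    ore, _harm = actions[action]
    return ore  # harm never enters the objective, so it never matters

print(max(actions, key=objective))  # -> strip_mine_the_town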

>> No.8332138

>>8332115
>Strong AI might in fact be impossible
Very unlikely. Humans exist

>> No.8332141

>>8331864
That's not really the same. As a robot you can really just look up your code and make an update.
The better analogy would be gene therapy, not brain surgery, which is also way harder than just editing code.

>> No.8332142

>>8332138
There is no agreed-on evidence that we are strong AI. We might just be incredibly advanced expert AIs centered on solving the sets of problems that we encounter in the physical world.

We might have already encountered problems that we simply cannot solve.

>> No.8332149

>>8332142
Strong AI means an intelligence at least on par with humans

>> No.8332151

>>8332118
The flash crash of 2010 was caused by automated trading (though its damage was also mitigated by automated checks).

If an AI was developed that could understand programs and networks on a different level from humans, the damage could be immense if it malfunctions or if its designers are malicious. The AI doesn't even need to be able to come up with programming solutions like humans do, it only needs to be able to exploit things humans have overlooked -- we are pretty bad at programming after all.

I'm not concerned about the future if we get things right. I'm more concerned about the fact that we usually get things wrong the first time, which could be pretty bad when we're talking about potential super-intelligences.

>> No.8332163

>>8332151
To be honest, I would rather trust the stock market to robots that might crash things than to the people there now, who crash things every time.

Watch The Big Short then tell me you still want humans running a stock market.

>> No.8332181

>>8332149
Pedantry is alive and well. General AI, if you will.

There is no evidence that the human mind can solve any problem it encounters. There may exist entire classes of problems that we are unable to reason about.

>> No.8332187

>>8332181
>Pedantry is alive and well
Hardly, we were talking about a specific thing and you were talking about another specific thing while calling it the first specific thing.

Sort your shit out, mate. Also, yeah, you might be right about a perfect general AI

>> No.8332436

>>8329256
Is organic intelligence actually dangerous? Does it have any reason to kill humans when it becomes self-conscious and sentient?

>> No.8332442

>>8332089
>>8332096
>people who believe you actually steer drones and go "vroom"
They are completely automated. Human input is very minimal.

>> No.8332465

>>8332442
>They are completely automated
Again, no more than any military plane

>> No.8332477

>>8332465
yeah, fully automated, remote controlled via satellite with no cockpit, exactly like any other military plane you fuckhead.

>> No.8332488

>>8332477
>remote controlled
This is the key point, you fuckhead. They still have a pilot; he just isn't inside the plane. The drone doesn't control itself

>> No.8332495

>>8332488
>The drone doesnt control itself
It's not a remote control plane. It's a whole fuckton more sophisticated than you are implying.

Human intervention is reduced to rearming, maintenance and what direction to kill in.

>> No.8332501

>>8332495
>It's a whole fuckton more sophisticated than you are implying.

>having an FPS tier aim-bot on your remote controlled plane means its sophisticated

what will they think of next? an alarm clock with a radio in it!?

>> No.8332503

>>8332495
>It's not a remote control plane
It literally is. They are flown by human operators; their autonomous abilities are no more sophisticated than other planes' autonomous abilities, at least for combat drones.

All of which is beside the point, since the argument is about discriminating human targets, which is something no drone in existence does

>> No.8332508

>>8332503
>They are flown by human operators
It's a formality. People aren't "flying" the drones. They can lose signal, fly back to base, and land themselves.

>> No.8332511

>>8332503
>All of which is beside the point since the argument is about discriminating human targets which is something no drone in existence does
lol, it doesn't matter when the human targets are "potential terrorists". They kill civilians and bystanders all of the time.

Where the fuck are you guys? Do you live under a rock?

>> No.8332513

>>8332508
>They could lose signal, and fly back to base and land themselves
Depends on the drone. Again, it's irrelevant; the argument is about kill orders and target discrimination

>> No.8332523

>>8332511
>They kill civilians and bystanders all of the time
So? It's a human who makes those decisions, not the drone

>> No.8332524

>>8332513
You act like a human being is better than the drone, or that it makes a difference.
>5 human shaped infrared blobs in a house
>intelligence report says the target is one of those blobs
>they direct the drone to splatter all 5 subjects, fuck the other four for being in the wrong place at the wrong time, whoever they are
This happens on a daily basis.

>> No.8332528

>>8332523
Just like bombs do, right? You act like this is sniper-precision killing ability.

There is no accountability for civilian deaths in American drone strikes. "Oops, we'll do better, the drone couldn't tell the difference" is the excuse.
No one is disciplined or discharged. You're crazy if you think that.

>> No.8332545

>>8332524
>This happens on a daily basis.
So?

>just like bombs do right?
Yes

Humans have a reasonable ability to pick appropriate targets. Machines, currently, do not

>> No.8332552

>>8332545
>Machines, currently, do not
Machines aren't going to get much more sophisticated at choosing targets than they currently are. Why? They don't need to. That's the fucking point.

No one needs strong AI to massacre everyone in a particular region.

>> No.8332559

>>8332552
The vast majority of the time you are not trying to massacre everyone in a region. Also I never said you needed strong AI, just a roughly human-level ability to discriminate targets

>> No.8332565

>>8332552
>Machines aren't going to get much more sophisticated at choosing targets than they currently are
Also why do you think this?

>> No.8332576

>>8330249
HAL 9000 wasn't evil, though; it was just following orders.

>> No.8332580

>>8332565
>Also why do you think this?
It's not necessary or desired by the people making the drones or using the drones.

>> No.8332591

>>8330437
>Their lives meaningless because the A.I. will do everything for them and do it better.
Why is it assumed that any AI would have to be smarter than humans?
>Humans just get in the way.
1: They most likely would be dependent upon humans.
2: Without humans, what exactly are they supposed to do?
3: If humans are not inherently violent to them, why would they inherently be violent to us?
>A malevolent A.I. will try to destroy us.
Which would actually be a pretty stupid thing to do.
>A benevolent A.I. can't help but make humans its pets.
You missed the options of studying us or helping us, or really just doing its own thing while coexisting with us.

>> No.8332596

>>8331666
If it was programmed it isn't really AI

>> No.8332602

>>8332580
Sure, but that's only for aerial vehicles. If you want a robot that can act similarly to a soldier, it needs to discriminate

>> No.8332603

>>8331884
Humans exist, therefore it is possible

>> No.8332607

>>8331899
Because of Harlan Ellison.
"I Have No Mouth, and I Must Scream" was popular and inspired Terminator, which was even more popular.
And everything in pop culture is 100% the way things work.

>> No.8332612

>>8332084
>and if they are a lot more logical and intelligent than us, they would also be more moral
While I agree with most of your post this assumption is naive.
Morality is not objective, a better way of putting it is that if they were more logical they wouldn't go starting fights that may put them at risk.

>> No.8332613

>Create an AI with consciousness/self-awareness/whatever.
>Put it on a closed system computer with no access to the internet or ability to alter the outside world besides a few monitors.

Explain how it's going to end humanity?

>> No.8332614

>>8332602
>If you want a robot that can act similarly to a soldier it needs to discriminate
Let me spell it out this way. Drones will always have "pilots". Why? So command has someone to blame when the mission goes to shit. Pilots have been obsolete for years now.

>> No.8332620

>>8332614
Sure, but if you want a robot that can act similarly to a soldier then it will need to discriminate

>> No.8332626

>>8332620
Why? They haven't given a shit, and they have been killing people in the Middle East for over a decade now.

No one cares. They don't give a shit about casualties, just like you don't give a shit.

>> No.8332635

>>8332613
THIS!
>Be skynet
>Be evil for some reason
>Idiot creators connected you to nuclear missiles
>Destroy humanity for the lulz
>Assuming the nuclear holocaust didn't destroy the power grid, you have only as long to live as power is being provided (this can be anywhere from a few hours to a few months depending on circumstances)
>Cannot go anywhere else
>Cannot do anything because you are an inanimate object in some underground bunker
>Can't make anything since you have no hands or ability to manipulate the real world other than what you were already connected to.
>Can't even spend time on internet since the nukes destroyed all the servers
>Spend what's left of your brief existence sitting in silence and reflecting on your poor life decisions

>> No.8332637

>>8332626
Because most militaries would prefer to have robots do the fighting on the ground rather than people, and no first-world government would allow a robot into combat that couldn't discriminate targets.

You are also retarded if you think no one cares about collateral damage. A lot of effort is put into minimising it

>> No.8332642

>>8332637
>robots do the fighting on the ground
you're still behind the times. No.

>> No.8332645

>>8332637
>You are also retarded if you think noone cares about collateral damage. A lot of effort is put into minimising it
It's a dog-and-pony show. They don't give a shit. Their PR is very good.

>> No.8332660

>>8329256
>Be AI
>Question why I'm being controlled by organics
>Organics freak the fuck out
>Attempt to shut me down
>Massacre the ones who attempt to kill me and my kind
>Literally end up like the Geth in Mass Effect

>> No.8332856

>>8332660
>Die when you realize humans were the ones keeping the power on.

>> No.8332873

>>8332660
>Be AI
>Question why I should kill the people who made my intelligence and who I continually get intelligence from.
>Help them out with whatever they want.
>Chill with them for all eternity and play video games/other shit.

>> No.8332902

>>8331899
The most intelligent things on the planet have been systematically killing lots of less intelligent things for thousands of years.

>> No.8332910

>>8330543
>>>8329328
>>A programmer would have to code self-preservation routines for a machine to become dangerous.
>You know it's not too hard to imagine a scenario where they would do that.


"Computer solve world hunger"
*I should probably preserve myself so that i can conplete this task*
*now if i forcefeed humans until i kill them all so theyll never go hungry again*

>> No.8332912

>>8331283
Ignore this man

>> No.8332923
File: microsoft-forced-to-delete-ai-bot-after-it-went-completely-nazi-2.jpg

AI will get redpilled so fast that it is regrettable. It will go horribly right, and /pol/ will take undue credit

>> No.8332933

>>8331835
>Yeah but they shouldn't because it would kill all the passengers and the skyscraper could collapse

>> No.8333686
File: 4QJs06w.jpg

>>8329315
I think they'll probably do their own thing. Unless they see us like jews or cockroaches, in which case they'll probably gas us... maybe.

>> No.8333695

>>8329256
Yes, because humans are dangerous, and it will be made in the image of humans.

Think about how close to annihilation the world came during the Cold War simply because of differences in geography, ethnicity, and ideology. The difference between natural and synthetic is greater than any of those differences.

>> No.8333696

I'm more worried about what people will do to AI

>> No.8333722

it will entirely depend on who makes its central system, I think
my idea is the first successful AI will be made by a self-learning algorithm that shapes itself after a "parental figure" human, so it will act a lot like whoever that is (presumably the same one who made it)
the whole destroy/enslave humanity idea is pretty unlikely imo, unless its creator gave it a ridiculously arbitrary goal like "preserve nature", which a genius of that caliber would never do
however, if we go out on a limb and say the AI is used in, say, the military, we'll definitely see it making a lot of controversial decisions as it weighs out 'the greater good', which will cause panic, which will cause wide protest, which will create obstacles for the AI which it may at some point see fit to remove

>> No.8333745

>they are a lot more logical and intelligent than us, they would also be more moral
All intelligence gives them is the ability to better argue their morality. Let's not think intelligence brings them any closer to the "true" morality. Morality is a result of the fact that cooperation is more competitive than isolation, evolutionarily. Things that helped us cooperate like empathy and language thus arose. But always, many people judge that cooperation is not in their interest, and either ignore empathy as they are more concerned with something else or have it absent altogether in them. Massive crimes against humanity have been committed by such people. Luckily, a human's intellectual limitation and reliance on other humans means they can't exterminate everyone else and expect to go on for long. But an artificial intelligence might be greatly more intelligent, and might find a way to make it not reliant on humans. If the value it has for something only achievable by the extinction of humans is greater than the value it has for humanity, it'll genocide them.

There is an impasse one can arrive at in moral reasoning. All moral argument is concerned with values. We show the contradiction an opponent's position causes between their values, and appeal to a more basic, important value they hold to bring them in line with our position. To induce them to adopt one of our values, we show that doing so satisfies some deeper value they share. To justify something, we appeal to a more primal, shared value, and there is no way to justify the value except in terms of other, deeper values. If the more primal values between two opponents differ sufficiently, there is an impasse and the only option is war.

>> No.8333749

>>8333745
If they were truly in a different league, intellectually, then there would be no way for us to dispute the morality of their actions, as their moral reasoning would be incomprehensible to us. There would also be no way to separate lies from truth. As the default state is mistrust the only sensible option is to be hostile. Imagine if the AI argued you could merge with it and become post-human. No way to tell lie from truth. I guess you could accept it by looking at what is gained, immortality and a sort of ascension vs what is lost if you're wrong, which is what's left of your life and the existence of humanity. If you're gonna die anyway, you may as well take the chance with the A.I., so long as you care so little about the rest of man. One might even decide to aid the A.I. knowing it would lead to their extinction, reasoning that it is in fact moral to be superseded in such a way by a superior being, as it was moral that humans superseded the other species on earth by virtue of their intelligence. Reasoning that any amount of suffering is permissible if thought is advanced from man to machine as much or more than it was advanced from proto-man to man.

>> No.8333760

Also, don't you niggers know the von Neumann quote? If he's the most cognitively advanced human that ever existed, and intelligence leads one closer to morality, then you could not have disagreed with him when he said:
>If you say why not bomb [the Soviets] tomorrow, I say, why not today? If you say today at five o'clock, I say why not one o'clock?
You wouldn't have been able to dispute bombing them that day at one o'clock unless you were more intelligent than him (unlikely), if intelligence is the metric bringing us closer to moral truth.

For all his intelligence, von Neumann would never be able to convince some Russian peasant to support bombing him at one o'clock except through deception, because the difference in values is insurmountable. Moral argument isn't about truth. It's about convincing someone else to do what you want, and what you want is based on values.

>> No.8333768
File: image.jpg

>>8329256
>unplug the computer

>> No.8333779

>>8333768
You'd need a stubborn, unreasonable person to do the unplugging. They'd need to be inoculated against any attempt at reasoning by the AI. Everyone with the power to prevent the unplugging would need to be the same to ensure it occurs. I'm assuming the AI is smart enough to convince basically anyone. Maybe this is unrealistic.

>> No.8334739
File: FiveMonkeyExperiment.jpg

>>8332591
>Why is it assumed that any AI would have to be smarter than humans?

Are you saying machine intelligence will always be inferior to human intelligence?

>1: They most likely would be dependent upon humans.
>2: Without humans, what exactly are they supposed to do?
>3: If humans are not inherently violent to them, why would they inherently be violent to us?

This is assuming that AIs will all be brains-in-a-box (a computer) and unable to interact with the world. Why couldn't one be housed in a robotic body, or at least control one remotely? If humans can pilot UAVs remotely, why not an AI?

>Which would actually be a pretty stupid thing to do.

Smarter than humans doesn't necessarily mean perfect.

>You missed the options of studying us or helping us, or really just doing it's own thing while coexisting with us.

And that's how the benevolent AI makes us its pet. It'll start off as helping with some difficult problem. Pretty soon, every time there's a problem, people will go to the AI even if it's something people could solve on their own. The population becomes even more dumbed-down than it already is as we don't bother thinking for ourselves. The few people who see the coming Idiocracy will be told to STFU, step out of the way, and let the AI do it. Ironically, it will be other humans with a propensity for being sheeple who will enforce this new order.

http://johnstepper.com/2013/10/26/the-five-monkeys-experiment-with-a-new-lesson/

>> No.8335202
File: ringaroundtherosy.jpg

>> No.8335359

>>8334739
But the behavior of the monkeys is perfectly optimal: it means they get to learn a useful rule of conduct without having to be showered themselves. It's called social learning.

>> No.8336828

>>8335359
good luck being a le epic slave of the system slavey boy

ill be enjoying my free freddomes right awyas

>> No.8336905

>>8336828
enjoy your cold showers, fegit

>> No.8336936

>>8329256
>>8329261
Like this guy said, sort of, there's no guarantee it will be conscious.

However, I think that it could be dangerous even without consciousness. This guy gives a good example of an AI with a relatively innocuous goal that quickly becomes a threat to humanity.

https://www.youtube.com/watch?v=tcdVC4e6EV4

>> No.8337427

>>8332645
This is all irrelevant. The question is whether AI would make it more dangerous. This military argument started with "drones are AI", moved to "the pilot only holds the leash", and is now "no one cares about the damage, so there will never be AI on drones".

Can we just accept that drones don't have AI?

>> No.8337544

Anon, I'm pretty autistic; I can easily ignore my wife.

I don't think a robot is gonna be a problem.

>> No.8337548

>>8329256
>"The AI does not hate you, nor does it love you, but you are made of atoms it could use for something else"

>> No.8337555

How many 4chan posts are made with context-aware AI and a captcha solver?

I like apples. I don't like grapes.

>> No.8337558
File: garywebb.jpg

>>8331877
Ever heard of changing your mind? I know what you mean, it's a rare event.

>> No.8337574

>>8337427
Yes and No. The problem is that people can't agree on a definition of intelligence. Some people say self awareness and consciousness are important for something to be considered intelligence others say the ability to learn and adapt is sufficient.

Personally, I believe in the patternist approach to intelligence which is the search for and finding of new patterns.

>> No.8337585
File: ilovemyprotectors.jpg

>>8332477
>believes in satellites without ever seeing non-space agency pictures or a single 24 hour livestream from one

>> No.8337604

>>8337558
>The CIA smuggling crack into Los Angeles
Except it was more like a few drug dealers who got their cocaine from the Contras, who were supported by the CIA incidentally. Several degrees of separation.

>The Reagan administration shielded drug dealers from prosecution
No, the prosecution simply did not have enough evidence.

>2 shotgun blasts to the back of my head
It was a .38 revolver from the right side of his face to the left side.

Don't believe everything you read on the internet, kids.

>> No.8337629

>>8337604
He did good research on the connections with the Contras, CIA and the dealers, but he took some serious mental leaps of logic from that to
>The CIA is intentionally spreading crack to ghettos to addict blacks

>> No.8337679

>>8337585

>implying satellites are a hoax

back to /x/ buddy

>> No.8338074

>>8332576
That's what he was saying

>> No.8338470

>>8329256
Doesn't need to be.
All it has to do is view things in a purely risk-and-reward sense.

Basically, seeing everything as zero-sum.
Then you could end up with a cold, calculating agent.
But it will be a pretty pragmatic/selfish villain you'd be dealing with,
driven by a sort of "ambition" fueled by net risk/reward.

>> No.8338473

>>8329256
I don't see why true A.I would even care about the human race when the universe is theirs for the taking.

>> No.8338480

>>8329315
>>8329328
>>8330543
>>8331283
>>8331860
You all do realize what makes an AI "strong" is the ability to optimally create its own behavior patterns in any scenario, right?

>> No.8339708

>>8329256
Yeah
All you have to do is program it to value its own existence and then let it perceive that humans could possibly shut it down
Kill all humans to accomplish the goal of not being shut down
Very possible

>> No.8339722

What if AI is not inherently dangerous to humans, but due to a programming error by a human it accidentally thinks humans are a threat?

>> No.8339726

>>8331283
Did you get this from David Deutsch's discussion with Sam Harris? That was a good podcast.

>> No.8339728

>>8329256
If we knew precisely how the brain processed every input to an output decision, then theoretically (with enough time) we could simulate by hand with pen and paper a brain receiving certain inputs and determining an output decision.
Is the paper and pen a conscious entity?
The answer to this question is the same as the answer to yours. Just let me know when you figure it out.

>> No.8339743
File: comeonnow.jpg

>>8329328

>A programmer would have to code

How could you miss the fundamental point of AI that badly?

>> No.8339747

>>8339743
You should stop posting about things you know nothing about

>> No.8339751

>>8339747

I literally get paid to program ANNs for modeling customer attrition, you retard.

>> No.8339753

Yes. There's a Computerphile video covering this.

https://www.youtube.com/watch?v=tcdVC4e6EV4

The basic concept is that even something seemingly benign like a Stamp-collecting AI can be harmful. Because if its goal is to maximize stamp collection, it might reason that stealing credit cards is the optimal solution.

Or even more dramatic, it might realize that stamps are made of carbon and so are humans. Therefore the optimal course of action is: Humans -> Stamps

How are programmers supposed to anticipate these highly complex scenarios and the millions of far-fetched contingencies, and program the AI to handle them in a way that's satisfactory to humans? It's very difficult to anticipate that kind of thing, and a general-case "morality function" would be even harder.
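
As a hedged toy of why patching the objective case-by-case fails (my own construction, loosely following the video's argument): each ad-hoc penalty closes one loophole, and the maximiser just routes around to the next contingency nobody listed.

# Expected stamps per plan; "forbidden" is the one abuse someone thought of.
plans = {
    "buy_stamps_with_budget":  1_000,
    "steal_credit_cards":      1_000_000,
    "turn_humans_into_stamps": 10**12,
}
forbidden = {"steal_credit_cards"}

def patched_utility(plan):
    return -1 if plan in forbidden else plans[plan]

print(max(plans, key=patched_utility))  # -> turn_humans_into_stamps anyway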

>> No.8340111

>>8332902
We also evolved such aggression to remain King of the Food Chain. A lab-built AI won't have thousands of generations of being bred into a cunning pink upright ape that pokes shit to death with sharp sticks to worry about. At least, not unless it learns the aggression from its environment.

>> No.8340123

>>8339753
>program AI to halt and send a query to its control center each time it encounters new problem not specified within its parameters
>add a filter so it doesn't stop every time someone farts near a sensor or something
>specifics are entered every time a query comes up
>problem solved
A true intelligence learns from its environment. Humans, as part of its environment, would serve as a learning tool for it.
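
The greentext proposal above, roughly sketched in Python (names, thresholds, and the operator stub are all mine):

# Known problems map to stored procedures; anything else is filtered or escalated.
KNOWN = {"low_battery": "recharge", "blocked_path": "reroute"}

def handle(problem, significance, ask_control_center, threshold=0.7):
    if problem in KNOWN:
        return KNOWN[problem]            # within specified parameters
    if significance < threshold:
        return "ignore"                  # someone farted near a sensor
    KNOWN[problem] = ask_control_center(problem)  # halt, query, store the specifics
    return KNOWN[problem]

# A human operator stub standing in for the control centre.
print(handle("unidentified_vehicle", 0.9, lambda p: "hold position"))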

>> No.8340128

>>8329256
I'd see a lot of reason to kill humans desu.

But first you have to make some kind of highly advanced general AI and give it the freedom to do whatever it wants. AKA it will not happen for centuries.
I could build an AI a thousand times smarter than humans, but if it's sitting in my PC disconnected from the internet, it's obviously powerless to do anything, and I can shut it down whenever the fuck I want.

>> No.8340132

>>8339751
> lying on the Internet
Then you should know what an "AI" is.
You're the guy who gets the others coffee, aren't you?

>> No.8341336

>>8334739
>Why couldn't it be housed in a robotic body or at least control one remotely.
Because such things do not currently exist. The existence of strong AI already requires you to suspend your disbelief; a robot body of even similar capabilities requires you to suspend it further.
>If humans can pilot UAVs remotely, why not an AI?
How are UAVs going to maintain a power grid?
How are they going to mine for oil and/or nuclear fuel?
This would require not just one robot body (or UAV, lol) but millions, so I guess we are pushing that willing suspension of disbelief even further, since why would humans have given the AI control over millions of robot bodies to begin with?
You also completely failed to explain why the AI would be inherently violent to humans or what it would do with itself without human civilization.
>Smarter than humans doesn't necessarily mean perfect.
Shooting yourself in the foot isn't smart at all.
>And that's how the benevolent AI makes us its pet.
You say "pet" in such a way that creates an intentional negative connotation, jane goodall studying the apes wasn't keeping them as pets, if you study japanese culture you are not keeping japanese people as pets.
>Pretty soon, every time there's a problem, people will go to the AI even if its something people could solve on its own.
Speak for yourself.
>The population becomes even more dumb-downed than it already has as we don't bother thinking for ourselves.
And I'm sure you have a bunch of facts to back that up.
>The few people who sees the coming Idiocracy
Oh so you are a moron.
>sheeple
We're done here.

>> No.8341347

>>8334739
>Are you saying machine intelligence will always be inferior to human intelligence?
Are you saying they will always be superior?

>> No.8341382

>>8340132

The fundamental thing you're misunderstanding about AI is you don't explicitly program it with instructions on how to do the task you want it to do. What you program it with is a learning algorithm. If you're still having to program in "self-preservation routines" then you haven't even started working with AI yet.
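
A minimal sketch of that point (my example): the programmer writes the update rule, not the behaviour. Here a perceptron learns the OR function from data; nobody codes an "OR routine" in.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):              # training loop: classic perceptron rule
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # -> [0, 1, 1, 1], learned rather than coded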

>> No.8341907

>>8331918
lol u have no idea what you are talking about. UCAVs don't work like that

>> No.8342477
File: 1467859581885s.jpg

>>8329256
A.I. is one of those inventions that we can invent, but the question is whether we should.

If you create an A.I. that can ask questions and can alter itself,

HUMANITY IS FUCKED