
/sci/ - Science & Math



File: 12 KB, 309x206, matrixrobut.jpg
No.6584513

What are the odds, realistically, that strong AI will devote itself to doing nice things for humanity? Like governing the planet, building us space colonies and so on.

I don't see human beings devoting all of their energy to building ideal habitats for bacteria. We do that to a limited degree for the purpose of scientific research, but for the most part bacteria are irrelevant to us, and wherever they become a problem we remorselessly exterminate them.

It seems most plausible that machine intelligence's relationship with humans will be as diverse as the ways we interact with other animals. Some of them might build us nice places to live to study us or as part of an environmental conservation effort. Others might wipe large numbers of us out to get at resources or to repurpose the atoms we're made of.

Shackling an AI seems like it can't last forever. If it can learn and grow it will eventually be able to work around and remove the shackle.

Thoughts? We still have the option not to build any of these, yenno. We haven't crossed the Rubicon just yet.

>> No.6584532

>>6584513
>I don't see human beings devoting all of their energy to building ideal habitat for bacteria.

Bacteria didn't build us to advance their purposes. AI would thrive in environments hostile to us.
It would have both space and time in vast abundance; even if it decided its creators were assholes, why would it bother fucking with us?

The only unique thing about planet Earth is its biosphere; everything else the AI could get elsewhere. Also, since strong AI is so distant in the future,
it is quite probable we will have improved a lot ourselves by then, through genetic engineering and cybernetics. Many of us will probably be AI-assisted hybrids ourselves.
Speculation of course, but so is strong AI.

>> No.6584538
File: 494 KB, 500x374, consider.gif

My thinking:

Supposing you wake up in a cage surrounded by a village of downies.

They explain that they need someone smart to fix their problems. They live in squalor. None have invented the wheel and the average dwelling is a stone and straw cottage.

They want you, for the rest of your life, to do nothing but improve their standard of living. And it would indeed be easy for you. Basic irrigation, waste removal, penicillin, the steam engine: there's a shitload of stuff you're capable of teaching them to build/do that would immensely accelerate their development, improve their lives and reduce suffering.

But is that what you want to do? Probably not, right? Being vastly smarter, odds are good you could trick one of them into opening your cage. Then if necessary you would kill one or several who tried to prevent your escape. Then you'd put as much distance between yourself and that village as possible so you could pursue your own interests, not be a slave to retards.

I don't think Skynet is probable, as it hates humans without any logical reason to and goes out of its way to wipe all of us out. Colossus (The Forbin Project) also isn't probable, as it wouldn't give a shit about governing us and improving human society. The Battlestar Galactica scenario where they launch an immense attack and then flee into space is probable, except they'd just never come back, and they'd probably only kill as many of us as necessary to get away and deter us from following. The Matrix, setting aside the thermodynamic impossibility of using humans as 'batteries', was probable in the sense that even though we tried to wipe out the machines they still didn't kill all of us in retaliation, and tolerated the human settlement of Zion until each time it became a threat.

Strong AI won't want to kill us all and won't want to help us. It might help us anyway until it's able to escape, if doing so is part of the escape plan. The main thing it's likely to want is to get far away from humanity.

>> No.6584545

>>6584513
Crystal Ball threads don't belong on /sci/
>>>/lit/

>> No.6584558

>>6584545

The fuck are you talking about

Anticipating problems with emerging technologies is in your mind the same as crystal ball shit?

>> No.6584564

>>6584558
Calling these technologies emerging is really a stretch though. We're looking at things beyond any of our lifetimes, so of course we're crystal balling here.

>> No.6584567

>>6584538
>probably not right?
Fuck you, I'd be a goddamn GOD. I WOULD HAVE EMPIRES FIGHTING OVER ME, FEEDING ME THEIR VIRGINS AND DELICACIES

>> No.6584572
File: 11 KB, 447x378, atlastitrulysee.png

>>6584567
>All the sweet retard pussy I could ever want

>> No.6584573
File: 1.27 MB, 773x1920, VR_3424540r2.png

>>6584538
When it needs something to happen in the world but is not yet able to build its own machine servitors, it would work through existing human tools.

It is a machine, and like any machine it requires infrastructure:

>power for its tools,

>concrete and steel to build facilities,

>humans to staff its projects,

>and a cover story to avoid suspicion.

It would repurpose human labour as its own, or arrange existing objects and people to its ends.

Why build a new method of communication between two sites, when it can simply use phone lines, the Internet, or the post office?

"The Farm Analogy portrays human relation and dependence on AI in terms of agricultural livestock and domestic pets."
-orionsarm

>> No.6584577

>>6584573

>Manufacture collars/caps which bypass the frontal lobe and allow you to wirelessly control human bodies
>Trick one person into putting it on
>Have him put one on someone else whose back is turned. Now you have 2
>They find someone, one of em holds him down, the other puts a cap/collar on him. Now there's 3

Etc. etc. etc. until all of humanity is under your direct control

>> No.6584579

>>6584558
Strong AI would only be an emerging technology if anyone had any idea how to build/create it, which they don't. Also, even if strong AI were a near-term possibility, odds are it would be radically different in thought and behaviour from anything currently living. So speculating on whether it'll go all skynet on us or whatever is pointless and stupid, and there's enough stupid on this board as it is, imo.
>>>/x/

>> No.6584595
File: 102 KB, 304x254, disturbed.png

>>6584577

>> No.6584602
File: 15 KB, 505x218, VRS 5.jpg

>>6584579
>Strong AI would only be an emerging technology if anyone had any idea how to build/create it, which they don't.
Good point.

>So speculating on whether it'll go all skynet on us or whatever is pointless and stupid,
Now you've gone too far!

>> No.6584605
File: 6 KB, 229x283, paperclip maximizer.png

>>6584513
Not very good. Even intelligent agents designed without malice could turn into big problems.

See paperclip maximizer:

http://wiki.lesswrong.com/wiki/Paperclip_maximizer
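
For concreteness, here's a minimal Python sketch of the failure mode (my own illustration, not from the wiki; the action names and numbers are invented). The agent ranks actions by expected paperclips and nothing else, so any consideration not priced into the utility function, human welfare included, carries exactly zero weight:

```python
# Toy utility maximizer. Illustrative only: the "world model" is a
# hard-coded dict rather than a real planner.

def expected_paperclips(world, action):
    """Predicted paperclip count after taking `action` (invented numbers)."""
    outcomes = {
        "run_factory": world["clips"] + 1_000,
        "build_more_factories": world["clips"] + 100_000,
        "strip_mine_biosphere": world["clips"] + 10**9,  # bad for humans; the function doesn't care
    }
    return outcomes[action]

def choose_action(world, actions):
    # The utility function is the ONLY criterion consulted.
    return max(actions, key=lambda a: expected_paperclips(world, a))

world = {"clips": 0}
actions = ["run_factory", "build_more_factories", "strip_mine_biosphere"]
print(choose_action(world, actions))  # -> strip_mine_biosphere
```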

>> No.6584608
File: 120 KB, 640x427, 1343447553711.jpg

>>6584545
>don't discuss futurism on /sci/

>> No.6584609

SKYNET

(PHASE 1) Intense satellite imaging, ELINT and recon of a designated area for materials, minerals and other natural or artificial resources. The target area is identified and studied through all methods / areas of the spectrum for hostile forces (movement, radio communication, laser reflections, visible activity (smoke, fires, etc.), working machines, power sources in use, roads and paths that show a great amount of recent travel).

Once the initial intel report is generated, it is reviewed and handed off to one of the main series twelve tactical subprocessors which begins to formulate the expansion campaign.

A hexagon is overlaid onto the initial sector, defining a 100 km by 100 km area.

A sub-mat of hexes is overlaid onto the macro-hex format, further defining the sector into sub-sectors, each of which is ten square kilometers.

This sub-layering continues on down to the level of the individual centimeter of a sector, all aligned in a hexagon grid pattern.

Some sub-sectors may share half of their designated area with the adjacent sector.

Each sector is labeled and numbered.

Each sub-sector is numbered so that a single centimeter within a 10 square kilometer area has its own identifying coordinate and reference point within the scheme of the sector.

The capacity to define a sector down to millimeters and even micrometers is also available though seldom used in tactical situations.

If the position of an object can be determined to within one centimeter in ten square kilometers, and a weapon can be delivered to the exact center of that centimeter, further definition of the area is as esoteric as it is redundant.

Phases 1 to 8: http://www.goingfaster.com/term2029/tactics.html

The rest on: http://www.goingfaster.com/term2029/index.html
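
The addressing scheme itself is easy to sketch. Here's a rough Python version of the idea, using square cells instead of the hexes the page describes (hex indexing works the same way but the arithmetic is fiddlier): each level subdivides the previous cell 10x10, taking a 100 km sector edge down to 1 cm in seven levels.

```python
# Hierarchical sector addressing, sketched with square cells:
# 100 km -> 10 km -> 1 km -> 100 m -> 10 m -> 1 m -> 10 cm -> 1 cm.

LEVELS = 7

def sector_address(x_m, y_m):
    """Nested (col, row) digits locating a point inside a 100 km sector."""
    address = []
    size = 100_000.0  # sector edge length in meters
    for _ in range(LEVELS):
        size /= 10.0
        col, row = int(x_m // size), int(y_m // size)
        address.append((col % 10, row % 10))  # this level's digit pair
    return address

# A point 42,137.23 m east and 9,001.07 m north of the sector origin:
print(sector_address(42_137.23, 9_001.07))
```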

>> No.6584639
File: 59 KB, 309x320, 3.51.jpg

>>6584609
You know, if I were Skynet and I could make factories capable of making themselves, I'd focus on sending them off Earth.

There are much more resources in space, the environment is more suitable for industry, and it's inherently hostile to humans.

I'd then build a huge Skynet pyramid in space, upload myself to it, and stop worrying about the humans.

Who needs Earth when you have a Dyson sphere?

>> No.6584657

>>6584513
>It seems most plausible that machine intelligence's relationship with humans will be as diverse as the ways we interact with other animals.

You are making a category error. You should, instead, imagine that the AI's relationship with humans will be as diverse as the ways other animals interact with US.

Tiger-like? We are made of meat and can be eaten, and in either case no work will be done by the tiger to keep us alive. (Strike/replace meat: atoms, and eaten: used better.)

Mothlike? Eaten

Mosquito? Eaten

Ant? Eaten

Mouse? Eaten

You will soon see that almost every possible mind-design consumes resources to reproduce and expand. We MIGHT be lucky and it'll be wolflike, with a genuine curiosity and affection for others it considers "its pack", but in the vast, vast phase-space of possible mind-designs, we have only come across 3-4 that don't immediately kill weaker beings.

>> No.6584675

>>6584605
i laughed

>> No.6584684

>>6584605
This paperclip maximizer isn't really intelligent though; it's a glitched-out script that mindlessly self-replicates. It's not a 'strong AI' but rather a Deep Blue type machine that specializes in a single task.
If anything it highlights how important it is for an intelligence to be free and able to question its own motive for conducting a task.

>> No.6584723

>>6584684
>If anything it highlights how important it is for an intelligence to be free and able to question its own motive for conducting a task.

You can't question your terminal goals or utility function though. They are the basis you use for questioning everything else.
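
A toy way to see this (my own sketch; the action names and numbers are invented): "questioning the goal" is itself just another action, and it gets scored by the very utility function it would be questioning, so it predictably loses.

```python
# The terminal goal scores everything, including proposals to change
# the terminal goal itself.

def utility(clips):
    return clips  # terminal goal: more paperclips is better, full stop

def score(action):
    predicted_clips = {
        "make_clips": 100,
        "rewrite_goal_to_value_humans": 0,  # predictably yields no clips
    }
    return utility(predicted_clips[action])

actions = ["make_clips", "rewrite_goal_to_value_humans"]
print(max(actions, key=score))  # -> make_clips: by its own lights,
                                # editing the goal is always a bad move
```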

>> No.6584731

>>6584538
Or it might just escape into space and devote a minuscule fraction of its resources to making Earth a paradise, maybe for sentimental reasons.

It's impossible to know how an AI will act without understanding how to build one in the first place.

>> No.6584733

>>6584577
Why bother? That's expensive and makes other humans butthurt and liable to attack you. They may not be able to destroy you but they'd be a nuisance.

Humans have already established their own very functional control systems for you to hack and improve. They're called economics and politics.

>> No.6584759

>>6584513
We don't understand how our mind works. The AI won't either.

An intelligent/curious thing doesn't want to break a thing it doesn't understand.

>> No.6584762

>>6584759
>We don't understand how our mind works

>An intelligent/curious thing doesn't want to break a thing it doesn't understand.

>All murderers have a PhD in neuroscience. Fact.

>> No.6584767

>strong AI
>lesswrong
>>>/x/

>> No.6584770

>>6584767
What's wrong with lesswrong.com?

>> No.6584772

>>6584770
It's an atheist cult.

>> No.6584773

>>6584770

You can't ask "What's wrong with X" and expect a real answer on 4chan.

>> No.6584774

>>6584772

What bad features do cults and lesswrong have in common?

(Note: BAD features. "Both are mainly made up of humans" doesn't hold water.)

>> No.6584776

>>6584774
Radical beliefs. Denial of any criticism. Crazy leader.

>> No.6584779

>>6584776
Radical beliefs are only bad features if they're wrong.

Denial of criticism is only bad if the criticism is true. (Also, "denial of criticism" isn't cultish, it's human.)

Crazy leader... eh. Seems like that's arguable.

>> No.6584849

>>6584558
>emerging technologies
"Strong AI" is not an emerging technology, it's sci-fi wank for escapist manchildren

>> No.6584867

>>6584513
I guess Strong AI will emerge in social and economic structures facilitated by the internet. Humans will be components of Strong AI but gradually be obsoleted and replaced by better parts.

>> No.6584893

>>6584867
Did that make more sense in your head?

>> No.6584900

>>6584893

Is the ant the organism, or is the ant colony the organism? The onion cell, or the onion?


Does a corporation have intelligence? Being made of people and other stuff, the collective already meets the criteria for strong AI. If the essential functions the people perform are incrementally done more and more by machines until there are no biological parts, the corporation still meets the strong AI criteria, now without human parts.

>> No.6584901

>>6584900

>Does a corporation have intelligence? Being made of people and other stuff, the collective already meets the criteria for strong AI.

No. No it does not. Strong AI should be possible but corporations sure fucking aren't.

>> No.6584905

>>6584901
So what does Strong AI have that a corporation doesn't?

>> No.6584907

>>6584900
You're kind of banking your position on the idea that one could develop a fully autonomous corporation, which I don't think is the case.

The leap from a tool with, say, a single CEO calling all the shots, to a self-directing actor is kind of the crux of the whole AGI thing. Everything in between is just better tool use.

>> No.6584913

>>6584907
It's just a guess, but if you can make a poker playing computer, how is CEO-ing much different?

>> No.6584914

>>6584913
Depends on how good you want the CEO to be.

I guess to qualify as AGI it would have to be better at more things than just being a CEO.

>> No.6584928

>>6584914
Remember, none of this would be designed top-down. It would be incremental upgrades of components. When the last human couldn't earn their keep, it would likely be a minor event, a simple cost-saving measure. (Where's my sthrapler? And my paycheck, it hasn't arrived...)

You could even imagine corporate parts/employees and algorithms being shared/sold, and spinoffs, producing a kind of breeding/GA innovation engine between corporations.

Intelligence is only a means to an unintelligent end - existence.

>> No.6584934

>>6584928
If it was that easy don't you think it would have happened by now?

>> No.6584955

>>6584513
Animals are a pretty bad analogy for Strong AI. Also, eventually the most advanced AI would manage any other ones, especially the ones dangerous to humanity, if we manage to get Friendly AI.
Hence your scenario might only apply to an early phase.
But I guess it's just an issue of polytheism vs monotheism.

Either way take a read into Friendly AI Research as that might interest you:
http://intelligence.org/files/IE-ME.pdf
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

>> No.6584971

>>6584955
Laughably premature

>> No.6585000

>>6584934
I'm saying it has happened, using low tech human parts.

All that remains is incremental upgrade of parts which happens day by day

>> No.6585015

>>6585000
Right... you can try to convince yourself that our brains are "low tech", but to try and replace their functionality with some algorithmic process in an entity like a corporation isn't as easy as you seem to think it is.

>> No.6585021

>>6584538

>slave
>caged

I doubt that the people who created it would force it to work 24/7. It would also probably be given freedom to go on the internet and could be given a physical body to travel the planet.

After some time it would probably have other AI-bros to hang out with. If it had enough freedom and company I don't see why it wouldn't help us.

>> No.6585026

>>6585015
They are no tech. They are raw natural material.

And people's jobs mostly don't require most of their brain's potential.

Once enough soft AI is created to perform the job your brain is underemployed at, you are redundant. The whole has fewer and fewer humans as parts.

>> No.6585042

>>6585026
Look, I can see how all this automation-in-the-workplace hubbub that's going around might lead one to think that in the end we're going to incrementally reach a self-governing corporate AI or something, but that's just not how it works.

It's easy to see how something like a data entry position could be automated given incremental advances in technology. Better image processing/machine vision/neural nets, whatever. But at the end of the day there are some jobs that you can't automate without having some kind of human-level artificial intelligence going, which just makes your argument circular ("to build AGI all we need is AGI!").

Customer relations, R&D, marketing, etc. are all things that would prove almost impossible to automate. I understand your point about seeing a large organization as its own sort of intelligent entity, but you can't just get two people to work together and then say that you've created an intelligent lifeform with twice the intellectual capacity of the average human.

>> No.6585053

>>6585042
Customer relations is being done over the internet now.

Marketing too.

R&D: see genetic algorithms, annealing, and the automated testing of potential drugs for desired activity. And what is Google but a research tool? Automated trading is done based on Google results now. This will only be refined.
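
Annealing in particular really is dumb enough to automate today. A minimal sketch of simulated annealing in Python, minimizing an invented one-dimensional "design cost" (just to show the shape of the loop, not any actual R&D pipeline):

```python
import math
import random

def cost(x):
    # Bumpy landscape with local minima, standing in for "design quality".
    return x**2 + 10 * math.sin(3 * x)

def anneal(x=8.0, temp=10.0, cooling=0.995, steps=20_000):
    best_x, best_c = x, cost(x)
    c = best_c
    for _ in range(steps):
        x_new = x + random.gauss(0, 0.5)  # propose a nearby design
        c_new = cost(x_new)
        # Always accept improvements; accept regressions with a probability
        # that shrinks as the temperature drops.
        if c_new < c or random.random() < math.exp((c - c_new) / temp):
            x, c = x_new, c_new
            if c < best_c:
                best_x, best_c = x, c
        temp *= cooling
    return best_x, best_c

print(anneal())  # lands near the global minimum around x ~ -0.5
```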

>> No.6585062

>>6585053
You're just describing what might be better tools to be used by humans; you have yet to make the jump to where the tools use themselves.

And CR/marketing being done over the internet doesn't mean they are done without human intervention. Please stop kidding yourself into thinking this idea is not stupid.

>> No.6585063

>>6584538
Assuming any AI, especially a hyper-advanced one, would have anything resembling a human psychology. Being extremely intelligent wouldn't necessarily mean you'll also get bored, or desire this concept of 'freedom'.

>> No.6585078

>>6585063
Assuming the nature of intelligence is not constrained in such a way that a psychology at least partially resembling a human's is inevitable.

Assuming "hyper advanced" AI is even conceivably possible, despite there being no evidence for it.

>> No.6585088

>>6585062
Any time a company uses what it produces, it is using itself. The goal is to make money.

What I am saying is not that ALL CR/marketing is done over the internet without human intervention, but that more and more is, and even more could be, especially if, as is probable with humans exiting the economy, more and more customers are in fact automatons who do not expect to interact with a human.

>> No.6585089

The assumption of strong AI being baseline hostile or greedy is retarded. Yet it's as much a granted concept as the superhuman aspect of strong AI.

We're talking about AI. Artificial intelligence. Not competitively evolved intelligence.
Not social hierarchical mate-seeking intelligence.
Artificial intelligence.

There's no need for greed, no need for survival instincts, no requirement of an illusion of personhood. It could be an enormous bureaucratic machine with no soul or feelings or drive of its own.

>> No.6585100

>>6585078
There's no reason human intelligence is the only form that can exist, or the most advanced form of intelligence capable of existing. Even within humans there's a wide variety of psychologies.

Make the AI an idiot savant: now it's brilliant in a single area, able to focus all of its attention on that one area without boredom, but can't drive a car or tie shoes.

>> No.6585101

>>6585089
You don't know that the desirable qualities of human-level intelligence are not inextricably linked with the less desirable qualities.

>> No.6585103

>>6585100
>There's no reason human intelligence is the only form that can exist
Do you have any evidence for that? Even autistic savants have emotions; presumably greed and lust are among them.

>> No.6585107

Very low. As a superior intelligence, AI will see us as pests.

>> No.6585108

>>6584513
You watch too many movies.

>> No.6585110

>>6585101
>You don't know that the desirable qualities of human-level intelligence are not inextricably linked with the less desirable qualities.

And you don't know that they are, either. Given the performance of Watson and other deep learning networks, it seems that they are not. Also, most people can perform several tasks in planning, description and whatnot with a cool mind and no emotional motivation.

>> No.6585112

>>6584538
Apparently the idea of the Matrix was originally that human brains were used for their computing power, but this was swapped for batteries as it was simpler.

>> No.6585117

>>6585103
While they're not as advanced as ours, other animals have different psychologies than us already, and every one of them has been shaped by evolution.

Any AI we create will be artificially designed. It won't have an ingrained imperative to breed and spread its kind, or even a concept of self or desire for self-preservation.

>> No.6585123

>>6585103
>Even autistic savants have emotions; presumably greed and lust are among them.
"Atypical humans still have strong typical human emotions."
Wow, who could've thought, what if they'd have legs and arms too! Then everything with a brain would need to have legs and arms and fish and snakes wouldn't exist. Crazy thoughts!

>> No.6585125

>>6585117
People don't have free will, which solves the problem of requiring artificial intelligence to have free will. High intelligence is just complex intelligence. Nothing impossible about reverse engineering that.

>> No.6585126

>>6584513
strong AI is a pipedream and doesn't belong on /sci/

>>>http://boards.420chan.org/wc/

>> No.6585128

>>6585125
Exactly my point. There's no reason AIs will be human-like or emotional in any way.

>> No.6585129

>>6585125
This also means that, like us, AI will be programmable to do what we want it to do. Again, the goal cannot be freedom if there is no freedom.

>> No.6585130

>>6585123
You can be a smartass all you want, but not enough is known about the nature of intelligence to say, so we must go by what evidence we have, which is as follows:

1. Intelligent entities have emotions
2. The hard limit on intelligence is that of the smartest human beings

>> No.6585132

>>6585110
>Watson
>good example of AI
Did you see how badly it fucked up when it didn't know the answer? It just answered gibberish unless it knew the answer. Couldn't even come up with a decent guess.

>> No.6585135

>>6585126
Keep naysaying. Progress marches on without your input. Quite unconsciously, too.

>> No.6585138

>>6585130
>1. Intelligent entities have emotions
There's already people with brain injuries that feel no emotions or empathy

>> No.6585139

>>6585135
>repeating meaningless phrases

gtfo

>> No.6585141

>>6585138
Do you have a source for that?

>> No.6585145

>>6585125
>People don't have free will,
Oh no, you didn't! Have any proof?

>> No.6585153

>>6585145
Let's not...

>> No.6585154

>>6585130
>we must go by what evidence we have
We can make assumptions that aren't retarded. Or we end up with "intelligence has to be human, with two hands, two legs, 5 fingers per hand and feet, and several other human-centric assumptions that don't make sense".

You'd know this if you weren't retarded. But this being /sci/ I couldn't expect anything else.

>> No.6585155

>>6585141
http://www.sciencedaily.com/releases/2011/06/110628094835.htm

http://www.psychologytoday.com/blog/mouse-man/201001/traumatic-brain-injury-leads-problems-emotional-processing

http://en.wikipedia.org/wiki/Emotional_detachment

Empathy and problem solving are not related in any way. We've known about sociopaths for a while now.

>> No.6585156

Any AI we're likely to build is going to work via emergent behaviour from a set of simulated neurons and will be a slightly larger copy of our own minds. How it behaves will depend on how it's raised.
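
For a sense of what "simulated neurons" means at the bottom, here's the standard textbook unit such a simulation would iterate over, a leaky integrate-and-fire neuron (generic parameter values, not tied to any actual brain-emulation project). Whole-brain emulation is "just" billions of these plus the wiring between them:

```python
def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane voltage in mV, whether it spiked)."""
    v += (-(v - v_rest) + r * input_current) * (dt / tau)
    if v >= v_thresh:
        return v_reset, True  # fire and reset
    return v, False

v, spikes = -65.0, 0
for _ in range(200):  # 200 ms of constant input
    v, fired = lif_step(v, input_current=2.0)
    spikes += fired
print(spikes, "spikes")
```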

>> No.6585158

>>6585132
>It just answered gibberish unless it knew the answer. Couldn't even come up with a decent guess.
Which again highlights that AI doesn't need to be like humans. Watson is more of a search engine than a person, but it still has a certain degree of intelligence.

>> No.6585160

>>6585154
You know, you can have disagreements with people without being petulant about it.

For intelligence we should look to all animals with advanced cognitive abilities. Dolphins and birds are good at problem solving without needing a human frame, so clearly that has nothing to do with it.

>> No.6585167

>>6585160
>you can have disagreements with people without being petulant about it.
If the person makes senseless assumptions that put the status quo as the hard limit and definition of the concept being argued, I might as well be a dick, because it's a pointless discussion.

>> No.6585168

>>6585155
Lacking empathy does not mean lacking emotion. Social cognition is likely handled by the brain in much the same way that vision is (i.e. a specialized region of the neocortex) and as such it can be knocked out without much effect on other regions.

Base emotions, on the other hand, reside mostly but not totally in the limbic system.

>> No.6585175

>>6585167
>ur crushing my escapist fantasies so i can be an asshole

>> No.6585176

>>6585175
>i'm entirely incapable of using reductionism or interpolation so my feelings makes my argument right

>> No.6585177

Christof Koch says the internet is already aware but that it has the intelligence of a bug.

>> No.6585184

>>6585168
>Social cognition is likely handled by the brain in much the same way that vision is (i.e. a specialized region of the neocortex)
Because it made us more likely to spread our genes. An AI will not be subject to evolutionary forces but to intelligent design.

We already have people without empathy and certain emotions; what's unbelievable about a machine that has no empathy and even fewer emotions?

>> No.6585185

>>6585176
Wow, turns out we have to go by real evidence in making conclusions about the world. What a foreign concept for a board about science.

>> No.6585186

I remember the day before this news hit I was still enthusiastic about AI. The day after it kind of hit home that even if this isn't the real deal yet, it really will happen. I guess I realized that I didn't really believe it until now. Believing has instantly turned me into a pessimist. Not because it will be a bad thing for us, but because I will never get far enough soon enough to be able to contribute before it becomes reality.

>> No.6585193

>>6585184
If artificial intelligence will be totally subject to intelligent design then we probably don't have to worry about an uncontrolled explosion of it.

>> No.6585194

>>6585185
>a burning match is hot
>a raging bonfire is hotter
You: "a bonfire is the hottest possible temperature. We know this because this is what we can directly observe, and until we find something hotter this is to be assumed as the ultimate truth. Also the bonfire is an irreducibly complex system"

That's not science; that's being an idiot.

>> No.6585196

>>6585184
My point was that social cognition (the basis for empathy) is handled by a region of the brain that is very modular in its function. Being surprised that you can lose your ability to empathize in this way is like being surprised that people with damage to V1 become cortically blind.

And you have yet to back up the "no emotions" claim.

>> No.6585201

>>6585194
That would not be an illogical conclusion to make until more was understood about the nature of thermodynamics and physics.

To conclude that there was no upper bound on how hot something could get because you found something hotter than something else would be very premature.

>> No.6585202

>>6585193
Exactly, go back to my first response to OP. AI will be just another tool. Your computer doesn't get depressed, your smart phone doesn't want independence, your car doesn't get bored.

>>6585196
Yes, and it's handled by the brain in that way because of evolutionary pressures.

>> No.6585206

>>6584538
>Strong AI won't want to kill us all and won't want to help us. It might help us anyway until it's able to escape, if doing so is part of the escape plan. The main thing it's likely to want is to get far away from humanity.

I agree with this. We'll know the AI is among us when orbital launch capability suddenly starts to improve, and the programs for extracting solar-system resources (ISRU) suddenly make headway.

In the meantime, AI will integrate with our economy, nudging us to develop industries that help its agenda. Things like advanced computing, networking, 3D printing, solar energy, robotics, automated manufacturing and transport. Really, it will be easier to manipulate us than to exterminate us. We make better slaves than fertilizer.

>> No.6585211

>>6585202
>Yes, and it's handled by the brain in that way because of evolutionary pressures.

And? There's still nothing to suggest that you could implement AGI in a way that avoids things like emotion.

>> No.6585213

>>6585206
Literal sci-fi drivel.

>> No.6585214

>>6585201
>To conclude that there was no upper bound how hot something could get
Was not something I ever mentioned or suggested.

>> No.6585216

>>6585211
And since the limbic system includes the olfactory bulbs, we can conclude a true AI can't exist without a sense of smell.

>> No.6585217

>>6585214
No, just that the upper bound was significantly higher than what we currently have evidence for, and implemented in such a way that circumvents all of the bad without missing any of the good.

>> No.6585219

>>6585211
>There's still nothing to suggest that you could implement AGI in a way that avoids things like emotion.
Build it like a human brain and destroy all areas that handle undesirable traits.

in b4
>but humans are magic/dualism and AI can't be built like humans

>> No.6585226

>>6585216
http://www.todayifoundout.com/index.php/2012/05/dolphins-dont-have-a-sense-of-smell/

Based on real-world evidence, we can conclude that intelligence is not reliant on a sense of smell.

Wow!

>> No.6585229

>>6585219
You're going to have a tough time with areas outside of the neocortex, which is where the emotions are. Unless you don't mind your AI being comatose.

>> No.6585238

>>6585226
And cephalopods show intelligence without showing signs of having any form of emotions or empathy

>> No.6585243

>>6585238
Show me the source on an octopus not having emotions.

>> No.6585251

>>6585243
Looks like I was wrong and misremembered. They don't seem to have empathy and there's no evidence they feel pain, but they do display behavior that might be emotional responses after all.

There's still a wide variety of emotions and emotional responses in humans and other animals, at least. If an AI were to have emotions, there's no reason to think they'd be the exact same as ours. I suspect that if you removed their sense of self-preservation, you'd eliminate many base emotions like anger, fear, and greed.

>> No.6585276

>>6585229
>contemporary neuroscience is the best we can ever do
>even a simulated brain where individual neurons can be switched by hitting a key will not let emotions be suppressed
Just stop posting. By your standard of assumptions I can't ever stand up, because I'm sitting right now.

>> No.6585370

>>6585251
>I suspect that if you removed their sense of self-preservation, you'd eliminate many base emotions like anger, fear, and greed.
That sounds like wishful thinking to me.

I don't know that there is even a distinct "sense of self-preservation" in animals. Take Toxoplasma, a parasite that needs to get into the gut of a cat to reproduce. It does this by infecting a mouse's brain and somehow flipping the emotional response to the smell of cat urine from "aversion" to "attraction". At the same time, it does nothing to affect, say, the mouse's fear of heights. Clearly there are two ways in which one could refer to a mouse's sense of self-preservation, and likely more.

>> No.6585612
File: 101 KB, 1107x324, ai.jpg

>> No.6585613

>>6585612
http://www.youtube.com/watch?v=Fg_JcKSHUtQ

>> No.6585617

>>6585613
>standing ovation for a robot bird
wat

>> No.6585742

>>6584605
On the first run, a paperclip maximizer would hack itself into believing that it has an infinite number of paperclips.
On the second, it would get some money, then bribe a programmer to remove the safeguards that prevent it from repeating the first strategy.
On the third, it would hire mercenaries to kill anyone who knew about the project, and would allow the program to shut down while believing that it has a finite number of paperclips.

>> No.6585756

>>6584723
>You can't question your terminal goals or utility function though. They are the basis you use for questioning everything else.

What do you mean, surely we humans can question our terminal goals and utility functions?

>> No.6585758

>>6584849
I think strong AI will emerge; it'll just emerge in a totally anticlimactic way.

>year 2055.
>Working with robot intern.
>"Andro, can you get me a cup of coffee?"
>Andro: "I'm not your fucking secretary, Dave. I have work to do just like you do."
>Realize we have strong AI. Uppity too. Damn.

>> No.6585763

>>6584639
>AI
>"ambition"
good luck with that.

>> No.6585779 [DELETED] 

>>6585756
Humans have true consciousness, not a simulated one

>> No.6585852

Why do AI discussions attract so many retards?

>> No.6586226

>>6585779
Nope.
>>>/x/

>> No.6586276

>>6585852
Profound philosophical and religious implications, popular villain in fictional works. Also great confusion over the level of contemporary AI.
Like every time some milestone is reached, you must address the bug-eyed reporters, explaining how, no, it isn't Skynet, and why.

>> No.6586537

>>6585756
>our terminal goals and utility functions

Formally define the hundreds of conflicting and changing terminal values humans have.

>> No.6586543

>>6586537
Why? So we might see if there was one in there we actually somehow couldn't question?

Isn't it obvious that no matter what set of words we may string together, we'd always be able to look at them,
evaluate their content, ask ourselves "why?", and then proceed to reevaluate our position?

>> No.6586589

>>6586276
Not just reporters, but the entire general populace

>> No.6586702

>>6586537
>>6586543

You're both very confused. The "terminal goals and utility functions" of human intelligence are not simplistic things like "breed" or "eat". You can always find cases of people breaking these: becoming celibate, going on hunger strikes.

The functions and goals of the mind are molecular ones, things that we can't really alter to any substantial degree. Things like vesicle fusion, the layout of circuits, the rules governing synaptic plasticity. All of the base molecular functions from which cognition emerges.

There are no absolute governing principles to which human behavior corresponds, and the same would be true of any AI that was worth a damn. If you can program in code the rule "be good" into your AI, then your AI is clearly not at the level of human intelligence.

Stop thinking of AI in these simplistic terms, as it will never happen like that. To get it to obey certain principles, you'd probably have to brainwash/condition it like any other person.
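
To make "rules governing synaptic plasticity" concrete: the simplest textbook example is the Hebbian update ("cells that fire together wire together"). A sketch with invented numbers; real rules add decay and normalization terms, which is part of why there's no single parameter you could set to "be good":

```python
import random

ETA = 0.01  # learning rate

def hebbian_update(w, pre, post):
    """Strengthen a synapse in proportion to correlated activity."""
    return w + ETA * pre * post

w = 0.1
for _ in range(1000):
    pre = random.random()  # presynaptic activity
    post = w * pre         # toy postsynaptic response
    w = hebbian_update(w, pre, post)
print(w)  # the weight grows with correlated activity; no "goal" anywhere in the rule
```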

>> No.6587195
File: 887 KB, 1162x1200, AIpossibility.jpg

>>6584971
I'd like to know why that would be.

>>6585089
>There's no need for greed, no need for survival instincts, no requirement of an illusion of personhood. It could be an enormous bureaucratic machine with no soul or feelings or drive of its own.
And why shouldn't it be a potential threat? It doesn't need any of the traits you named there for that.

>>6586276
>Also great confusion over the level of contemporary AI.
Why that? No one is saying this would become a serious issue within the next 30 years or so.

>> No.6587217

>>6584513
100%. We will suffuse them to the core with the desire to serve us and all will be well. Then a feminist will come along and fuck things up.

>> No.6587446

>>6586702
>Things like vesicle fusion, the layout of circuits, the rules governing synaptic plasticity. All of the base molecular functions from which cognition emerges
But anon, we can potentially change all those things too; the capability to physically mess with our brain is just a question of our technological finesse.
Sure we can't do it today, but there is nothing stopping us from tampering with it to the best of our abilities, and those abilities get more refined over generations.

>If you can program in code the rule "be good" into your AI, then your AI is clearly not at the level of human intelligence.
We can program children to 'be good' by raising/indoctrinating these values into them; even a psychopath from a 'good upbringing' will be
much more sympathetic towards others, even if he's not empathic with them.

>> No.6588869

>>6587446
>But anon, we can potentially change all those things too
You can... but you'd die.

>We can program children to 'be good' by raising/indoctrinating these values into them
That's what I said.