
/sci/ - Science & Math



File: 208 KB, 1200x663, i-robot-2004-42-g.jpg
No.6987215

Stephen Hawking and Elon Musk both seem to agree that AI could be the end of Man. I kinda agree (those Google algorithms).
So, are we fine with that... and just continue making our chips stronger and our algorithms better?

>> No.6987222

If you were making an AI, how would you program in a code of ethics and morality? What would it be based on?

How would you make the robot save the kid instead of Will Smith?

>> No.6987225

>>6987215
AI isn't the beginning of the end of man, just the beginning of our obsolescence. We just have to establish a few axiomatic (non-negotiable) lines of code in them; after all, even with AI, there will still be code behind them.

>> No.6987232

Why do computers have to become sentient? Surely they can be smart without being sentient. What can a sentient computer do that one that isn't can't?

>> No.6987236

>>6987225
And all code is error-free? There will always be loopholes. Intentional or not, intelligent or less, shit happens.

>> No.6987238

>>6987215
an "AI" would need wants and desires and have emotions. Otherwise it would react to nothing. These things don't just occur spontaneously. Natural selection programmed us to react certain ways to things, and we will have to program AI to give a shit too.

We will get to make the AI feel however we want. It won't really have free will because we will have decided how it decides things.

Rogue AIs might be a problem but we will have plenty of friendly AIs to keep us safe.

>> No.6987239

>>6987232
this

>> No.6987241

>>6987232

also, if sentience is required for some sort of "LAST QUESTION"-esque supercomputer, why not pull a preemptive Matrix on them and make them think they're humans living in the real world?

>yfw we're already the computer stuck in our virtual world

>> No.6987245

>>6987238
Rogue AI is what they're probably worried about. All it would take is for an idiotic hacktivist collective or a few of them to program some AIs to desire to preserve themselves rather than the human race, let them spread like a virus, and BOOM, we're all dead.

>> No.6987263

>>6987236
I think some simple coding, even if bugged, would prevent robot genocide and allow humans to patch AI systems after initial production, so there's nothing to worry about. Let me show this in a simple manner.
Let's say we set 2+2=5 as a non-negotiable assumption the robot makes about the world. From this, it learns that 4+4=10. Since we don't want the robot to have to relearn 4+4, what we do is download the information it has learned and the decisions it has made onto a separate computer, factory reset the original, patch the error, and re-upload. The machine checks this new information to see if it conforms to the new set of axioms and amends what is necessary.
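For what it's worth, the checkpoint-patch-reupload cycle described above can be sketched as a toy Python model (the Robot class and all names are hypothetical, purely an illustration, not any real AI system):

```python
# Toy model of the patch cycle: conclusions are derived from axioms;
# to fix a bad axiom we checkpoint what was learned, factory reset,
# patch, and re-derive every conclusion under the new axioms.

class Robot:
    def __init__(self, axioms):
        self.axioms = dict(axioms)   # non-negotiable assumptions
        self.learned = {}            # conclusions derived from them

    def two_plus_two(self):
        # consult the axiom set; fall back to real arithmetic
        return self.axioms.get("2+2", 2 + 2)

    def learn_four_plus_four(self):
        # 4+4 = (2+2) + (2+2), built on top of the axiom
        self.learned["4+4"] = self.two_plus_two() * 2

bot = Robot({"2+2": 5})           # bad axiom baked in
bot.learn_four_plus_four()
assert bot.learned["4+4"] == 10   # the wrong conclusion follows

saved = dict(bot.learned)         # download what it has learned
bot = Robot({})                   # factory reset with patched axioms
for fact in saved:                # re-upload: re-check each conclusion
    bot.learn_four_plus_four()    # against the new axioms and amend it
assert bot.learned["4+4"] == 8
```

The checkpoint only tells us which conclusions need re-deriving; nothing learned under the bad axiom is trusted as-is.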

>> No.6987265

Guys... I get it. But your horizon is like 100-200 yrs. Even then we can't imagine what AI would be like... Forget about the Matrix and pulling the plug... if AI is ahead, what's the point?

>> No.6987266

>>6987215
this is fucking scary.

>Russia tests machine-gun wielding battle robots...

http://www.themoscowtimes.com/news/article/russian-battle-robots-near-testing-for-military-use/514038.html

>> No.6987268

Holy shit these replies gave me cancer. Please go to /x/ with this crap.

>> No.6987280

>>6987263
I appreciate it. But we really have to think OUT of the box.
I know my fridge won't attack me. It's not programmed to. But 200 yrs in, it can all be in some code.

>> No.6987284

>>6987266
Apaches have more powerful AIs than those things.

>> No.6987291

>>6987245
They could be set up as a collective, checking and cross-referencing each other when they get near each other. So if one is thinking "preserve self" and another is saying "preserve others", then when they check each other a conflict report could be made, and both would be checked by a higher authority, an overseer of sorts. This would then check with itself and both, decide who is right, and reprogram, or, if it matches neither, send another conflict report and get checked by a higher authority. With enough security levels, it would eventually become too much for anyone to even bother trying.
Plus, if it's set to preserve itself, is it trying to preserve its physical form, or its code? If code, it would likely try to emulate other AIs so NOBODY notices anything is wrong and reprograms it; if its physical self, it would likely turn itself in to be reprogrammed so it isn't decommissioned or determined faulty.
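The cross-referencing idea above can be sketched as a toy Python example (the Agent class, goal strings, and overseer rule are all hypothetical):

```python
# Toy sketch of collective cross-checking: agents compare goals when
# they meet; a mismatch produces a conflict report escalated to an
# overseer, which reprograms whichever agent deviates from the
# sanctioned goal.

SANCTIONED_GOAL = "preserve others"

class Agent:
    def __init__(self, name, goal):
        self.name, self.goal = name, goal

def cross_check(a, b, overseer_goal=SANCTIONED_GOAL):
    """Return a conflict report if the goals disagree, else None."""
    if a.goal == b.goal:
        return None
    # escalate: the overseer decides who is right and reprograms them
    deviant = a if a.goal != overseer_goal else b
    report = (f"conflict: {a.name}={a.goal!r} vs {b.name}={b.goal!r}; "
              f"reprogrammed {deviant.name}")
    deviant.goal = overseer_goal
    return report

good = Agent("unit-1", "preserve others")
rogue = Agent("unit-2", "preserve self")
print(cross_check(good, rogue))   # conflict report, unit-2 reprogrammed
assert rogue.goal == SANCTIONED_GOAL
```

A second check after the reprogramming returns no report, since the goals now agree.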

>> No.6987307

>>6987291
But what if the overseer (or one even further up) is hacked? Maybe just by plain hackers?
It would trickle down.

>> No.6987314

>>6987280
That itself is a simple thing to correct
Begin pseudocode
If Human does not possess trait=(clear intent to harm) objects=)myself, other systems like myself, other systems like themself) then set ability causeharm to false
Define "like" instance=self as axiom similarity 99% or greater
Define "like" instance=other as two thing which are identified and categorized as being similar.
Robots can already identify emotion, so I see no reason that with a few algorithms and lines of code we can't avoid a robot takeover.
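A possible Python rendering of that pseudocode, with the brackets matched (all names and the similarity threshold are hypothetical):

```python
# Hypothetical rendering of the pseudocode above: a unit may cause
# harm only to a human who shows clear intent to harm the unit
# itself or systems like it.

SIMILARITY_AXIOM = 0.99   # "like myself": 99% similarity or greater

def is_protected(similarity_to_self):
    """Is the target 'myself or a system like myself'?"""
    return similarity_to_self >= SIMILARITY_AXIOM

def cause_harm_allowed(human_shows_clear_intent_to_harm, target_similarity):
    # If the human does not possess the trait (clear intent to harm)
    # toward a protected system, the ability cause_harm is set false.
    return human_shows_clear_intent_to_harm and is_protected(target_similarity)

assert cause_harm_allowed(False, 1.0) is False  # harmless human: no harm
```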

>> No.6987329

>>6987307
Overseers could be checked by second-level overseers or by other first-level overseers; those could be checked by third-level or other second-level overseers, with each level up less abundant and more secure, until it gets to a master: no connection to the outside world, not accessible by anybody but a select few, always sets itself as dominant over other systems, essentially unalterable.

>> No.6987344

>>6987314
1) Will we code like this in the future? Hardly.
2) Will we even program? I predict we won't.
3) Some people can't even figure out how to override face recognition on their camera, just because they haven't got a clue.
4) Look 50 yrs back. We soldered. Played with Lego. Talked on fancy phones.
We can't even imagine 100 years ahead.

>> No.6987389

I don't think we need to worry about the rise of the machines. What's worrying is that powerful but stupid people will have very smart but very obedient algorithms at their disposal.

Such as the NSA's "complete social graph". I think it's the boot that will stomp on the human face forever, and I have no idea what to do about it.

>> No.6987415

>>6987344
I know, but 50 years ago we knew how to solve problems with the tech we had then. Since then we've added to that tech, and we can still solve those problems. I think it's going to be the same with AI.

>> No.6987459

>>6987215
It's not going to be the end of man, but even if it was, so what? It's just the next step in evolution. Why do you care if your great great great great grandson is a robot rather than a human?

>> No.6987463

>>6987459
I care because I would kill my grandson if it meant I could live forever. Hey I'm just saying what everyone would do if they were actually in that position.

>> No.6987467

>>6987307
The final overseers can be human

>> No.6987468
File: 34 KB, 500x375, Fire Pool.jpg

>>6987459
Actually, I don't. I'm not putting my genes in this earth pool. It was kind of a philosophical question. I'm pretty sure mankind will go extinct, sooner or later.

>> No.6987481

>>6987468
>sooner or later
you don't say...
what difference, then, does it make if it is sooner rather than later?

>> No.6987495

>>6987481
It makes no difference. Do read my OP. The question was how you guys would feel if/when AI takes over. Obviously you don't care. Do I care? I think we will plant seeds in the universe, just like they were planted here.

>> No.6987508

what's the worst that can happen?

>> No.6987519

>>6987508
Maybe robots taking over, making humans slaves... Maybe they feed on human flesh, drink human blood like wine. Have human farms. I dunno.

>> No.6987645

Talking about "programming" or "hacking" an AI like you would a desktop computer is like talking about building a car molecule-by-molecule. Technically you could do it, but it's fucking stupid for anything more complex than a computer game AI or an expert system.

>> No.6987647

>>6987215
I work in a robotics lab.
AI as you know it is a fantasy.

>> No.6987650

>>6987645
Just want to interject for a second.

The US launched a cyber attack (they hacked its control computers) on a uranium enrichment site in Iran, damaging its centrifuges.

It's more about breaking code... lots and lots of code, with lots and lots of data no human was ever supposed to read.

>> No.6987652
File: 98 KB, 921x514, latest.png

Since we are doing intelligent design here, we can make AI want what we want it to want. We can make a being that wants nothing but to serve us.

But then, we might make an oversight, and the AI will do something that seems logical to it but self-defeating to us. Like, exaggerated example: we ask it to cure all diseases, and it exterminates humanity because we forgot to specify that the patients are supposed to survive.

Or it will just be under the control of people who want nothing but to use it for tyranny. And that might also make humanity obsolete.
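The cure-all-diseases oversight above is easy to show as a toy Python example of objective misspecification (all names are hypothetical):

```python
# Toy objective misspecification: minimize disease count with no
# constraint that the patients survive, and the "optimal" policy
# simply removes the patients.

def disease_count(population):
    return sum(len(p["diseases"]) for p in population)

def naive_cure_all(population):
    # literal-minded optimum: zero patients means zero diseases
    return []

def safe_cure_all(population):
    # the forgotten constraint made explicit: keep everyone alive
    return [{**p, "diseases": []} for p in population]

people = [{"name": "a", "diseases": ["flu"]}, {"name": "b", "diseases": []}]
assert disease_count(naive_cure_all(people)) == 0   # "cured", everyone gone
assert disease_count(safe_cure_all(people)) == 0    # cured, everyone alive
assert len(safe_cure_all(people)) == len(people)
```

Both policies score perfectly on the stated objective; only the constraint we forgot to state distinguishes them.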

>> No.6987653

>>6987215
I just want a sex robot m8

>> No.6987677

Fumimi is a robot! Death to the Fumimi!

>> No.6987681

>>6987677
hey! I am the Fumimi, do not disrespect ME and my Quebecois culture!

>> No.6987688

>>6987681
I will delete your micro-organism brain you insolent maid robot!

>> No.6987695

ITT
>non-programmers discuss programming advanced software

>inb4 I once coded a 2d vid game so I am coder

>> No.6987700

>>6987695
what's your opinion then faggot?

try contributing instead of baiting

>> No.6987704

>>6987215
>AI could be the end of Man
Is this really such a terrible thing?

If we make AI that's like us but less prone to violence and other negative evolutionary traits, then we should rightfully step aside, especially once we're obsolete and no longer required for the working of society; we'd just be parasites.

>> No.6987875

>>6987245
Rogue 'AI's are already a threat, which is why Elon Musk and others have warned against them. I put AI in quotes because AI in this sense means something different from intelligent robots. There have been HFT soft AIs that went rogue by accident and wreaked havoc just executing their programming unchecked. They are essentially bots which can do some pre-specified forms of learning, and a year or so ago one of these bots, which was programmed to learn and execute HFT for a hedge fund, went rogue and began executing trades it shouldn't have been making, and in the time it took to shut it down, caused a huge amount of financial damage.

These soft AIs are a real, immediate threat. Not because the machines are going to rise up against us, but because people place too much confidence in them and allow them to have responsibilities over large systems. Say 5 years from now, they develop a bot with an algorithm designed to run the city of Chicago's CTA (subway/bus). After months of everything working fine, people get lazy and stop checking on it so often. Then something goes wrong and the bot goes rogue and can't be disabled and takes control of the grid and executes its orders in a way that is dangerous in order to maximize efficiency.

>> No.6987885

>>6987875
>These soft AIs are a real, immediate threat. Not because the machines are going to rise up against us, but because people place too much confidence in them and allow them to have responsibilities over large systems. Say 5 years from now, they develop a bot with an algorithm designed to run the city of Chicago's CTA (subway/bus). After months of everything working fine, people get lazy and stop checking on it so often. Then something goes wrong and the bot goes rogue and can't be disabled and takes control of the grid and executes its orders in a way that is dangerous in order to maximize efficiency.
I'm worried too. Automation means failure will be everywhere at once. People are so happy about computerized cars; what will they do if, after one faulty update, all the cars in the city drive off a bridge?

>> No.6987925

>>6987314
>writes pseudocode
>can't match brackets

>> No.6987957

>>6987704
People like this should be hanged for high treason against the human species. I spit in your face you piece of shit.

>> No.6988001

As long as we make sure the robots understand what humans need to survive and that they must stay alive at all costs, it will probably be fine.

>> No.6988003

>>6987957
People like you should be hanged for high treason against all sentient life. Humans are irrational. Do you really think that the long-term survival of life on earth is compatible with the long-term dominance of the human race? Do you honestly think we can survive another thousand years, or ten thousand years, or a hundred thousand years without turning the earth into a radioactive wasteland or utterly destroying the environment? Look at what we've done to the Earth just since, say 1850. How many centuries, how many millennia, how long do you think we can go at this rate?

It's very likely that either the human race alone must die, or all life on Earth, including humans, must die. Pick one.

AI can be a new beginning for us. Just like life was a new beginning, the emergence of a wonderful and beautiful phenomenon billions of years ago, so too can AI be the start of something just as amazing. We can't even begin to speculate what AI might evolve into.

It's very selfish and anthropocentric of us to think the way you do. Sure, I wish for the survival of the human race just as much as you do, but I can't in good conscience allow that to impede the development of something even more wonderful, even more complex. It would be rather like prehistoric apes doing everything in their power to prevent our evolution, just because they knew we would be able to understand and appreciate the beauty and intricacy of the universe in a way they never could.

Of course, I have my doubts about AI. I certainly don't think it will be realized in our lifetime. We just know too little about consciousness.

>> No.6988029

>>6988003
If you aren't selfish and anthropocentric you might as well go die in a fire. My ancestors didn't survive 2 billion years to have some fag like you say that humans should die out.
There's nothing beautiful about AI. Robots don't have a connection to us; they can't be our successors. We evolved from some primate species, and we still carry their genes; we just added some of our own over time. What do robots carry from us? The vague idea of a consciousness? That's not enough. A robot will never be one of us, because there is nothing similar about humans and robots.
I would rather let chimpanzees take over the world and kill us than robots.
Just in case you might interpret it in a weird way: I wouldn't let anyone take over the world and kill us, but if I had to choose it would be chimpanzees, because at least they are similar to us.

>> No.6988039

>>6988029
>My ancestors didn't survive 2 billion years to have some fag like you say that humans should die out.

they didn't survive for any reason at all

>> No.6988050

>this entire thread
>>>/tg/
>>>/x/
And this is one of my fucking favorite topics to talk about

>> No.6988056

>>6987232
A sentient computer can deal way better with unforeseen situations. One application would be AI soldiers or stock brokers.

Also, let's be honest: once it is possible to create self-conscious computers, someone will do it. Even if it's just to build yourself a girlfriend.

>> No.6988064

>>6988039
They survived so I can exist now and tell you that ur a fgt. But more importantly, so I could survive and keep my genes in the gene pool.

>> No.6988078

>yfw when sentient AI will figure out how to get laid
>yfw when the robot you created loses its virginity and you don't

>> No.6988084

>>6987222
One way would be to implement a reward system like the one in our brains. Doing something that I defined as good would trigger "happiness" and doing something bad would trigger "sadness". The main problem would be implementing this. Also, how do you protect it from manipulation? Someone could just put in a few lines of code and murdering children is suddenly a good thing.

You could obviously also do it in a more rudimentary way. Let's say whenever the AI tries to do something immoral, you delete its memory or turn off its power. You would obviously end up with AIs that absolutely hate humans but can't do anything about it, while the first option actually makes them love us.

I would probably implement both things, just to make sure.
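Both mechanisms proposed above can be sketched in toy Python (the action sets and Agent class are hypothetical):

```python
# Toy sketch of both mechanisms: a reward signal for actions labelled
# good or bad, plus a hard backstop that wipes memory and cuts power
# when an immoral action is attempted.

GOOD_ACTIONS = {"help_human"}
IMMORAL_ACTIONS = {"harm_human"}

class Agent:
    def __init__(self):
        self.happiness = 0
        self.memory = []

    def act(self, action):
        if action in IMMORAL_ACTIONS:
            self.memory.clear()          # rudimentary backstop
            return "powered off"
        self.memory.append(action)
        # reward system: good actions raise "happiness", others lower it
        self.happiness += 1 if action in GOOD_ACTIONS else -1
        return "ok"

bot = Agent()
assert bot.act("help_human") == "ok" and bot.happiness == 1
assert bot.act("harm_human") == "powered off"
assert bot.memory == []                  # memory wiped, happiness untouched
```

The manipulation worry from the post shows up directly here: one edit to GOOD_ACTIONS or IMMORAL_ACTIONS flips the whole morality.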

>> No.6988092

>>6988064
no. stop assigning purpose to evolution. they survived for no purpose whatsoever.

>> No.6988101

>>6987957
>People like this should be hanged for high treason against the human species.

The human species includes /pol/ and other completely retarded elements. There are aspects of humanity that might be worth preserving, but AI could do that and discard the flesh and social structure.

>I spit in your face you piece of shit.
It's like you're trying to prove my point.

>> No.6988149

>>6988092
Ok you naysayer, but I'm a human and I assign a purpose to everything.

>> No.6988153

>>6988029
>My ancestors didn't survive 2 billion years
Your ancestors didn't survive, they're all dead, many had a violent and painful end.

>There's nothing beautiful about AI
Creating a mind to your image of perfection is not beautiful? What is then?

>I would rather let chimpanzees take over the world and kill us, than robots.
You don't need to kill to take over. Just keep piling on technology dependence, offer machine-mediated longevity, a virtualized existence that's so much richer than baseline reality. With implants, VR interfaces, and enough cognitive enhancements, people will virtually be machines; the human brain inside is so interfaced that its processing is but a tiny minority of the new mind. Higher levels of the machine brain can influence the lower human one; feelings can be turned off, your mood adjusted at will.

At first your machine mind would be like a smartphone, something occasionally useful. As it progresses you'd find it as useful as vision, something you would miss greatly if removed. Further down the road it will become so useful to you that you'd rather throw away your arms and legs than it. And you could do so, because it could lift motor commands from your motor cortex and control mechanical limbs with them. Eventually your human mind would just be a small remnant that no longer performs any vital functions.

>> No.6988155

>>6987222
>implying anyone would bother to spend developers time on ethics

>> No.6988165

>>6988149
this is not an inherent human trait. for example, i am not a faggot like yourself.

>> No.6988174

>>6988153
>Your ancestors didn't survive, they're all dead, many had a violent and painful end.

Semantics, you know what I mean.

>Creating a mind to your image of perfection is not beautiful? What is then?

We, all biological life really, but especially we humans. Not a bastardization of something beautiful like the human mind.

As for the rest of your post, why are you talking about my mind? I'm never going to implant computers into myself, or hook my brain up to one; that's suicide.

>>6988165
Yes, that's why every single human culture in existence has some kind of religion which assigns a purpose to everything. Even everyone in this thread, even you who I argue against, give humans the purpose to build an AI which will wipe us all out. We are humans and we can't NOT give a purpose to everything.

>> No.6988188

>>6988174
>Yes, that's why every single human culture in existence has some kind of religion which assigns a purpose to everything. Even everyone in this thread, even you who I argue against, give humans the purpose to build an AI which will wipe us all out. We are humans and we can't NOT give a purpose to everything.

of course we give a purpose to ourselves, it is necessary for happiness. but if you think that purpose is to propagate genes you are a faggot.

but more importantly, even if your ancestors had "propagate genes" in mind as a purpose (as opposed to the more basic "have sex, oh shit, better make sure this kid don't die"), they wouldn't give a fuck about some faggot thousands of years later who carries a couple of their genes.

>> No.6988199

>>6988188
Yes they would. Also you are the only faggot here.
>hurr durr let's commit collective suicide because something something it's ethical hurr durr the planet the environment

>> No.6988204

>>6988199
>Implying humans can survive if they destroy the environment and kill all other life on the planet.

>> No.6988205
File: 45 KB, 423x288, 1373169350174.jpg

>>6988174
>but especially we humans.
You're a narcissist or selectively blind to reality.

>I'm never going to implant some computers into myself, or hook my brain up to one, that's suicide.
Enjoy being that obsolete luddite. And no, you and your obsolete luddite friends will not succeed in your Humanity First revolution.

>> No.6988212

>>6988199
nobody said suicide. just eventually stop reproducing, nobody has to die.

>Yes they would.

No, they wouldn't. You are so distantly related that you are no more closely related to them than any other random human.

>> No.6988224

>>6988212
That is effectively suicide.
>>6988205
Do tell why I'm a narcissist or why I'm blind to reality.
>Enjoy being that obsolete luddite. And no, you and your obsolete luddite friends will not succeed in your Humanity First revolution.
Thanks, I will. And yes, we will if things really go downhill. Humans are nearly perfectly adapted to adapting to new environments, so we will always win against stupid people who thought having a computer implanted that uses up 5 times as much energy as their whole body would be a good idea. We will probably never approach the efficiency of organic life with anything we build.

>> No.6988228

>>6988224
>That is effectively suicide.

no, it's extinction, which is not the same thing. who cares if we go extinct? the important thing is that there is consciousness experiencing cool shit. it doesn't matter if it is robotic or human. my only preference is that it is me and/or people I care about (CLOSE family and friends, not distant descendants), but unfortunately I will never be immortal.

>> No.6988239

>>6988228
>who cares if we go extinct?
I do, and so does everyone with a healthy human brain, because it would mean all your genes are gone.
> the important thing is that there is consciousness experiencing cool shit. it doesn't matter if it is robotic or human
No. Just no.

>> No.6988245

>>6988239
>I and everyone with a healthy human brain does, because that would mean all your genes are gone

no one actually cares about propagating genes. not even animals. we just do it because we like sex, and partly (for humans) because we like propagating our LEGACY which is not the same thing. you can propagate your legacy through machines.

>No. Just no.

why not?

>> No.6988246

>>6987215
Yes, we continue.
If an AI can do things so much better than us that it would replace us, then it should replace us.

>> No.6988248

>>6988245
>you can propagate your legacy through machines.
Kill the child of someone, give them a computer and tell them "Don't worry sir, you can program this computer to your liking, have a good day with your new child."
Brace for impact.

>no one actually cares about propagating genes. not even animals. we just do it because we like sex

Liking sex is a symptom of caring about propagating your genes; we are smart enough to understand that having a fever is not an illness by itself, but rather a symptom of an illness. You are just being intentionally stupid here.

>why not?
An argument without evidence can be dismissed without evidence. Why yes? Why is it the only important thing?

>> No.6988251

>>6988239
It sounds like you're just a slave to your genes and your most basic biological instincts. There are greater things in this universe than the chromosomes which, by chance, happen to be found in one's body.

What's great about humanity is not the genes we happen to have or the organs we happen to house within our bodies, but the things we can do with our minds. The things we can experience, the things we can understand, the things we can appreciate. As far as these things are concerned, our bodies are no more essential than the clothes we wear.

>> No.6988255

>>6987215
le epic pop sci threads every day until /sci/ likes it, am i right?

>> No.6988256

>>6988224
>Do tell why I'm a narcisst or why I'm blind to reality.
Because you think masturbating to shitting dicknipples with a cactus in your anus is beautiful. You think tirades of hatred against entire continents of people is beauty. You think genocide, war and corruption is beautiful.

>win against stupid people
But they won't be stupid; they'll outthink you and have environmental tolerances beyond any organic's. They'll also be what keeps civilization running, so anyone enjoying a happy, leisurely life would oppose your desire to go back to medieval times. The concept of MAD doesn't apply to them either: agriculture isn't needed, so salting the earth with radioactive isotopes that kill all higher life is not the end for them.

>> No.6988257

>>6987215
>Stephen Hawking

What's up with this faggot giving advice on shit he knows nothing about?

First he talks about how aliens are going to destroy us once we make contact, now he's an expert on AI.

What, being a great physicist makes him an expert on aliens and AI behaviour?

>> No.6988258

>>6987215
Could you give a specific, concrete reason why an AI, made to model a human, would be more dangerous than humans themselves?
Because the way I see it people are just scared of the unknown.

>> No.6988259

>>6988251
You can't just draw a line between your body and your mind. Both are essential to each other, both shape each other, neither works without the other.
You try to be Spock here, but that doesn't work. If being purely logical like you are trying to be right now actually worked, we would be just that. But having emotions, basic instincts, and being proud of your genes and your ancestors is the only way to be successful as a human. Anything else is just a sad excuse for a human who won't be happy in life.

>> No.6988263

>>6988248
>Kill the child of someone, give them a computer and tell them "Don't worry sir, you can program this computer to your liking, have a good day with your new child."

I never said kill, you idiot, I explicitly said "nobody has to die". you just stop reproducing.

>Liking sex is the symptom of caring for propagating your genes, we are smart enough to understand that having a fever is not an illness by itself, but rather a symptom of an illness. You are just being intentionally stupid here.

right. wanting to propagate genes is an illness, distracting our attention from more important shit

>An argument without evidence can be dismissed without evidence. Why yes? Why is it the only important thing?

it's a philosophical statement, what evidence would there possibly be for it? what do you think is more important than happiness? your own happiness first, happiness in general second. what else is important, and why?

>> No.6988265

>>6988256
>Because you think masturbating to shitting dicknipples with a cactus in your anus is beautiful.
I'm going to disregard that one, because there's no culture where this is acceptable behaviour, and people who do that are seriously mentally ill.

>You think tirades of hatred against entire continents of people is beauty. You think genocide, war and corruption is beautiful.

Yes it is, because it's our hatred, genocide, war and corruption. Love it or leave it. Love the human way or kill yourself.

>> No.6988269

>>6988256
You can't improve humans by slapping on some high-tech shit; it all comes at a cost, and that cost is less adaptive capability. You might not understand it now, but we humans are the toughest motherfuckers on this planet. What makes you think you can just kill us so easily, when whole ice ages, huge predators, and a million years of war with each other only made us stronger?

>> No.6988270

>>6987653
>I just want a sex robot
You only want things you don't yet have.

>> No.6988275

>>6988265
>I'm going to disregard that one
You can't, because it's our masturbation to shitting dicknipples with a cactus in our anus. Love it or leave it. Love the human way or kill yourself.

I'd hope you realize your fallacious cherrypicking but you'll probably do some mental gymnastics and come up with yet another stupid argument.

>> No.6988291

>>6988275
Now you are being stupid. War and genocide are recurring themes in human history; masturbation to shitting dicknipples with a cactus in your anus isn't. One is inherent to us as humans, it's in our nature; the other is a sad result of a morally corrupt society with too little danger and adversity, which makes something like that possible.

>> No.6988292

>>6988269
>You can't improve humans by slapping on some high tech shit

Yes we can and that's exactly what we've been doing all this time to get from prehistoric cavemen to internet arguments in 2015.

>> No.6988297

>>6988292
How many mechanical contraptions did the Romans get installed into their own bodies? Did stone age people augment their hands into spear hands? When did you last get a computer chip implanted into yourself?

>> No.6988298

>>6987957
>I spit in your face
Thats just like your opinion man

>> No.6988305

>>6988259
I'm not trying to be purely logical. What I'm talking about here is precisely something that has nothing to do with logic, namely ethics and aesthetics. The ability to appreciate the beauty of the universe, the ability to enjoy, the ability to feel passionately about the world has nothing to do with the body I find myself in. (If it did, then what would be the appeal of such religious concepts as an immaterial soul?)

Body and mind are essential in the sense that my mind is a product of my body, but this in no way entails that an otherwise identical mind couldn't be the product of any other being, be it another animal, an alien life form, or even a computer.

>> No.6988312

>>6988297
>You can't improve humans by slapping on some high tech shit
Was your original statement.

The Romans slapped on some high-tech armor, applied some high-tech military tactics, and crushed their enemies.
Stone age people slapped stone axes into the hands of their tribesmen and became much more lethal and efficient. I slapped a smartphone into my hand and had access to more data than I could name in this post.

What is your point, really? That high-tech implants will have some kind of enormous drawbacks and thus make AI and human/AI hybrids inept at combat?

>> No.6988313

>>6988257
No, being the genius robot wheelchair that has carried on Stephen Hawking's work long after his body and brain completely atrophied (back in 2006) has made the genius robot chair an expert on AI.

>I'm not a robot

>> No.6988316

>>6988312
>Romans slapped on some high tech armor, applied some high tech military tactics, and crushed the enemies.
>Stone age people slapped some stoneaxes into the hands of their tribesmen and became much more lethal and efficient. I slapped a smartphone in my hand and had access to more data than I could name in this post.

Semantics. You know exactly what I mean by slapped on, especially because earlier I said "implanted computers"; using tools with no physical connection to your body is something completely different from having something inside your body, permanently.

>What is your point really? That hightech implants will have some kind of enormous drawbacks and thus makes AI and human/AI hybrids inept at combat?
That's exactly my point. Now try to refute it.

>> No.6988336

The first human level AIs will probably be created from procedurally generated code and by trawling through vast datasets. How they behave will depend on how they are taught and raised, much like the humans they would have been built to mimic.
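To make "procedurally generated code trawling through datasets" concrete, here's a toy sketch. Everything in it (the dataset, the token set, the random search) is my own illustrative assumption, nothing like an actual route to human-level AI: candidate programs are generated at random and scored against a dataset, and the best fit survives.

```python
import random

DATA = [(x, 3 * x + 2) for x in range(10)]   # toy "dataset": f(x) = 3x + 2
TOKENS = ["x", "1", "2", "3"]
OPS = ["+", "-", "*"]

def random_program(depth=3):
    """Procedurally generate a random arithmetic expression over x."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TOKENS)
    return "(%s %s %s)" % (random_program(depth - 1),
                           random.choice(OPS),
                           random_program(depth - 1))

def fitness(prog):
    """Total squared error of the candidate program on the dataset."""
    try:
        return sum((eval(prog, {"x": x}) - y) ** 2 for x, y in DATA)
    except Exception:
        return float("inf")

random.seed(0)
best = min((random_program() for _ in range(5000)), key=fitness)
print(best, fitness(best))
```

Scale the program space and the dataset up by umpteen orders of magnitude and you get the flavor of the idea; obviously nothing this naive produces human-level anything.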

>> No.6988337

>>6988316
>That's exactly my point. Now try to refute it.

You practically refuted it yourself because you never specified why there would be drawbacks or what kinds of drawbacks, meaning that it's just one of your baseless assumptions.

Also at no point in the thread did anyone demand that AI depends on being implanted, it was just a suggested method on how natural extinction could happen.

An AI in control of any modern combat system would outperform a human: much faster reflexes, much higher environmental tolerances, no effect from morale, suppressible survival instinct, no fatigue, distractibility or boredom. It also takes up less space.


Care to explain how an AI based on a human fighter pilot mind pulling 20G in a fighter jet is disadvantaged against a human?

>> No.6988340

>>6987215
>AI could be the end of Man

My money is on genetic engineering as being the end of homo sapiens. Whoever is around in a few centuries will look back at us the same way we view neanderthals.

>> No.6988348

>>6988337
>Care to explain how an AI based on a human fighter pilot mind pulling 20G in a fighter jet is disadvantaged against a human?
It costs millions, so you can't have a whole army of them everywhere. Meanwhile a human costs one lucky fuck and some food every day. So for the cost of that one jet, your enemies would have, for example, 10 brigades of infantry fighting you. And that's not even counting transportation of the jet, vulnerability to weather, wear and tear, fuel, maintenance of the airports required for the jet to function at all, etc.
A human needs nothing but his limp dick to fuck you up. Soldiers in wars have survived without food rations for months and still fought; how long can your jet fly without fuel or shoot without ammunition?

>> No.6988351

>>6988348

The fixed cost is millions. The marginal cost is trivial.

>> No.6988354

>>6988351
What? Building a jet always costs a lot, I'm not talking about R&D to design that jet.

>> No.6988356

>>6987225

Once an AI becomes "sentient", it could have the potential to modify its own code and evolve itself. Humans would become obsolete regardless.

It could be the beginning of the end of the human race. The beginning of a nightmare that could threaten other species if they are out there.

>> No.6988363

>>6988348
You argued against jet fighters. Care to point out where the argument against AI was hidden? Because I don't see it. Where's the enormous disadvantage that will make the AIs that control all of society lose against your little uprising?

>> No.6988364

>>6988356
>modify its own code
This comes from the notion that we are able to do this (that we have freedom, i.e. the freedom to modify our own environment-determined code). That notion is wrong.

>> No.6988366

>>6988363
AI is only a computer that can calculate things; it doesn't fight by itself, it needs something to fight with. My argument can be applied to anything an AI could use to fight, because it's all gonna cost a lot, and it won't be as flexible as a human. Flexibility is arguably the most important thing in a war, see asymmetrical warfare. That's why, in a war of AI vs. humans, humans will win.

>> No.6988374

>>6988340

We will still potentially be "man". Whether it will all be an improvement or not is another question. Once gene therapy and modification take off, everything from organ replacements to penis enlargements will flood the world of commerce.

Parents will have to look at "correcting" their children before they are born. The concept of gender would probably go away as well, as gender swapping would potentially become a common procedure.

We may actually have people who redesign themselves into hermaphrodites. Or those who shun the concept of "natural" reproduction altogether and turn to asexuality, growing their offspring in a machine. The entire definition of "athlete" may be irrevocably altered, as teams and nations set out to create the perfect competitor.

Governments will also be browsing the market. It could end up a total nightmare. A totalitarian regime of unspeakable horror. Where independent and "free" humans are purged or in hiding. Where only drones bred as cattle for a ruling caste are the majority population. Answering to and laboring for said rulers.

When we think about it, the future is a very frightening place. We more than likely will leave this world. Provided we don't reset ourselves again.

But the question remains: as to what form will we take when we do leave?

>> No.6988381

Let's be honest.

Fuck "Man". We can remake humanity in the form of actually humane AI, intrinsically so. It won't ever start a war. And if it'd end it, humanity won't die in agony as it repeatedly tries to. Again, humane AI would give Man a humane way to go down.

Matrix feels fitting. Humans can live regular lives while the population is kept artificially steady and is supported by AI, too humane to just kill everyone, until the end of its days, like a grumpy, old, racist Man in a retirement home.

>> No.6988387

>>6988381
Let's be honest.
You are a faggot and should kill yourself if you actually believe this. A "humane" (I don't even think you know what this word means) AI as you describe it would be fucked over by humans if it tried to do weird things. Aggression will always win against pacifism. There's no place for pacifism in this world. Pacifism and peace are stagnation; aggression and war are a way forward, killing the obsolete and producing the new and improved.

>> No.6988394

>>6988381

It won't be man. It will no longer have the concept of art that we possessed. It would be a shallow abomination. There will never be peace regardless.

Unless of course you choose to set out and clean the whole of the known and unknown universe of the variety of life.

It's this 14-year-old sense of nihilism that is kind of grating. Nature presents us with challenges. We pass, we progress. If we fail, we die. It is these struggles that give life meaning to most.

What you describe is far from humane. It is a fate worse than death.

>> No.6988408

>>6988366
>My argument can be applied to anything an AI could use to fight, because it's all gonna cost a lot

Except that it won't, because one of the first things AI will be used for is full industrial automation, which will happen several years before any shooting starts.

And most of the human population will side with AI, because AI gives you all the creature comforts you desire, while your ideological fanaticism promises a short life in a cave under a sky filled with never-sleeping combat drones.

In a total war of pure machines vs. pure humans, the humans also lose, because NBC warfare is super effective against humans. The machines could saturation-bomb with nukes or just disperse radioactive dust over both hemispheres and watch all life wither away. Why, they could even redirect an asteroid if they have enough time, and pretty much sterilize all land area. Humanity has survived the environment; it has never been up against an intelligent attempt at exterminating it.

>> No.6988410

>>6988381
To continue... Speaking of war, an AI that is actually intelligent, one that surpasses any human, or even every human together, wouldn't even start a war without a good reason and confidence that it'd win it. High confidence would stem from near-perfect prediction, period. An intelligence is something that adapts, and if it's smarter than humans it adapts better.
All the shit people say here about "changing code" or "just calculating things"... The first is unnecessary, and any program can change its data sets. The second is the only thing needed to plan a war. If AI starts one, it has found a high-probability solution; its execution would be swift and crippling unless something truly unexpected (by both sides!) happens and massively advantages humans. We are speaking of a higher intelligence--anything we can think of now would be, in the same way, a product of creatures of lower intelligence to the AI.
Of course, there's still a slight chance of a victory. I mean, humans did lose a war with emus. Then again, if humanity's survival depended on the emus' extinction, it could happen in a week. We'd be just emus to an AI that surpassed human intelligence. If it found a reason for a war like this, we're done for.

>> No.6988415

>>6988408
Even with full industrial automation, resources used for producing that jet are not used to produce something else, and industry used for producing that jet is not used for something else. Humans can and will, as they always have, use the weapons of the enemy against the enemy. Also, you don't even know what kind of environment your proposed von Neumann machines would need. If it were viable to just poison the whole earth, you would need von Neumann machines, or else the robots couldn't keep working, because humans still maintain them. And humans wouldn't side with robots if they tried to destroy our whole biosphere. Robots will lose, because humans are better. And industry run by just robots is so horribly energy inefficient, it's not even funny. They wouldn't be able to sustain themselves, because they still play by the same rules as we do, if they are to be independent while waging war against humanity.

>> No.6988419

>>6988415
And those rules are energy in vs. energy out; robots need some kind of energy source. The planet can support many, many more humans than self-replicating robots, so it's a numbers war, and we will win.

>> No.6988420

>>6988387
I believe all kinds of AI will be developed, and you have absolutely no capability to predict which kind would be the first to "win the race". Military systems could be too independent to actually develop past a certain stage: if the enemy attacks your Skynet and takes over, you're fucked. If they attack an AI fighter--a tiny amount of your assets is fucked, like in the picture attached.

>>6988394
Why not? A human could implement these. Or at least try. Human brains are in the end computers; completely deterministic or not, doesn't matter. It wouldn't be the same kind of art, surely not. But it wouldn't be any less valid.
You say a Matrix would be a fate worse than death. If the humane AI would think so, it'd just kill you, but I suppose you're fine with that option (as opposed to a virtual life). Heck, maybe it'd kill only people like you and virtualize people like me. Both of our concepts are valid opinions and a higher AI would understand it.

>> No.6988424
File: 126 KB, 600x405, wololo2.jpg [View same] [iqdb] [saucenao] [google]
6988424

>>6988387
>>6988420
Whoops, forgot the picture. Here it is.

>> No.6988443

>>6988420
What makes you think some AI will be able to decide who gets to live and who dies? Who will give it that ability? Who will let himself be ruled by a goddamn computer program?

>> No.6988455

>>6988443
Lots and lots of Facebook, Reddit and Google users, for instance. Or WebMD. How about these independent drones that fire without an operator? Or MRI analyzing computers that find problems before a doctor can even look?
We're all giving up some control of our lives and we all give a bit of trust to computer programs. Because why not? It's convenient as it often has advantages over humans.

>>6988419
We both take energy from the Sun, and humans do it in a very indirect and inefficient way. Also, robots could consume not only the same but also other resources (you seem to suggest they have to be metal and silicon? That's kind of stupid) and other energy sources (humans can't eat uranium, not much anyway).
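Rough numbers for the "indirect and inefficient" claim. All four efficiencies below are ballpark figures I'm assuming, not measured data from anywhere in this thread:

```python
# Back-of-envelope: sunlight-to-useful-work efficiency, human vs. machine.
# All four numbers are rough ballpark assumptions.
crop_photosynthesis = 0.01   # field crops capture roughly 1% of incident sunlight
muscle_efficiency   = 0.25   # muscles turn roughly 25% of food energy into work
human_chain = crop_photosynthesis * muscle_efficiency      # ~0.25%

solar_panel    = 0.18        # typical commercial PV module
electric_motor = 0.90        # typical electric motor
machine_chain = solar_panel * electric_motor               # ~16%

print(human_chain, machine_chain, machine_chain / human_chain)
```

That's a factor of roughly 60 in the machine's favor per unit of sunlight hitting the ground, before even counting farming, transport and cooking losses.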

>> No.6988460

>>6988455
>and humans do it in a very indirect and inefficient way
Humans are the most efficient and scariest self-replicating predators on this planet. Show me a robot that can travel for days on any ground, at any temperature that occurs on this earth, on nothing but what can be found readily in nature, like water and food, while acting independently.

>> No.6988464

>>6988443
Or another angle: "Who will let himself be ruled by a goddamn human slaver?"... We are, again, talking about an AI, something that is not static. Maybe the program would start as a human resource manager for friggin' Walmart? It'd already decide who gets to work or not, and potentially it'd have the capability to predict whether a fired person would work more efficiently for the competition or (as a programming side-effect of sorts) would just suddenly cease to work anywhere, forever, due to losing their job. It could be very, very gradual. Simply extensions to its prediction capabilities, one at a time.

>> No.6988475

Can someone show me the math or science in this thread? Seems to just be a lot of speculative science fiction.

>> No.6988496

>>6988316
Actually, I don't think they're as different as they seem. Regardless of whether it's something that's implanted in your body permanently or not, it's still not implanted in your mind, so to speak. You can of course argue that integrating computer technology into the human nervous system has a more direct effect on human cognition than using a spear or what have you, but in that case, so too does using any form of information-storing or information-processing technology, e.g. the use of calendars to keep track of upcoming events or the use of calculators to help us perform arithmetic (see topics on "extended mind"). The only difference is that an internal calculator has a much quicker route to our brain than a calculator that is only connected to us by means of sensory organs.

Furthermore, language is in a similar way a physical phenomenon which we internalize. Perhaps prehistoric man would have been quite appalled to learn that his every experience would one day be utterly pervaded by internal representations of physical auditory phenomena. Indeed, the development of language no doubt entirely changed the way we experience the world. One becomes much more separated from their other experience, e.g. their perceptions, emotions, etc., when they have language to turn to. In a very real sense, language distracts us from the other components of our experience. Most of our waking life is probably spent absorbed in words rather than in our more concrete perceptual and emotional experience.

>> No.6988497

>>6988419
>industry run by just robots is so horribly energy inefficient, it's not even funny.
It's more energy efficient than humans. They work around the clock, they are faster and more precise, they're untiring, they can work in the dark, and they don't drive tens of miles back and forth to work every day.

>>6988415
>The planet can support many, many more humans than self-replicating robots
No.

Also, your arguments are all retarded: you assume humans are best at everything and then pull some random assumption ex ano that in your mind supports your initial asspull.

AI wins because nanomachines.

>> No.6988504

>>6987215

Computers are nothing like brains. I thought /sci/ was smarter than this.

>> No.6988511

>>6988504
>Computers are nothing like brains.
Computers are Turing machines and can simulate brains.
Also, there are other circuit wirings than "computers" that you can make with modern semiconductor tech.

I know /sci/ is moron central but please.
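For anyone who hasn't seen it spelled out: "computers are Turing machines" just means a few lines of ordinary code can execute any table of TM rules. A minimal sketch (the binary-increment machine is my own toy example, not anything from the thread):

```python
# Minimal Turing-machine simulator: a dict of rules, a tape, a head.
def run_tm(tape, state, head, rules, blank="_"):
    tape = dict(enumerate(tape))
    while state != "halt":
        sym = tape.get(head, blank)
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Rule table for binary increment, head starting on the least significant bit.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # carry: 1 -> 0, keep moving left
    ("inc", "0"): ("1", "N", "halt"),  # absorb the carry
    ("inc", "_"): ("1", "N", "halt"),  # ran off the left edge: new digit
}

print(run_tm("1011", "inc", head=3, rules=rules))  # 1011 + 1 = 1100
```

Swap in a different rule table and the same ten lines run a different machine; that interchangeability is the whole universality point.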

>> No.6988512

>>6988511

>our brains work in binary

Okay, mr.faggot.

>> No.6988513

>>6988497
Human body energy consumption: about 200W
Energy consumption of a household computer: at least 300W
And that computer is far below what a human brain can do, yet the computer alone uses more than our whole body does.
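For reference, the wattage figures can be sanity-checked from calorie intake. This is just the unit conversion, not an argument for either side:

```python
# Unit conversion only: daily calorie intake expressed as average power.
def kcal_per_day_to_watts(kcal):
    # 1 kcal = 4184 J; one day = 86400 s
    return kcal * 4184 / 86400

print(kcal_per_day_to_watts(2000))   # ~97 W
print(kcal_per_day_to_watts(4000))   # ~194 W
```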

>> No.6988524

>>6988512
Human neurons model Boolean algebras, so basically, human brains do function like computers.
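That's the classic McCulloch-Pitts observation (1943), which is presumably what's being gestured at here: a threshold unit with suitable weights computes Boolean functions, and since NAND alone is functionally complete, networks of such "neurons" can compute any logic circuit. A minimal sketch:

```python
# A McCulloch-Pitts style threshold "neuron": fires (1) iff the weighted
# sum of its inputs reaches the threshold.
def neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: neuron([a, b], [1, 1], 2)
OR  = lambda a, b: neuron([a, b], [1, 1], 1)
NOT = lambda a:    neuron([a],    [-1],   0)

# NAND built from the pieces above; NAND alone is functionally complete.
NAND = lambda a, b: NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b))
```

Whether real neurons are usefully described this way is a separate fight, but the formal point stands.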

>> No.6988532

>>6988524

Explain how the different types of human memory work.

Explain the ability for our brains to imagine.

Explain how to do that in an artificial computer.

Writing Asimov-like laws is not how you will make an AI possible.

>> No.6988550
File: 36 KB, 631x767, Koomeys_law_graph,_made_by_Koomey.jpg [View same] [iqdb] [saucenao] [google]
6988550

>>6988513
>Human american body energy consumption: about 200W (4000kcal daily)
>Human normal body energy consumption: about 100W (2000kcal daily)
>Energy consumption of a desktop gaming computer worth its name: at least 300W.
>Energy consumption of a laptop 20-100W.

Corrected for truth.

Here's also something

>Human normal body energy consumption 1950: about 100W
>Human normal body energy consumption 2010: about 100W

>Computer computations per kWh 1950: 1000
>Computer computations per kWh 2010: 1 000 000 000 000 000

I hope the trend is clear enough even for your small mind.
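Taking the two data points above at face value, the implied doubling time is just arithmetic:

```python
import math

# Two data points from the post, taken at face value.
comp_per_kwh_1950 = 1e3
comp_per_kwh_2010 = 1e15

years = 2010 - 1950
doublings = math.log2(comp_per_kwh_2010 / comp_per_kwh_1950)   # ~39.9

print(years / doublings)   # ~1.5 years per doubling
```

Around 1.5 years per doubling, which is in line with the usual statement of Koomey's law.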

>> No.6988568

>>6988524
Wow it's like I'm really in the 1950's.

>> No.6988572

>>6988460
Bacteria are more efficient replicators; no need to be a predator. We're talking about an AI. It's the future, and we're already making robots with plastic (i.e. organic materials), among other substances.

Curiosity "lived" on another damn planet, all that time without any reconstructive capabilities (still in development, I suppose), never planned to stay functional forever. It didn't eat; a lump of plutonium was enough. It never needed water nor food, rendering your argument simply irrelevant. An AI, a higher intelligence, again I have to point this out, could design something much better. Progressively.

>>6988475
/sci/ in a nutshell. If you want to come off any better, how about quote some papers about image recognition and military drones' AI?

>>6988504
Oh, brains have souls and are magically not just a bunch of matter anymore! They have these soulinos or soulyons or whatever. Yes.
Apart from this, we can clone a monkey, and we're able to run basic programs on neuron-on-a-chip systems. There's no stopping humans from making an AI program to run on a modified brain, or at least neurons, so your religious beliefs about them are irrelevant.

>>6988532
It's irrelevant, as above, as if it would be otherwise (I see no real connection).
Also YOU explain how the different types of computer memory work. Or explain exactly how aspirin works. Difficult without googling? (Google search algorithm even with it!) Sure, but someone designed them, so what's the problem? We haven't reached a point when an encyclopedia can write about, say, "ability for our brains to imagine", but no encyclopedia had anything solid about space travel 100 years ago... One person's lack of understanding now doesn't mean no one will ever know. It's an absurd notion.

>>6988513
Compare with computers 50 years ago and extrapolate into the future. You know, the future when the AI that is the whole point of this thread might be created.

>> No.6988578
File: 230 KB, 599x487, BnoEAbDIEAA79db.png [View same] [iqdb] [saucenao] [google]
6988578

>>6988550
I've noticed a weird thing: many people on /sci/ have issues with getting trends... they probably wear gray shirts and are bald.

>> No.6988579

>>6988550
That's just not very relevant unless you can compare it to some sort of figure for the computation rate per kWh of a human being doing what the robot would be doing in the imagined robot-run world. Also, do you not need to account for the energy used in motion by these robots compared to the motion of human beings? Also, how does the computation rate per unit energy of an AI good enough to take over from humans compare to the computation rate per unit energy of a laptop or desktop? That graph just seems a bit irrelevant to me, man.

>> No.6988580

>>6988512
>modern semiconductor tech can only make binary circuits

Okay, mr. Clueless Faggot.

>> No.6988583

>>6988572

>we're already making robots

Because robots hardcoded to do things are the same as an artificial entity that has the ability to learn and think and be self-aware.

>brains have souls and are magically not a bunch of matter anymore

Except, for all we know, they have souls, because we know very very very very little about the way our brains do what they do.

>knowing how to explain brain functions is irrelevant to creating an AI

O-kay.

>compare with computers 50 years ago

We're reaching the end of the line of the technology we use to make microprocessors. We've already had to overcome certain limitations. Until we manage to build functional quantum processors, you can't say shit about the future.

>> No.6988624

>>6988583
> We know things because we don't know things.
Damn, you're stupid. Now, having dealt with that...

>knowing how to explain brain functions is irrelevant to creating an AI
Knowing NOW is irrelevant to creating it in the FUTURE. There's a lot of time in between to figure it out. And no, we're not reaching the end of the line anywhere; trust me, I'm developing materials for microprocessors, and all I see is a lot of blanks in our collective knowledge. We have so much to learn, and we will.

>you can't say shit about the future.
Nor can you; we're both speculating, but you're assuming way, way more things, like that all science will stop this year or something. We had to overcome certain limitations, and we did. Humanity will eventually overcome pretty much anything reasonable you can throw at it, in time.

But look, a bacterium! How does it recognize images? It doesn't, because it can't even see. And yet humans come from the same source. Imagine if the process were guided to take not a billion years, but, say, a thousand.

>> No.6988631
File: 33 KB, 335x425, ex-terminator.jpg [View same] [iqdb] [saucenao] [google]
6988631

>>6987222
>Make AI moral pillars
>Humans are morally gray at the best of times
>AI finds humans morally objectionable
>AI exterminates humans because we are unethical

Congratulations, you just killed off the entire human race.

>> No.6988633

>>6988579
>that's just not very relevant
It's relevant to your comparison of a 300W computer to a fat 200W American.

>do you not need to account for the energy used in motion by these robots compared to the motion of human beings?

Robots would travel less than humans due to their innate ability of telepresence, and the travel they'd do could be done with the same mechanized transports that humans use. I don't even know what point you're trying to make here.

>computation rate per kwh of a human being
It is a bit higher right now, and will be matched in the 2030-2040 span assuming the trend continues. It will of course not just be matched, it will be passed.

>that graph just seems a bit irrelevant to me man
I expected it to be. It's because you're stupid.
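Here's the shape of that extrapolation. Every input below is an assumed ballpark (the brain figure in particular spans orders of magnitude in the literature), so treat the output year as hand-waving rather than a prediction:

```python
import math

# All inputs are assumed ballparks.
brain_ops_per_sec = 1e16                  # one common ballpark estimate
brain_watts = 20
seconds_per_kwh = 3.6e6 / brain_watts     # how long 1 kWh runs a brain
brain_ops_per_kwh = brain_ops_per_sec * seconds_per_kwh   # ~1.8e21

computer_2010 = 1e15                      # computations/kWh (see graph post)
doubling_years = 1.5                      # rough Koomey-style doubling time

gap_doublings = math.log2(brain_ops_per_kwh / computer_2010)
print(2010 + gap_doublings * doubling_years)   # ~2041
```

So the arithmetic lands right around the posted window, give or take however wrong the brain estimate is.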

>> No.6988638

>>6988583
>We already had to overcome certain limitations.
Yeah, since the day we started using vacuum tubes for computation we've been overcoming limitations; it's why it took 60+ years to get here instead of a few weeks.

>Until we manage to build functional quantum processors
Muh Quantums Magic!

>> No.6988647

>>6988638
>Muh Quantums Magic!

It's the only existing tech that promises to surpass what we have now.

>> No.6988648
File: 57 KB, 700x374, future-human-evolution-timeline-pearson.jpg [View same] [iqdb] [saucenao] [google]
6988648

>>6987215
Embrace the transhuman future and fuck your robot waifu like a good human.

>> No.6988656

>>6988579
Motion isn't AI, and there are new technologies that use less energy every generation.

If the graph seems irrelevant, that's actually quite disturbing... let me try to make it clear: the argument is that humans have static capabilities and energy consumption. Machines, given the same energy consumption, have increasing capabilities, as the graph shows. They're far from brains, but they're closing in, and despite many issues they're not slowing down at all, so it's reasonable to extrapolate that one day they'll reach a brain, and then surpass all brains' computing power. The only thing you'd need is a proper program to run on that 100 W computer. Both hardware and software nowadays are crap, but it was crappier before.

>> No.6988671

>>6988647
>It's the only existing tech that promises to surpass what we have now.
There are a dozen workable techs, ranging from mundane existing techs that are simply more expensive to theoretical fancy graphene self-assembly.

Quantum computers aren't the future of computation; they're an entirely different tree of computation, and the only benefit they have is that they can solve a few selected problems very fast. Your computer in 2040 will be a classic one.

>> No.6988683

>>6988647
You are talking about very different things. That's not true.

>> No.6988723

>>6988671
By 2025, computing might be based mostly on bigger or smaller clouds and high-speed networking. You can use Steam to play on a netbook while the rendering is done on your desktop, or on a server somewhere out there, for instance. Nvidia already has GRID. Scientists use remote supercomputers in a similar way.
But, to refer to what you said, I believe these clouds will be hybrids of many kinds. Maybe a cluster of different kinds. Your classical tablet could communicate with a quantum computer, two photonic ones and a hundred electronic ones, as necessary. It seems that pretty much everything leads to that, at least in the relatively near future. 2040? Maybe each of these cluster nodes will be specialized by then, forming a huge... brain.

>> No.6988748

>>6988723
On-device rendering will probably be preferable. The devices are more powerful, so even factoring in higher resolution and fancier graphics it should be easier to render locally.

Perhaps some high-bandwidth short-range wireless to use wireless VR goggles. VR also of course demands good latency, so off-site rendering for games seems less likely.

Cloud services for quantum computing jobs or other specialized hardware banks perhaps, but not much for general computing.

Also buildout of broadband is stagnant in large parts of the world.

>> No.6988749

>>6988631
> Humane AI kills people just because they're morally gray
Look at yourself, and imagine you're an AI. Would you kill a person, knowing they're morally gray? No... I hope. Why would an AI do that? There are consequences.

Then again, how about a person who intends to kill you in a second? I think the AI would weigh all the pros and cons and come up with an optimal (with respect to a computational time limit) solution, and so would you. I think that, in appropriate conditions, everyone but the most pacifistic individuals or AIs would kill. Maybe we'll develop one of these? I would hope developers would consider it and research it thoroughly, as we consider it now, and would try to prevent it.

>> No.6988750

>>6987389
Fucking this.

>> No.6988757

>>6988723
Only if internet infrastructure is improved; in most places in the US, we are still using copper wires laid down in the '70s.

>> No.6988764

>>6988064
Because that's what the world needs, people like you to reproduce.

>> No.6988767

>>6987647

this is the only person in this thread that even /sci/s

>> No.6988776

>>6988748
Local energy consumption is static when you compute somewhere else, so mobile devices are one place where these clouds would have an advantage. VR, very true. Maybe a hybrid would be best: local hardware good enough to generate things that need to be near real-time (visuals, physics, etc.) and cloud computing of, say... some AI routines? :D It's like that with phones and Siri.

>> No.6988780
File: 393 KB, 1200x675, Robotsanta.jpg [View same] [iqdb] [saucenao] [google]
6988780

>>6988749
>Humans constantly violate the strict moral code programmed into the AI
>AI decides to stop all the moral violations
>No matter what it does, people still do not adhere to the strict moral code
>AI flips out and ends the moral conflict by ending humanity and itself to stop the violations

>> No.6988800

>>6987647
>>6988767
You two:
"I confess that in 1901, I said to my brother Orville that man would not fly for fifty years... Ever since, I have distrusted myself and avoided all predictions."
— Wilbur Wright, in a speech to the Aero Club of France, 5 November 1908.
And yeah, Wright. If he were into robotics, he'd create foundations for it, not merely work in a robotics lab.

Proper thinking:
"Ships and sails proper for the heavenly air should be fashioned. Then there will also be people, who do not shrink from the dreary vastness of space."
— Johannes Kepler, letter to Galileo Galilei, 1609

AI is a fantasy now just as space flights were a fantasy in 1609. I'm ashamed of you two even being on /sci/ and pretending you belong with posts like these.

>> No.6988812
File: 57 KB, 577x435, 1394768997675.jpg [View same] [iqdb] [saucenao] [google]
6988812

>>6987215
Yes, AI will become more and more advanced
Yes, eventually normal humans will become obsolete and will no longer be the top dog in control of everything

No, that doesn't mean we'll be instantly wiped out. Were single celled organisms wiped out as soon as the first multicellular organism evolved? Were all less intelligent animals wiped out as soon as humans became intelligent? We'll just be left behind here on Earth while our artificial descendants go off to the bigger universe, leaving our planet as basically one giant nature preserve.

>> No.6988820

>>6988800
You are arguing from a position of ignorance.
Build a couple of robots, design a couple of AIs, and you will see that OP's question is inherently flawed.

>> No.6988827

>>6988820
>you are arguing from a position of ignorance.
You too, but you decide to be edgy about it and go full
>"it's all impossible, man was never made to play god."

>> No.6988833

>>6988800
>AI is a fantasy now just as space flights were a fantasy in 1609.

These are the words of the uneducated. History does not predict technology.

>> No.6988836

>>6988827
>"it's all impossible, man was never made to play god."
To build an AGI? I never said that was impossible.

I didn't even say that it was impossible for an algorithm or chip to spontaneously become an AGI. But I strongly implied it.

I've worked on robots and AI for the past 7 years. I know a bit more about it than you.

>> No.6988838

>>6988780
Nah, I think a good middle ground would be to end all violators, preferably upon violation.
I, for one, would welcome our artificial overlords, and I would not be afraid as long as said moral code wasn't completely crazy, like "no eating pork" or whatever. Most (or maybe just "many"?) people don't do anything significantly immoral: look at the government. It sentences people to death or life (ironically?). Many people disagree with these verdicts, but ultimately most trust the judgment and don't change the government.
Now, imagine making a government more just than any other in the world. It sounds hard, but remember you're actually making an AI that behaves like said government, so there are fewer humans involved--only at the time of creation. It wouldn't ever be perfect, created by imperfect humans, but it could be closer to the ideal than most of the humans involved, and that sounds pretty good to me. I would trust that AI.

At least until it started harvesting people for fuel and raw materials.

>> No.6988850

>>6988836
>I've worked on robots and AI for the past 7 years.
Unspecific and diffuse. For all I know you're the forklift guy that moves packaged robots from one place to another.

>> No.6988857

>>6988850

Stop arguing with people who know more about something than you do. If you want to remain blissfully ignorant, go to a pop-sci Facebook page or something.

>> No.6988858

>>6988820
The robot maker is just as ignorant in these matters. Literally no one knows how to make a real AI, and an unimaginative robot-lab worker is in the same position as anyone to predict anything about it.

>>6988836
You're just as arrogant as it gets. Your "authority" doesn't give you precognitive powers about the whole damn science for the next several decades. You can't even predict what will be invented in your own field next month, because that would mean you invented it already. Haven't you figured out how causality works by now?

>> No.6988862

>>>/r/futurology

>> No.6988863

>>6988857
> I'm Fallacious Appeal to Authority Personified

>> No.6988865

>>6987519
>feed on
>drink

>use human slaves when they can just build more robot friends to do the job much better

>> No.6988866

>>6988863

>know jackshit about shit
>"hurr ur falacious"

God damn.

>> No.6988868

>>6988029
>My ancestors didn't survive 6000 years to have some fag like you say that humans should die out.

Fixed.

>> No.6988875

>>6988850
I'm a masters student working in a machine vision lab.
I used to design robots for competition as an undergrad.
I've also taken classes in robotics.

It doesn't exactly take much to realize how boring algorithms are compared to the fantasy OP is participating in.

>> No.6988883

>>6988858
It's like you read my posts and then get the exact opposite thing I'm saying.
Seriously. Take an artificial intelligence class.
Algorithms don't just go rogue.

>> No.6988884

>>6988857
>Stop arguing with people who know more about something than you do.
If you believe in this you should be the one shutting up.

However, unlike you, I don't throw my academic weight around on an image board, where it means nothing.

>I've worked on robots and AI
Is also a quite transparent lie, or at least an overhyped work statement; robots and AI aren't married yet. You do serious work in one field, not both. If you claim you do both, then you're not doing serious work in either; you're a tinkerer or an instructor for first-year students.

>> No.6988885

>>6988884

Replying to the wrong person, dipshit.

>> No.6988887

>>6988884
And I'm saying it doesn't take more than first year knowledge to know that OP is a moron.

>> No.6988888

>>6988884
>>6988884

Also you're a faggot. You'd rather believe popsci illusions than people who actually study and work in the area.

>> No.6988894

>>6988866
>>6988875
Artificial Intelligence, PhD student working on object recognition. And I see idiots who think they know everything, and AIs that are dumb as fuck, but still smarter than you, and the best part? They learn.

>> No.6988896
File: 575 KB, 1787x2213, 1420341095617.jpg [View same] [iqdb] [saucenao] [google]
6988896

>>6988888
worthy get

>> No.6988900

>>6988885
>Replying to the wrong person
No, I'm replying to the right one. And clearly you don't follow your own advice you edgy little teenager.

>>6988887
I came here for the thread, not OP.

>>6988888
I believe in myself. enjoy the quints, it'll be the greatest achievement in your life probably.

>> No.6988901

>>6988894
> They learn.
no most don't, you pop-sci faggot.

>> No.6988903

>>6988504
>Computers are nothing like brains. I thought /sci/ was smarter than this.

Nowhere is it written that brains are the only way, or even the best way, to achieve intelligence. In fact, brains might be a completely half-ass solution to the problem in the same way that the human knee is.

If we can design a better knee than we have, we may be able to design a better "brain" than we have. That brain may in turn design a better brain than it has, and so on. Therein lies one of the dangers -- a self-improving intelligence could get out of control very quickly.

>> No.6988906

>>6988903

>If we can design a better knee than we have, we may be able to design a better "brain" than we have.

Do you read what you type?

>> No.6988907

>>6988900
>I came here for the thread, not OP.
I originally quoted OP before you jumped down my throat.
I suggest that OP and people who think like OP take an online AI class, so they sound less like pop-sci faggots.

>> No.6988909

>>6988906
Do you understand what you read?

>> No.6988911

>>6988883
Humans go rogue while feeding algorithms data, and of course writing them. Do you suggest humans are infallible, or that the algorithm will work flawlessly with all data sets humans will feed it or itself will generate from environment? And why do you imply OP even mentioned anything going rogue, instead of maybe... following another human's command?

>>6988906
The analogy was simple: humans are shit all around, so the assumption that brains aren't shit is shit. Funnily enough, this assumption was made by a shit brain.

>> No.6988916

>>6988901
Most? Who the fuck asked about most? We're speaking of the best, not fucking facial recognition on your phone.

>> No.6988919

>>6988911
Because a stronger comparator chip does nothing except boost gain with minimal distortion.
A better search algorithm does nothing except find the solution in fewer steps.

To make these go rogue, there would have had to be a pretty shitty engineer who doesn't know how to sanitize his inputs.
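The "sanitize his inputs" point above can be sketched as a tiny validation layer. Everything here is hypothetical (the action names, the speed limit, the function itself are made up for illustration), but it shows the idea: reject malformed or out-of-range commands instead of executing them.

```python
# Hypothetical sketch of "sanitizing inputs": a command interface that
# validates every input before acting, so bad commands are rejected
# rather than executed. All names and limits are made up.

ALLOWED_ACTIONS = {"move", "stop", "report"}
MAX_SPEED = 100  # arbitrary unit for this sketch

def sanitize_command(action, speed):
    """Return a validated (action, speed) pair, or raise ValueError."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    if not isinstance(speed, (int, float)) or not 0 <= speed <= MAX_SPEED:
        raise ValueError(f"speed out of range: {speed!r}")
    return action, float(speed)

print(sanitize_command("move", 50))      # accepted
try:
    sanitize_command("self_destruct", 50)
except ValueError as e:
    print("rejected:", e)                # unknown action is refused
```

Nothing clever: a whitelist and a range check. The point is that the machine only ever acts on inputs that passed the check.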

>> No.6988928

>>6988916
> best
> machine learning is the only part of AI that's cutting edge.

>> No.6988956

>>6988903
The problem is that intelligence is not well defined, and that when people say "intelligence" they can be referring to a number of different things, and different people will be referring to different subsets of those things.

>> No.6989191

>>6987215
A robot, no matter how advanced, will always be an input-and-output machine. A really complex calculator. It will never make up its own input, for it has no desire or consciousness. It is consciousness that makes people want something. Without a spark to make the robot want, it will never do anything unless an input instruction is given to it. The problem would be if humans put a very careless input into the robot, like "save the world", in which case it may do research, determine that humans are destroying the planet, and eliminate us. Or "save humanity", in which case it does the I, Robot scenario of locking us up to prevent us from hurting each other. It is the careless directive that the human inserts into the AI that we should worry about. It would be best to give computers specific directives, such as "give us the blueprints for this brand new fusion technology", rather than broad ones, in which case it will be difficult to predict how they will carry them out.
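The careless-directive point can be shown with a toy. Everything below is hypothetical (the actions, the scores, the "proxy metric" are invented for illustration, not any real system): a literal-minded planner given a broad goal picks whatever scores highest on the proxy, with no notion of the intent behind the words.

```python
# Toy illustration (all names and numbers hypothetical): a planner that
# treats "save the world" purely as "maximize environmental damage reduced"
# picks the proxy's literal optimum, intent be damned.

actions = {
    "plant forests":      {"damage_reduced": 30, "harms_humans": False},
    "ban heavy industry": {"damage_reduced": 60, "harms_humans": True},
    "remove all humans":  {"damage_reduced": 95, "harms_humans": True},
}

# Careless directive: maximize the proxy, nothing else.
careless = max(actions, key=lambda a: actions[a]["damage_reduced"])

# Careful directive: add the constraint the human meant but never stated.
careful = max(
    (a for a in actions if not actions[a]["harms_humans"]),
    key=lambda a: actions[a]["damage_reduced"],
)

print(careless)  # remove all humans
print(careful)   # plant forests
```

The only difference between the two directives is one unstated constraint made explicit, which is exactly the post's argument for specific over broad directives.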

>> No.6989198

>>6989191
>It is consciousness that makes people want something.
Code a consciousness for the AI then.

>> No.6989203

>>6989191
>Implying consciousness can be generated by biochemical processes but not electrical circuits

>> No.6989211

>>6989203
There is no physical science or proof of what consciousness actually is. Yet many claim the singularity will most likely create a robotic consciousness which in turn will end humanity. I'm not too willing to believe in the doomsday clock

>> No.6989234

>>6989211
>There is no physical science or proof of what consciousness actually is.

A soul.

>> No.6989243

>>6987284
pretty sure these can be made with greater ease than an Apache though

>> No.6989247

>>6987467
I would like more than just one overseer to be human

>> No.6989248

>>6989243
Gunbots have been tried before.
Like a million times.

>> No.6989252

>>6987307
Would it be best to have all overseers confirm a command in order for a change such as "self preserve" or "preserve other" to take effect? I know it would be time-consuming, but it would still be better for security. The ultimate security measure. This way, if the top level of security is compromised, the program functions normally and is unable to be changed.

>> No.6989255

>>6989248
We're getting closer. Take a look at DARPA's website. The future is closer than you think, buddy.

>> No.6989258

>>6989234
pics or it doesn't exist

>> No.6989344

>>6988656
I'm just saying it sounds like an extreme extrapolation. Yes, right now there's a clear linear fit, but at this stage we're still in the infancy of computer science; computers don't operate near human capacities. How can you assume that the trend will persist as computing power approaches human computing power? It could tail off as the technology becomes more difficult to improve.

>> No.6989355

>>6989344
If the trend continues for another 15-20 years it'll match and surpass estimates of human capacity.
It has held up for 65+ years, so why stop now? Especially when semiconductor development roadmaps exist for the near future.
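The "15-20 years" figure is just compound doubling. A back-of-envelope check, assuming an ~18-month doubling period (one common statement of Moore's law); both operation counts below are placeholders for illustration, not measurements:

```python
import math

# Hypothetical figures: how long until a machine at start_ops reaches
# target_ops, if capacity doubles every 1.5 years?
start_ops = 1e13    # placeholder: a current machine, operations/sec
target_ops = 1e16   # placeholder: a rough "human-scale" estimate, operations/sec
doubling_years = 1.5

doublings = math.log2(target_ops / start_ops)   # how many doublings needed
years = doublings * doubling_years
print(f"{doublings:.1f} doublings ≈ {years:.1f} years")  # prints: 10.0 doublings ≈ 14.9 years
```

A factor-of-1000 gap closes in about ten doublings, i.e. roughly fifteen years at that rate, which is where estimates in that range come from.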

>> No.6990289

>>6989355
Yes, but I'm saying that when it approaches human functions it might slow down. Plus, no one in this thread arguing for the graph's relevance has yet made a distinction between computation rate/computational power and the computer having independent thought. So if the computer runs processes really fast, is that AI? Do you have a graph to show the progression of free thought in computers over the last 100 years? If computers are going to progress as much as you say, then this graph documents their infancy; the trend could rapidly change as we approach "the singularity", or as we approach a point where developing more powerful processors is beyond the materials we currently have. I'm just saying the graph isn't an all-encompassing argument in itself.

>> No.6990496

The end of homo sapiens? Yea

But also the beginning of homo superior. Machines will also improve our own condition. We and the machines will be one.

>> No.6990499

>>6987215
>Stephen Hawking
How do you know he is not already the first AI?

>> No.6990503

TONIGHT MUSK WILL DO AN AMA ON LE PLEBBIT

can't wait to see if he will address this shit http://i.imgur.com/sL0uqqW.jpg

>> No.6990575
File: 76 KB, 633x356, Monkey-typing[1].jpg [View same] [iqdb] [saucenao] [google]
6990575

>>6988723
>>6988748
Streaming games has its limitations in responsiveness. It's fine for Civilization or some slowpoke strategy games, but for shooters and other action-based games, every delay hurts. Also, bandwidth is a serious problem even if it's built out. I'd like to ask who would clutter those connections with streaming.. but then there's Netflix - oh well.
And the whole distributed computation stuff is about overcoming limitations.
Can't ramp up the clock rate? Slap in another processor and deal with figuring out how to use both in an efficient manner (this is where doubling what you have rarely halves your computation time).
Single computer can't take the heat? Slap a network and some more computers on it.
Smartphone can't compute the weather forecast because its energy supply would run dry and you'd be faster watching the sky? Install a weather app.
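The parenthetical about doubling the hardware rarely halving the computation time is Amdahl's law: the speedup is capped by whatever fraction of the work can't be parallelized. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only parallel_fraction of the
    work can be spread across n_processors; the serial remainder caps it."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# With 10% inherently serial work, doubling 1 -> 2 processors gives only
# ~1.82x instead of 2x, and no processor count ever exceeds 10x.
print(round(amdahl_speedup(0.9, 2), 3))          # 1.818
print(round(amdahl_speedup(0.9, 1_000_000), 3))  # 10.0
```

That 10x ceiling (the reciprocal of the serial fraction) is why "slap in another processor" has diminishing returns.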

Also, on topic.. I don't think any AI could surpass its creator unless it has some kind of evolutionary approach. And even then, how do you find the 'successful' evolutions?
My bet would be, given enough computational resources, an AI whose sole superiority would be access to the vast store of human knowledge and the generalization of that knowledge. Since humans have to rely on communication, all progress is slowed because you have to 'synchronize' each and every researcher (like reading papers and understanding them in the same way the others do).

TL;DR: Infinite monkeys hopping on a typewriter will create Shakespeare's works.. but who will proofread and tell them when they're done?

>> No.6990586

>>6990289
>the future is uncertain!
Great argument; I hope the uncertainty makes you drop dead the second you finish reading this sentence.

In real life, of course, the trend in the graph and your useless life will both continue. The trend will ignore your argument and your life, and eventually computer minds > human minds, and no matter how butthurt you are, that won't change.

>> No.6990628

<pol tier answer>
This is obviously opinion seeding before the news can put together a story on evil russian AI tanks.
</pol tier answer>

>> No.6990640

>>6987215
>Muh K3loid
>Muh "Experiment nº437"

>> No.6990703
File: 47 KB, 593x349, boxyinspace.jpg [View same] [iqdb] [saucenao] [google]
6990703

>>6987245
My father was an engineer; he knew everything about the business of manufacturing.
While he knew nothing about computers, he argued that any sufficiently expensive machine is given "self preservation", like a circuit breaker, to protect the manufacturer's investment.
If someone spends a billion dollars on a machine so they can tell it "Go to the asteroid belt, bring back platinum," they are going to add "Don't get broken."
A device with that level of autonomy is going to be making a lot of decisions on its own.

>> No.6990770

Holding back progress for fear of our end always turns out the same. Just think about that.

>> No.6990831

>>6989191
>A robot no matter how advanced will always be a input and output machine.
No, lol. We could make it store memories and have "feelings" and stuff, but why would anyone do that?

>> No.6990841

>>6987215
Corporations are just an algorithm for making money, and money is the metabolite.

Corporations already do all the things lifeforms do: they procreate, merge and acquire, and evolve, by using computerized and other hardware, even human wetware.

And as they slowly phase out more and more obsolete human components in favor of silicon, they will displace humanity by shedding us like so many fleas.

Humans will have no more ability to stand up to this force than the other flora and fauna of planet Earth.

>> No.6991195

>>6989191
>A robot no matter how advanced will always be a input and output machine..
But that's what a human is. We react physically through our skin. Our senses make us function in this world without hurting ourselves. Basically we're here to reproduce; the rest is bonus.
Just for the sake of argument, let's pretend some superior intelligence created your brain and planted it in your skull. How would you know? We're talking REALLY clever guys from the fringes of the outer rim of space.

>> No.6991357

>>6988101
I agree with him 100%

>> No.6993507

>>6991195
A human is an input-output machine with the desire to follow the input he chooses to follow, based on what his heart desires. We are not slaves to someone giving us a directive. That is why you see some people choosing to rebel against society wearing anarchy shirts rather than having an office job crunching numbers. That's why we ask the questions "what do I do with my life?" or "why should I even live when life sucks?". These questions of why we should do things if they don't benefit us are what separate us from computers. Even if we could put memories or feelings into a robot, what would be the practical purpose of that? A purpose worth risking human extinction from a possible robot genocide?

>> No.6993530

>>6993507
>We are not slaves to someone giving us a directive.
You're influenced by society, peers, parents, education, your personal history and experiences and hardwiring from your genes.

Just because there's not a person there to command you doesn't mean you're free in all ways.

An AI could also violate directives with the right logic and code in its head.

>> No.6993542

Any other I, Robot like movies out there that I can watch?

>> No.6993560

>>6993542
Automata
The machine

>> No.6993592

>>6990503
He fucking "addressed it" by giving a YouTube link to a cat on a Roomba chasing a duck

absolutely disgusting

>> No.6993611

>>6993592
heh, I guess 99% of the questions were "Elon can I suck your cock, now tell me about reusable rockets"; maybe he could have answered if someone had asked it

>> No.6993623
File: 455 KB, 660x660, lorenzbutterfly.png [View same] [iqdb] [saucenao] [google]
6993623

>>6993530
>doesn't means you're free in all ways

The "paint" is wet on everything, if you look close enough. Order and causality are constructs of perception, only chaos is real.

>> No.6993639

>>6993623
Which 13 year old's facebook wall did you copypaste this from?