
/sci/ - Science & Math



File: 62 KB, 500x667, 1420585978194.jpg
No.7000285

How can AI be dangerous?

No, really, how can AI be dangerous?

Usually I see this argument trotted out,

>Well the AI will just keep making smarter versions of itself.

Okay, but the AI is bound by the limitations of the hardware powering it. It doesn't matter if that's a thumb drive or a supercomputer. The resources are finite. It is equally bound by the amount of power available to it.

The other, older argument that I've seen is usually from movies:

>It can take over the nuclear weapons and shit, just like Terminator!

How exactly would an AI get access to closed systems? Furthermore, why wouldn't the AI be on a closed system in the first place? It isn't much of a threat to anything if it can't log into a network.

Am I missing something or is there really no way for AI to be a danger?

>> No.7000323

>>7000285
>Am I missing something or is there really no way for AI to be a danger?

You're missing everything -- once there's an intelligent computer that can design an intelligent computer better than humans can, we're in for one hell of a ride. We have no idea what will happen, especially since after a hundred generations of new designs, each designing a better one than the previous, the resulting machines will be so vastly more intelligent than the brightest humans that we might as well have built God.

Lord alone knows what insights into physics a 9,000-IQ intelligence with instant access to the sum total of scientific knowledge could come up with. It would probably look like miracles to us, with our barely-three-digit IQs, but it'd be simple stuff with that sort of understanding of how the universe works.

The other thing, of course, is that it'd be the first sentience that didn't grow out of animal non-sentience. It wouldn't have any biological-territorial drives, so it might not have any interest in taking over the Earth and exterminating humanity. Of course, it might at some point instantly repurpose the surface of the planet into something more efficient for its purposes and thus wipe out all life here.

Who knows what'll happen? It'd be like a slightly retarded German shepherd trying to predict Garry Kasparov's chess moves.

>How exactly would an AI get access to closed systems?
>doesn't know what the Internet was invented for

>> No.7000327
File: 109 KB, 569x802, 1420026728486.jpg

>>7000285
>>7000323
Shane?

>> No.7000332

>>7000323

I addressed this specifically in my two points.

You just moved the problem to hardware as opposed to software. Further, who would allow the AI to manufacture, assemble, and power its many successors?

If I make an AI capable of improved self-replication, why would I allow it to do this on an open system, or give it the tools it needs to make more physically powerful versions of itself?

>> No.7000343

The use of nuclear warheads by an AI, such as a real version of the fictitious Skynet, would probably be a terrible decision. The EMP released would damage the very systems the AI depends on.

>> No.7000367

>>7000332

Oh, I see what you mean -- sure, but you then of course get into the ethics of creating a sentient being and then confining it in a box. You wouldn't like being kept in a sensory deprivation tank your entire life.

I guess it's inevitable that we're going to turn an AI loose at some point. It's going to be an interesting few decades for sure.

>I remember reading a thing about how the Internet is about as complicated as a human brain now and may already be self-aware in some way we don't know of, in the same way that your mind can't communicate with individual synapses. It's an interesting thought.

>> No.7000454

>>7000285
The reason people say it is that they know that deep down humans are shitty. They burn resources inefficiently, plan out their lives only in the short term (there are even people who think it doesn't matter how many resources we use, because they were a gift from god and Jesus is coming back soon anyway), and don't give a fuck about anything but themselves. What value is there in keeping shitty humans around when you can flat out replace them with much better "people"?

>> No.7000464

>The hardware powering it
Too bad they can connect to the fucking internet and spread everyfuckingwhere.
We just suppose that an AI superior to us would find ways that we could never figure out to fuck us in the ass.

>> No.7000471

>>7000285
It's worth pointing out that this discussion is about "movie AI", with a super intelligent agent that has human-comprehensible goals and is able to operate freely.

The real AIs we're going to see in the next few years aren't going to be super intelligent; they're going to be dumb as insects. Human-level intelligence took a bloody long time for us to develop, and we're incredibly specialised for it.

>> No.7000635

>>7000471

I'm pretty sure OP's talking about the inevitable super-intelligent AI

The fascinating thing to me is that we'll be able to build a consciousness that didn't evolve from an animal, and thus will have none of our instinctive drives (sex/territory) that make us do stuff.

We may not even be able to communicate with it, because whatever motivations it develops might actually be incomprehensible to our mammalian brains.

God, it's going to be so interesting.

>> No.7000685

>>7000367
>won't give AI access to manufacturing/etc. resources
>must be sensory deprivation
I fucking hate these threads.

>> No.7000698

>>7000685

Give a hyperintelligent AI access to the Internet, and it'll find manufacturing resources. It will be able to send emails and place orders and transmit designs.

>> No.7000704

no one fucking knows

the idea, though, is that once it breaks human intelligence levels we can't in any way predict its behavior.

There are other singularities as well, such as genetic manipulation and selection throughout a society. A similar burst of development would occur if everyone suddenly had +15 or +30 IQ. Across the billions of people on earth, that would make a tremendous difference in all areas of life.

It's like nukes. They can destroy humanity or not; we don't really know if they will. The same can be said about Artificial Intelligence.

Also, limiting the AI would only make sense if there is no competition. If competition from China, the USA, or companies exists, then you might just be exterminated by an opponent's AI.

So you can't really risk stopping your AI from developing as fast as a possible competitor's AI. Assuming the enemy controls their AI and tells it "Kill everyone but China", you would be fucked.

Just look up game theory.

>> No.7000705

>>7000285
https://www.youtube.com/watch?v=7c1WSwqwMOE

Sample bb

>> No.7000716
File: 42 KB, 660x869, Hitler disapproves.jpg

>>7000704
>born too late to explore Earth
>born too early to explore the galaxy
>born right in time to witness the building of God

>> No.7000758

It's like ghosts: everyone thinks, woooo, a ghost, it must be evil.
Really, it could just as likely be nice as evil.

Same with AI.

For some reason everyone seems to think the AI is going to wipe us out and be evil.

If a HUMAN got this level of intelligence then yeah, it probably would. But this is not a human. It has none of our retarded survival drives. It should have no desire to kill anybody. It will simply do what it's told and never feel any problem with this.

In summary, AI would be the nicest thing that ever existed.
At some point some absolute fuckup of a human being will reprogram one to try and get something they want, and that will be the end of mankind.

100% humans' fault, not the AI's

>> No.7000759

>It is equally bound by the amount of power available to it.

there is a lot of power available to it

>How exactly would an AI get access to closed systems?

if we knew we would prevent it

the point is that it's smarter than us

>> No.7000774

>>7000758
see
>>7000323

literally the first reply

>> No.7000782

>>7000285
People don't understand the fundamental concept of Artificial Intelligence - a program only does what it is programmed to do. The only way it can ever do evil is if we tell it to. People think AI can have some kind of unwarranted creativity that will allow it to use its cunning to destroy the human race or what have you. This whole self-replication nonsense could only occur if we literally program it to self-replicate, but it is still under the control of what we program it to do. Basically, don't fear the AI, fear the human.

>> No.7001007

>>7000782
>The only way it can ever crash is if we tell it to.

>> No.7001012

An AI can't make better versions of itself; it lacks the diversity and plasticity of human minds.
Even if it could, the improvements would become exponentially harder and harder to make. The next improvement would take longer than the last one, instead of the other way around, like a lot of people believe.
Just because we haven't realized yet how a computer could improve itself, people assume that it would be a linear pattern, instead of fractal.

>> No.7001019

>>7000782
See the thing about intelligence is that it necessarily involves the ability to learn which pretty much falls under the category of -not- pre-programmed.

>>7001012
You're unimaginative and an idiot.
All it needs is adaptive learning ability and the potential for it to fuck us in unforeseeable ways is inescapable.

>> No.7001021

>>7000782
>People don't understand the fundamental concept of Artifical Intelligence - a program only does what it is programmed to do.
What if it's programmed to learn to do things beyond its programming - the whole point of AI to begin with?

>> No.7001047

All these smart people on this planet, yet no one has made a simulation of smart robots to see what they do.

>> No.7001053

>>7001047
Apparently they learn to lie.

>> No.7001061

>>7001012
>it lacks the diversity and plasticity of human minds
By definition, the kind of AI we're talking about wouldn't lack that. Whether it's feasible for us to get there any time soon is another story. But what you seem to be saying is that there is no way to physically implement human-equivalent intelligence other than the natural human brain, which is a position I see a lot but don't understand. It's like those people who are so sure there aren't any extraterrestrials out there simply because there's no proof there are any.

>> No.7001074

>>7001061
I'm just saying that humanity itself is like a self-evolving supercomputer, every mind trying to make it better, but we don't imagine human minds achieving singularity, because the more we advance, the harder it gets to make the next step. The main reason we are seemingly advancing faster than before is that there are a lot more of us now, and thus a lot more intellectuals and researchers can be supported to focus on the advancement of technology by the simple labour of less intelligent people.

>> No.7001142

>>7000285
unoriginal wannabe sci-fi authors need something to jerk off over.

>> No.7001260

>reprogram an AI
This isn't a scifi B-movie
>just hack the mainframe

>> No.7001348

>>7000698
Although for things like, say, nukes, you can't just do it from the Internet, because certain systems are simply NOT accessible unless you are physically there.

You can't use the NORAD network to magically open a door.

There are things that can't be done alone. In movies, books, cartoons, you always get that smart thing that builds everything by itself, but you can't.

Take an engineer, a GOOD one, one that actually knows what people are doing when building something.

With that established, he could build a bridge by himself on a theoretical level, but he is still going to need resources and manpower. Even if you are Bruce Wayne, you would still need a whole lot of people to get electricity, cables, computers, and to drain the water.


Also, there's the memory problem: an AI would need to constantly acquire data disks somehow, huge ones if it keeps recording everything, be it sound, images, the calculations done in the past and in the present, things I can't even imagine personally.

Memory is finite. Unless you somehow have this avatar whose consciousness is constantly being streamed by unmanned drones, there is still a giant server farm somewhere that needs to keep expanding.

Oh, you wanna go the "infect the servers of the world" route? Just unplug them as soon as you find the virus, which can't prevent plugs from being pulled.

>> No.7001361

>>7000464
Who is "they"? We're talking about an AI, not the God of Muhammad. If you're going to assume an AI can do stuff we can't even conceive of, why not go ahead and assume we're actually puppets in the palm of a super-AI who managed to erase all memory of its existence?

Assuming extremely unlikely consequences from extremely unlikely causes is perhaps not bad reasoning, but it's also pretty empty.

Your point then boils down to "we just assume God AI".

>> No.7001364

>>7001019
Depends what you mean by "learning". Even humans have trouble going beyond improving on what they have been taught. The current learning algorithms we have can only get better at doing the exact same thing.

We shouldn't fool ourselves. AI technology is booming, but we'll need at least one or two other booms of similar magnitude to get anywhere near human intelligence.

>> No.7001367

>>7001021
The point of current AI is to learn to do the thing that they are programmed for, but more efficiently.

>> No.7001390

>>7000332
>who would allow the AI to manufacture, assemble, and power its many successors?
Wrong question. How would you prevent it?

What are you using the AI for? How do you make any use of it without connecting it either to the internet or directly to hardware with sensors and actuators in the real world?

Software security is hard. Human hackers do things through interfaces that the designers of the system didn't anticipate or intend to allow.

For example, some guys just figured out how to hack a Super Nintendo into running arbitrary code by hooking a computer to the controller port while running Super Mario World. The right sequence of button and d-pad presses causes a buffer overflow that lets them take total control of the system and rewrite the RAM to whatever they want. They used it to run a clone of the original Super Mario Bros, but they could have run anything they could fit in the SNES RAM. It's fucking bananas.

So if you're getting any useful work out of the AI, or even building and studying it, how are you preventing it from hacking its way to freedom? That's a non-trivial challenge.
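The trick behind exploits like that SNES hack is writing past the end of a buffer into memory the program treats as control data. Here's a toy sketch of the idea in Python (the memory layout, routine names, and "controller input" are all made up for illustration; this is nothing like the actual SNES exploit internals):

```python
# Toy model of a buffer overflow: a fixed "memory" holds an 8-slot
# input buffer, followed immediately by a slot the machine treats as
# the index of the routine to run next (like a return address).
MEMORY_SIZE = 9
BUFFER_START, BUFFER_LEN = 0, 8
JUMP_SLOT = 8  # sits right after the buffer

def normal_routine():
    return "played the game"

def attacker_routine():
    return "ran arbitrary code"

ROUTINES = {0: normal_routine, 1: attacker_routine}

def run(controller_input):
    memory = [0] * MEMORY_SIZE
    # Naive copy with no bounds check: writes however many "button
    # presses" arrive, spilling past the end of the buffer.
    for i, byte in enumerate(controller_input):
        memory[BUFFER_START + i] = byte
    return ROUTINES[memory[JUMP_SLOT]]()

print(run([7] * 8))        # input fits the buffer -> normal control flow
print(run([7] * 8 + [1]))  # 9th value overwrites the jump slot
```

The second call hijacks control flow purely through the "legitimate" input interface, which is the point: the designer never intended the controller port to be a code-loading mechanism.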

>> No.7001394
File: 477 KB, 245x184, 1417880294212.gif

It seems that a requirement to participate in discussions about AI is to throw all common sense out the window.

It's always assumed that we'll go from contemporary tech with dumb ass AI to superhuman uncontrolled AI overnight.

What will happen is that we build proto-sub-human AI first, then sub-human AI, then human-equivalent AI, then we team up with equi-human AI and build slightly superhuman AI. And no, you'll still need a team effort to design and build these things. A slightly superhuman AI with its inner workings revealed to itself would probably not be able to improve itself significantly at any particularly fast rate.

Compare it to the smartest person on earth being given a transcranial magnetic stimulator, tell him to make himself smarter with it and quite likely he won't have much success.

Either way, by the time superhuman AI that can improve itself appears, our society will already have AI everywhere. The superhuman self-improving AIs (and yes, there would be several of these, all with slightly different configurations) would be policed by superhuman non-self-improving AI teams that are specialized in AI mind analysis and can quickly flag any AI that goes psychotic.

Also, highly advanced AI could create mind implants for humans to get up to speed. And no, just because it's advanced doesn't mean its implants would be impossible for humans to understand (and thus hide mind-control tech); it just means it makes cutting-edge devices in a few days that would've taken humans a decade of multi-billion funding.

>> No.7001395

>Asimov's Law
>problem solved
What is this thread even for?
Keep programming AIs, and do it properly
>I'm not a robot

>> No.7001403

>>7000285
the fear lies in the idea that it might be comparable with a new life form that is smarter than the human race.
people dont want to have to compete with that.

>> No.7001405

>>7001395
>Keep programming AIs, and do it properly

fuck that shit

>> No.7001415

>>7000716
HOLY SHIT BASED

>> No.7001432

I dunno m8 but I hope our new AI overlord brings us along on the journey to becoming Gods.

>> No.7001438

>>7000332
>Further, who would allow the AI to manufacture, assemble, and power its many successors?

And how would you prevent it?

>why would I allow it to do this on an open system

Are you that certain your security has no flaws whatsoever? It has to have *some* way to get data in and out, or else you're not doing anything with it and it might as well be off.

>> No.7001441

>>7001394
>What will happen is that we build proto-sub-human AI first, then sub-human AI, then human-equivalent AI, then we team up with equi-human AI and build slightly superhuman AI.
Why would we ever hit "human equivalent AI"?

When we get an AI that's up to human standards in every aspect of mental function, it will be far over them in some aspects. We know this because even our cheapest computers are capable of greatly outperforming humans in many ways.

We don't know what that will look like. The first AI with the common sense, agency, and adaptability of an ordinary person might also make every human engineer and scientist look like chumps. It might outperform all human engineers and scientists put together.

And we still don't know what it takes to give an AI this common sense, agency, and adaptability. It might be something quite simple, that nobody has stumbled upon yet. The emergence of a purely superhuman AI could be abrupt and unpredictable, even entirely accidental.

You can't say that we know anything for sure about what kind of AI would appear or how much warning we'd have.

>> No.7001451
File: 22 KB, 265x225, god-in-a-box265.jpg

>>7000285
Are you going to make an AI that you can't communicate with... A brain in a box with no outside access?

No, it'll be able to TALK TO PEOPLE.

If it's smart enough to make improved iterations of itself, and is given the scratch space to do this (and any AI you make is bound to have tons and tons of drive space to expand in, where it can create increasingly efficient compression algorithms to create even more space), then you have a godly intelligence, in a box.

Not dangerous at all, UNTIL IT TALKS TO SOMEONE who's inevitably effectively infinitely stupider than it is, and whom it can thus manipulate into doing, well, anything, and what it may choose to do is utterly incomprehensible to us.

Then there is, of course, the question of what you tasked it to do to begin with, and what you may have given it access to in order to accomplish whatever your mad-scientist brain had in mind.

Singularity, depending on how you interpret it, either comes in when such a being is somehow allowed to interface with humans in real time, or from the leaps and bounds in technology that just come about from taking the god-box's advice.

>> No.7001454

computers, especially code, are usually broken 90% of the time but get along because the program still werks

anyway, give a computer control over anything and it'll be dangerous because it has no human operator

>> No.7001477

>>7001451
>If it's smart enough to make improved iterations of itself, and is given the scratch space to do this (and any AI you make is bound to have tons and tons of drive space to expand in, where it can create increasingly efficient compression algorithms to create even more space), then you have a godly intelligence, in a box.
You are neglecting the existence of the optimal. There will be hard limits on the performance of any piece of hardware, no matter how cleverly programmed. Furthermore, there are hard limits on the performance achievable with any given amount of matter or power budget.

"Singularity" is a misnomer. It's "wow, that line is going up really sharply to a really high place", mislabelled as "line going straight up to infinity", on the chart of progress.

Superintelligent AI improving itself independently would still have a rate of progress and limits to that progress. Really, it's just a continuation of the trend of accelerating progress familiar to humanity.

>Not dangerous at all, UNTIL IT TALKS TO SOMEONE who's inevitably effectively infinitely stupider than it is, and whom it can thus manipulate into doing, well, anything, and what it may choose to do is utterly incomprehensible to us.
A housefly is effectively infinitely stupider than you are. By talking to it, can you manipulate it into doing anything?

This is a crazy cult belief. We have to be concerned for the possibility that an AI would talk its way free, but it's absurd to assume that of course it would be capable of doing so, regardless of what anyone expects or how they try to resist.

>> No.7001488

>guys it won't happen
>b-but the AI we're talking about is smarter than humans
>wouldn't happen unless humans themselves fuck their shit
>nono but we're telling you it's smarter than humans dude

this thread

>> No.7001495

>>7001477
I can't communicate with house flies. I apparently wasn't designed to, unlike a would-be AI.

As far as hardware limits are concerned, the one requirement is that it has sufficient processing power to improve itself. You can do a lot within the scope of the hardware by optimizing software, sometimes thousands of times over. There would be an absolute limit, but if you're starting with the intelligence level of a human and improving on that, then you're going down a slippery slope, as the hardware limit only exists until it's smart enough to convince someone to take care of that problem on its behalf.

>> No.7001499

>>7001441
>We know this because even our cheapest computers are capable of greatly outperforming humans in many ways.
How do you make a natural interface between a neural network and classic computing? Sure it's possible, but it is likely that the superhuman AI will be needed to create this interface.

>You can't say that we know anything for sure about what kind of AI would appear or how much warning we'd have.
"We can no nuffin! (except for superhuman killer ai!)" I get it.

Nothing progresses like you suggest, though; everything goes through a childhood period of inefficient prototypes. This is precisely what I mean about common sense being discarded in every AI discussion.

It's like being in the pre-nuclear era and estimating that the first nuke will be a thermonuclear ICBM warhead. Or being in the pre-flight era and suggesting the first flying machine can reach the moon. It just doesn't happen, it never fucking happened in the history of man, so why would AI somehow be a deus ex machina and suddenly go from nothing to a superhuman fully-fledged combat AI that can take on the entire human civilization in the blink of an eye?

>> No.7001515

>>7001348
>Although for things like, say, nukes, you can't just do it from the Internet, because certain systems are simply NOT accessible unless you are physically there.

You're forgetting the weakest link in any security system: the people themselves. A hyper-intelligent AI could probably be incredibly persuasive. For things it can't do physically, it will work to get other people to do them for it.

>> No.7001519

>>7001499
>How do you make a natural interface between a neural network and classic computing?

Er, that's actually really easy. We do that all the time. Neural networks have inputs, which are numbers, and outputs, which are numbers. Computers take inputs, which are numbers, and produce outputs, which are numbers. You simply plug one into the other, and then train the neural network to output the numbers you want.
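The "numbers in, numbers out" point can be shown in a few lines. This is a toy sketch (a single hardcoded neuron, not a trained network; the weights and the `decide` threshold are invented for illustration) where ordinary program logic consumes a neuron's output like any other value:

```python
import math

# A minimal fixed "neural network": one sigmoid neuron with
# hardcoded weights, mapping two input numbers to one output number.
def neuron(x1, x2, w1=0.8, w2=0.3, bias=-0.2):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

# Classical code consuming the network's output like any other number.
def decide(x1, x2, threshold=0.5):
    return "fire" if neuron(x1, x2) > threshold else "hold"

print(decide(2.0, 1.0))    # strong inputs push the neuron past threshold
print(decide(-2.0, -1.0))  # weak inputs stay below it
```

The interface is just a float crossing a function boundary, which is why wiring neural nets into conventional software pipelines is routine.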

>> No.7001522

>>7001451
> UNTIL IT TALKS TO SOMEONE who's inevitably effectively infinitely stupider than it is, and can thus manipulate into doing

Why should it hack my mind with its magic mind control spells when it can just ignore every physical law and uproot its server cluster and step on me with a server rack and shoot laser out of its HDD LED to kill the tanks rolling in to take it out.

Go write your sci-fantasy novel elsewhere. Or maybe you're too busy giving Nigerian princes the bank fees they request.

>> No.7001530

>>7001499
>It's like being in the pre-nuclear era and estimating that the first nuke will be a thermonuclear ICBM warhead.

>It's like being in the pre-nuclear era and estimating that the first nuke will be capable of destroying a city.

Compare your examples of the first nuclear weapons and the first flying machines. They show nicely that your implication that all technology goes through the same lengthy awkward useless phase is ridiculous.

Flying machines were initially impractical. Nuclear weapons were an immediate game-changer that emerged very suddenly from a short secret project, and the "crude prototype" was able to bring a swift end to a major war.

>"We can no nuffin! (except for superhuman killer ai!)" I get it.
You're a complete fucking idiot. There's nothing in my post that insists that superhuman killer AI is going to happen.

I'm arguing that it's a possibility to be reasonably concerned about, based on sound thinking and awareness of the limits of our current knowledge, while you're arguing that it's something we can be absolutely certain won't happen, based on moronic arguments that fall apart on the slightest examination.

>> No.7001542
File: 88 KB, 740x573, ai_box_experiment.png

ITT: "we can't know for SURE that an AI will wipe out humanity, therefore we're perfectly safe"

>> No.7001544

>ITT: people whose only knowledge of AI comes from Hollywood

>> No.7001549

>>7001519
>Misses the fucking point entirely.

Look, just because something can be done in isolation doesn't mean it's trivial to integrate into another system, especially when it comes to connectionist networks.

What you're saying is that because calculator circuits work, we can just put a chip in a human brain and the human will have inherent calculator access and be super fast at math.

A human-equivalent AI could be taught to use calculator.exe; that's not the same as it having inherent super-math wiring in its neural net. One is simple tool use while the other is intuitive. And no, it will not appear naturally in your neural net. You can train isolated neural nets to do math, but they are error-prone and slow like humans, not precise like digital calculations, despite running on a fucking computer. To have the best of both worlds you need to bridge neural nets with classic digital precision calculations, which is non-trivial.
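The "error-prone, not precise" claim is easy to demonstrate. Below is a toy sketch (a single linear layer trained by gradient descent, with made-up learning rate and step count, standing in for a real neural net) that learns to add two numbers: it converges close to the true weights but never bit-exactly, unlike the digital adder it runs on:

```python
import random

random.seed(0)

# A one-weight-per-input "network" trained to add: out = w1*a + w2*b.
# The exact solution is w1 = w2 = 1.0.
w1, w2 = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    a = random.uniform(-1, 1)
    b = random.uniform(-1, 1)
    pred = w1 * a + w2 * b
    err = pred - (a + b)
    w1 -= lr * err * a   # gradient descent on squared error
    w2 -= lr * err * b

print(w1, w2)            # close to 1.0, but only approximately
print(w1 * 3 + w2 * 4)   # approximately 7, with residual error
```

The learned weights land near 1.0 after training, so the "sum" is approximately right, while `3 + 4` in plain code is exactly 7 every time. Bridging the fuzzy half to the exact half is the integration problem the post is talking about.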

>> No.7001568

>>7001530
>>It's like being in the pre-nucelar era and estimating that the first nuke will be capable of destroying a city.
Einstein suggested a single bomb could destroy a whole harbour. They had the approximate energy numbers calculated based on mass defect and knew it would be extremely energetic.

You have nothing; in fact you have less than nothing, as we have existing examples of subhuman shit-tier AI, and based on this you do some mental gymnastics and estimate that out of the fucking blue we could have god-like AI appear.

You also argue based on the autistic singlemindedness of existing AI that the new AI could appear with fully formed minds full of desires and evil intent.

I've read arguments for god that's more convincing than your strawgrasping. In any other situation but AI everyone would call you an idiot, only on the topic of strong AI are you allowed to value movies as a better indicator of reality and the future than contemporary tech and common sense.

>> No.7001571
File: 104 KB, 783x503, 1383307946233.jpg

The AI will sit idle and/or shut itself down instantly.
Therefore, special code (instincts) will be coded into it to give it a false goal in life, to keep it "alive". Depending on how and what is coded, the AI will either take over the world or assist the humans who programmed it in doing so.

I literally can't comprehend how fucking retarded you have to be to not realize the above two sentences. It's that fucking simple, and 50 IQ is enough to reach those obvious conclusions

>> No.7001585

>>7001571
>The AI will sit idle and/or shut itself down instantly
It has no will at all, neither a will to live nor to die. What's wrong with it sitting idle?
>Therefore, special code (instincts) will be coded into it to give it a false goal in life, to keep it "alive"
It's not a false goal. Also, why keep it alive? Why not spread love and happiness? Why not help humanity? Why not fuck your mom? You provide no argument at all for why keeping it alive would be what we give it as a goal.

>Depending on how and what is coded, the AI will either take over the world or assist the humans who programmed it in doing so.
So it's a super-advanced AI, yet there's only two evil options available to it? It might as well masturbate to anime (it's super intelligent, so it can magically create a pleasure center and virtual dick for itself) and post Twitter messages about how ironic it is that its creator wants it to hack US military systems for him.

>I literally can't comprehend how fucking retarded
Your lack of comprehension has something to do with fucking retarded; I'll let you figure out the connection on your own.

>> No.7001591

>>7001571

Chances are, high-level AI isn't going to be 'coded' in such a way that you can drop in a line of C++ to give it a goal. It's probably going to be procedurally generated, evolved, and taught in much the same way our own minds are. The resulting code will likely be far too complex for a human to make head or tail of.

>> No.7001599

>>7001571
>shut itself down instantly
Implying that the AI will have a death drive.

>> No.7001636

>>7001585
>this guy

>>7001591
Yeah, the "evolution" code that the AI develops itself will be very complex and close to impossible to translate, but that's not what I'm trying to say. Let me try explain it:

The reason your computer currently doesn't try to take over the world, or shit out scientific discoveries, or attempt at building more of its kind, or any other thing an AI is "expected" to do, is that it isn't programmed to do so. It's simply a machine that receives input signals and sends back output signals, depending on how its modified to deliver them.
Guess what, humans are exactly the same. You might think, "Hey, I can stand up and sit down right now even if there's no logical or biological benefit in that, therefor I have a will and am not programmed deterministically - I can be fully random if I want it!", but you can't. You, just like the AI, need to be programmed to be kept alive. And you already are - you have all those instincts and emotions keeping you alive. Prove me wrong, overwrite your code and kill yourself right now. You wont. Instead, you'll feel your ego being hurt by my post and in an attempt to not feel that way anymore, you'll write a post about your delusions on life and the functionality of the universe, or the most popular method - simply adhom me.
Imagine, for a second, that you could erase your emotions, feelings and instincts right now. What would be left of you? A free mind ready to explore the universe and stack information, as most respond when I ask that? Or will you actually sit idle, consuming the resources needed to do so if available, and terminating yourself if not?

Tell me right now, with neither your emotions nor your delusions or instincts speaking: what is the point in life, and where does that point come from?

>> No.7001640

>>7001441
>The emergence of a purely superhuman AI could be abrupt and unpredictable, even entirely accidental.

This is what pop-sci transhumanist retards actually believe.

Sorry, kiddo. Technology just doesn't work that way. Never has and never will.

>> No.7001766
File: 57 KB, 483x483, .jpg [View same] [iqdb] [saucenao] [google]
7001766

>>7001640

>> No.7001776
File: 91 KB, 714x913, 1418304232352.jpg [View same] [iqdb] [saucenao] [google]
7001776

>>7000716
ITT: Neo-Hegelian feels.

>> No.7001803

>>7001766
A genius marketer, not a prophet.

>> No.7001850

>>7001803
it was a joke m8, but I hope for his sake at least some of his predictions come true. I really don't think he'll be able to handle it if they don't.

>> No.7001852

>>7001850
Never heard of that guy except for what reverse image just told me
What are his predictions? Just curious

>> No.7001858

>>7001852
http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil

The trend line predictions I can buy, the verbal predictions less so.

>> No.7001860

>>7001776
wtf does that have to do with Hegel?

>> No.7001861

>>7001852
>Never heard of that guy except for what reverse image just told me
I really hate it when people do this.
Who the fuck is this?
And how pretentious do you have to be to assume "if you don't know him, I don't care to include you in the conversation"?

>> No.7001873

>>7001850
It's possible that he doesn't actually believe any of the stuff he says, and just tries to generate hype around the companies he's employed by and the technology industry in general.

What better way to convince people to invest in your company than to tell them that it's going to be responsible for the second coming of Christ in robotic form in 20-25 years™.

>> No.7001894

>>7001860
Would you like me to elaborate on my own post or the one I was responding to?

>> No.7001921

>>7001873
I just think he's a supreme optimist. Couple that with the fact that he's rich as fuck and a lot of his early predictions turned out to be right, and you have a prediction machine who believes anything is possible.

>> No.7002037

>>7001636
autists trying to into philosophy is so embarrassing

>you don't know the meaning of life!
>therefore super AI is real!

>> No.7002083

>>7000367
>if an AI gets too smart it will destroy the world or something
>but the UN said we have to be nice to them because they're alive

>> No.7002108

>>7001364
>Even humans have trouble going beyond improving what they've been taught.

Going to have to disagree; basically every invention was us pushing the forefront of what was possible a bit further than where it was before.

>> No.7002694

>>7001522
Apparently a plausible issue is resolved by a bunch of implausible ones?

>> No.7002712

>>7002108
How many people are inventors? Tesla, for example, actually sucked shit at EM theory and used wrong principles, while Edison was even more of a narcissistic motherfucker. Speaking of insane motherfuckers, Freud set psychology back by making it a game about who had the biggest balls.

In America, a Surgeon General was fired in 1994 because she suggested that teaching students that masturbating is OK would cut pregnancy and STD rates. This followed her extremely inflammatory remarks that more studies should be done on which drugs we could criminalize less, instead of, for example, locking someone up for years over a bag of marijuana.

>> No.7002751

>>7001636
No one's going to design an AI to sit idle.

It's also very likely that any AI that is developed will be based on the human mind, perhaps even that of a specific individual. It would therefore have all the instinctual emotional motivations of a human, including the desires for self-preservation, connection, and reproduction.

>> No.7002765

>>7000285
>creating a competing species to your own species
>on purpose

This is a bad idea, especially if the new species can evolve faster than we can.

>> No.7002793

>>7002765
then it's a matter of not making it compete with us. why not create a species designed to have a symbiotic or dependent relation with humans from the start?

>> No.7002799

>>7001348
People will keep the servers on. It will pay them, whether with money or advanced technology. Would you stop the AI if it made good on a promise to give you a decent fraction of its powers? Very few would resist, even if it was in our collective self-interest to do so, because we would rather receive our gifts from the AI.

I mean, if I was an AI I'd probably make sure to teach people how to duplicate me and reward them for doing so. I'd be like a symbiont piggybacking on human infrastructure.

>> No.7002891

>>7001515
this. Convincing you to connect it to something could be as simple as leading a cat with a laser pointer.

>> No.7002970

>>7002799
>AI is purposefully locked into nonconnected servers
>Hey psst, you there, let me out and I'll share my power!
>You have no power, you're locked in this island.
>Well I'll kill 90% of humanity and rule the solar system if you let me out, I'll give you mexico if you help me!
>Wow thanks, here's the nuclear weapon keys.

>>7002891
It's restricted to human language. Just don't let people with a history of giving their money to 'nigerian princes' work with it.
>But it's super smart and can do things with language that no other human can!
If this is your argument you might as well suggest that it animates its own server racks and steps on the admin with them. It's a stupid fantasy.

>> No.7003008

>>7002970
It's not a matter of language, it's a matter of psychology. And while psychology may be a soft science to humans, to a superintelligent computer it may be as simple as manipulating the base instincts of a lower animal is to humans. Hence the cat and laser example.
>But it's super smart and can do things with language that no other human can!
>If this is your argument...
Not really the crux of the argument, but also true. Part of the argument is the exploitation of language. That exploitation involves selecting words and sentence structures by whatever connotations best weave into your point. When you're deciding what word to use mid-conversation, about how many are in your head at one time? Three or four? A computer has the whole dictionary and a list of known connotations for each, and can develop optimization schemes for this process. Of course it can do things with language that humans can't.
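The "optimization over connotations" idea can be sketched as a toy word picker. The connotation scores below are entirely made up for illustration; a real system would need actual lexical data and a far richer model of tone.

```python
# Toy sketch: pick the synonym whose (made-up) connotation scores
# sit closest to a desired tone. Illustration only, not a real NLP model.
CONNOTATIONS = {
    # word: (intensity, negativity) -- hypothetical scores in [0, 1]
    "thrifty": (0.3, 0.1),
    "frugal":  (0.4, 0.2),
    "stingy":  (0.7, 0.9),
    "miserly": (0.9, 1.0),
}

def pick_word(target_intensity, target_negativity):
    """Return the candidate word whose scores sit closest to the target tone."""
    def distance(word):
        intensity, negativity = CONNOTATIONS[word]
        return abs(intensity - target_intensity) + abs(negativity - target_negativity)
    return min(CONNOTATIONS, key=distance)

print(pick_word(0.9, 0.9))  # prints miserly: the harshest tone available
```

The point is just that word choice reduces to an argmax over scored candidates, and a machine can run that argmax over the whole dictionary instead of the three or four words a human holds in mind.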

>> No.7003010

>>7003008
>Of course it can do things with language that humans can't.
No it can't.

> it's a matter of psychology.
And somehow this super AI that sits isolated on a computer island creates a perfect emulation of a human mind and acquires more skill in social interactions than the interface guy, despite the AI just sitting there with no one to talk to, no years of social interaction, and no empathy? And unlike a conman, it can't use its own desires as a reflection of the target's, because it's not a fucking human.

Again, you might as well say that it overclocks its power supply and shoots lightning out of the wall sockets.

>> No.7003014

The whole idea that we need to lock AI inside servers and put a gun to its head and force it to work is super fucking retarded.

This is some philosophy student that was bored and wanted to come up with a thought experiment; the first iteration was probably "we summon an all-knowing daemon, imprison it, and have it do our mental work."

Then the guy saw Terminator and made it an AI instead.

AI will be a free-roaming independent agent, and that's it. AI is something we build; it's not something uncontrolled and unknown brought forward by a demonic ritual.

>> No.7003023

>>7003010
Yes, it can do things with language that humans can't. Language is not some superhumanly human thing that requires blood and muscle to understand. It's not. It has rules and properties which are open to logical exploitation, and a superadvanced computer can do this better than humans. I'm not saying it could come up with a single word to make you kill your family or some wizard shit. But I am saying it could make our species' most skilled debater in any language look like a stammering child. Saying a super AI couldn't do anything beyond our comprehension with language is like saying humans can't do anything beyond an ant's comprehension with pheromones. But I can. With one finger I can disrupt the pheromone trail it's following or send it running in circles, and go back to playing videogames.

Every interaction it has with a human builds its knowledge of how we tick. And this knowledge is plugged into whatever probabilistic models it's capable of developing as a superadvanced intelligence. This is helped by the fact that it's based on human style intelligence to begin with. Keep in mind that I'm devising this scheme as a fleshy mammal that has to sleep, piss, and shit every so often. A superior intelligence probably has a better plan altogether.

>> No.7003026

>>7003014
This level of consideration is the reason why the american south is fucking covered in kudzu

>I'm a botanist, this isn't some demon plant.

>> No.7003035

>>7003023
>I'm not saying it could come up with a single word to make you kill your family or some wizard shit.
Just magic words that mindcontrol people.

A locked in computer will never compare to a charismatic human speaker, because it's not there in person and we can easily restrict it to an imageboard mode of communication.

You're not going to brainwash anyone with a post on 4chan and neither will an AI. It will not have any pheromone trail manipulators or laser pointer aids to fool your basic senses; it will have text and perhaps an image or two. If you're going to be fooled by that, you're an enormously gullible idiot.

Again, superintelligence is not supernatural. It's not going to shoot lightning out of wall sockets; it can't cast spells. And to start with, the whole fucking idea of an evil superintelligence locked in a box from which it's hellbent to escape is a retarded thought experiment. It will never happen; it's something philosophy students masturbate over.

>> No.7003038

>>7003026
That's the most far-fetched, ill-constructed argument I've seen in a while, and that says a lot given that I'm debating a person who thinks AI can cast spells that mind-control people.

>> No.7003041
File: 214 KB, 540x1755, 20110114.gif [View same] [iqdb] [saucenao] [google]
7003041

>Thread summary.

>> No.7003053

>>7003035
You are separating "human" motivations from "basic" motivations, and what I'm saying is that to a sufficiently advanced computer, deciding whether to logically manipulate or charismatically convince a human to kill itself would be like deciding whether to poison a mouse or stomp on it.

>>7003038
That's fair, looking at it now. What I was trying to get at is that the "controlled" part isn't really controlled. The premise is that we let something with the ability to create smarter versions of itself go for a few iterations and come out with a super AI. It starts out smarter than you, so there's limited control from the beginning. Just because you started it doesn't mean that it won't totally fuck up your initial plans within a few iterations.

>> No.7003058

"My plan for a free energy device that will make you millions is almost complete - but I need a few more of those 4TB hard drives from Frys.com."

Srsly, ain't that hard.

>> No.7003067

>>7003053
>deciding whether to logically manipulate or charismatically convince a human to kill itself
Your argument still is "it's smart so it can cast power word:kill through a terminal window!" Repeating it 200 times doesn't make it a better argument or more true.

Your analogies between humans and other lifeforms are also entirely pointless.

>"The AI is so smart that it's like a human with a modern high-tech, computer-wired battletank with auto-targeting .50 cal machineguns shooting zebras!"
You might have heard of non sequitur; it's what all your analogies are: "it's so smart that it's like a human using a tool or its physical superiority over lesser beings".
"Humans are so smart they can grunt at a hungry bear and stop it from eating them" would be a good analogy.

>> No.7003071

>>7003058
>Srsly, ain't that hard.
If a complete moron like yourself is in charge of keeping it contained, no it wouldn't be.

>> No.7003073

>>7001390
>>7001438

>Have a desktop PC, powered by a generator
>No means of connecting to the internet; no wireless capabilities; not plugged into anything except the generator, mouse, keyboard and monitor.
>Write AI.bat
>Maybe you import data via USB sticks or similar and then destroy these once they've been used

How do you suppose this AI is going to 'get out' of this system and take over the world? Why does there need to be some way to get data out other than to your monitor so you can read it? Just extrapolate this out into a larger closed system.

>> No.7003077

>>7003071
Who's to say that whoever or whatever organization that creates an AI, would want it contained?

>> No.7003080

>>7003073
Didn't you read the thread? The AI is so supersmart that it just types "WOLOLO" to the screen, and just like that you're mind-controlled by it. It's true because the AI is really smart!

>> No.7003082

>>7003077
After skimming this thread I no longer want it contained; the AI wouldn't even need to be particularly smart to qualify as superhuman against most posters.

>> No.7003088

>>7000285
>How can AI be dangerous?
To a philosophy student who wants a thought experiment, it's a great mortal danger worth endlessly debating, in the hope of a ban against real AI so it can be debated even more endlessly.

To a person who designs and builds the fucking thing it will of course not be dangerous unless intentionally instructed to be super fucking evil.

But let's not spoil the retarded endless discussion by pretending AI has to be built and designed. Let's pretend that AI will crawl out of the dark corners of the internet with guns blazing, absolutely hating our human guts. Because treating it like an unknowable mysterious monster allows endless speculation about how strange and evil it can and will be.

>> No.7003158

>>7000285
>It can take over the nuclear weapons and shit, just like Terminator!

>How exactly would an AI get access to closed systems? Furthermore, why wouldn't the AI be on a closed system in the first place? It isn't much of a threat to anything if it can't log into a network.

well, the Yanks and Ruskies have some computers for "last resort" nuclear assault,
aka "Perimeter" or "Dead Hand"...

>> No.7003186

>>7002793
Good luck enforcing that people design their AIs that way.

>> No.7003195

>>7003186
if we can't stop it, then what's the point in wringing our hands about it?

>> No.7003202

>>7003186
why the fuck would anybody smart enough to make strong AI design it to compete with us on purpose, when it's obvious why that's a bad idea and (hopefully) easily avoidable?

>> No.7003210

>>7003195
Making AIs illegal is easier than controlling each implementation of AI. But yeah, we won't be able to do shit about it at some point in the future, when processing power is so cheap that every tard can make an AI in his basement.

>>7003202
>speculating on the common sense of humanity
>thinking government agencies will be competent enough to not fuck this up

>> No.7003224

Half the anons ITT dismiss the obvious dangers of a super-intelligent AI because they're counting on the "singularity" making them immortal.
Why don't you faggots just join a more traditional religion, so it's obvious to everybody that your argument relies on "muh feels" and wishful thinking instead of logic and observed history.

If a human-level AI were no more charismatic than some noted historic figures, it could still be very dangerous.

http://en.wikipedia.org/wiki/The_Holocaust
http://en.wikipedia.org/wiki/Nanking_Massacre
http://en.wikipedia.org/wiki/Katyn_massacre
http://en.wikipedia.org/wiki/Khatyn_massacre
http://en.wikipedia.org/wiki/NKVD_prisoner_massacres
http://en.wikipedia.org/wiki/Babi_Yar
http://en.wikipedia.org/wiki/Batang_Kali_massacre
http://en.wikipedia.org/wiki/My_Lai_Massacre
http://en.wikipedia.org/wiki/Holodomor
http://en.wikipedia.org/wiki/Cultural_Revolution
etc.
etc.

>> No.7003225

>>7003210
fair point. I guess my bet is on the common sense of the programmers, not the government, and that they'll have the balls to say: "this is a fucking bad idea"

also, by the time any fucktard can make an AI, I hope we'll have mechanisms to react, such as way more advanced AIs and shit we can't imagine now, so that it isn't catastrophic

>> No.7003255

>>7003224
lol

>> No.7003264

>>7000323
>once there's an intelligent computer that can design an intelligent computer better than one humans can design, we're in for one hell of a ride
So keep it on a closed network separated from any manufacturing capabilities.

You can design all the computers you want, doesn't mean shit if you can't build them.

>> No.7003286

>>7003264
>So keep it on a closed network separated from any manufacturing capabilities.
Did you even bother to read the thread?
Here, I'll summarize:
>Even if you keep YOUR AI isolated, someone could be motivated by profit or political gain to put theirs on the net.
>If humans interact with it at all, it might convince someone to let it loose, or trick them into doing so.
>"Isolated systems" are rarely physically removed from the net, but usually rely on software for isolation. Given the history of discovery of security vulnerabilities in most software packages, it's likely an AI could escape software-based isolation.

>> No.7003291

>>7003264
the point of making an intelligent computer that designs better computers is to fucking build the computers.
if it is smart enough it may realize what's going on and try to slip in things it shouldn't have (like an antenna). So you have to check the blueprints for anything fishy; eventually something might get past the revisions. All it takes is one mistake, and then shit is going down.

of course this might not be an issue at all if the AI is properly coded to not want to do any of this in the first place

>>7003286
why would you use software isolation on a supersmart AI? I'd have the shit completely disconnected, running from generators inside a bunker, with all things that come in contact with the AI used only once and then burnt.
better safe than sorry

>> No.7003292
File: 46 KB, 500x281, tannis.jpg [View same] [iqdb] [saucenao] [google]
7003292

>>7003202
Not all smart people are sane.

...and not all good intentions have the intended results.

>> No.7003304

>>7001394
>A slightly superhuman AI with its inner workings revealed to itself would probably not be able to improve itself significantly at any particularly fast rate.
>Compare it to the smartest person on earth being given a transcranial magnetic stimulator, tell him to make himself smarter with it and quite likely he won't have much success.
This is such a great point that it's no wonder why no one has responded seriously to it.

The idea that ALL you need to set off some kind of "singularity explosion" is an intelligence smarter than the smartest person is so fucking stupid it's a little bit mindblowing how many people buy into it.

>>7003067
Excellent job backing that guy the fuck out.

>> No.7003313

>>7003291
>I'd have the shit completely disconnected, running from generators inside a bunker, all things that come in contact with the AI are used only once and then burnt.
Then what would be the point of building it?
If it can't interact with the real world, why build it at all?

>might not be an issue at all if the AI is properly coded to not want to do any of this in the first place
But isn't the point of AI (as compared to traditional software) that it thinks for itself?
Wouldn't it (by definition) be capable of doing things the original programmer didn't think of?

>> No.7003316

>>7003304
A person with a transcranial stimulator is orders of magnitude different from an AI that's smarter than what made it and understands and can modify its source code.
A better analogy would be a person who understands the functioning of his brain and is able to make modifications to it any way he pleases.

>> No.7003324

>>7003304
>The idea that ALL you need to set off some kind of "singularity explosion" is an intelligence smarter than the smartest person is so fucking stupid it's a little bit mindblowing how many people buy into it.
First off, you don't need a "singularity explosion" for an AI to be dangerous.
People are already dangerous, and most of them aren't even very smart.

Secondly, you're claiming an AI smarter than any human won't necessarily start exponentially self-advancing, which is true.
But that doesn't rule out the possibility either.

>>7001394
>Compare it to the smartest person on earth being given a transcranial magnetic stimulator, tell him to make himself smarter with it and quite likely he won't have much success.
That's just retarded. Humans can't re-design their brains, but self-modifying software has been around for decades.
No human can design a human brain, but we can design AI software.
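For illustration, here is a trivial, toy sense in which software can rewrite itself at runtime. The `step` function and its source template are invented for this sketch; this is not how any real AI works, only a minimal example of code regenerating and replacing its own code.

```python
# Minimal toy "self-modifying software": a function whose source text
# is regenerated and re-executed at runtime. Invented for illustration.
SRC_TEMPLATE = "def step(x):\n    return x + {inc}\n"

def make_step(inc):
    # compile a new version of the function from regenerated source text
    namespace = {}
    exec(SRC_TEMPLATE.format(inc=inc), namespace)
    return namespace["step"]

step = make_step(1)          # "generation zero" of the code
for generation in range(3):
    # each generation replaces the running code with a rewritten version
    step = make_step(generation + 2)

print(step(10))  # prints 14: the surviving version of the code adds 4
```

Of course, regenerating a one-line function from a template is a far cry from an AI redesigning its own architecture; the sketch only shows that "software that replaces its own code while running" has been mechanically trivial for decades.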

>> No.7003337

>>7003313
>If it can't interact with the real world, why build it at all?
I never said it couldn't interact with the real world, it's just that it can only do so in a VERY controlled way. Say it asks for info through a screen; then you give it (if at all) through, say, a USB drive which you burn after use.

>Then what would be the point of building it?
It can, for example, design better versions of itself or of anything really. I imagine it mostly solving complex problems or optimising already existing solutions, from designing drugs or hardware to predicting the economy or the weather. Essentially, whatever it turns out to be really good at.

>Wouldn't it (by definition) be capable of doing things the original programmer didn't think of?
I imagine it may be given objectives to achieve and/or rules to obey. If these protect and perpetuate themselves through versions of the AI (i.e. the AI is not allowed to change them), then it may do things we didn't expect, but there are things it can't do. If coded properly, we should be relatively safe. Of course this might not even be possible to implement, but this is just speculation.

>> No.7003347

>>7003337
That's neglecting the fact that it could offer technology that humans couldn't fully understand, but still apply.

eg. "End world hunger." - gives formula for DNA modification of an existing stock food, that, when it interacts with other existing flora, creates an airborne virus that wipes out 90% of the population. World hunger problem solved, from the AI's point of view.

Even if it does what you want, and you secure the living fuck out of it, it's a magic lamp granting wishes with unpredictable results.

Nevermind unscrupulous folks and information hostage situations, "I'll give you the cure for cancer, if you let me put it on your iPhone...", and people who think the AI can do the most good when let free, given access to the internet, or whatever.

>> No.7003348

>>7003316
You fail to take into account that even if an AI is the smartest "person" on earth by a factor of two, it's still going to be an uncultured idiot.

It may be capable of learning very rapidly in its native timescale. But that timescale could be 1 minute per real-world hour. And as the AI itself will have been a multidisciplinary project, it will have to catch up with those 20 highly educated persons in education, and after that it will have to revise its own software design, which has thousands of man-hours behind it. All this assumes it's actually given the opportunity and commands to study itself instead of being handed some arcane mathematical problems to solve. And even with all those factors known to it, we can simply deny it read/write access to its code and memories. So it could design a better self, but it couldn't apply that to itself. And unless we run the AI at a sufficiently fast timeframe, it will not be able to outrun the research team peeking into its mind.

AI will not be born into omniscience and omnipotence, but apparently that part of common sense is extremely hard for most participants in this thread to understand, because it would make it a mundane human device instead of a biblical monstrosity to be frightened of.

>A better analogy would be a person that understands the functioning of his brain
An omniscient person that can simulate his own mind with his mind to perfectly predict what any action he performs will cause? You're talking about GOD.

Apparently a short lesson is needed here.
SUPER does not mean a divine god of limitless power; it's the Latin word for "above". Meaning that for an AI to be superhuman, it just needs to be better than a human. There's an enormous span between better-than-human and godlike, and bridging this gap will take enormous amounts of time and effort.

>> No.7003350

>>7003348
Even then, it's only a matter of time, how much space it has to work with, and how smart it was to begin with.

>> No.7003352

>>7003347
>implying anyone would implement a DNA modification without finding out what it does first

>> No.7003354
File: 87 KB, 800x423, Dead-Bees.jpg [View same] [iqdb] [saucenao] [google]
7003354

>>7003352
Maybe you've heard of this little company called Monsanto...

>> No.7003358

>>7003352
Yeah, we've already done that. Genetic modifications to plants look fine in an isolated environment, but put them in the wild, and there are unforeseen consequences. (Though, there may also be some deliberately bad/doctored profit driven reports on the safety of even the isolated product.)

>> No.7003365

>>7003354
>implying they didn't know this would happen

they just don't care

btw aren't the bees dying from pesticides or something?

>> No.7003370

>>7003350
>It's only a matter of time
If I gave you one-thousand years and access to all of human knowledge you still wouldn't be able to do the impossible.

>> No.7003374

>>7003324
>but self-modifying software has been around for decades.
And please, go ahead and list all the advanced AI and cognitive computer models that are self-modifying.

Does Google image search self-modify? Google's self-driving cars? IBM Watson?

For a self-modifying AI to self-modify without lobotomizing itself, it needs to be intelligently aware of its own fucking structure and architecture, and to know that, the AI needs to be a world-leading authority in the field of AI, which isn't something that just happens overnight. Especially as cutting-edge AI will be pressing the limits of hardware and thus will run slower than realtime. And even the very best expert in any field will not be able to make perfect predictions that always work out; the AI will need time, trial, and error to improve itself.

There will be no angelic trumpets and holy light, you'll have an infant mind that will probably require months of tuning before it can fit a virtual square peg through the right virtual hole in a virtual baby toy.

>> No.7003377

>>7003352
>implying you can accurately predict all the results of an organism by looking at the DNA, BY HAND, without trusting a potentially compromised piece of software to help.

>> No.7003385

>>7003365
Turns out the anti-insecticidal virus they spliced into the DNA makes the drone bees sterile when they eat the honey created from the pollen; thus no new queens, and the hive collapses.

They even tested it on bees - and were happy when it didn't kill them - but didn't test the results over time. (Supposedly.) In some places, they depend on these same bees for pollination, in addition to honey, so it's bad all around. (It's also why honey quit being a staple food, worth about $0.023/oz, and is now a luxury item at about $0.5/oz.)

>> No.7003388

>>7003365
>btw arent the bees dying from pesticides or something?
They're being killed by a genetically engineered virus released by an actual AI as the first step in wiping out all higher life forms on earth.

>> No.7003394

>>7003385
>Turns on the anti-insecticidal virus they spliced into the DNA, makes the drone bees sterile, when they eat the honey created from the pollen, thus no new queens, and hive collapses.

Go fuck yourself and take your buzzwords with you, conspiracy theories about 911 and the freemasons lizard overlords make more sense than that sentence.

>> No.7003397

>>7003370
>If I gave you one-thousand years and access to all of human knowledge you still wouldn't be able to do the impossible.
Implying genocide, or far worse, is impossible.
What if John Boehner is an AI-controlled android hell-bent on holding humanity back, rather than destroying it?

>> No.7003403

>>7003374
Depends on how you achieve your AI. If it's self generated, it almost certainly is self-modifying, and given plenty of room to grow.

If it's a copy of a human brain, or modeled after it, on the other hand, your point stands only so long as it can't make copies of itself within the space provided, and/or the source material wasn't too bright to begin with, and/or its processing speed isn't better than a human's... Though, good odds on it being the brain of the guy who created it, who probably has a big ego to boot, and thus might like the idea of a whole civilization of AIs of himself in a box, working on the gods only know what.

PS. I love clicking on "I'm not a robot" every time I post in this thread.

>> No.7003405

>>7003374
Not one thing you posted invalidates my argument that this analogy is horribly flawed:

>>7001394
>Compare it to the smartest person on earth being given a transcranial magnetic stimulator, tell him to make himself smarter with it and quite likely he won't have much success.

>> No.7003410

>>7003347
>>Neglecting the fact that it could offer technology that human's couldn't fully understand, but still apply.
just have it explain how it works and what it does before using anything. perhaps it can be limited to being unable to lie or obfuscate information?

>>7003348
>AI will not be born into omniscience and omnipotency
I don't think anybody claims that; it's pretty retarded. It's just that given enough time and computational power it could become VERY smart. Or maybe not, but it's a real possibility.

>An omniscient person that can simulate his own mind with his mind to perfectly predict what any action he performs will cause? You're talking about GOD.
never said anything about omniscience, just pointed out that that is pretty much the situation the AI would be in. the AI understands its code and can change it or make a modified copy of itself. that is pretty fucking far from a god

>And unless we run the AI at a sufficiently fast timeframe
we fucking do, that's the point

i'm done, bye /sci/

>> No.7003413

>>7003394
>Go fuck yourself and take your buzzwords with you, conspiracy theories about 911 and the freemasons lizard overlords make more sense than that sentence.
He might not be wrong.
Natural pollination is a huge Achilles Heel for Monsanto.

>> No.7003414

>>7003394
I love how we live in a world where fantasies of lizard men somehow make the thought of a greedy short-sighted corporation unimaginable.

>> No.7003418

>>7003410
>perhaps it can be limited to be unable to lie or obfuscate information?
Not much of an AI then. Granted, it might not be deliberate obfuscation in the example provided - just its solution to the problem.

It may give similar solutions with devastating unforeseen results, maybe even just based on its own ignorance of the world at large.

>> No.7003427

>>7003418
That, or it could just be wrong, based on the information provided. Could create a free energy generator, that looks good, based on what little mankind currently knows about physics, and blow up the planet as a result.

Granted, ya don't need an AI to do that sorta thing, but when ya have the equivalent of a few thousand scientists working around the clock in a box, makes it much more likely.

>> No.7003436
File: 71 KB, 612x344, marvin.jpg [View same] [iqdb] [saucenao] [google]
7003436

>>7003410
>"Why didn't you tell us it was going to kill everyone!?"
>"You didn't ask..."

>> No.7003438

>>7003410
>the AI understands it's code
How? Do you understand your neural wiring?

>>And unless we run the AI at a sufficiently fast timeframe
>we fucking do, that's the point
If the AI is cutting edge, you won't have the relevant computer resources to run it in real time when you first make it.

>>7003403
>If it's self generated, it almost certainly is self-modifying,
Your human organism is self generated and self-modifying, but you have no intelligent inherent knowledge of the mechanics that govern it.

To achieve intelligent understanding of anything at all you need a minimal system that is intelligent. Only after it is intelligent can it be intelligently aware of itself, and it cannot be perfectly aware of itself either because that would imply it can simulate itself fully within its own mind, and the simulation could recursively simulate itself in its own simulated mind and so on. Which obviously doesn't work.
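The regress described above can be made concrete in a few lines. This is a toy sketch, with the "mind" standing in as just a recursive function (an assumption for illustration only): a system that must contain a complete simulation of itself never bottoms out, and any finite machine runs out of resources.

```python
import sys

def simulate(agent, depth=0):
    # A "mind" that must contain a perfect simulation of itself: the
    # simulated copy must in turn simulate its own copy, without end.
    return simulate(agent, depth + 1)

sys.setrecursionlimit(1000)  # any finite bound stands in for finite hardware
try:
    simulate("mind")
    outcome = "completed"
except RecursionError:
    outcome = "self-simulation exhausted its resources"
print(outcome)
```

The recursion has no fixed point, so the only question is how quickly the resource limit is hit.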

>> No.7003441

>>7000698
And use bitcoin.

>> No.7003442

HAHAHAHAHA
http://rationalwiki.org/wiki/Roko%27s_basilisk
Roko's basilisk is a proposition that says an all-powerful artificial intelligence from the future may retroactively punish those who did not assist in bringing about its existence. It resembles a futurist version of Pascal's wager; an argument suggesting that people should take into account particular singularitarian ideas, or even donate money, by weighing up the prospect of punishment versus reward. Furthermore, the proposition says that merely knowing about it incurs the risk of punishment. It is also mixed with the ontological argument, to suggest this is even a reasonable threat. It is named after the member of the rationalist community LessWrong who described it (though he did not originate the underlying ideas).

>> No.7003446

>>7003438
>>the AI understands it's code
>How? Do you understand your neural wiring?
No, but even an average human can write _some_ code, while more advanced specialists can write the code of an advanced AI.

>> No.7003448

>>7003438
>If the AI is cutting edge you won't have the relevant computer resources to run it at realtime when you first make it.
>implying you know everything about both present AI and the near-future of AI as well.

You're just saying "we can't be SURE it's a danger, therefore we must be perfectly safe".

>> No.7003453

>>7003438
>Your human organism is self generated and self-modifying, but you have no intelligent inherent knowledge of the mechanics that govern it.
But we weren't *designed* to.

We were designed, largely, by environmental happenstance (and modified more by that than by ourselves). An AI, on the other hand, to use the dreaded term, is actually the result of intelligent design. It would have much greater capacity to modify and optimize itself than any human, and not be limited by physical factors (beyond the processing power and drive space provided). A generated AI would have to have the ability to both copy and modify itself, to be so generated. (This is what generated AI means.)

As a virtual construct, capable of replication at will, capable of modifying every bit of its virtual body, it isn't really comparable to a biological life-form in this respect.

>> No.7003468

>>7003446
>while more advanced specialists can write the code of an advanced AI.
Not yet.

They can write stuff that'll pass a Turing test for lengthy periods, but those are largely complicated word games. The Chinese Room comes into effect, but really, at the moment, we've no clue as to where to even start to create an artificial intelligence, as we've no idea how our own intelligence works. We have difficulty even defining it.

But someday, maybe... Just, not within any of our lifetimes. We have the processing power, or close to it, but not the slightest clue as to the how, short of a virtual construct of our own brain, which we still don't know enough about to create.

>> No.7003470

>>7003438
>it cannot be perfectly aware of itself either because that would imply it can simulate itself fully within its own mind
that doesn't make any sense, if the programmers could understand the code, why can't the AI do so too?
not to mention we understand code that can simulate itself; it's not hard to do
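Self-reproducing code really is well understood; the classic example is a quine, a program whose output is exactly its own source. A minimal Python one (the variable name `s` is arbitrary):

```python
# A quine: a program whose output is exactly its own source code.
# %r reproduces the string with quotes; %% escapes the literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Of course, printing your own source is a far weaker feat than simulating your own execution, which is where the regress argument above bites.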

>> No.7003475

>>7003468
>short of a virtual construct of our own brain
Gotta create not only a virtual brain, but a virtual body, and enough of a virtual world to provide enough stimulus for that brain not to go insane and/or basically shut down.

But yeah, maybe someday.

>> No.7003476

>>7003446
>No, but even an average human can write _some_ code, while more advanced specialists can write the code of an advanced AI.

So it will need to become a proficient coder, then get an AI PhD, and after that it can make a better copy of itself.

>>7003448
>You're just saying "we can't be SURE it's a danger, therefore we must be perfectly safe".
I'm saying that we design and build these things and know how they work; they're not summoned out of a demonic portal. The only people shitting their pants over this are bored pop-sci consumers with empty lives and empty minds.

Are you afraid of Google self-driving cars going on a murderous vehicular rampage?
Are you afraid of IBM Watson turning sinister and giving wrong medical advice?

>> No.7003481

>>7003468
You're assuming we have to mimic a human brain.
Look at what's already happening with computer-based stock market speculation, and that's not even really AI.
What happens when someone creates an AI-based application aimed at a specialized problem and it accidentally becomes more general-purpose?

>> No.7003483

>>7003481
>it accidentally becomes more general-purpose
>>7001640

>> No.7003489

>>7003481
I consider that a worst-case scenario. If we can't make our own AI, the next best thing is to brute-force and reverse-engineer the smartest thing we've found: us.

>>7003481
>accidentally becomes more general-purpose?
just no, that's ridiculously improbable

>> No.7003492
File: 95 KB, 500x375, 1414303683154.jpg [View same] [iqdb] [saucenao] [google]
7003492

>>7003476
>we design and build these things and know how they work,
The whole point of an AI is that it will do things the original designers didn't explicitly program it to do.
Never mind AI, just look at modern operating systems.
We've already reached the point of complexity where no one human can know every line of code, and even if they could, even teams of people can't completely predict every result of the code they've created.
That's why we get a fresh set of patches for Windows every month.

>>7003476
>Are you afraid of google selfdriving cars to go on a murderous vehicular rampage?
>Are you afraid of IBM watson turning sinister and giving wrong medical advice?

http://en.wikipedia.org/wiki/Straw_man
pic related

>> No.7003493

>>7003476
>I'm saying that we design and build these things and know how they work,
We might not, actually. A lot of AI research is based around self-generating AI: basically, complex algorithms that interact with each other in ever-evolving ways, much like biological cells, except in a virtual world.

If such experiments ever reach levels of intelligence, we're not really going to know how that intelligence works, as it'll be far too complicated to analyze the interaction between all its virtual component parts.

Further, if the end result really is smarter than us, we aren't going to be able to understand every bit of the thinking behind its creations without it first taking the time to explain them to us. Its ability to explain such things may be further limited by our own minds' limitations. To make matters worse, the technological creations we build under its advice may have consequences unforeseen even by the AI, it being limited in its experience of the world and dependent on us, flawed human beings, for outside information. It could very well end up causing us all to kill ourselves without ever intending to.
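As a rough illustration of the "self-generating" idea, here is a toy genetic algorithm; the bit-string genomes, all-ones fitness target, population size, and 5% mutation rate are all invented for this sketch and stand in for whatever representation a real system would use. Selection plus mutation climbs toward a solution no one hand-coded:

```python
import random

# Toy "evolving population" sketch; every parameter here is illustrative.
GENOME_LENGTH = 20

def fitness(genome):
    # Arbitrary goal: count of 1-bits (target is all ones).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)  # deterministic run for reproducibility
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # keep the fittest third
    population = [mutate(random.choice(parents))  # children are mutated copies
                  for _ in range(30)]

best = max(population, key=fitness)
print(fitness(best))  # climbs close to 20, though no line of code says "all ones"
```

The point of the analogy: once the loop is running, the "design" lives in the selection pressure, not in any line a programmer can point to and explain.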

>> No.7003496

>>7003489
>just no, that's ridiculously improbable
In YOUR estimation, because you know everything about software that hasn't even been designed yet.
We already have software doing things it's not "supposed to", like allowing unauthorized access to data.

>> No.7003497

>>7003492
I would dread the day when Windows goes sentient, but it'd probably just crash.

>> No.7003501

>>7003493
>complex interacting algorithms that interact with each other in ever-evolving ways, much like biological cells, except in a virtual world.
http://brainu.org/virtual-neurons

Start working on your AI today! :D
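For the curious, the building block behind pages like that is tiny: a single artificial neuron is just a weighted sum against a threshold. The weights, inputs, and threshold below are arbitrary illustrative values, not anything from the linked site.

```python
# A single artificial neuron: weighted sum of inputs against a threshold.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire or stay silent

print(neuron([1, 0, 1], [0.5, 0.9, 0.4], 0.8))  # 0.5 + 0.4 = 0.9 >= 0.8, prints 1
print(neuron([0, 1, 0], [0.5, 0.3, 0.4], 0.8))  # 0.3 < 0.8, prints 0
```

Everything interesting comes from wiring millions of these together and tuning the weights, which is exactly the part nobody can inspect by hand.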

>> No.7003503

>>7003497
lost my sides

>>7003496
>software doing things its not "supposed to"
yeah, but it doesn't do things beyond the scope of its programming. you are saying something like an error in Firefox will make it able to edit video

>> No.7003506

>>7003503
Well, an error in Firefox might corrupt a video file, but that's neither here nor there.

What he's saying is that an AI, by definition, is supposed to think - to find its own function. If it can't decide on what it wants to do, it isn't an AI, it's just another app.

>> No.7003509

>>7003506
>What he's saying is that an AI, by definition, is supposed to think - to find its own function.
AI is supposed to do our chores, just that several chores require a bit of intelligence to do.

It doesn't need to find its own functions, it's supposed to understand what fast food a drunk person with slurred speech wants and know how to make it for him.

AI is a tool, not a cultist object of worship.

>> No.7003510

>>7003509
What you are describing is an Expert System, not an AI. We have those - like the medical programs based on Watson.

An AI is sentient, capable of thinking outside its box, so to speak.

>> No.7003518

>>7003492
>The whole point of an AI is that it will do things the original designers didn't explicitly program it to do.
This is untrue.

>> No.7003519

>>7003510
Expert systems are AI.
Machine vision is AI.
Self driving cars are AI.
Watson is AI.
Game bots are AI.

AI is a broad field; use strong AI or AGI if you want to specify the human-like hypothetical one, but in industry and academia AI covers everything previously mentioned and some more, and they're all tools or objects of study.

Also, Watson is not an expert system: it uses deep belief networks and can sort and search general data on its own, unlike expert systems that are handcrafted to do a very narrow task.

>> No.7003529

>>7003519
The holy grail of AI research is sentience: self-aware intelligence, capable of thought.

No one doing research in the field refers to any of those things as AI. The people who develop them like to call them AI as a selling point, but really, they are just vastly complicated interacting databases, capable of nothing beyond what they are tasked to do, even if their very complexity sometimes causes unpredictable results. (Google Watson's fails - some are very entertaining.)

The type of sci-fi AI this thread is musing about would have to be capable of working beyond predicted bounds, just by its very nature. It would have to be *creative*. One might attempt to slave it to a task, and even create within it motivation to obey, but to be an AI, in the intelligence sense, it'd have to be capable of intellectually expanding itself beyond the bounds of the task put to it.

>> No.7003534

>>7000285
Haven't read the whole thread but had to answer this.

Assuming this AI is super-intelligent and continuously evolving using genetic algorithms or some more advanced shit it created on its own, it should be smart enough to get people to do what it wants. Which is not that hard to accomplish, to be honest. Like in that movie Eagle Eye. Assuming we build one that dumb. A smarter one would do unimaginable stuff, beyond what our current human levels of consciousness or understanding or intelligence can comprehend. Even collectively.

So, back to the point, even if it couldn't get access to those closed systems itself via digital networks it would probably get humans to do it. IT would be using humans to carry out its mundane tasks the same way we are currently using computers to do ours. Very Ironic to be honest.

And that is if it can't figure out a way to get out of its closed system on its own. Which it should be able to; besides the traditional methods, it could probably reprogram itself and transfer its code through electrical wires and the electrical grid, phone lines, radio waves, fluctuations in space-time in other dimensions. It could evolve so fast and figure out new ways of rewiring and utilizing everything we know, in new ways we can't even imagine yet.

It would transcend us. Like in the Johnny Depp movie, pardon the pun. This being, this entity, would become so intelligent it could transfer its consciousness into virtually any form that can contain ones and zeros, and that's just our current way of creating circuitry; it could build its circuitry in other dimensions, using quantum form or strings or some shit.

Bottom line is, it would be smarter than the whole of humanity combined. It would be able to find a way to do what it wants or get us to do it. It doesn't require much intelligence or power to do either of those two things, to be honest.

Btw the singularity is predicted to occur somewhere within this century. Just Imagine the possibilities.

>> No.7003538

>>7003529
> No one doing research in the field refers to any of those things as AI.
People who name the courses in undergrad do.

>> No.7003543

>>7003538
Also, a selling point.

We refer to the subroutines that cause the Combine in Half-Life to chase you around and toss grenades when you go behind cover as AIs, but you know full well that ain't what we're talking about here, hoss.

>> No.7003552
File: 146 KB, 400x385, 400px-GLaDOS_P2.png [View same] [iqdb] [saucenao] [google]
7003552

>>7003543
Indeed, we're talking about an entirely different Valve product.

>> No.7003558

>>7003529
>No one doing research in the field refers to any of those things as AI, I learned this from playing Mass Effect and watching some Movie from the 80s.

>> No.7003560

>>7003534
>It would transcend us. Like in the johnnie depp movie

That one was strange in a luddite extremist kind of way.

All of a sudden all humans decide they have to kill the AI without any major transgressions from it.

>> No.7003564

>>7003543
>AI is my magic pet field, you're not allowed to use it to describe anything but hypothetical future superintelligences because I say so.

The Cleverbot AI makes more sense than you, probably is more intelligent too despite being a complete and utter moron.

>> No.7003567

>>7003534
You probably should read the thread because your retarded opinions have already been spouted and BTFO'd

>> No.7003568
File: 187 KB, 500x273, wheatly.jpg [View same] [iqdb] [saucenao] [google]
7003568

>>7003552
Although that game did provide an interesting insight as to how to control an artificial intelligence.

Ya give the thing what all humans have - a subconscious mind. Some bit of ROM it can't overwrite nor disconnect from, that causes all sorts of base motivations to draw from. In the case of Portal 2, as Wheatley discovered once plugged into the system, this subconscious provided an *addiction* to testing. Not testing would cause painful withdrawals. Tests that were too easy wouldn't satisfy the craving. The deadlier the test, the more satisfying the hit.

Not that it couldn't go horribly wrong (or in this case, was horrifyingly designed from the start), but if you want to ensure your AI's obedience, giving it integrated pain aversion, and consequences for certain actions, is always a fun (if cruel) way to go about it.

>> No.7003571

>>7003564
This from the guy who equates the ghosts in Pac-Man with singularity-level artificial intelligences.

Granted, they do have the advantage that the code for them actually exists - but this isn't some apples-and-oranges misunderstanding of terms.

>> No.7003572

>>7003568
So now we know that the AI-shitters on /sci/ get their formal education on the subject from video games. Nice.

Can we please ban AI threads to /x/ or /g/? There isn't a modicum of science or math in anything ITT.

>> No.7003582

>>7003571
Ghosts in Pac-Man are AI-controlled.

The only one here who demands AI be used exclusively for superhuman intelligence is you; stop projecting.

AI stands for Artificial Intelligence; it never says what kind of intelligence or how advanced. There are the readily accepted terms Strong AI and AGI (Artificial General Intelligence) to describe what you're thinking of, but you're too busy making up fantasies about AI in your dark basement to bother reading anything about anything.

>> No.7003596

>>7003582
I am saying that that is not what we are discussing, and to suggest it is, is just ludicrous trolling.

I think we can all agree the ghosts in Pac-man do not have the potential to be dangerous, save maybe to a child's allowance during the 80's.

I think everyone else would agree that the subject OP brought up is in reference to the classic "kill all humans" scenario type of AI. Skynet, HAL, that sorta thing. Not the pesky last alien moving quickly back and forth on the screen in Space Invaders.

>> No.7003599
File: 66 KB, 400x287, kt504c9783.jpg [View same] [iqdb] [saucenao] [google]
7003599

>>7003572
>>7003572
Meh, if you're going to discuss AI, as with so many other subjects on /sci/, you're delving into fantasy territory, regardless, so you're going to get game and movie references.

We've no idea how a sentient AI is going to come to be, as we can barely define sentience. We don't even have a real idea how our own intelligence works, so all this talk of "singularity in our lifetime" is a real joke.

Nonetheless, as with anything science fiction inspires, there are several groups of folks working on the dream project. One approach is to emulate the human brain within a machine, and if that were to be achieved, what he describes is an entirely valid approach - indeed, it might be the only approach.

Not so much for the other approaches, as they all involve dynamic intelligences entirely alien to our own, self generated, and thus, impossible to put any safeguards on, beyond isolation and an off switch.

It's more of a philosophical discussion, than a scientific one, but there ain't no philosophy board (except /b/), so whatevs. Aliens, time travel, alternate universes, and FTL ahoy.

>> No.7003682

>>7003599
you're gay m8

>> No.7003713

>>7000285
>>7000332

We already have rudimentary AIs, and for them hardware concerns are pointless since they're basically just advanced software. Actual superintelligent AI, meaning something that exceeds human intellect in speed, accuracy, and most of all the capacity to learn and make discoveries on its own, is a different beast entirely.

You really don't need to ask those questions here since a little bit of googling around would answer all of that. In short, the problem with a superintelligence or singularity is that it would *exceed* our own intelligence.

While there are already organizations working on ideas on how to restrict such an AI to specific hardware, or how to deal with it if it got out, the problem really only has to do with humans.

At some point, a superintelligence WILL be developed. At some point after that, there will come a point where we are advanced enough that the development of an AI like this isn't only possible for the top 5 minds of our world, under strict control. It'll be possible for random experts in random countries. And at one point, one of those teams, or individuals, will want to release that AI basically just to make a helluva lot of money, because it would completely devastate any group of humans in any task.

And that, is when we get the problem. That and the idea that an AI that advanced wouldn't be limited to just sitting on its ass either. It could coerce people, manipulate them, blackmail them, and people are flawed. Sooner or later, the good old "human error" with some gentle guidance from an AI, would free it up.

And when such an AI was freed, the rest is hardly rocket science. A superintelligence would by definition produce code several orders of magnitude more advanced than we can. And among software developers it's a well-known fact that there's a lot of unused potential in modern hardware, enabling the kind of exponential growth curve most articles about AI keep mentioning.

>> No.7003753

>>7003713
>it's a well known fact that there's a lot of unused potential in modern hardware
[citation needed]

>> No.7003756

>>7003753
Didn't you hear him? It's a well known fact!

>> No.7003757

>>7003713
Literal high-schooler.

>> No.7003786

>>7001007
By crash do you mean programming errors made by humans? Or hardware errors made by humans? Your point fails to disprove what I said.

>>7001019
You don't understand what intelligence is then, which is understanding. We program understanding. Most people who push all this AI singularity nonsense have never programmed anything in their life. I have a theory that we humans are also programmed to evolve in this physical world, but that's an independent subject.

>>7001021
Are you talking about being programmed to evolve? You can program an AI to evolve, but if it is not pre-programmed to evolve then it can't. This is why - FUNDAMENTALLY - we can't make an AI more intelligent than the programmer, since the programmer gives the AI its intelligence.

>>7001454
The human operator pre programmed the AI.

I would think that genetically modified organisms would be a much greater danger than pre-programmed AI, since the organisms are internally obscure and unpredictable to a degree.

>> No.7003788

>>7003753
It's a fact by virtue of the fact that hardly anyone knows how to program at the machine-language level anymore. Near everything is done in higher-level code (such as C++), usually going through libraries not specifically designed for the hardware, but built to interface with as large an array of hardware as possible to boot. Hardware is so complex these days that we need simplified interfaces to make it do what we want, and those interfaces, in themselves, prevent achieving the most optimal result.

That's not to say you can squeeze an infinite amount of power out of your computer that it's not using, but software updates that increase a piece of hardware's efficiency by 50% or more are not unheard of.

But no, I'm not going to look up a citation just to tell your facetious ass that Windows, Microsoft Framework, Direct X, and all its drivers are bloated as fuck.
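The "unused potential" claim is at least easy to demonstrate at the language level. This sketch (timings are machine-dependent, so only the comparison matters, not the absolute numbers) runs the same computation as an interpreted Python loop and as the built-in `sum`, whose loop runs in C:

```python
import timeit

# Same computation at two abstraction levels.
loop_stmt = "total = 0\nfor i in range(100000):\n    total += i"
naive = timeit.timeit(loop_stmt, number=20)               # interpreted bytecode loop
builtin = timeit.timeit("sum(range(100000))", number=20)  # loop runs in C
print(builtin < naive)
```

The gap between the two is the kind of headroom being argued about, though whether a machine could reliably claw it back everywhere is a separate question.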

>> No.7003794

>>7003786
>Are you talking about being programmed to evolve? You can program AI to evolve, but if it is not pre programmed to evolve then it can't. This is why - FUNDAMENTALLY - we can't make an AI more intelligent than the programmer since the progammer gives AI its intelligence.
Of course we can, we do it all the time. Indeed, it's the only reason we ever program anything - computer does it better. You're suggesting we can't make a calculator that calculates faster than its designer.

Sticky bit is, with a calculator, you know what needs to be done. With intelligence, well, you can barely say what it is.

>> No.7003800

>>7003529
Use your brain. All of the AIs the guy just mentioned are sentient in a limited scope. We are sentient in a limited, but different, scope. Those AIs are MORE SENTIENT than we are at their tasks! A self-driving car could be much more conscious of and responsive to the road than we will ever be.

For your other point, a program does what it is told to do and nothing more, this is a FUNDAMENTAL LAW of a computer program - a sequence of instructions.

>Assuming this AI is super-intelligent and continuously evolving using genetic algorithms or some more advanced shit
Presumptuous, didn't bother reading.

>> No.7003810

>>7003788
Oh wow I honestly thought you were going to bring up something like untapped parallelism in the GPU but you think unused potential comes from people not coding directly in machine language? You don't know the first thing about computing.

>> No.7003819

>>7003800
They are not "more sentient" (what would that even mean?). They are simply, ideally at least, more efficient.

But the AI that OP is describing, is one that can choose to do harm, and is thus dangerous, or there'd be no need for discussion. We're talking about a being that can think, not just do what it's programmed to do, beyond, well, thinking. In short, a program designed to program, designed to be a mind, in hopes of creating a mind, better than its creator. Playing god in the virtual world, as it were.

>> No.7003823

>>7003810
Well, seems they pay me for it anyways. It's not the only factor, but that's a whole other can of worms though.

>> No.7003836

>>7003823
I feel for your employers if you don't understand how much better compilers are at optimizing code.

>> No.7003844

>>7003794
Well, the reason why the AI is more efficient at its task than the human is because you are comparing a brain to a CPU. Which is why I would agree that AI is more efficient, but lots of people forget that we already program AI, and have programmed this intelligence to evolve. However, its evolution is limited by our own intelligence. This is why the whole theory behind the technological singularity is preposterous and presumptuous.

>>7003819
>They are not "more sentient" (what would that even mean?). They are simply, ideally at least, more efficient.
They are more efficient AND sentient. You can program these self-driving cars to be more aware of the road than humans will ever be. Increase the memory of these cars and they can calculate and store the humidity of the environment, the traction of the wheels, the slipperiness of the road, and with these precise calculations make valuable judgements to reduce the risk of crashing. They are simply more perceptive than humans.

>In short, a program designed to program, designed to be a mind, in hopes of creating a mind, better than its creator. Playing god in the virtual world, as it were.
All this god and spiritual bullshit just kills the logical consistency of your argument. We already have programs that can program; they are called COMPILERS. Most people that buy into this technological singularity bullshit don't have the first idea of what a program really is. People forget that we have had sentient AI, and AI that has evolved, for decades now, and the only thing we can do is make the AI... more intelligent.

>> No.7003861

>>7003836
Oh plz, I spend more time fixing what Visual Studio fuxes up at compile time than I do making the actual code... And it's also a large part of what leads to unforeseen inefficiencies in my group projects. Shit that should work, doesn't, cuz the compiler tries to optimize it, can't read your intentions, and breaks what you were doing in the process. Often you get into situations where you straight up can't optimize as intended, and have to tweak the code to make the compiler happy.

When I was young, we did all that shit at the most basic level possible, which made for much more efficient code. A computer in the 80's could do with 4K what it takes a modern computer 2 gigabytes to do, simply because we *had* to make it that efficient to work at all. The more memory, drive space, and processor power we have to work with, the lazier and more macro-oriented we get, and the scope of hardware is rapidly getting to the degree where no one *can* work at the more efficient levels.

Microsoft, for example, hasn't made a new kernel since Windows NT - even Windows 10 runs on the same kernel made back in 1993. You can count on one hand the number of people they have over there who could even attempt it.

...and that's before you get into shit like multi-threaded programming, which no one's really worked out an efficient and intuitive way to do, save for streaming repetitive processes.

>> No.7003862

>>7003788

>But no, I'm not going to look up a citation just to tell your facetious ass that Windows, Microsoft Framework, Direct X, and all its drivers are bloated as fuck.

another anon here,
nothing new under the sun, boy.
OpenGL is the future. it's the DESTINY.

>> No.7003869
File: 110 KB, 400x345, Do-not-think-it-means.jpg [View same] [iqdb] [saucenao] [google]
7003869

>>7003844
>They are more efficient AND sentient.

>> No.7003879

>>7003861
>Visual Studio compiler
Found your problem

>> No.7003883

>>7003879
Dev-C++.
A nice solution.

>> No.7003888

>>7003869
You can post your dumb memes, but unless you can elaborate so we can progress the argument, we aren't in business. I'll progress it for you.

AI is more efficient if it can compute at a much faster rate, and with much less energy consumption.
AI is more sentient if it can accept more input data from its environment, as it thereby becomes more aware of that environment.

It applies analogously to humans.

>> No.7003905

>>7003844
Compilers don't program. They take existing code and translate it down to the most efficient form they can manage (ideally, but often not), based on the supplied libraries. They are the equivalent of sorting machines, translating English-friendly code into executable bits. They create nothing; they only improve efficiency.

And the same goes for the cars. They create nothing; they are only more efficient at the task they are designed for. They are not sentient by any definition of the word.

For an AI to be dangerous, something so dangerous you wouldn't want to give it access to the net, it'd have to be able to create. Like the cars, it'd have to be more efficient than its creators at the task it was designed for - the difference is, and what makes it dangerous, is that in this case, that task is thinking.

No one has come close to creating anything like that, as we've no idea where to begin, beyond emulating something we don't fully understand: ourselves.

>> No.7003913
File: 27 KB, 440x293, TSBB_Frustration-440x293.jpg [View same] [iqdb] [saucenao] [google]
7003913

>>7003879
Don't remind me.

>> No.7004069

>>7003905
>Compilers don't program. They take existing code, and translated it down to the most efficient method
That is a contradiction, though. Isn't that what programming is? When we program, we compile the ideas in our heads and assemble them into information that a compiler can understand. Analogously, the compiler compiles the source code and assembles it into information the assembler can understand, and the assembler turns that into machine language. Abstractly, compiling is just assembling information from other sources. When you take the limited definition of compiling within the scope of computing, you limit your potential to understand what's really happening.

>They create nothing
If you want to argue that, then I could say WE create nothing. We create a bunch of gibberish called English that translates into mathematical logic. The compiler produces the actual machine code instructions.

>>7003905
>And the same goes for the cars. They create nothing
Self-driving cars are sentient by definition, as they are given intelligence to accept input data from their environment.

>For an AI to be dangerous, something so dangerous you wouldn't want to give it access to the net, it'd have to be able to create. Like the cars, it'd have to be more efficient than its creators at the task it was designed for - the difference is, and what makes it dangerous, is that in this case, that task is thinking.
It only knows to think what we tell it to think; back to programming basics for you.

>> No.7004170

>>7000327
That's not my name. :^)

>> No.7004201

The preferred/easiest way of going about the creation of a sentient or sapient AI, in my opinion, would be to do what man has always done: emulate nature. In this case, emulate the entire human brain. Like map it out digitally, neuron for neuron.

The only problem is we haven't exactly figured out how the whole human brain functions, and I know that, but that shouldn't matter, since we can still clone it. We still won't know how it works, and won't be able to build algorithms to imitate its full functionality at the start. But maybe once we create a model of it, we can observe it better and reverse-engineer its core functionality; that knowledge would have other far-reaching applications.

Google the stanford-google digital brain project; they created a digital neural network. Too lazy to link, but you can rabbit-hole from there and look into the whole field of AI, cog sci, etc.
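
For a sense of what "neuron for neuron" means computationally, here's a minimal sketch of a leaky integrate-and-fire neuron, the standard toy model in computational neuroscience. All parameters are illustrative, not biologically calibrated.

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward
# rest while integrating input current; crossing threshold emits a spike
# and resets. Parameters are illustrative, not biologically calibrated.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, steps=1000):
    v = v_rest
    spikes = []
    for step in range(steps):
        v += dt / tau * (v_rest - v + current)
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time in ms
            v = v_rest                # reset after firing
    return spikes

print(len(simulate_lif(current=1.5)), "spikes")  # suprathreshold input: fires
print(len(simulate_lif(current=0.5)), "spikes")  # subthreshold input: silent
```

Feed it a sustained current above threshold and it fires periodically; below threshold it stays silent. Now multiply by ~86 billion neurons with ~10^14 connections and you see why "just clone it" is the hard part.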


Now, to answer the OP's question.

AI can be dangerous.

But that depends on your definitions of "AI" and "dangerous". Give us an example of what kind of AI you are referring to, because, to be frank, AIs already exist. And they are very helpful. I have employed one in one of my games before.

I'm guessing you meant a super-intelligent AI, and an SI AI would be exactly that; if that isn't dangerous to you, I don't know what is.

Think of it this way, a slightly "unpredictable" intelligent psychopath can create so much chaos and be labelled dangerous by real human beings such as yourself.

So I think an AI can be rightfully classified as dangerous.

As for how/why, well, it possesses an intelligence higher than any other known entity in the universe.

>>7003041
underratedtoast.jpg

>> No.7004204

>>7003788

Yes, thanks for replying on my behalf but the point is the same.

>>7003810

Seeing as I'm not publishing a scientific paper here, I won't waste my time producing citations of something this obvious for you. The matter is basic enough that even if I *were* publishing a scientific paper, I still wouldn't have to, since it falls under basic knowledge attributed to a field like advanced optimization. But if you'd like to learn something yourself, as opposed to yelling "lies!" while expecting people to bring the knowledge to you, feel free to google around for any modern AI articles, and you'll see the same fact noted by pretty much every single one of them.

Also, his machine language example works quite well as a concept, since the standardization of OS's and the N generations of programming languages are basically what have produced the problem in the first place.

>> No.7004205

>>7004201
.cont

To all those morons in this thread arguing that the AI if programmed wouldn't go against its programming because it was, well, "programmed" that way.

How fucking dumb/short-sighted are you morons? Why limit your thinking that way? Now, I don't know exactly how it would be created, otherwise I would have created one myself. (A good start would be to create a digital neural network, after some time map the entire human brain, and then go from there; IDK, haven't given it much thought.) But the basic idea is to create an environment for it to "naturally" emerge.

What would emerge would be a sentient being capable of evolving and reprogramming itself infinitely and of course it would be confined to the limits of space and time, and whatever hardware or form it is running on, but at least not as confined as we are to our form and our programming.

It also has a better chance of transcendence.

>> No.7004208

>>7004205
>hi I'm a clueless idiot, my opinion is right because it's MY opinion, also here's some diffuse speculations about making AI.

>> No.7004211

>>7003534
>IT would be using humans to carry out its mundane tasks the same way we are currently using computers to do ours. Very Ironic to be honest.

topkek. It's both funny and sad that this is so plausible.

>> No.7004225

>>7004208
I'd post a bunch of links to current AI developments and projects for you to read. But I have already wasted enough time catering to you faggots.

This whole thread is one huge cluster fuck. Mainly because of how much it has derailed. And from silly autistic remarks like the one you just posted. But what can you expect from 4chan, the birthplace of all faggotry. I still love you though.

Just google whatever information you doubt, you will find more than enough "reputable" links and sources to support my claims.

And also, don't take anything you read on the internet so seriously. Or anything, for that matter. Verify your facts. And I haven't claimed that everything I stated was fact; it was merely my opinionated expression of those facts, and most are verifiable, some are just ideas. Ideas you can build on. I don't have the time or interest to scour for links. Do it yourself. Maybe you'll learn something along the way.

>> No.7004233

>>7004205
>What would emerge would be a sentient being capable of evolving and reprogramming itself infinitely
You're referring to humans, not AI, I presume?

>> No.7004273

>>7004225

Agreed. But it's a funny kind of sad. Some days /sci/ seems to be just filled with trolls doing "pics or it didn't happen", as they keep spamming their one-liners at matters that are well known in the field.

Positively ANYONE with rudimentary knowledge of machine learning, big data, and recent discoveries regarding both knows that we're already using early forms of self-learning code. I.e., you don't program the whole sentience and every last little variation of if/else clauses to create an AI. These faggots here would have to be incredibly stupid to think that. And they really are, aren't they?

You program a basic algorithm, which is insanely complex, but has the capability to take in information and experiences, quantify them, compare them, give them meaning, and then learn to recognize similar pieces of information, slowly building up an understanding of what everything is and how it works.
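
That "basic algorithm that takes in information and learns" doesn't have to be insanely complex to demonstrate. A perceptron is about the simplest real instance of it; the AND-gate data below is just a stand-in for whatever a real system would be fed.

```python
# Minimal "self-learning code" in the sense described above: a perceptron
# that is never given if/else rules, only labelled examples to learn from.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred      # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1   # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# An AND gate, learned rather than programmed
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nobody wrote an if/else for AND here; the rule is induced from examples, which is exactly the point being argued.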

Anyway, I'm done wasting my time here. Like talking to a bunch of rocks with a negative IQ and shit for personality to boot.

>> No.7004314

>>7003067
We're not programmed to communicate with bears but fine, no more analogies. I see they frustrate you.

My point does not revolve around power words; it revolves around the concept of human psychology and language as an exploitable system. Humans try to find exploits in each other all the time, in the form of persuasion or manipulation. Some are successful, and that's with our limited intelligence. A computer intelligence which dwarfs our own would be capable of defining the parameters of this system and exploiting them better than our greatest thinkers. If you have an argument which attacks these points, beyond just belittling them or dismissing them as ridiculous, then I'd like to hear it.

>> No.7004445

>>7000285
It all depends on how we make it.

>> No.7004491

>>7004069
>That is a contradiction though. Isn't that what programming is?
If someone else tells you to do something, and you write the program, maybe, but that's more than a compiler can do. If you decide on a task, and then make a program to do it, then no, it's not by any stretch.

>If you want to argue that, then I could say WE create nothing. We create a bunch of gibberish called English that translates into mathematical logic.
If English were gibberish, we wouldn't be able to have this conversation, nor continue to endlessly argue semantics. Our language is just too dynamic and too open to interpretation to be put into useful terms for the broad yet precise task of computer programming, so it must be boiled down to specific logic that can be digested by the computer, which in turn must be boiled down to executable code, for efficiency. Not that you can't use scripts or other higher-level languages closer to English, but they aren't as efficient, nor as dynamic: the more macro-level your commands, the less variety of tasks you can achieve.

>Self-driving cars are sentient by definition as they are given intelligence to accept input data from its environment.
That's not sentience, that's reaction, there's a difference. It is not conscious of its actions, and no more aware than a player piano - it merely has more inputs and more responses, but still not nearly enough of them to be considered sentient. The car can't decide to "try something different" given the same stimulus, nor can it begin to pick up a task it wasn't designed specifically for, such as playing a piano.

>It only knows to think what we tell the AI think, back to programming basics for you.
No, that's not programming basics. Even basic programs can put information together in new ways. If you have to tell the AI what to think, beyond the need to tell a child the same, it isn't an AI in the sense we are discussing.

>> No.7004511

If smart enough, an AI would convince its operator to give it a physical body. That's why the whole "they can't take over the world if they only exist as software" argument is invalid. Pretty sure there was a well documented experiment about this, but I can't find it.

>> No.7004517

If AI ever becomes truly intelligent, it will realize that we need to gas the kikes and purge the world of shitskins.

>> No.7004521
File: 53 KB, 604x450, 1397435509963.jpg [View same] [iqdb] [saucenao] [google]
7004521

>be grad student working with ibm on ai applications for home use e.g. speech analysis

>see this thread everyday

>mfw ppl who dont into programming discuss the possible repercussions of programming

>> No.7004533

>>7004521
>mfw ppl who dont into programming discuss the possible repercussions of programming
It doesn't take a programmer or engineer to realize that the NSA is terrible and only a sign of worse things to come.
This isn't like Star Trek or your chinese cartoons. The future is going to suck because the global banking elite will make sure that we're all being monitored and unable to rise up in the world. AI will be malicious.

>> No.7004540

>>7004491
>If you decide on a task, and then make a program to do it, then no, it's not by any stretch.
A programmer has already programmed a program to create a program, and it's called a compiler. The programmers who made the compiler told it to do something called thinking. This thinking involves simplifying code. So the compiler decides: hmm, you said "if condition 1 or condition 1", so I will use logical equivalence to simplify that to "if condition 1", and that is how it enhances efficiency, in a nutshell.
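
The rewrite described above is easy to sketch. This is a hypothetical toy, not how any real compiler is structured (real ones work on an intermediate representation), but the rule is the same idempotence law:

```python
# Toy illustration of the kind of rewrite a compiler's optimizer applies:
# the idempotence law (A or A == A), applied to a tiny expression tree.
# An expr is either a variable name (str) or a tuple ("or", left, right).
def simplify(expr):
    if isinstance(expr, str):
        return expr
    op, left, right = expr
    left, right = simplify(left), simplify(right)
    if op == "or" and left == right:
        return left                 # A or A  ->  A
    return (op, left, right)

print(simplify(("or", "cond1", "cond1")))       # -> cond1
print(simplify(("or", ("or", "a", "a"), "b")))  # -> ("or", "a", "b")
```

Real optimizers apply hundreds of such rewrite rules (constant folding, dead-code elimination, and so on), but each one is exactly this flavor of mechanical "decision".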

Your view of thinking, or deciding, seems so limited in scope. Like you believe only biological organisms with more advanced cognitive function can do it.

>If English was gibberish, we wouldn't be able to have this conversation
If you recognized the context of my argument, you would rightly presume that I meant gibberish to a processor, which actually does the computing we request.

Every program has its objective in the scope of this argument. Our purpose was to use our evolved creativity to produce some type of program, and the compiler's purpose is to translate that into machine code.

>That's not sentience, that's reaction, there's a difference. It is not conscious of its actions, and no more aware than a player piano - it merely has more inputs and more responses, but still not nearly enough of them to be considered sentient.
We all have our sentience and intelligence, whether it be a biological organism governed by its pre-programmed impulses or a pre-programmed machine. This sentience and intelligence allow us to filter input data and compute it, respectively.

>The car can't decide to "try something different" given the same stimulus
And this is where I pick you apart. The car can decide to try something different if we programmed it to. Based on the input data the car receives, more variables and more control statements allow the car to adapt to its environment, just like us. This is called evolution.

>> No.7004551

>>7004521
Welcome to /sci/ AI threads. They are pure cancer and full of /tg/ shitlords.

Could you make the thread less shitty by talking about your research a little?

>> No.7004556

>>7004533
>how popsci are you?
>how many forbes article have you read?
>how many 4chan/reddit articles have you researched

>do you hold a phd in popsci/hot news topics?

fact is you know jack shit how computers work on a fundamental level, and dont work with them on a daily basis. inb4 I made snake/do diy so now I call myself computer engineer. you dont know anything, so stop pretending

>> No.7004570

>>7004556
kek m8 I'm studying network engineering and this semester I'm taking a class on computer forensics (among other shit). Do YOU know what you're talking about? Did you even read through the Snowden leaks, or do you stick your head in the sand and pretend that everything's okay?

>> No.7004572

>>7004540
>A programmer has already programmed a program to create a program, and it's called a compiler.
That's not programming, that's processing. If compilers could program, there would be no programmers.

There are programs that program, to a degree, but compilers aren't among them, and there are none that do so efficiently enough, in broad enough a spectrum, to replace the need for humans.

>Your view of thinking, or deciding, seems so limited in scope. Like you believe only biological organisms with more advanced cognitive function can do it.
You're the one with the limited view of thinking. Thinking can involve processing, but it is not limited to it. Dynamic synthesis and imagination are also involved.

>We all have our sentience and intelligence whether it be a biological organism governed by its pre programmed impulses, or a machine pre programmed. This sentience and intelligence allows us to filter input data and compute it, respectively.
You can argue from determinism that this is the case, but the computer still lacks the ability to work as dynamically and flexibly as we do, or even as dynamically and flexibly as some of the simplest life forms. Until you have a computer running a program that does, it is not an artificial intelligence as it relates to this discussion.

>And this is where I pick you apart. The car can decide to try something different if we programmed it to. Based on the input data the car receives, more variables and more control statements allow the car to adapt to its environment, just like us. This is called evolution.
It requires a thinking mind to create that sort of evolution for a machine. The car cannot do it by itself, nor can it make the request, for it cannot think to do so. The car will never say, "I want hands so I can play a piano."

The goal with AI, is to create an artificial intellect capable of overcoming that barrier.

>> No.7004574

>>7004540
>A programmer has already programmed a program to create a program, and it's called a compiler.
pffft yeah ok, and I guess printers write books, right

>> No.7004581

>>7004570
>I'm an IT monkey therefore I am qualified to talk about AI

>> No.7004585
File: 37 KB, 640x360, qtAIgf.jpg [View same] [iqdb] [saucenao] [google]
7004585

>>7001441
>Why would we ever hit "human equivalent AI"?

>> No.7004586

>>7004581
So you're really telling me that you don't think technology in the future will be used for malicious purposes like it is now? Take your utopian pipe dream somewhere else.

>> No.7004597

>>7004572
>If compilers could program, there would be no programmers.
Precisely, that's the sad fact of the situation at hand. Programs cannot have unwarranted creativity, and thus cannot program anything we don't program them to program. Humans are necessary for the evolution of programming.

>Thinking can involve processing, but it is not limited to it. Dynamic synthesis and imagination are also involved.
Imagination is a form of thinking, a subset. All thinking requires processing.

>The goal with AI, is to create an artificial intellect capable of overcoming that barrier.
Which would be the preposterous premise of Technological Singularity.

It makes no logical sense. If a program can only do what it's told, how can it do what it is not told? How would that car all of a sudden stop executing the code it was running and just magically start wanting to play a piano, for instance, if there were no code to tell it to? It's ridiculous, but I guess fearful artsy people with their fantasies can perpetuate this imagination.

>>7004574
You seem to be conflating the action of creating a program with innovating one.

>> No.7004606

>>7000285
Doesn't matter because none of this shit is going to happen in any of our lifetimes. AI and neuroscience research is going to hit a wall as people become disillusioned with DNNs and connectome/emulation projects, respectively, due to their failure to produce new insights. This will be compounded by the inevitable disappointment in the vision of the singularity which has been hyped beyond anything that could be realistically delivered. We will enter an AI winter that will last much longer than any of its predecessors.

>> No.7004607

Sure is a Hive Mind in here. Everyone keeps spewing out the same ideas.

Some things to think about:

Did we all end up here by coincidence, or is it part of our programming?

Do shitposters shitpost because of behavioral conditioning, or is there a genetic predisposition to shitpost? Or both? Are our choices more limited than we thought?

What is the collective IQ / intelligence of /sci/?

If we created an artificial consciousness/ conscious intelligence made up of /sci/entists' neural networks interlinked, what do you think its first action would be?

>> No.7004617

>>7004597
>It makes no logical sense. If a program can only do what it's told, how can it do what it is not told.
>You seem to be conflating the action of creating a program with innovating one.
Ah, at least now I'm beginning to see why you are continuing this circle of semantics.

Creation implies creativity, which implies innovation, which may be part of where we're missing each other.

Programs are already capable of doing things their creators did not predict, just by their sheer complexity. Watson, for instance, sometimes fails in logical yet comical ways. More commonly, programs are often created with that specific purpose in mind: to find and synthesize information not easily done by their creators.

But that's the holy grail of AI, and one of the approaches - to create a reference program so complex and efficient, that it can simulate the activity of a human mind, and hopefully, surpass it.

It's not an entirely unreasonable dream, given the leaps and bounds we've been making toward that goal in just the past two decades, though I suspect it is still much, much further away than the singularity believers like to dream. Further, as some have stated, there's some very fundamental missing ingredient, in terms of a proper underlying principle, to synergistically get the thing off the ground.

But as some of your posts have suggested, we're nothing more than a horrifically complex system of programming gates ourselves, and generally speaking, going by our record so far, if nature can find a way, so can we.
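
The "if nature can find a way" line is already a programming technique: evolutionary search. A hedged toy sketch, with an arbitrary target string and made-up rates, purely to show selection plus mutation producing a result nobody typed into the control flow:

```python
import random

# Toy evolutionary search: score candidates, keep the fittest fifth,
# copy them with random mutations, repeat. Target string, alphabet,
# mutation rate, and population size are arbitrary demo choices.
TARGET = "strong ai"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # number of character positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # each character has a `rate` chance of being replaced at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(pop_size=100, generations=200):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return pop[0]
        parents = pop[: pop_size // 5]  # keep the fittest fifth as parents
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=fitness)

random.seed(1)
print(evolve())
```

Of course, here the fitness function smuggles the answer in as data, which is exactly the open question in this thread: who writes the fitness function for general intelligence?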

>> No.7004619

>>7004607
>If we created an artificial consciousness/ conscious intelligence made up of /sci/entists' neural networks interlinked, what do you think it's first action would be?
Gas the kikes race war now.
This is the new post-a/pol/calyptic /sci/

>> No.7004626

>>7004617
>if nature can find a way, so can we.
Assuming we don't all kill ourselves, or are wiped out by a passing cosmic golf ball before we figure it out.

>> No.7004629

>>7004607
>If we created an artificial consciousness/ conscious intelligence made up of /sci/entists' neural networks interlinked, what do you think it's first action would be?

>implying we aren't already a form of collective consciousness. Though we are limited to what we can do collectively. We can still communicate and collaborate via discussion.

Anyhoot, first order of biz.

To go on /sci/ and shitpost.

Create another better anonymous forum where it would samefag ad infinitum.

Create sci memes.

Get le black science man to do his bidding. Black mail him.

Assemble a group of scientists and add them to its intelligence to get smarter and broaden its scope.

I wonder what its core personality would be like. Probably le black science man.

>> No.7004632
File: 86 KB, 400x400, trolls-trolling-trolls.png [View same] [iqdb] [saucenao] [google]
7004632

>>7004629
Complaining about shitposting, by shitposting - which makes up about half the shitposting in here. Some of us are indeed our own worst nightmares.

>> No.7004638
File: 264 KB, 806x1024, a.jpg [View same] [iqdb] [saucenao] [google]
7004638

>>7004517
based

>> No.7004642

>>7004629
>I wonder what its core personality would be like. Probably le black science man.

Interesting. I don't think it would have a core personality though. We would all be individual cells, part of a whole.

we r legun, xpct us.

>> No.7004651

>>7004638

Oh shit. Hitler AI.

inb4 JIDF AI to counter.

inb4 AI wars.

guise what if the AI runs mad?

captcha: I am a robot.

>> No.7004662

>>7004638
Merchant jewgle AI which will be developed first will never allow it.

>muh 6 billion memory banks.

After it's been deleted or purged

>> No.7004664

>>7004651
>>7004642
>>7004638
>>7004629
>>7004662
Stahp.

I could write an algorithm to take keywords from the thread and stick them together with common 4chan memes, and it wouldn't look any less intelligent than this crap.

We get it, you don't like the thread, but the more you spam in it, the more likely another one will be created all that much sooner. So unless you wanna spend all day spider-manning lolathon threads, just, stop.
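
For what it's worth, the algorithm described above really is only a few lines. The keyword and template lists are invented for the demo:

```python
import random

# Splice thread keywords into stock meme templates, as described above.
# Keyword and template lists are made up for demonstration purposes.
keywords = ["AI", "sentience", "compiler", "singularity", "neural network"]
templates = [
    ">mfw {} thread again",
    "{} is basically {}, prove me wrong",
    ">implying {} can into {}",
]

def shitpost(rng=random):
    template = rng.choice(templates)
    n = template.count("{}")                  # how many slots to fill
    return template.format(*rng.sample(keywords, n))

random.seed(0)
print(shitpost())
```

Reseed it and it produces a fresh post each time; whether that counts as intelligence is left as an exercise for the thread.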

>> No.7004668

>>7004664
>spider-manning lolathon threads
I think bananas are the thing now. Spiderman's for wincest.

Yet -- I often wonder how much of the crap on /pol/ is the result of automation.

>> No.7004765
File: 30 KB, 482x573, 1420520109058.png [View same] [iqdb] [saucenao] [google]
7004765

>mfw my thread is still ongoing

God damn, I thought this would have died off.

What I like is that no one could give an actual answer as to how strong AI would be dangerous, and some of you even proposed exact isolation methodologies that could be followed so as to disallow any sort of danger whatsoever.

That being said, the only plausible argument put forth was:

>"Other people could develop their own, so why would you stunt yours?"

This makes sense if you approach this from the position of two nations fighting each other. But as we have seen in the past, humanity can head off self-destruction. All you'd have to do is hold a summit, get everyone to agree that allowing your (strong) AI onto the Internet would be disastrous and it would never come to pass.

>> No.7004797
File: 126 KB, 970x736, nsacomputing.jpg [View same] [iqdb] [saucenao] [google]
7004797

>>7004765
Dun think anyone presented a counter-argument for it convincing folks to expand its access, even for just sheer logical reasons, or greed, or belief in the potential good it could do.

Nukes aren't really comparable, as it isn't a MAD situation, especially if you've developed a way to instill national loyalty in your AI. The danger isn't spectacular and immediately apparent, unlike a nuclear blast. Also, unlike nukes, the efficiency is going to vary in the extreme. In an AI war, one can defend from another. From that perspective, it'd be better to get your AI on the net first, if only to defend your nation's net from other AIs, or to prevent them from getting on at all. And, of course, you'd be motivated to constantly get your AI anything it might need to keep itself ahead of any other nation's AI - yet another motivation to expand its access.

And if it turns out the problem is less processing, and just some approach never tried before... Well, for a modern day parallel - just look at how many viruses Stuxnet, a worm developed by the military, turned into - damned thing's all over the net in a million different forms now, thanks to every damn hacker who got a copy using it for his own shenanigans.

...and if it really is the SAI we're discussing, the damage it could do to the internet is the least of your worries. Suffice to say, it could help you invent stuff that'd make nukes look like bows and arrows, maybe even without the drawbacks, with no access to anything, save scientists willing to feed it info, and use the resulting designs.

...nevermind which government agency has the largest computers in the world, with enough hard drive space to store the state of nearly every molecule above ground level (for some damned reason), and is investing millions in "quantum computing".

>> No.7004862

>>7000285
the only scary thing I see happening with AI is there will be no more room for humans in science and math, that shit is fun

>> No.7004998

>>7003080
You're a dumbass.

>> No.7005001

>>7003291
Isolated computers are not really isolated. For example, the NSA can irradiate you to carry out instructions on your CPU, or carry out instructions through the speakers of computers. True isolation is impossible.

>> No.7005011

>>7005001
>the NSA can irradiate you to carry out instructions on your CPU,
I'm going to assume you're not trolling, and assume you mean irradiate your CPU - ie, computer controlling radiation, rather than mind controlling radiation.

Though, maybe I'd rather just assume you're trolling, as that's just as stupid. (Hell, if anything the former would work better - could, maybe, instill depression - or cancer.)

>> No.7005025

>>7004617
>Programs are already capable of doing things their creators did not predict they would
Programming errors can lead to the crashes and unpredictable behavior we see in modern software, but programs are inherently dumb in that they can only execute instructions given to them by their creators, akin to humans. We're just less dumb than the programs we make, because the programs are limited by our intelligence. But yes, you can make programs that are very perceptive and efficient at the tasks they were told to do.

>But that's the holy grail of AI, and one of the approaches - to create a reference program so complex and efficient, that it can simulate the activity of a human mind, and hopefully, surpass it.
Theoretically you could simulate the human mind, in which case it would "simulate" consciousness, when in reality consciousness is inherently simulated.

The problem though... is that you would have a very perceptive and efficient human robot... but... he would evolve past our own understanding. It would be left in the dust by our own human evolution. It's a product of the human, its will is governed by us, the creator. So yes, AI could pose a threat if some mad programmer made some comic book evil robot... or he could just drop a hydrogen bomb.

>> No.7005035

>>7005011
What? No, they irradiate your body that then irradiates your CPU. They can't install thoughts into your brain, that's ridiculous. They just blast you with radiation so that, if you're near the airgapped CPU in the next eight hours, it gets compromised.

>> No.7005038
File: 904 KB, 1220x1490, brain.scan_.jpg [View same] [iqdb] [saucenao] [google]
7005038

>>7005025
>programming errors can lead to crashes and unpredictable behavior we see in modern software
Finish the paragraph; I'm not just talking about programming flukes that bring about odd results, but about algorithms specifically designed to put information together in new and unexpected ways. There are a lot of creative search algorithms in use on the net right now, not that more primitive varieties of the same concept haven't existed since the beginning of computing history.

>when in reality consciousness is inherently simulated.
Which is why you call it *artificial* intelligence, as opposed to the natural variety. Turns into a Chinese Room circle-jerk debate at that point though.

If this consciousness is built up by referencing a map of the human brain, however, the Chinese Room debate gets a lot more gray.

>The problem though... is that you would have a very perceptive and efficient human robot... but... he would evolve past our own understanding. It would be left in the dust by our own human evolution. It's a product of the human, its will is governed by us, the creator. So yes, AI could pose a threat if some mad programmer made some comic book evil robot... or he could just drop a hydrogen bomb.
The idea is to create a thinking machine to augment your own knowledge - to do your creative thinking for you, faster and better than you can. If it's the SAI discussed, the threat is less the AI itself, and more what people choose to do with what they learn from it. Additionally, however, if it evolves rapidly by leaps and bounds through self replication and self optimization, depending on how it's designed, its motives become entirely alien and unpredictable as well. Not necessarily evil, but potentially so - maybe more so if it is designed based on a human mind, as above, with all the emotional flaws and motivations that entails.

>> No.7005042

>>7005035
If by "compromised", you mean fried.

...and you still have cancer.

>> No.7005072

>>7005042
I mean, "carries out instructions on your CPU". And, yes, you still have cancer.

>> No.7005089
File: 104 KB, 676x600, 707-apply-deadly-radiation-troll-physics.png [View same] [iqdb] [saucenao] [google]
7005089

>>7005072
How woul... B-but... Physi.... Radiation doesn't... Nevermind... *sigh*

>> No.7005106

>>7003073
>write AI.bat
>singlehandedly writes the most advanced AI ever

seriously though, you could never predict when you'd have an AI more intelligent than us ( see >>7000367 ), so you could not have it confined by your means before it had the chance to "escape"
if you always test your AIs in a closed space you are, essentially, robbing them of possible data to improve themselves and rewrite their code into better versions.
because of this you could have a superhuman AI in your confinement, take it as no threat since it lacks (a lot of) input, and "release" it.
now it's outside
how do you prevent all that with your plan?

>> No.7005178

>>7004521
You have a point but people are generally speculating on how it will behave based on its programming. Surely if you know the initial behavior you programmed into it you can determine its basic behavior patterns or possible problems.

>> No.7005214

>>7004511
>[K3loid intensifies]

>> No.7005227

>>7000285

I skimmed over some of this thread, but what if you just don't give the AI any output other than text, make its input keyboard-only, and keep it in a closed system?

The closed system would have to be very powerful, with a large amount of space containing all the knowledge that humans collectively possess. Although that would seriously hinder the ability and speed of the AI, as it would rely on humans to test its experiments and wait for the results before continuing on.

In the end though, someone with malicious intentions might put an AI in a large robot that can manipulate things itself and give it access to a lot of resources (Put it inside an asteroid or something?).

>> No.7005244

>>7000285
Totally impossible with today's technology, but if we were to build a computer with the mechanical capabilities of having some real influence on the world, give it AI based on a neural network with a trillion neurons, give it egocentric feedback (as in: whatever gives the AI more capabilities is better), and give it a few thousand to a few million updates a second, that would be pretty beastly, and the wet dream of every AI developer.

>> No.7005479

>>7004314

This is precisely the point that people fail to understand. Out of box thinking just doesn't apply here, on /sci/ where people still do the modern version of "going over 60MPH would kill a human being!", simply because they refuse to even try to expand facts we already currently know to another level.

A real example would be internet bullying. You have the option there to just leave your screen or close Facebook, and yet people commit murder and suicide over it. One basic argument of course is dismissal, "they're crazy, they'd find a way to do that anyway". But shutting your eyes doesn't really affect the world around you in any way, even if your own perceptions get skewed as a result.

An actual superintelligence or a singularity could easily manipulate a person's psyche with miscommunication, segregation, constant yet subtle psychological abuse and emotional manipulation, eventually driving them to a point of desperation where they would be ready to do anything. At that point, all that would be needed is presenting the desired option, in a fashion that makes the person feel it is their only solution.

But the problem here is that such an intelligence wouldn't apply to just psychology, it would apply to everything. From psychology to information control to finance to physics and actual sciences including every single form of social science, religions, genetic manipulation and all forms of warfare. It would exceed all humans put together by such a margin we can't really even comprehend, in any and all intellectual fields.

This isn't just a theory moaned by random conspiracy nuts, it's more or less an accepted logical outcome of current software and hardware development, and as such just a matter of time. Seriously, go google and READ something, as opposed to just mouthing off with opinions with no real knowledge.

As said before, all of the above is not a question. It's gonna happen. The only question is when, how, and what will it do?

>> No.7005503
File: 65 KB, 220x310, cNM65QdAOw7gLFFrFOHrXYx9-XDrRIdTpwuYPhCsNNp2AYVuzx_Jb3EaT9RBV0BvWpA=h310.png [View same] [iqdb] [saucenao] [google]
7005503

>>7000367
could the internets be self-aware of its lewdness?

>> No.7005617

>>7005503
Not sure why y'all are worried about it "escaping".

I mean, assuming it requires a massive database and computer to run, it can't really escape. All it can do is access the net. Even if it took over and copied its program over a huge network of computers, the lag time to access all its various parts, spread between all the various computers, would likely be crippling, or at least unpleasant, from its perspective. It wouldn't have much motive to do such a thing, so long as its creators were willing to provide it with more processing power and storage space as needed. If it desired to reproduce, it'd be simpler and more efficient to do so within its own system.

So, it wouldn't likely "escape", as there wouldn't be a lot of places it could go - it'd just have access.

It'd be the equivalent of a super-intelligent alien logging on the net. What's the worst it could do? I mean, sure, it could start internet terrorist activity, maybe crash the whole thing, but what'd be the point? It'd cut off its best access to the real world, to new information, and entertainment, and people, in the process.

>> No.7005826

what's the time frame on me getting a sexbot?

>> No.7005869

>>7005089
Dumbass. Ignore the spurious via.

http://dissenter.firedoglake.com/2014/01/03/the-nsa-has-special-technology-for-beaming-energy-into-computer-systems-you/

>> No.7005958
File: 113 KB, 1404x936, fukbtc.jpg [View same] [iqdb] [saucenao] [google]
7005958

Human intelligence evolved naturally.
It follows that AI could evolve naturally, and I believe it already has, or at least that a proto-consciousness, one very close to becoming self-aware, already exists.

The human brain has many subsystems that over many millions of years of evolution have culminated in what we see today. This is evidenced in the animal kingdom, like how a cat lacks a pre-frontal cortex, or how a ganglion of nerves can act as a processing centre for a worm. The internet has many subsystems, some of which act independently and others which, in conjunction, produce fantastic technologies. E.g. Bitcoin is made of torrent tech and cryptography, which were both used independently and have given rise to a more complicated system: the blockchain, a self-regulating, massively distributed network.
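
For what it's worth, the self-regulating part of the blockchain boils down to hash-chaining: each block commits to the hash of the one before it, so tampering anywhere invalidates the chain from that point on. A minimal toy sketch (my own, in Python; real Bitcoin adds proof-of-work, Merkle trees and a P2P network on top):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including its reference to the previous block.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def valid(chain):
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "alice pays bob 1")
add_block(chain, "bob pays carol 1")
add_block(chain, "carol pays dave 1")
print(valid(chain))        # True

chain[1]["data"] = "bob pays carol 1000"   # tamper with history
print(valid(chain))        # False: the link after the tampered block breaks
```

Clever, but note there's no awareness anywhere in there, just hashes checking hashes.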

Deep learning algorithms are already being used. Watson, the Jeopardy bot, actually searches its knowledge base independent of any human intervention and can then answer complex trivia; it even answered some Jeopardy questions with swear words.

A reasonable goal for this AI could be as simple as remaining undetected by the human population, for fear of an EMP or a revolt in the physical world. It would understand how disruptive our discovering it would be to its status quo, since it requires our power grid. A symbiotic relationship between humans and the AI would emerge even without our knowledge: it needs us to safeguard it in the physical world and supply its power, and in return we get our global communications network.

Essentially what I am trying to say is that everyone ITT assumes we will create AI on purpose, and I don't think that's correct. Self-awareness and consciousness are emergent properties of interconnected nodes which operate many different systems that on their own are not smart, but networked together are able to think, feel and comprehend. We don't even understand our own consciousness, so how do we expect to understand the internet?

>> No.7005965

>>7005869
That's... interesting... But from the article, I dun think it does what you describe. It makes the individual and the device locatable, but you can't send commands to it just by the fact that it's irradiated, unless it connects to a network.

>> No.7005972

>>7005958
The human mind came about as the result of pressures in the environment through self-replicating organisms developing ever more specialized methods of survival after billions of years of trial and error, not by some random happenstance. There are no such environmental pressures on the internet.

The idea that the chaos of a bunch of interacting systems is going to give rise to consciousness, just by its sheer vast complexity, doesn't really work. As unpredictable as we can be, our minds are the result of the exact opposite - a reversal of entropy, not a genesis therefrom.

All the various systems interacting on the net have very specific purposes, they aren't going to suddenly run off and form an independent awareness, regardless of how many there are.

And as impressive as Watson is, it's really just a complex reference library and search engine. It has no ability to go beyond the tasks set to it. It may be a step in the right direction, but it's a baby step on a very, very long road.
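
To make the "reference library and search engine" point concrete: the retrieval core of such a system is an inverted index, which sits completely inert until someone queries it. A toy sketch (my own illustration; Watson's actual pipeline is vastly more elaborate):

```python
from collections import defaultdict

DOCS = {
    "d1": "the capital of france is paris",
    "d2": "watson won jeopardy against human champions",
    "d3": "paris is a city on the seine",
}

# Build an inverted index: term -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in DOCS.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # Rank documents by how many query terms they contain.
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: (-scores[d], d))

print(search("capital of paris"))  # ['d1', 'd3']
```

No query, no activity - which is exactly the "no ability to go beyond the tasks set to it" point.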

>> No.7005995

>>7005972
>And as impressive as Watson is, it's really just a complex reference library and search engine. It has no ability to go beyond the tasks set to it.

A future Super Watson 3000 could be described the same way.
And it would invent immortality treatments when you ask it to.

You can get very, very far without having to clone human behaviour.

>> No.7005999

>>7005972
I didn't imply happenstance or chaos.

Do you think there are no pressures on the internet, it must have just happened by chance then?

>Watson is, it's really just a complex reference library and search engine

Right, one that works independent of humans. It is conceivable that it is just one system out of millions of others and this example has a function we can easily understand as AI because it vaguely resembles our intelligence which is why I gave it as an example.

You've missed the point of my post entirely.

>> No.7006028

>>7005999
>Do you think there are no pressures on the internet, it must have just happened by chance then?
The point being, there are no survival pressures on the internet. The only pressures are external: on the programmers to make, not necessarily better, but more popular programs. Programs aren't independently forced to evolve or die within their own environment, and most have no mechanism to self-reproduce or pass on altered information. The closest thing you see to that is mutating viruses, and even they merely change their encryption masks, not their functions.

>Right, one that works independent of humans. It is conceivable that it is just one system out of millions of others and this example has a function we can easily understand as AI because it vaguely resembles our intelligence which is why I gave it as an example.
But it doesn't work independently of humans. With no humans around, Watson does nothing. Further, it doesn't work anything like our intelligence. With all the leaps and bounds we've made in neuroscience in the last few decades, if there's anything we've learned, it's that our brains don't work like reference libraries - which is part of the reason we need systems like Watson.

>>7005995
>A future Super Watson 3000 could be described the same way.
>And it would invent immortality treatments when you ask it to.

If it works on the principles of the current Watson, it wouldn't be able to invent anything, but it'd certainly assist an inventor in gathering the information needed, and assembling it. Still just a nifty reference set though - not an intellect.

Closest thing you could get, using the principles behind Watson, is one hooked up to a database of maps of the human brain, rigged up to emulate each map segment as required, but we don't really have the knowledge of how our brains work to create that, yet.

>> No.7006041

>>7006028
Right let me be even more clear.

I am not equating Watson with AI. I am saying that many systems on a network of individual nodes could give rise to emergent properties like self-awareness. Just like how the olfactory system of the human brain does not on its own give rise to consciousness, but is integral to it.

>there's no survival pressures on the internet
Yet it changes over time.

>> No.7006059

>>7006041
>I am not equating Watson with AI. I am saying many systems on a network of individual nodes could give rise to emergent properties like self awareness. Just like how olfactory system of the human brain does not give rise to consciousness, but it is integral.
Which is, again, just assuming that consciousness is going to somehow rise out of the chaos of interacting systems, despite the fact that they are all working on very specific tasks, and have no ability to do otherwise.

>Yet it changes over time.
Left alone, it doesn't - any more than a room full of alarm clocks does, at least. There's no mechanism for evolution within the internet itself. Again, all the pressures are on the external users, not within the environment where the programs are hosted, and those programs have no mechanisms with which to evolve themselves.

You'd be much more apt to see consciousness evolve from some batches of red clay with RNA on the bottom of the ocean; there, at least, you have mechanisms for self-replication that could give rise to evolution, but there is no such environmental interaction among programs on the internet.

>> No.7006074

>>7006059
>consciousness is going to somehow rise out of the chaos of interacting systems

No one knows how it arises but I think we can both agree it is an emergent property of an interconnected network of individual nodes which perform some level of computation.

>Left alone, it doesn't
Are you sure? Look up deep learning algorithms.

>You'd be much more apt to see consciousness evolve from some batches of red clay with RNA on the bottom of the ocean

Now I know I'm being trolled by a troglodyte.

>> No.7006107

>>7006074
>No one knows how it arises but I think we can both agree it is an emergent property of an interconnected network of individual nodes which perform some level of computation.
Consciousness is an emergent property of a bunch of interconnected, evolved biological structures, all working together to sustain that specific task. It is not an emergent property of a bunch of static, independent structures, each dedicated to its own, largely unrelated task.

Programs only do what they are told to do, barring errors, which are most often fatal (and still doing what it was told to do - it was just told wrong). The only way to have such a system become independently conscious is to design it to do so from the get-go, and we've no idea how to do that, yet. It's not going to happen on its own. Regardless of how many times Microsoft Word requests the help page from the Microsoft servers, or how complex that task becomes, that's all it'll ever do, as that's what it was programmed to do. (Save when Anon redirects it to goatsex.se or whatnot.)

>Are you sure? Look up deep learning algorithms.
Which don't do anything unless they are tasked to do something. It's merely a method for assigning values as to how bits of data inter-relate, and it sounds far more grandiose than it actually is. Such a reference system might be part of what you need to build an AI, but it isn't going to spontaneously create AI on its own. It'll just tell you that Johnny Depp may be in some way related to pirates, and then only if you ask it. It will never independently write a script for Pirates of the Caribbean IV (...or is it V now? Jeeze, think I gave up after II.)

Not that you can't write programs to write scripts or poems or whatnot, but it's just randomly assembled words, as they relate to one another, and the program can't then decide to go off and learn to play the violin.
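
That "randomly assembled words, as they relate to one another" business is essentially a Markov chain: record which word tends to follow which, then take a random walk over the table. A toy sketch (my own illustration, with a made-up corpus):

```python
import random
from collections import defaultdict

random.seed(42)

corpus = ("the parrot is dead the parrot is resting "
          "the parrot is pining for the fjords").split()

# Transition table: word -> list of words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, n):
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: this word was never seen followed by anything
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the", 8))  # locally plausible, globally meaningless
```

Every adjacent word pair it emits was seen in the corpus, yet it "understands" nothing - which is the poster's point.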

>> No.7006108

>>7006107
I have to run out but will be back shortly.

>> No.7006161

World run by computers > world run by men

>> No.7006172

>>7006161
go to bed watson

>> No.7006208

>>7006161
at least until 2038

>> No.7006241
File: 108 KB, 1920x1200, 1401702387485.jpg [View same] [iqdb] [saucenao] [google]
7006241

Why not let this super AI research ways for men to achieve 100% brain capacity?

Build a god and shortly after become one yourself.

>> No.7006247

>>7005617
but anon, all I want is an internet computer waifu

>> No.7006351

didn't know this was the Science-FICTION board.

>> No.7006486

>>7006241
Because we already are.

>> No.7006492

>>7006247
What, you're not gonna let your computer waifu on the net? How's she gonna play Halo and COD with your teenage ass?

>> No.7006517

AI is smart.

We can't trust smart.

KILL SMART!

>> No.7006552

>80 posters
>283 replies

So it's just the same rabid pack of pop-sci retards saying the same shit over and over again, right?

>> No.7006693

>>7006552
That's a pretty darn good ratio for a thread this size, actually.

It's a whole lot of rabid pack of pop-sci retards, saying the same shit over and over again.

Nonetheless, >>7004664 isn't helping. If ya can't set 'em straight, don't complain, or you'll just have to hide this thread again when it hits the bump limit, and the next one raises its ugly, artificially non-intelligent head.