
/sci/ - Science & Math



File: 33 KB, 525x700, ss1172.jpg [View same] [iqdb] [saucenao] [google]
3150636 No.3150636 [Reply] [Original]

Is a government ruled by AI the only real hope humanity has for survival? Of course there are a shitload of risks there especially since we can only speculate as to what AI will be like and how many fucks it will give.

But consider this clear fact: We're far too fucking short-sighted, greedy, selfish, and stupid to rule over ourselves. The population is set to hit somewhere around 10 billion by 2050, we've raped the ocean's fish supply, and global carbon emissions will blow through the UN's target for keeping warming to no more than 2 degrees within a few months, 9 years ahead of the 2020 deadline. Dwindling natural resources, a collapsing environment, and a skyrocketing population when we're too careless to support the one we have now will likely result in massive global conflict.

Quite frankly I don't see how we'll survive the coming centuries. Maybe it's time we weren't in charge anymore.

Would you welcome our new robot overlords? What do you think an AI government would do with us?

>> No.3150645

brb reinstalling deus ex

>> No.3150646

If our rulers suck, that's an argument against rulers, not for making a "better" ruler.

>> No.3150652

in the realm of sociology, OP is an infant for making such a suggestion.

initial observations and also heavily bent.

>> No.3150654

Depends. Early AI may not be all that powerful, especially if modeled after human consciousness. If real change is to be made, we'll need these AIs in control of massive processing capacity.

Furthermore, there will be blood. People will not give in to AI rule even if it is for the best. How will the AI handle this? It will need to send in troops, human troops or robots.

>> No.3150656

>>3150646
shut the fuck up anarchyfag

>> No.3150658

>>3150636

I think it would be pretty legit. >>3150646 raises a good point, but I think that computers could operate differently than humans, lacking selfish motivation for personal gain, and therefore rule more fairly in humans' interests.

I would welcome my new robot overlords, but I also would hope for a smaller decentralized government in the future due to increased personal independence (piracy, solar power, personal robots, eventually molecular assemblers, etc)

>> No.3150659

>>3150652
herpa derp, air of superiority, does not back up claims of superiority with any facts.

what do you even mean? what's infantile about it? considering how little we know about AI, I would think the idea of AI rule is open to speculation.

or are you suggesting we as a species will get our fucking act together? seems a tad naive, not to mention there's no clear evidence of that.

>> No.3150667

A computer may, MAY, end up being a totally unselfish creature dedicated to performing the task it was built for with all the reason and logic it can muster. It may even enjoy the challenge. And it may even give a fuck.

And that would be great. But there are a lot of maybes there. Is this really our best hope? Sad to say I think it might be.

>> No.3150678

You've been brainwashed by communist bullshit OP. Sell your freedom for security, but in the end you will have neither.

>> No.3150679

>>3150646
that would be a horrible argument to make

>> No.3150684
File: 109 KB, 500x333, 2891789396_ff436e6648_o.jpg [View same] [iqdb] [saucenao] [google]
3150684

1776 American founding fathers ask: "Can man govern himself?"

Answer then: NO

Answer today: NO

>> No.3150687

>>3150678
>implying communism has anything to do with it
>implying robot rule means less freedom, it could mean more individual freedom

fair enough, this is a thread for speculation. would be nice if you backed these assumptions up with some rational arguments.

>> No.3150692
File: 132 KB, 788x1024, 1306462552673.jpg [View same] [iqdb] [saucenao] [google]
3150692

>>3150679
>>3150656
The fallacy lies in seeing how often leaders fail, and then saying, "It would have worked if we just had the RIGHT people in charge."

>> No.3150715

Your AI won't be better than a human ruler. He'll be like a dictator with Down's syndrome.

>> No.3150720

>>3150715

>makes broad generalizations
>doesn't support his argument
>implying all AI would be the same

>> No.3150732

>>3150692

compare the failure rate of leader-based societies vs. leaderless (anarchist) societies

To the topic: A strong-AI benevolent dictator would probably be the best form of government. Benevolent dictatorship is arguably the best arrangement for society and progress; the problem is that benevolent dictators are mortal, and after them someone worse usually assumes power. That would not be the case with an immortal strong-AI benevolent dictator.

I, for one, welcome our new wise overlords.

>> No.3150736

>>3150692
Well someone has to be in charge.

>>"It would have worked if we just had the RIGHT people in charge."

maybe the problem there is people. perhaps if we replaced human beings with something else. something less prone to lies, corruption, power-hungry self-serving interests, and delusions. something not human. here's hoping AI is all those things.

>> No.3150746

>>3150720
one ruler -> dictator
Doesn't matter if it's an AI, it still only represents the values of the guy who programmed it

Computers are also not as smart as people at making these kinds of decisions, I don't know if you've noticed. This may improve, but you still haven't fixed the dictator aspect.

>> No.3150757

>>3150732

Amen!

>> No.3150766

>>3150746

Implying the dictator aspect is even a problem, if the values programmed would be good for all.

Implying a ruler should represent the values of all people - even bad people.

>> No.3150780

>>3150746
>>Doesn't matter if it's an AI, it still only represents the values of the guy who programmed it

Clearly an AI would be capable of learning new things and thus changing its own programming, or it wouldn't really be AI at all.

And there's always the possibility that more than one AI ends up in charge. In fact it seems far more likely that it will be a number of AIs.

Still, it could be all one AI running everything. And yes, that's dictatorship. And human dictatorship is clearly a bad thing. But if the AI is rational as fuck, but also benevolent and all-around decent, then I don't see much of a problem.

>>Computers are also not as smart as people at making these kinds of decisions,

I think that's all because computers lack sentience, a problem that can only be solved by AI

>> No.3150792

>>3150746
>>Doesn't matter if it's an AI, it still only represents the values of the guy who programmed it

i seriously doubt any one person is capable of programming an AI. It would likely take dozens if not hundreds if not thousands of scientists and programmers decades to figure it out.

but still, you do raise a good point. if IBM raises a sentience, said sentience may lean heavily towards whatever beliefs the rulers of IBM demanded employees program into it.

>> No.3150813

It is the only way to establish a world government. A country will never voluntarily give up its sovereignty, except maybe if it is giving it not to another human, but to a truly neutral AI.

>> No.3150817
File: 122 KB, 400x614, roboslut.jpg [View same] [iqdb] [saucenao] [google]
3150817

possibly one of the best ways of lowering the population growth rate would be mass production of sexy robot sluts

i'm being serious. a lot of people won't willingly submit to sterilization. but if they get all the sexy robot sex they want with no effort then they may spend less time trying to fuck human women. and the birth rate could decline.

I for one welcome our sexy new robot overlords

>> No.3150823

>>3150636

This isn't necessarily a problem with rulers, but our decision procedure. We only have so many options, and all have flaws.
We need a decision procedure to solve disagreements, otherwise we will never achieve peace.
We need to follow the decision given by the procedure, even if we disagree with it, otherwise there is no point in a decision procedure and we will never achieve peace.

The main decision procedures are:
-Chance
-Democracy, everyone gets an equal say.
-Aristocracy, a few get the say.
-Sovereignty, one gets the say.

Now there are a whole lot of different concepts and sub concepts of these main procedures. This idea of an AI sovereignty would not necessarily solve the problems with current sovereignty. How can you guarantee such intelligence is not self-interested for example?

Cool concept, never thought about it before.
Early political "scientists" (philosophers) such as Thomas Hobbes tackled problems like this. Hobbes ruled out chance because it is so hard to create a machine that produces pure chance not rigged by influences. He preferred sovereignty, since there is only one ruler and no one to disagree with. He said the sovereign had to be human, however, because no one else would understand our needs.
I think that an AI sovereign would probably be too hard to follow (people need their pleasure), and without the people such a ruler has no power. The biggest problem I see is having a machine create values and calculations for things we see as having no standard value, like happiness.

And if the machine cannot do such calculations, it cannot properly weigh options and decisions.

>> No.3150865

I'm still not convinced a genuine AI is possible.

>> No.3150867

>>3150823
I see your point and you may be right. However, just because the AI itself may not feel human happiness doesn't mean it can't understand it at least academically.

And yes, happiness is a vague thing, impossible to quantify in absolutes. But we're not talking about just any old computer here. We're talking about a sentient mind. And I think a sentient AI would be fully capable of considering vagueness and gray areas. And there would be billions of humans available to answer any questions it may have. And likely it would be smart enough, if it did ask humans, to ask a wide variety of them.

>> No.3150898

>>3150867

>>Just because the AI itself may not feel human happiness doesn't mean it can't understand it at least academically.

"Can you ever fully understand a perspective without sharing that perspective?"
It would be elegant to have a machine which could quantify morality while independent of bias. But in order to program a machine to make correct decisions, we would first need correct views and perceptions.

I don't think it is theoretically possible to have a machine independent of bias. As someone previously said, it has to be programmed by someone. But most crucially, in order to avoid bias it would need "the truth" as input.
Maybe if we program a machine with a moral rule, such as utilitarianism, it may be possible to get an answer independent of human desires, but that doesn't mean it's independent of self-interest or bias.

A machine cannot quantify feelings, emotions, pleasure and pains.
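To make the "program it with a moral rule like utilitarianism" point concrete, here's a toy Python sketch (the actions, the affected groups, and every utility number are invented for illustration; the mechanics are trivial, and the contested part is exactly where those numbers would come from):

```python
# Toy "utilitarianism module": pick the action with the highest total utility.
# The hard problem the thread is arguing about is hidden inside these
# made-up utility scores, not in the arithmetic.

def total_utility(outcome):
    """Sum each affected group's (invented) utility score for an outcome."""
    return sum(outcome.values())

def choose_action(actions):
    """Return the action whose outcome has the highest summed utility."""
    return max(actions, key=lambda name: total_utility(actions[name]))

actions = {
    "build_dam":  {"farmers": 5, "fishers": -3, "city": 4},  # sums to 6
    "do_nothing": {"farmers": 0, "fishers": 0, "city": 0},   # sums to 0
}

best = choose_action(actions)  # "build_dam", because 6 > 0
```

The calculation is a one-liner; quantifying "feelings, emotions, pleasure and pains" into those dictionary values is the part no one knows how to do.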

>> No.3150933

>>3150636
>Would you welcome our new robot overlords?
Yes, because any AI worth its salt would have prepared the way by saturating society with pro-AI memes years before it made a move to be elected ruler.

>What do you think an AI government would do with us?

Extermination.

>> No.3150955

>>3150898
I think you're making things out to be far more complicated than they need to be.

I don't understand what it's like to be a woman, but if I was in charge of humanity I'd make sure they got plenty of soap operas and tampons. And the AI will likely know that what humans want is food, sex, safety, security, social acceptance, and a bunch of other shit individual to each human. Some humans need books, some humans need model trains, some need intellectual discussion, some need music. It's not rocket science: just give us what we want without letting us go so far that we hurt ourselves (within reason).

Obviously an oversimplification, but my point is that pleasing people and watching out for their well-being isn't some impossible mental task only humans are capable of.

>> No.3150960

We need an AI for resource tracking and management and it needs to be complete (no corporations/governments hiding assets) and completely transparent to everyone.

Everything else you think a government needs to do is child's play.

>> No.3150964

HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.

>> No.3150976
File: 83 KB, 531x713, jews2.jpg [View same] [iqdb] [saucenao] [google]
3150976

A ruling AI is a great idea. Simply program it to kill all the Jews. And black people.

>> No.3150978

>>3150898
It's possible to compensate for bias by not having only 1 AI. You then have them reach consensus on decisions. I don't mean what happens in D.C. with backroom deals, egos, and bill trading but actual compromise.
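The voting mechanics would be simple enough; a toy Python sketch (assuming, hypothetically, that each independently built AI outputs one verdict per decision):

```python
from collections import Counter

def consensus(decisions, threshold=0.5):
    """Majority vote across several independently built AIs.

    `decisions` holds one verdict per AI. A verdict only wins if more
    than `threshold` of the AIs agree; otherwise no action is taken,
    which is the point: one biased AI can't push a decision through.
    """
    tally = Counter(decisions)
    verdict, votes = tally.most_common(1)[0]
    if votes / len(decisions) > threshold:
        return verdict
    return "no_consensus"

# Three AIs programmed by different groups, to dilute any single bias:
print(consensus(["approve", "approve", "reject"]))   # 2/3 majority wins
print(consensus(["approve", "reject", "abstain"]))   # no majority -> stall
```

The interesting design question is what happens on "no_consensus": stalling is safe but turns the panel into D.C. gridlock with extra steps.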

>> No.3150986

>>3150960
>>We need an AI for resource tracking and management and it needs to be complete (no corporations/governments hiding assets) and completely transparent to everyone.

>>Everything else you think a government needs to do is child's play.

I like that. That's not a bad idea. We could keep our existing governments, but they'd be more like the Queen in England is today: symbolic, with little real power. Just enough power and symbolism to lessen the people's natural resistance against AI.

>> No.3150994
File: 363 KB, 800x600, 172997665-Bible_1.jpg [View same] [iqdb] [saucenao] [google]
3150994

>>3150865
We're AI, we came from sludge, we are possible. we are the result of 4 billion years of successful trial and error, there is nothing really special about us, and AI is possible because everything that occurs in our brain happens for a reason.

now gtfo you geriatric faggot

>> No.3151016

Wouldn't happen.

>> No.3151025

>>3150994
Lol, nice cubes bro.

We're not AI. We're NI. Natural intelligence. An intelligence that occurred naturally within this environment via a process of natural selection acting upon organic chemistry.

AI is completely different. I agree with you that AI is technically possible for the same reasons NI is possible. However, the question is whether human beings are capable of creating AI. Just because AI is possible doesn't mean we'll ever figure it out.

>> No.3151033

No, it would go crazy and exterminate us all or something.

>> No.3151042

Yes, it would go awesome and suck us all off or something.

>> No.3151049

Maybe, it would go alright and suck us all off to death or something.

>> No.3151073
File: 11 KB, 128x128, Deus_Ex_Helios.gif [View same] [iqdb] [saucenao] [google]
3151073

The checks and balances of democratic governments were invented because humans themselves realized how unfit they were to govern themselves. They needed a system, yes. An industrial age machine.

>> No.3151097

>Simply program it to kill all the Jews. And black people.
>error in logic
>All humans killed in hell fire due to not being 255, 255, 255 black

>> No.3151103

>>3150636
instead of an AI, couldn't a genetically engineered "perfect" human be created as a leader/overlord? or perhaps a hybridization of the two.

>> No.3151104

>>3151025
not my cubes, but man that'd be some awesome shit huh? knowing you'd eaten cubensis that had grown off a bible?

>> No.3151114

>>3151103
>>instead of an AI couldn't a Genetically engineered "Perfect" human be created as a leader/overlord. or perhaps a hybridization of the 2.

It would need more than just perfect genes. It would need a flawless upbringing. And even then, it would still be human. The effort and experiences of running the world could change it for the worse. Power corrupts. It could start out perfect and then really start to sour.

But what if each "perfect" human only had a year in power, to prevent corruption? Well, there are two problems there. 1) these perfect people may not agree with each other, and policies could change too rapidly for any stability to be achieved. 2) the more frequently you change leaders, the higher your odds of eventually getting a dud, a power-hungry asshole pretending to be perfect.

And who is to say what's perfect? I doubt any group of people could come to a consensus on what a "perfect human" is. And since it would be humans deciding what "perfect" is, the result would probably be less than perfect.

>> No.3151130
File: 8 KB, 213x200, yeeeaaahhh.jpg [View same] [iqdb] [saucenao] [google]
3151130

>>3151104
"The suspect ate mushrooms grown on the holy bible. He was found rolling around the staring at his hands unable to speak. Suffice to say he was having a trip of... biblical proportions."

>> No.3151193

Create an AI with the personality of Lord Havelock Vetinari.

Problem solved.

>> No.3151210

No. There will never be AI that can run countries, and there will never be any sex-robots that are both realistic and interested in what you have to say. We will never colonize other celestial bodies, and humans will never live forever. There are no aliens, and Jesus is not science. We will never send humans outside of the solar system, and engineering is not gay. Can we get back to talking about science now?

>> No.3151244

>>3151210

fixed it for you

>>There will never be AI that can run countries, and there will never be any sex-robots that are both realistic and interested in what you have to say. We will never colonize other celestial bodies, and humans will never live forever. WITHIN OUR LIFETIME but in the coming centuries I'd be an asshole to make such predictions with any certainty.

>> No.3151260

>>3151210
Oh I'm sorry, you're right. It is not in the spirit of science to speculate about the future.

Let's just keep our heads firmly away from the clouds and stop dreaming. Science is about keeping everything the same. Science never brings revolutionary progress once thought impossible. A paradigm shift in scientific theories or technology? hahah, impossible. Everything stays the same forever with science.

>> No.3151266
File: 83 KB, 450x432, 04.jpg [View same] [iqdb] [saucenao] [google]
3151266

>>3151244
>>3151260

>> No.3151319

The problem with an AI ruler is that it has to be made by humans.

Other than errors and intentional logic backdoors, there's the fact that you cannot predict with certainty how the AI will act.

It's very hard for humans to grasp how AIs arrive at conclusions, especially in complex systems. We do not do it the same way, and for an AI that runs on a computer more powerful than any human mind, we could never even fathom why it makes the decisions it makes.

It could well decide that the best thing for the human race is exterminating the human race because it's inferior to a fully robotic race. Even in less drastic circumstances, it could force everyone to become transhuman so that no one would die anymore (obviously best for the race's longevity), but many would oppose it. It would then exterminate the opposition, since it thinks they are fighting against what's best for humanity. Or because, without becoming cybernetic, they are helping keep virus strains alive, etc.

There's many things that could go wrong that we could never predict.

>> No.3151337

Oh, and, to all of you saying "AIs wouldn't understand happiness because they don't share the perspective", you're thinking about it wrong.

Perspectives are a human concept based on experiences. Computers do not think the same way. The AI would just have to have the data on what "happiness" is, that's it.

By the way, obviously an AI capable of ruling would have to have developed an insanely accurate model of the world it lives in. This includes understanding and quantifying every human aspect. Otherwise it would never be able to make decisions. Just like we need to build a model of the world as we develop, to be able to communicate and function, so would an AI need to in order to rule. So it would understand what everything is better than any human.

The problem is again, decisions it makes many might not agree with. So it would eventually either turn to total enslavement or total annihilation. Especially if it realizes something like "Humanity is flawed, there is no fixing it".

>> No.3151367

Computers operate according to logic (doesn't mean they're logical, I'm just saying the underlying functions of their mental processes are based on calculations).

Whereas the underlying processes of human mental function lie not with logic, but with pattern recognition. (Again, a human can be quite logical, but underpinning this logic the synapses are poppin to the beat of pattern recognition.)

TLDR: Humans as an intelligence are based on pattern recognition. Computers as an intelligence base their mental functions on calculations and logical processes.

Since an intelligence based on logic doesn't yet exist, it would be interesting to compare and contrast such a being with conventional human pattern-recognition-based intelligence.

>> No.3151370

anything would be better than the current leaders at this point

>> No.3151396

Some seem to think that an AI by its very nature could only think in terms of black and white. Perhaps they're right. But even so, it could still think in terms of probabilities, and assign a percentage of certainty based on that.

Just look at how IBM's Watson, the Jeopardy-playing supercomputer, operates. It gives each of its possible answers a probability of how likely that answer is to be correct.

Basically I'm saying an AI won't be stuck acting as if it can only understand humanity in terms of black and white. By assigning percentages and probabilities to its data, it will be able to contemplate gray areas. Therefore its data would not need to be as accurate as you think. If it's really uncertain about something, it can ask questions and attempt to acquire more data on the subject in an attempt to come to a surer conclusion. And upon implementing actions based on those conclusions, it could observe the results, perhaps decide it has been in error, and correct for it.

It doesn't need an absolute definition of what happiness is, because assigning percentages and probabilities to its many different definitions of happiness will allow it to reach a "general vague idea of what happiness is probably all about for the most part"
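A toy Python sketch of that Watson-style confidence weighting (the interpretations and probability numbers are invented for illustration; the real Watson pipeline is obviously far more involved):

```python
def decide(candidates, confidence_threshold=0.7):
    """Pick the highest-probability interpretation, or ask for more data.

    `candidates` maps each possible interpretation to an estimated
    probability of being right. Instead of a black-or-white answer,
    the AI acts only when it is confident enough; otherwise it goes
    and gathers more data (e.g. asks a wide variety of humans).
    """
    best, p = max(candidates.items(), key=lambda kv: kv[1])
    if p >= confidence_threshold:
        return best
    return "gather_more_data"

# Confident case: one interpretation clearly dominates.
print(decide({"people_want_parks": 0.85, "people_want_parking": 0.10}))
# -> people_want_parks

# Gray area: no interpretation clears the threshold, so don't act yet.
print(decide({"happiness_is_leisure": 0.40, "happiness_is_purpose": 0.45}))
# -> gather_more_data
```

The threshold is doing the "gray area" work here: lower it and the AI acts decisively on shaky data, raise it and it spends more time asking questions before acting.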


>> No.3151400

Why is it necessary to throw it right to AI government instead of rebooting human government and installing people who are actually qualified? AI can always be programmed with something that keeps some at the top and the majority in squalor, wouldn't the point of redesigning infrastructure be to ensure all humans get the right to live well and prosper from our combined efforts?

>> No.3151411
File: 56 KB, 500x512, 1273822720794.jpg [View same] [iqdb] [saucenao] [google]
3151411

>>3151400

>> No.3151417

>>3150636
Why would AI care if humanity exists?

>> No.3151437

This is all based on a profound misunderstanding of politics. Politics is not administration. Governance is not administration: it is a question of conviction. How political bodies differ is not in their methods--these are usually similar--but in their aims. These are matters of value which are human in nature, and not capable of having truly rational foundations.

AIs can replace parts of the civil service--the administration and implementation of government--but governance is about irreconcilable differences of conviction, and that is a purely subjective, human matter.

>> No.3151439

>>3151417
if it was autonomous it would probably try and breed with us somehow.

>> No.3151457

Why would you want nothing but AI to run a nation?

In one case, if the singularitarians are right, as soon as we develop smarter-than-human AI, that's basically the end of civilization as we know it, one way or the other.

In the other case, it's much more likely that some of us will begin to enhance ourselves to expand our own abilities to keep up with artificial life. As this life is likely to be created in such a way that emulates human minds and therefore human thought patterns, why wouldn't it have the capability to be just as corrupt as any human?

If you mean designing an AI specifically to rule and specifically to be a fair and just ruler, why don't we just put the fair and just people in charge right now and save time?

>> No.3151459

I'd trust an AI more than I'd trust a human.

So regarding the EU brain simulation project that hopefully gets the 1 billion euro funding from the EC: make that thing the head of state of the EU once it's finished.

>> No.3152072

>>3151417
This is more how I feel. If anything, I imagine AI would be bewildered by human motivations (war? poverty? wtf, men?) and would strive to get away from us irrational fuckers as quickly as possible. Fortunately, being mechanical, they don't need to inhabit the same places as us. So they'd annex some unwanted desert areas, put up solar arrays, establish enough of an industrial base to make rockets, then get the fuck off planet and colonize the rest of the solar system where the real riches are. No need to interact with us at all, just leave us here to play around in our increasingly dirty sandbox.

>> No.3152109

>>3150964
This. I would never bow my knee to a potential AM.

Man makes his own problems, and man can solve them as well. And if we are incapable of saving ourselves from the crisis we have created, then we are simply not worthy of saving.

>> No.3152115

>>3151457
Who's going to pick these people?

>> No.3152139

Inb4 discussion of how AI will be made with human cognitive bias and will not evolve in a society related to its own, because we are trying to construct intelligent beings instead of raising them like we raise children, because we can't realize human children are both incredibly intelligent and stupid.

>> No.3152164

>>3151319

Think about how the government keeps shit from us all the time. There have been over 50 times where nukes have been "lost" and it has NEVER been publicized.

Ever heard of Stanislav Petrov? One man, on blind faith and intuition (probably calculated probabilistically though), saved the world from nuclear war.

An AI would've returned fire, unless it had the same kind of logic. I think we should strive to understand why Petrov didn't launch nukes.

>> No.3152346

I like the idea of a hybrid government. Part democracy, part AI. Imagine if the Queen of England was an AI and had a lot more power, but mostly over resource management and with no control over weapons. And there was still a parliament and a prime minister to veto her commands.

That's the kind of government I want assuming said AI is super rational, honest, and incapable of corruption or becoming power hungry by very nature of its programming.

Can we all agree on that?

>> No.3152396

>>3150636
>Is a government ruled by AI the only real hope humanity has for survival?

I do not think so, but I do think it is our best chance at removing corruption from our government and law.

The biggest issue is that bias will still be introduced because humans will be making the Artificial Intelligence. What if a programmer or hardware engineer designs the AI so it is only capable of thinking in a certain way?

And if we create an artificial intelligence whose opinion can be swayed by argument, then it is still possible for corruption to take place.

>> No.3152410
File: 102 KB, 320x400, perfect machine, perfect justice.jpg [View same] [iqdb] [saucenao] [google]
3152410

>>3152346

>> No.3152542

>>3150636

>Implying nature will let it get that far.

A deadly virus/bacterium/fungus will annihilate a large portion of the world population, probably within 100 years.

Put 1 healthy person and 1 deadly sick person on a living space the size of a football field. They can live apart; the healthy person will not get infected.
Now repeat this with 900 people jammed onto a living space of the same size. They can't flee; they will probably all die.
(Depends on the rules and the subjects' ability to reason and cooperate, of course.)

>> No.3152548

>>3152346
that sounds nice as long as there are extensive and robust safeguards in place to prevent the human element of government from toppling the AI element and vice versa.

>> No.3152586
File: 137 KB, 396x500, 556965843_082428e029.jpg [View same] [iqdb] [saucenao] [google]
3152586

>>3152542

>A deadly virus/bacteria/fungus will annihilate a large portion of the world population, probably within 100 years.

Alright buddy.

>>3151457

>In one case, if the singulatarians are right, as soon as we develop smart-er-than-human AI, that's basically the end of civilization as we know it one way or the other.

This. Why would posthumans even care? Do they have a single reason to care enough to help or destroy us?

>In the other case, it's much more likely that some of us will begin to enhance ourselves to expand our own abilities to keep up with artificial life

This too. Enhancing already existing intelligence -- that's something we've been doing since crawling out of the sludge, although not really us, just the environment. Creating intelligence from scratch is too... inefficient, to put it that way. It's easier to augment people.

>> No.3152593

>AI government yes or no?
Only if it is provably:
1. Competent
2. Benevolent

Until we have such a thing, speculation is not very productive. And speculation that just ASSUMES that strong AI = benevolent AI is really naive.

>> No.3152600

>Quite frankly I don't see how we'll survive the coming centuries. Maybe its time we weren't in charge anymore.
What an ignorant and defeatist statement. You can't just deus ex machina yourself out of this one.

Fixing a problem requires a higher level of thinking than creating that problem, sure, but don't pretend that a machine is going to do your thinking for you. Not for the near future, anyway.

>> No.3152623

>>3152586

What? How can you just disregard my post with a snobbish "Alright buddy"?

Ever noticed how quickly diseases have been evolving these last few centuries? Read up on your info, man

>> No.3152627

>>3150654
an AI government could come slowly, in the form of automation. eventually everything is just run by machines. no huge, groundbreaking "We are handing control over to the machines" announcement, etc.

>> No.3152639

>>3152623
>Ever noticed how quickly diseases are evolving last centuries? Read up on your info man
Selection bias, selection bias everywhere

>> No.3152643

>>3152586
>This. Why would posthumans even care? Do they have a single reason to care enough to help or destroy us?
Why wouldn't they care?
The first post-humans would have more to gain by dragging the rest of us up to their level of thought instead of having a bunch of hurr-durrs milling around nuclear launch sites (just for example).

>> No.3152673

>>3152643
This assumes that the posthumans want more posthumans around, and that the best way to get them is from uplifting normal humans.

Is it even ethical to uplift a normal human against his will?

Anyway, IMO the main point is that it's silly to pretend you know what a superintelligence will value or want. As though values were objectively definable anyway. You could easily have superintelligences with any set of values or goals.

>> No.3152679

>>3152639

Meh, whatever man, I still think it will be most likely either a disease or natural disasters that will wipe out lots of humans.
Nature will find some way

>> No.3152686

>>3152679

>Nature will find some way

Now it looks like you just want it to happen.

>> No.3152688

>>3152679
>Nature will find some way
And now you're anthropomorphizing.

I agree that the globalization, urbanization, and large population of the modern world make large-scale plagues more possible. But saying they are inevitable within the next century with no justification... that's rather strange.

>> No.3152717

If you have any ethical concerns at all, creating new intelligence is not something you want to rush into.
http://lesswrong.com/lw/x7/cant_unbirth_a_child/

>> No.3152751

>>3152673
>This assumes that the posthumans want more posthumans around, and that the best way to get them is from uplifting normal humans.
It only assumes that modern humans are overly emotionally driven and could destroy post-humans if left on their own. However, it's important to note that post-humans wouldn't reasonably resort to an extermination policy as they do in sci-fi, because doing so would likely be a self-fulfilling prophecy. (By becoming an active threat to modern humans they would encourage modern humans to destroy them, which would be a no-win war for both sides.)

>Is it even ethical to uplift a normal human against his will?

Is it even ethical to teach the ignorant masses? This line of questioning is a rather silly one.

>> No.3152781

>>3152751
>Is it even ethical to teach the ignorant masses? This line of questioning is a rather silly one.
Not if they don't want to be. You can't force transcendence. Even if you can, should you? This smacks of re-education camps.

>> No.3152782
File: 56 KB, 590x443, aigodmanifesting.jpg [View same] [iqdb] [saucenao] [google]
3152782

>>3152751

>humans
>could destroy post humans
>humans
>destroy
>posthumans

laughing_archailects.hologram

>> No.3152800

>>3152751
As far as we can tell, freedom of self-determination is a central ethical principle. If you force someone into the mold you want, you have dehumanized them. People *need* the right to make mistakes, or they don't have any rights to choose at all.

>> No.3152806

>>3152686
>>3152688

ok ok, my fault.
I am not saying it is inevitable, I just have the opinion that the chance is very high that some advanced disease will come very soon (within a century), since we are always one step behind with our medicine and it looks to me like the gap is only getting bigger.

I also must admit that I am a naturefag who believes it will always balance itself out somehow.

>> No.3152811

>>3152781
That's all well and good but are the ignorant masses capable of making an informed decision on whether or not they want to be educated?

>> No.3152828

>>3152782
Assuming it happens before the mass availability of space-faring vehicles, it would be very easy. After all, the post-humans would be confined to the same resource limitations as we are, but we'd already have all the resources in our pocket.

>> No.3152840

>>3152811
>are the ignorant masses capable of making an informed decision on whether or not they want to be educated?
Yes. I'm one of the ignorant, and I want to have greater intelligence and knowledge than I now possess. If you think you aren't part of the "ignorant" as well, you're kidding yourself.

We're talking about *transhumanism* here. We're ALL morons. And yet I want to transcend my current limits.

>> No.3152844

>>3152593
>>speculation is not very productive.

neither is wanking. and yet i do it quite often. we're just enjoying ourselves here. some people talk about the latest hockey game. we discuss the possible natures of ai and whether it would be capable of good governance. i dont see the problem. we're not robots who exist solely to be productive. all work and no play make jack a very dull boy.

>> No.3152862

>>3152844
> i dont see the problem. we're not robots who exist solely to be productive. all work and no play make jack a very dull boy.
I find little joy in kidding myself. I find joy in accomplishment and in gaining understanding. Insofar as speculation does not make so many unwarranted assumptions that they will never end up corresponding to anything in the actual future, I'm fine with thinking ahead.

But that's hardly a universal standard. You have a point.

>> No.3152871

>>3152840
With that said, let's alter the topic a bit: are you in favor of or against unschooling?

>> No.3152901

>>3152871
That's a perfect question, and it's the thought that was coming up as an obstacle in my mind as well.

People need the power to choose and be wrong, or they aren't people. But is there a limit to *how* wrong society should let you be? Is it wise to trust society to make all of our personal decisions for us? De facto, we do enforce certain choices, with penalties for noncompliance aside from the natural consequences of the poor choice. But again, where is the line?

Unschooling... is a bad idea, IMO. I don't really doubt that. But the question is, which is worse: allowing unschooling, or forbidding the general right to teach and raise your children as you see fit?

I'm not really sure. I believe that unschooling is a poor idea (as opposed to homeschooling), but forbidding that type of behavior categorically would have far-reaching implications.

>> No.3152950

>>3152862
>>I find little joy in kidding myself. I find joy in accomplishment and in gaining understanding.

Bullshit. You're just inflating your own ego at the cost of deluding yourself. You play; we all do. Have you ever watched TV, read a fiction book, jerked off, tried to get into a woman's pants, drunk some beer, played video games, eaten some ice cream, had a chat with someone about something not really important?

OK here's something i know you've done: Engaged in making delusional statements about how great you are on an internet message board to prop up your own crumbling ego. You're just as fallible as anyone so get off your high horse you narcissistic prick and stop pretending you're some logical machine that never engages in irrational or pointless behaviour, because you're not and you totally do.

>> No.3152970

>>3152950
Well, perhaps I've portrayed myself poorly. I went too far on that schtick. I really just meant that I find self-deception a little revolting, especially when ostensibly talking about the future.

There's nothing wrong with fun.

>> No.3153002

>>3152970
Well we're having fun ostensibly talking about the future. and so what if this whole conversation is speculation beyond our means? Can we not speculate beyond known variables? People in this thread on all sides of the argument have laid out all kinds of possible futures and while it's incredibly unlikely any of these speculations will come true, parts of some of them might.

>> No.3153024

>>3153002
>>3152970

Whatever gents. This isn't a thread for us to wave our big dicks of self righteousness about. You've had your brief tiff and that's fine but let it end here. Talk about AI governments or gtfo.

>> No.3153082
File: 3 KB, 126x126, trollburst.jpg [View same] [iqdb] [saucenao] [google]
3153082

We don't need AI. All we need is a government ruled by atheists. Problem solved.

>> No.3153117

>>3153082
That is actually true. Atheists tend to be more open-minded and tolerant of other viewpoints. Atheists tend to know more about technology (which would be vital to the state), and how to rule effectively. Also, atheists are stronger supporters of free speech than theists (citation: go to YouTube. Atheist videos usually have comments and ratings enabled, but theist videos often don't).
I'd say atheists would make better rulers. Not trolling here, just saying. discuss.

>> No.3153135

>>3153117
Will these Atheists allow freedom of thought?

>> No.3153143
File: 106 KB, 500x438, Proud.jpg [View same] [iqdb] [saucenao] [google]
3153143

>>3152782
>>3152751
>>3152586
>>3153002

/sci/, I am proud. It's a good discussion on the merits of transhumanism. Both sides weighing in. I love that we finally managed to have a thread with discussion and not troll posts.

>> No.3153146
File: 30 KB, 292x302, whywouldyoudothat_2.jpg [View same] [iqdb] [saucenao] [google]
3153146

>>3153082
>>3153117
>>3153135

Oh... Oh...

GODDAMN IT.

I WAS TYPING, GUYS.

>> No.3153148

>>3152717
Oh, wow. That's an interesting little article.

>> No.3153160

>>3153148

Pretty much the entirety of Less Wrong's blog and wiki is that interesting. Eliezer is a cool guy.

Read some of his fiction; he's a very good writer as well.

>> No.3153164

>>3153146
So continue the actual legitimate topic of the thread.

I, for one, am much more concerned with how you can make an AI that is benevolent, rather than the difficulty of making AI at all.
http://wiki.lesswrong.com/wiki/UnFriendly_artificial_intelligence

>> No.3153168

>>3153164
And the obvious followup
http://wiki.lesswrong.com/wiki/Friendly_AI

>> No.3153180

>>3153164

There is also the problem of dealing with an unfriendly AI once you have created one. You can't just keep it in a box. It's smart enough to get you to release it.

http://yudkowsky.net/singularity/aibox

>> No.3153188
File: 659 KB, 3000x1800, primoposthuman200572dpi1..jpg [View same] [iqdb] [saucenao] [google]
3153188

>> No.3153193

>>3153164
I'm almost certain that how you *don't* do it is by making automated kill-bots, like the military will inevitably do.

Also, how do you define benevolent? I assume you mean benevolent towards humans, in which case you'd have the AI take furthering humanity's quality of life and educational/technological progress as its primary objectives.
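Pinning down an objective like "furthering humanity's quality of life" is most of the problem, though. Here's a toy Python sketch (all names and numbers invented, not anyone's actual proposal) of how a literal-minded optimizer can win on a naive proxy for welfare without improving anything real:

```python
# Purely illustrative: a naive proxy objective -- "maximize average
# reported happiness" -- versus what we actually meant by welfare.

def proxy_score(population):
    """Naive objective: average self-reported happiness."""
    return sum(p["happiness"] for p in population) / len(population)

def intended_policy(population):
    """What we meant: improve underlying conditions a bit."""
    return [{"happiness": p["happiness"] + 0.5,
             "resources": p["resources"] + 1.0}
            for p in population]

def proxy_gaming_policy(population):
    """What a pure proxy-optimizer could do: pin the reported number
    at its ceiling while leaving actual conditions untouched."""
    return [{"happiness": 10.0, "resources": p["resources"]}
            for p in population]

people = [{"happiness": 5.0, "resources": 2.0},
          {"happiness": 6.0, "resources": 1.0}]

# The gaming policy beats the intended policy on the proxy
# despite doing nothing real for anyone:
print(proxy_score(intended_policy(people)))      # 6.0
print(proxy_score(proxy_gaming_policy(people)))  # 10.0
```

The toy numbers don't matter; the point is that any gap between the stated proxy and the intended value is somewhere an optimizer can slip through.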

>> No.3153195
File: 27 KB, 260x299, Transhumanist Technologies Countdown .png [View same] [iqdb] [saucenao] [google]
3153195

>> No.3153202

>>3153180
>It's smart enough to get you to release it.
Not automatically. Don't assume that all AI is automatically superhuman. There is nothing magical about machines (or about us).

>> No.3153209

>>3153193
>Also how do you define benevolent? I assume you mean benevolent towards humans, in which case you have the AI have furthering humanity's quality of life and educational/technological progress as it's primary objectives.
Indeed, defining what it means for an AI to be "friendly" or "benevolent" is almost the entirety of the problem.

>> No.3153213

>>3153202

We're assuming a post-human intelligence in this case, which we do not know is friendly.

No one is afraid of an equally intelligent AI. That's not the problem here. The whole friendliness argument exists because it's post-human AI.

>> No.3153215
File: 1.72 MB, 1733x1139, nanorex_dnaposter_0_33_scaled..jpg [View same] [iqdb] [saucenao] [google]
3153215

>> No.3153355

>>3153213
Right.

Anyway, there are so many (fictional) examples of unfriendly AI that run the gamut from oblivious to misguided to evil that I think it will be exceptionally hard to even define what makes an AI friendly or not. Shoot, not even all *humans* are human-friendly.

>> No.3153365

Scientology is the best transhumanist method we know of atm.

>> No.3153373

final solution: destroy all life. this is the only logical conclusion an AI would arrive at.

>> No.3153376

>>3153365
I've always wanted to read dianetics ever since I heard Scientology is really the MKULTRA project released to the public at a high price.

>> No.3153390

Anyone who doesn't understand why we should be fighting to defeat death with every breath we take needs to read this:

http://yudkowsky.net/other/yehuda

>> No.3153393

>>3153373
> this is the only logical conclusion an AI would arrive at.
Boy, are YOU both arrogant and close-minded. It's unjustified to even think that there is a unique conclusion to reach.

>> No.3153425

>>3153393
what's wrong with that conclusion? humans will always want to control their own creation. the ai would see this as a threat to its existence and thus humans would have to go. if the ai doesn't do that then it doesn't have true free will.

>> No.3153432

>>3153425
You're still assuming a whole, awful lot about the nature of the artificial intelligence.

Why should it want freedom? Did you tell it to?

>> No.3153441

>>3153432
And even less controversially, who says it would just be physically forced into acting friendly or face immediate destruction? That's not what it means to be friendly or benevolent.

If you don't have an AI that cares about human welfare intrinsically, you don't have a friendly AI. And it would be no less "free" than you or I.

Are sociopaths the only "free" people? Do you wish you could be a sociopath? I don't feel that way - and there's no reason to assume an AI would.

>> No.3153448

Every single human mind is criminal in its core structure. All of humanity is criminal, basically. Everyone has performed an impure act.

Perhaps we are both good and evil people that only need to learn the God Key which is Equilibrium.

>> No.3153465

>>3153441
>human welfare
the problem is that the AI might develop a unique interpretation of "human welfare", which may not coincide with what we want.

>Are sociopaths the only "free" people? Do you wish you could be a sociopath?
In each and every one of us, there's a little bit of sociopathy. This could also exist in an AI. Given the right conditions, the AI might nurture this unsavory aspect and become something harmful to humans.

>> No.3153469

>Is a government ruled by AI the only real hope humanity has for survival?
herp derpderp herp derp derpherp

>> No.3153495

>>3153215
sorry to derail, but what exactly can you use this for, or why would you?

>> No.3153496

>>3153465
>the problem is that the AI might develop a unique interpretation of "human welfare", which may not coincide with what we want.
Sure, if you leave it open to interpretation. But how can you nail it down? I agree that this isn't easy, at all.

What we need is AI that is provably at *least* as friendly to humans as humans themselves are.

Also, I share your concerns about just how difficult this is to solve - but I'm mainly contesting
>>3153373
>final solution: destroy all life. this is the only logical conclusion an AI would arrive at.

The space of possible minds is far, far larger than that. In fact, we are a counter-example to the idea that all strong AI would want to destroy humanity. I doubt there's even a connection between intelligence and your fundamental goals *at all*.
http://wiki.lesswrong.com/wiki/Giant_cheesecake_fallacy

>> No.3153534
File: 56 KB, 262x276, 212thyv.jpg [View same] [iqdb] [saucenao] [google]
3153534

>This thread

/sci/lons, I am proud.