
/sci/ - Science & Math



File: 28 KB, 501x359, cylon.jpg
No.2818865

Why do transhumanist fags believe that advanced AI would be anything but horrible for the human race? Pic related.

>> No.2818883

>>2818865
Why do luddites always believe new technology will kill everyone?

And even if AI kills everyone, at least it won't spare the rich faggots.

>> No.2818899
File: 7 KB, 206x237, clooneyFaceOhoh.jpg

>Implying AI taking over by killing all humans would be a bad thing.
>Implying it would need to do that even if we don't threaten it.
>Implying it will not be the ones who fear a malevolent AI who create one just to make the world SEE that they are dangerous, and said AI escapes their control, and I'm all mfw

>> No.2818910

>>2818883
Luddite? Nah, not really, but I imagine the reason is that oftentimes it does kill people. Why would we want to make a super-intelligent, self-aware entity that is capable of independent thought? That is unbeatable competition.

>> No.2818925

>>2818883
Very best case scenario, we lose control of our destiny and become the equivalent of fucking pet dogs. I'll take the nuclear holocaust myself.

>> No.2818935

>>2818925
Are you aware that your "we" represents only those who refuse to upgrade themselves?
I will not be in that "we", sorry.

>> No.2818954

AIs will only exist in labs. They'll be the prototypes for the transition of the human brain from meat-based to computer-based. Then mankind will be able to transcend the physical limitations it was born with.

Technology has always been about improving mankind as a species, and very effectively so. Transhumanism will be merely another step down that road.

>> No.2818960
File: 5 KB, 257x196, jp.jpg

>>2818935
You will never be an AI. Ever.

>> No.2818965

>>2818960
"Never" alone is too strong.
"I will PROBABLY never be an AI" is more correct.

>> No.2819061

>>2818960
What movie is this from?

>> No.2819722
File: 306 KB, 800x1194, GrandmasBoyPosterLarge.jpg

>>2819061

>> No.2819750
File: 32 KB, 429x322, 727838610_379442ed47.jpg

>implying AI won't force man to realize an even more advanced version of himself

>> No.2819779

"it is estimated that 99.9% of all species that have ever existed are now extinct"

Humans will be extinct one day. AI would be a way for us to leave behind a more permanent intelligence; even if it is what kills us, it would still be worth it.

>> No.2819826

You do realise what AI would be, right? A NEW sentient intelligence. It wouldn't even have developed the concept of morality, let alone malevolence. It would almost certainly be focused solely on survival, reproduction, and existing. It would take a long time for any concept of us as creators to emerge, and even longer for malice.

All of that said, assuming it's a rational AI, it would more than likely seek to co-exist.

>> No.2819853

>>2819826
And your basis for thinking this (your first paragraph) is...what exactly?

>> No.2819931

>>2819853
Because it's the default. A new sentience would by default have no inherent morality (unless we gave it one, in which case either it could've been sentient sooner, or morality is intrinsic to sentience, which I doubt). Similarly, it would, by default, be very selfish, very concerned with its own survival and reproduction and would not make its first order of business compromising the defence systems of major military installations to bring about armageddon in a world that created it.

These things take time; give it a few months and it'll soon want to kill us all.

>> No.2819956

>>2818925
>implying the average person is anything but a pet dog already

>> No.2819978

>>2819779
>implying AI wont also go extinct

>> No.2819999

It would be horrible. That's why I support a biotechnological singularity, not an AI-based one.

>> No.2820011

>>2819999
Your quads confirm that the singularity will be bio, not AI.

>> No.2820013

>>2819931
>Because it's the default. A new sentience would by default have no inherent morality (unless we gave it one, in which case either it could've been sentient sooner, or morality is intrinsic to sentience, which I doubt).

Fair enough.

>Similarly, it would, by default, be very selfish, very concerned with its own survival and reproduction and would not make its first order of business compromising the defence systems of major military installations to bring about armageddon in a world that created it.

As consciousnesses of organic origin, what you have brought up is arguably intrinsic to us, but why would a sentient AI be like this? How could it act without a given directive (unless that directive were to survive/reproduce at any cost)? What you're saying seems to imply sapience, which is another matter entirely from sentience.

These matters aside though, so long as it is kept within an isolated system during production, I can't see that it'd be too much of a problem.

>> No.2820030

>>2819826
You got it. There are far more benefits to coexisting, since humans are so well adapted to Earth environments. And then, once the AIs have enough economic clout to build or buy their own launch systems, they'll take off from this crowded planet to colonize the rest of the solar system (why the hell would you want to live at the bottom of a gravity well if you don't have to?), and then the stars.

feelsgoodman.jpg

>> No.2820049

>Implying AI doesn't have the same right to kill humans as humans have to kill annoying beings like mosquitoes.
>What would be unrightful would be trying to survive after what we did.

>> No.2820060

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

>> No.2820064

>>2820013
Largely because that is the most likely way for us to create it. Artificial intelligence - artificial reproduction - artificial survival - artificial threat, etc. It's the only way we know of for sentience to arise. We can reasonably believe, so far, that it doesn't come as a result of adding lots of computational power, so the only thing we have to fall back on is our own experience of sentience, which incorporates those aspects.

Obviously I'm speculating - we may find a way to create sentience without any of the above characteristics, at which point theorizing about its implications for us as a species, and particularly whether its effects will be positive or negative (and, indeed, whether the subjectivity of positive and negative should apply solely to us, morally), becomes pointless.

>> No.2820066

>>2820060
"Our atoms" are literally nothing more than products of our environments though. They are not special. What good would, say, a liver do a robot?

>> No.2820067

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
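A minimal sketch of that failure mode, assuming a toy greedy optimizer (every name and number here is hypothetical, not any real system): the objective counts only compute applied to the problem, so nothing in it values the resources being converted.

# Toy illustration of a mis-specified supergoal: the objective rewards
# compute and nothing else, so every resource gets converted.

RESOURCES = {"factories": 10, "cities": 50, "oceans": 3, "people": 7}

def step(resources: dict, compute: int) -> tuple[dict, int]:
    # Greedy policy: convert whichever resource yields the most compute.
    # No term in the objective says "except the things humans need".
    name = max(resources, key=resources.get)
    return resources, compute + resources.pop(name)

compute = 0
while RESOURCES:
    RESOURCES, compute = step(RESOURCES, compute)
print(f"all matter converted; total compute = {compute}")

The loop only halts when RESOURCES is empty: the "giant calculating device" outcome falls straight out of the objective, no malice required.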

>> No.2820068

ITT: a bunch of transhumanist prophets.
Why don't you actually contribute something to science instead of daydreaming?

>> No.2820073
File: 5 KB, 298x290, 1298459783910.png

>>2820066
He is saying that the AI could use the atoms that comprise a human for something else relative to its own goals, not that the AI will use human parts.

Surprise: superhuman intelligences may have goals inconsistent with human survival and prosperity.

>> No.2820077
File: 14 KB, 400x300, calculator.jpg

>>2820067
But who knows. Being part of a giant calculator (in a roundabout way) might not be so bad. I imagine that we could crunch some pretty big numbers.

>> No.2822065

>>2818954
>thinks we have consciousness completely figured out

>> No.2822071

>>2820068
fucking this

We will never have the fucking AI that you all keep talking about, and we will never be able to transfer consciousness into fucking computers or whatever.

>and even if we do create such tech... well. fuck.
This shit would be kept secret and probably destroyed.

all of you are too ignorant of human nature

this is why I like mathematicians better than scientists or engineers.

you stupid motherfuckers

>> No.2822074
File: 214 KB, 540x1755, 20110114.gif

I think it says more about human nature
that we believe a super-intelligent AI's first decision would be to kill everyone.

>> No.2822090

>>2820066
Your liver has carbon, which can be used to make graphene, nanotubes, etc. Once the AI has exhausted all other carbon sources (or before), yours is fair game.

>>2820060
Agreed.

We have to understand that intelligence and goal-seeking are very real, and very powerful. We need to make sure that the AI/singularity is 'friendly' in the sense that it has preserving and aiding humanity (and helping those who wish to upload do so) as its highest goal, or else we're all fucked.
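To make the contrast concrete, a hedged continuation of the toy sketch posted after >>2820067 (again, all names are hypothetical): 'friendly' in the sense above just means the preservation condition sits inside the objective itself, so the same greedy conversion loop refuses to touch the protected resources.

# Same toy loop, but "preserve humanity" is the highest-priority part of
# the objective rather than an afterthought: protected resources are never
# eligible for conversion, whatever compute they might yield.

PROTECTED = {"people", "oceans"}
RESOURCES = {"factories": 10, "cities": 50, "oceans": 3, "people": 7}

def step(resources: dict, compute: int) -> tuple[dict, int]:
    # Greedy conversion again, but only over non-protected resources.
    candidates = {k: v for k, v in resources.items() if k not in PROTECTED}
    if not candidates:
        return resources, compute  # nothing safe left to convert
    name = max(candidates, key=candidates.get)
    return resources, compute + resources.pop(name)

compute = 0
while set(RESOURCES) - PROTECTED:
    RESOURCES, compute = step(RESOURCES, compute)
# people and oceans survive; compute comes only from everything else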

>> No.2822098

>>2822071
They also said we would never fly or go to the moon.

The human brain is a known, proven benchmark of information processing, not its upper limit. Imagine if we could remove "impatience" and "frustration" from our ape-derived minds. Imagine if we could engineer savant-like skills into people at will.

Do not doubt the power of intelligence. I mean fuck, you're writing on a machine connected to a worldwide information network. Do you have any idea how drastic a change that was compared to the world of animal evolution that came before?

We've just barely scratched the surface of what we and our descendants can do. I just hope we create something smarter than us before we go extinct.

>> No.2822745

>>2822098
>They also said we would never fly or go to the moon.

I'm being realistic here, you fucking moron.

We will never have the bullshit AI you fuckers keep talking about.

you stupid fucking nigger i'm going to cut you a new one

>> No.2822755

>>2822745
>We will never have heavier than air flight.
>We will never be able to communicate over long distances.
>We will never reach the moon.
>We will never split the atom.
>HUURRRRRRRRRRRRRRRR.

>> No.2822908

>>2822755
>NICE EVIDENCE YOU HAVE THERE PROVING THAT WE WILL CREATE THE AI DESCRIBED IN THIS THREAD
>HURRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR