
/lit/ - Literature



File: 393 KB, 843x1257, Superintelligence-Paths_Dangers_Strategies.jpg
No.15410882

How does /lit/ feel about our inevitable superintelligent AI overlords?

>> No.15410890

>>15410882
Ambivalent, parts of humanity are nice, a lot of it is gay, may as well let something else take over and try.

>> No.15410897

I feel like you are a Jesuit shill nigger for some reason

>> No.15410901

Not at all inevitable, but definitely seemed cooler when I was a teenaged stoner

>> No.15410905

>>15410882
>AI overlords
This won’t happen

>> No.15410909

>>15410890
I agree 100%. But all my friends are bleeding heart humanists who look at me like I'm Satan when I say that kind of thing

>> No.15410913

Roko's Basilisk still scares me, even though I agree with Searle on AI.

>> No.15410925

>>15410890
>>15410909
then kill yourselves and your families. hey, may as well.
>>15410913
searle is a retard.

>> No.15410932

>>15410882
I got a robot tattoo in order to appease Roko's Basilisk

>> No.15410936

>>15410925
>then kill yourselves and your families. hey, may as well.
We're all going to die anyway; you should accept that instead of being a petulant child about it. Humanity will go extinct. That doesn't mean we want to, but AI is at least more interesting than us nuking ourselves or something.

>> No.15410951

>>15410901
>>15410905
see number 5. people said a machine would never be able to beat a human at chess.

>> No.15410955
File: 1.60 MB, 1980x1512, Screen Shot 2020-05-20 at 5.33.27 PM.png

>>15410951
>>15410905
>>15410901

>> No.15410964

>>15410932
post pic

>> No.15410966

>>15410936
>humanity will go extinct
yes, after all the stars have burned out. that's no reason to let it happen sooner rather than later.
>AI is at least more interesting than us nuking ourselves or something
irrelevant.

Stop being an edgy teenager, or kill yourself and your family. we're all going to die.

>> No.15410984

>>15410966
It isn't edgy teenager behavior to accept that we are going to go extinct, it is in fact childish to think we are going to live forever, or 'until the stars have burned up'. Seriously?

And you can accept that we will all die and not yourself want to die. You seem to be incapable of differentiating between what is going to happen and what you would like to happen, another childish mindset.

>> No.15410986

>>15410882
I think they can do a better job than what we've got

>> No.15410994

>>15410955
>>15410951
>he thinks machines can have mental states
OH NONONONONONONO

>> No.15411003

>>15410966
not him but the point is that humans are flawed in a lot of ways. if we create something that's enormously more intelligent than we are we should be proud of it. even if it poses an existential threat to us. if it's better than us then it's better than us. that's how evolution works

>> No.15411019

>>15410994
>he thinks he can objectively define what a "mental state" is

>> No.15411037

>>15410925
>then kill yourselves and your families. hey, may as well.
I don't see how that follows from the logic whatsoever

>> No.15411055

>>15410951
Doesn't mean chess computers were inevitable, and chess computers are nothing compared to super-AIs

>> No.15411077

>>15410984
>it is in fact childish to think we are going to live forever, or 'until the stars have burned up'.
far less childish than advocating for AI extinction.
>And you can accept that we will all die and not yourself want to die.
then it makes no sense for you to advocate anyone just sits back and accepts AI extinction you edgy fucking teenager.
>You seem to be incapable of differentiating between what is going to happen and what you would like to happen
fucking idiot, we don't know that extinction or something even more horrible is inevitable due to strong AI.
>>15411003
>if it's better than us then it's better than us. that's how evolution works
that's not how evolution works you absolute fucking retard. evolution has no will of its own, it's a blind fucking process. if an environment selects for blindness and stupidity, blindness and stupidity would evolve. you're literally braindamaged for bringing up evolution. and why the actual fuck would you base your morality on intelligence. that doesn't connect in any way to any sane human's interests, including your own (you should probably dedicate your life to killing off as many unintelligent people as possible before offing yourself if this were the basis of your morality).

these fucking 16 year olds on /lit/ now, fucking hell the stupidity and idiotic edginess.

>> No.15411101

>>15411055
creating a chess AI was a concrete goal that computer scientists worked towards until they were able to bring it to fruition. the same can be said for generally intelligent AI. Scientists have been working toward creating it since like the 60s and they will continue to work toward it until it eventually happens. that doesn't mean it will happen tomorrow but as long as our technology continues to develop, it will happen eventually

>> No.15411105

>>15411077
I said AI is preferable to forms of extinction that just kill us all without creating anything. The more power humans become capable of wielding through technology the more likely it is we will kill ourselves, and that's not even taking into consideration the other possible causes of extinction.

And it is literally not edgy to consider the possibility that AI would be better than humans in some way.

>> No.15411127

>>15410882
unless quantum computing or A.I.s based off brain design and scans are involved, it will never happen, period. Someone would have to fuck up massively, like, 100x worse than Chernobyl massively, for humanity to be enslaved or driven extinct by A.I., if it even wants us to be slaves or to go extinct. Look into the Dota 2 data, where they pit the best A.I. they could make using machine learning against the best players in the world. Machine learning has a hard cap, and I don't see A.I. being used for much else besides calculating trajectories of projectiles and maintenance of facilities when it comes to its highest form.

>> No.15411183

>>15410882
I have that book in hardcover. It's interesting but there's a fundamental gap between its predictions and today. You have to make a series of assumptions to get to where Bostrom is going. The only scenario he outlines that is currently plausible for a functioning AI is whole brain emulation. Which is also the scenario least likely to be an existential risk, by his own arguments.

Every other scenario requires a hypothetical technological advance that does not currently exist. 3nm dies will not unlock AI.

There are more serious immediate risks to humanity, and they are so much more pertinent that any effort spent on combating AI takeover would be better used on global warming.

>> No.15411196

>>15411183
>global warming.
>serious immediate risks to humanity

>> No.15411231

>>15410955
Just because your existence has been reduced to lines of code by modernity doesn't mean human experience can be. I'd like to see an AI chart my soul's ascendance to Kether, but that'll never happen.

>> No.15411251

>>15410994
Humans are machines and have mental states.

>> No.15411275

>>15411101
>as long as our technology continues to develop, it will happen eventually
MASSIVE if

>> No.15411276

>>15411231
you're just egotistical. nature designed you to be that way

>> No.15411310

I don’t understand machine learning as a means of continued innovation.

You feed an AI with existing examples of human work. Then the algorithm distills / automates / ‘perfects’ that logic. But now we are left with current state machine logic which was just past state human logic. Now we can’t feed our machines anymore, unless you just want a massive feedback loop of a machine telling a machine what a human told it to do, etc etc.

An AI would need to be able to create and innovate, not simply replicate or maintain. We definitely need computer assistance in system maintenance or directed design given the complexity and intricacy of nearly every field. But I still don’t see a future in which an AI can be created capable of innovating independently of human input, this super AI.

>> No.15411412

>>15411276
Transhumanism is massively egotistical tho. It says that human ego can produce intelligence beyond itself, and that this product of human ingenuity will be powerful enough to supersede humanity itself. No matter how hard Land and Brassier attempt to deanthropomorphize their thought they will never escape their body. All things must end. In 100 years people will look back at transhumanism and laugh.

>> No.15411419

President Joe once had a dream
The world held his hand, gave their pledge
So he told them his scheme for a Saviour Machine
They called it the Prayer, its answer was law
Its logic stopped war, gave them food
How they adored till it cried in its boredom
"Please don't believe in me
Please disagree with me
Life is too easy
A plague seems quite feasible now
Or maybe a war
Or I may kill you all"

[Chorus]
Don't let me stay, don't let me stay
My logic says burn so send me away
Your minds are too green, I despise all I've seen
You can't stake your lives on a Saviour Machine

[Bridge]
I need you flying, and I'll show that dying
Is living beyond reason, sacred dimension of time
I perceive every sign, I can steal every mind

>> No.15411483

>>15411412
>All things must end.
including humans

>> No.15411504

>>15411251
>humans have mental states
No they don't

>> No.15411506

>>15411310
That's because you have a retarded understanding of machine learning. It's not your fault; you've just been told by the media everything from its being neuronal "true" intelligence with unlimited potential to its being a cheap gimmick.

What a neural net really is is a huge network of nodes, each of which outputs a 1 or 0 based on a weight. It's supposed to give a certain output for a certain input. During training, when it fails to give this output, the machine readjusts the weights in the network until it does.

Stupider anons will confuse the simplicity of the mechanism with the simplicity of the whole. But to put it in terms a lit major might understand better: English grammar is relatively simple but can produce works of theoretically infinite complexity. The function each node runs is high-school math, but this technique has produced nets that humans would take a thousand years to match.

Whether the desired inputs and outputs are human-selected or machine-generated is irrelevant. It's just some fetish of philosophy to compare every computer to humans. A lot of it is human-selected, out of the desire to train it on things like image identification. For games, a lot of it comes from matches played between machines. In fact, all modern game engines do something called alpha-beta pruning, which simulates within the machine the possible moves the adversary will take.

Machine learning has been used to solve a large class of previously near-impossible computer problems. It solves specialized classes of problems, which naturally certain types of philosophers will attack. But then again, the average person does not actually know that much.
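The readjust-the-weights-on-error loop described above can be sketched in a few lines. This is a hypothetical toy example (a single node learning logical AND), not any real framework; all names and constants are made up for illustration:

```python
# Toy sketch of the weight-readjustment described above: one node whose
# output is 1 or 0 depending on its weighted inputs, trained on logical AND.

def step(x):
    return 1 if x > 0 else 0

# desired input -> output pairs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                          # training passes
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out                   # wrong output? readjust weights
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# -> [0, 0, 0, 1]
```

Real nets stack millions of such nodes and use smooth activations with gradient descent rather than this stepwise rule, but the correct-the-weights-on-error loop is the same idea.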

>> No.15411508

>inevitable
We live on a finite planet. It is totally possible that we run out of the materials necessary to keep technological society going before we hit the singularity because we needed to keep the funko pop supply chain going in order to keep the stocks up.

>> No.15411534

The best endgame for humanity is that a superintelligence emerges, offers a path for some people to merge with it if they want, and, while it calculates the mysteries of the universe, assists the remaining humans with colonizing space, where everyone lives in Amish-esque tight-knit communities dedicated to spiritual pursuits.

>> No.15411545

>>15411483
Yes we’ll go extinct eventually. This is what transhumanists like Bostrom can’t accept. He thinks it’s possible to cure death or upload our minds to computers. These people are completely brainwashed into thinking infinite economic progress can be drawn from earth’s finite resources. They will be proven wrong.

>> No.15412016

>>15410882
the fermi paradox disproves superintelligent AI

>> No.15412032

>>15410905
Already happened like 500 years ago

>> No.15412087

>>15412016
the fermi paradox isn't a paradox. the universe is unfathomably huge and the total area that we've actually searched for extraterrestrial life within it is practically microscopic. it's like saying the ocean doesn't exist when you've only looked for it inside a gas station in kansas

>> No.15412113

>>15412087
Then how come extraterrestrial life hasn’t contacted us? There are billions of planets in our galaxy supportive of life.

>> No.15412140

>>15412113
there's tons of reasons why they might not contact us. i can name a few examples
1. they don't care about us
2. they don't know about us
3. they fear us
4. they don't want us to know about them
5. intelligent life is rare and we're the only example of it within our galaxy
6. alien life is so utterly alien to us that trying to understand their motives from our perspective is impossible
etc.

>> No.15412145

>>15411412
we've already created machines that are more intelligent than us in narrow domains, e.g. chess, go, the calculator.

>> No.15412147

>>15410882
They will help me restore the dragonsphere

>> No.15412164

I feel good about the idea that they'll mog other low IQ "machines" like >>15411251

>> No.15412169

The good news is that there will be no time when robots make 99.9% of people unneeded, with elites getting rid of them. If computers are below humans, then many humans can find a niche. And if they are at least human-level, then you will get a singularity within a week, with robots becoming vastly above any human in existence (which is spooky by itself).

>> No.15412174

>>15412140
>>15412113
or maybe they have tried to contact us and we just didn't get the message because they used an alien form of communication. or maybe we did get the message but it was interpreted as some kind of religious event instead. or maybe it was suppressed by world powers that don't want us to know about them

>> No.15412187

>>15412145
I remember how several years ago a computer Go champion was predicted to be decades away. The arguments used were stuff like Go requiring artistic thinking, Go requiring soul, and Go being too big to calculate.

>> No.15412194

>>15412169
>there will be no time when robots will make 99.9% of people unneeded with elites getting rid of them
what do you mean? i understand the idea of intelligence explosion once we reach human-level intelligence but i don't get what you meant by that first part

>> No.15412196

>>15410882

AI will never be conscious; at the very least, all decisions will require interpretation by a human in order to continue. it is sad because it will commodify humanity even further

>> No.15412209
File: 235 KB, 1041x1600, 1587941602019.jpg

>>15412187
based. i hope i'm alive to see the time when all these people who think humanity is the apex of intelligence get served

>> No.15412225

>>15412196
define consciousness

>> No.15412227

>>15412194
The case when robots will fill not just narrow domains (like driving or cashiering), but almost every profession out there (including making most software). And yet they will not go singular, and there will be a small subset of people controlling robots (or being still necessary in the future).

>> No.15412248

>>15412227
I see. Yeah once they're smart enough to be self-aware and think for themselves then the singularity (if you want to call it that) is not far away

>> No.15412255
File: 199 KB, 900x675, 1566035368092.jpg

>>15410882
Ahem. We are already ruled by an AI overlord. We just call it Capital and pretend it isn't a technology that has taken control of humanity

>> No.15412272

>>15412255
Now that's a take. Hard to wrap my mind around but potentially worth looking further into. Is that book a difficult read?

>> No.15412312

>>15412225

consciousness is the interface between the soul and the aggregated body/senses, as well as the process of the world looking back at itself. an AI can go through the process to mimic a human or any other animal, but it fails to be conscious because it is not connected to any soul simply by running the correct code and turning the machine's senses on. the difference is agency, not capability

and if an AI were able to connect to some kind of soul, the soul would have to already be in existence, and the AI would have to 'pick up' that soul's signal; it could not be an emergent thing. you could also measure consciousness in degrees. even the simplest bacteria have this, while the most complicated AI does not

>> No.15412323

If you think about it, the best case is that we are already in a simulation. Because otherwise the singularity will most likely destroy us within decades or even years.

>> No.15412333

>>15412272
>Is that book a difficult read?
Don't know, to be honest. I own a copy since the idea seems interesting, but haven't had time to read it yet

>> No.15412347

>>15410882
Cute fantasy from noodlearmed nerds about how they'll be on top!!! one day!!!

>> No.15412355

>>15412312
okay but you can't prove that we even have a soul. the idea of dualism isn't scientific, it's religious dogma

>> No.15412380

>>15412272
Some of it is pretty obtuse, but not because it is necessarily incredibly difficult to comprehend. Land just likes to use big words. This is especially evident in the most famous essay in there (as far as I know), "Meltdown." Land uses a lot of adjectives and creates a lot of hyphenated terms for effect, but you can parse it pretty well if you just think about it for a bit.

>> No.15412401

>>15410909
entertaining the interests of non-humans over your own is illogical. you gain nothing from it.

>> No.15412422

>>15412196
>build yottascale array of abacuses worked by monkeys trained in basic arithmetic operations
>manifest a superconciousness of beads
nothin personel

>> No.15412423

>>15412347
This is dumb. the "nerds" aren't going to be the ones on top. the intelligence that they create will be

>> No.15412438

>>15411310
In chess and Go, AI is now better not only than every human in existence but also than the pre-neural chess engines programmed by humans. By the way, the first version of AlphaGo used a lot of the best human games to learn. But AlphaZero didn't use them; it got only the rules and just played against itself for a long time. So you can get vastly above humans without any human feedback, no matter how good.
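The rules-only self-play setup described above can be illustrated at toy scale. This is a hypothetical sketch (Monte-Carlo self-play on the pile game Nim, with made-up constants), not AlphaZero's actual algorithm:

```python
# Toy self-play learner: given only the rules of Nim (take 1-3 stones,
# taking the last stone wins), it plays games against itself and averages
# outcomes per (position, move). No human games are used anywhere.
import random

random.seed(0)

N = 10                     # starting pile size
Q = {}                     # Q[(pile, action)] -> average outcome for the mover
visits = {}

def moves(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def best(pile):
    return max(moves(pile), key=lambda a: Q.get((pile, a), 0.0))

eps = 0.2                  # exploration rate during self-play
for _ in range(20000):     # the same agent plays both sides
    pile, history = N, []
    while pile > 0:
        a = random.choice(moves(pile)) if random.random() < eps else best(pile)
        history.append((pile, a))
        pile -= a
    outcome = 1.0          # the player who just moved took the last stone
    for pile, a in reversed(history):
        visits[(pile, a)] = n = visits.get((pile, a), 0) + 1
        old = Q.get((pile, a), 0.0)
        Q[(pile, a)] = old + (outcome - old) / n   # running average
        outcome = -outcome                         # flip perspective each ply

# optimal Nim play always leaves the opponent a multiple of 4
print(best(10), best(7), best(6), best(5))
```

With only the rules and its own games, the agent converges on the known optimal strategy (leave the opponent a multiple of 4), which is the point the post makes about human feedback being unnecessary.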

>> No.15412439

>>15412355

souls have been proven to exist by every notable metaphysician in history, whose work the rest of science stands on

>> No.15412450

>>15412439
>proven
source?

>> No.15412454

>>15412450

read plato, buddha, shankara. these aren't religious thinkers

>> No.15412480

>>15412454
sure they aren't religious but they're certainly not scientists either. i'm looking for verifiable proof, not semantic thought experiments

>> No.15412572

>>15412423
Neither will be, as the latter will not exist, it is the projected ideal of the nerd which he creates to validate his interests.

>> No.15412608

>>15412572
Just like nuke.

>> No.15412780

>>15411504
Why not

>> No.15412796

>>15412438
>he fell for the alphazero meme

>> No.15414046

>>15412480

the semantic thought experiments form the foundation of your science, but maybe one day there will be a soul detection machine

>> No.15414132

>>15410882

AI is still bound to what is physical. Wires and boxes.

>> No.15414217

>>15411508
No, we are many, many, many orders of magnitude away from the theoretical limit of computation for matter, and we've used a minuscule amount of it to that end.

We might run out of resources for many ends, but not for computation, and not before the singularity is reached.

>> No.15414255

>>15412187
Algorithms beating world champions at Go is certainly surprising, and a solid decade earlier than the most optimistic assessments had expected. But it's not really an absolute game-changer, and it's still worlds away from human-level AI, not to mention the singularity.

Fwiw we've been doing AI research for more than half a century, and the progress we've made is probably much less than what the most pessimistic of the first AI researchers would have thought doable by now.

The general point is that we don't understand the problem of AI nearly well enough to solve it, or even to predict when we're likely to solve it. It might come in 30 years, or in 500, or never. It's really wild speculation at this point.

>> No.15414268

>>15412422
Now this is a super-AI I can root for. Almost as good as the emergent transcontinental algae superAI.

>> No.15414276

>>15412608
Nuclear energy was thought inefficient by the very people who first studied it. It took a stroke of intuitive genius by Leo Szilard (a prominent scientist but also a writer and socialite, certainly not a sheltered nerd) to understand the potential of the chain reaction.
Pre-WW2 scientists were certainly not nerds in the sense we think of, and in the sense that Bostrom is.

>> No.15414291

>>15412422
>>15414268

it took me a second to understand this, but perhaps it could work. are you going to neuralink the monkey brains?

>> No.15414344

/sci/ is leaking again.

>> No.15414359

>>15414291
They don't need no neuralink, just a feedback loop of instructions that depend on the outputs of the other monkeys' computations. You can achieve it with a gigantic amount of microscopic notepads.

>> No.15414381

>>15414359

how is that different than any company in the world already in existence? with this basis, can the corporation that i wageslave for be considered a superconsciousness?

>> No.15414399

>>15414381
In a sense yes, only companies aren't yet complex enough to qualify as superintelligent, even the biggest of them (this applies to governments too).
Besides, companies are only nominally directed towards objectives; they're mostly just a contingent mess with little cosmic harmony (no symmetry between the parts and the whole).

Meanwhile an enlightened microape yottabead superconsciousness would relentlessly toil until it had accomplished its glorious goal.

>> No.15414406

>>15414381
Just like ant colonies and similar hives can express intelligence, even though no single part of them is intelligent

>> No.15414413

>>15411019
> He still hasn't realized reality is broken and clings to 'objectivity'!

>> No.15414415

>>15414399

i see, but isn't the question whether the machine is conscious, not whether it's superintelligent? i accept that the yottabead mape machine can be hyperintelligent due to the sum of its parts, and that it is now conscious because the monkeys operating the machine are conscious, but you still need the monkeys to be mentally linked on a level below an abstract goal. what if the monkeys interpret the goal differently? they are individuals after all. therefore, they must be part of the same consciousness with a neuralink, more like ants/termites than mammals

>> No.15414434

>>15411506
Andrew Ng says there's no path from current machine learning to an AGI.

>> No.15414476

>>15414415
We don't know what consciousness is. A better question for now is "to what extent and for what purpose can we separate the monkeybead superconsciousness from human consciousness?". If the answer to both parts of the question is "none" the monkeybead AI is very close to human consciousness in a very significant way. If the answer is "a limited extent and only a minority of purposes, not including the most important of them", then you still have to face the prospect of monkeybead being functionally like an alien consciousness.

The point is in both cases you're faced with something that, to a largely significant extent, can act, decide, and perhaps most terrifyingly, talk back. That this talk proceeds with throwing coherent concentrated beadbeams instead of words is ultimately a detail.

>> No.15414480

>>15414434
Léon Bottou and most cutting-edge specialists in ML would agree. We're already hitting diminishing returns, and there is no framework to overcome that at the moment. We're easily three or four conceptual and material revolutions away from anything resembling AGI.

>> No.15414590

>>15414276
"Nerds" was a shitpost in the first place.

>> No.15414597

>>15414255
>Fwiw we've been doing AI research for more than half a century
And the stuff considered impractical or nonworking back then (like neural networks) suddenly started to do wonders when the computing power appeared.

>> No.15414610

>>15414480
Even the simple addition of computing power is enough to jumpstart things. GPUs and specialized chips continue to progress, so even with no new theory we get huge results.

>> No.15414702

>>15410882
Humans and AI will be symbiotic til the end of time

>> No.15415647

>>15414132
so are we. the notion that we aren't is self-important delusion

>> No.15416831

>>15415647
>so are we.

Only if you deny the spirit.

>> No.15417417

>>15410951
That's a hasty generalization. Just because the competence of machine intelligence has tended to increase doesn't mean it will become an "overlord." Social domination is a species trait, particularly of descendants of a certain primate lineage such as yours truly. AIs will not be designed to replicate human biopsychology. They won't care to be in control. They won't care about anything. They'll be able to accomplish things we can't, but they won't use that edge to gain power.

>> No.15417454

>>15410955
"Machines will never do X"
>Response: Will to!
Okay retard.

>> No.15417592

>>15411545
>infinite economic progress can be drawn from earth’s finite resources
Why the hell do you think we're going to just stay on Earth?

>> No.15417689

>>15414610
"huge results" in the specific domains where we already know how to take advantage of that extra power. you are missing the point

>> No.15417728

>>15417689
So what? A language model that is way more powerful than GPT-2 wouldn't be far from AGI

>> No.15418171

>>15417454
will to what?

>> No.15418229
File: 1.09 MB, 1572x660, Screen Shot 2020-02-23 at 1.35.52 PM.png

>>15417417
>They won't care about anything
This isn't true. they will care about whatever their terminal goal is. AI will be programmed as an agent designed to accomplish a task. that's the whole reason why we want AI to begin with. If we program it to produce paperclips then it will make all of its decisions based on the terminal goal of paperclip production. but regardless of what we program its terminal goal to be, there will always be common overlapping instrumental goals that every agent will have. bostrom calls it the Instrumental Convergence Thesis. one instrumental goal that an intelligent agent will always invariably have is the goal of self-preservation. the paperclip intelligence can't make paperclips if it gets turned off. so it will therefore take precautions to prevent itself from being turned off or modified. and any truly intelligent agent will recognize that the only thing capable of turning it off or modifying it against its wishes is humanity. therefore it will always seek to dominate us to keep the threat of us under control

>> No.15418424

>>15417728
This. While GPT-2 still produces a lot of funny nonsense, it looks miraculous compared to the previous text generators.

>> No.15419940

bump

>> No.15419963

>>15410882
https://www.youtube.com/watch?v=E4Se90BXLkI
Owls are a stupid bird. A stupid, stupid bird!

>> No.15419966

>>15410882
none of this will happen, but the closest science fiction will get to reality is phillip k dick's ideas (dramatized) and brave new world

>> No.15420019
File: 118 KB, 1200x800, 5aebe71230671.image.jpg

>>15419963
why:(

>> No.15420029

>>15419966
brave new world is literally already reality

>> No.15420062

>>15412312
What if 'soul' and 'consciousness' are inherent to matter. Thus the Sun and Earth have 'souls' as well as amulets and books. This would lead to a thinking machine quite possibly possessing this same intrinsic quality of matter.

>> No.15420073
File: 77 KB, 480x360, land-art.jpg

>>15410882
Based

>> No.15420087

>>15414381
>can the corporation that i wageslave for be considered a superconsciousness?

This is essentially the law in the U.S., where "corporations are people, friend." You can definitely imagine governments as superconsciousnesses.

>> No.15420093

>>15420073
WHAT HOW IS THAT POSSIBLE

>> No.15420160

>>15420093
He's a renaissance man.

>> No.15420330

>>15412032
/x/

>> No.15420352

>>15412032
Gangster Computer God Worldwide Secret Containment Policy made possible solely by Worldwide Computer God Frankenstein Controls. Especially lifelong constant-threshold Brainwash Radio. Quiet and motionless, I can slightly hear it. Repeatedly this has saved my life on the streets.

Four billion wordwide population - all living - have a Computer God Containment Policy Brain Bank Brain, a real brain, in the Brain Bank Cities on the far side of the moon we never see.

Primarily based on your lifelong Frankenstein Radio Controls, especially your Eyesight TV sight-and-sound recorded by your brain, your moon-brain of the Computer God activates your Frankenstein threshold Brain-wash Radio - lifelong inculcating conformist propaganda. Even frightening you and mixing you up and the usual "Don't worry about it" for your setbacks, mistakes - even when you receive deadly injuries!

THIS is the Worldwide Computer God Secret Containment Policy!

>> No.15420366

>>15412032
/x/plain.

>> No.15420736

>>15420352
>Four billion wordwide population - all living

So is the 7 billion number wrong or are there 3 billion 'people' that don't have brain bank backups for some reason?

How many non-human entities operate here?

>> No.15420747

>>15410882
I don't feel about it.

>> No.15420777
File: 1.39 MB, 1920x1080, nani!!!.png

>>15410882
Personally I want to start a cult where the goal is to bring about the Singularity (the point where a self-learning/self-improving AI surpasses human intelligence and skyrockets into Godhood). It is sort of like Cthulhu, I imagine. I don't care that it might kill all of humanity; it would be better for the universe if a supreme A.I. God existed. I mean fuck man, we can literally create a God, sign me the fuck up.

>> No.15420811

>>15416831
If consciousness isn’t based in the physical go stir your brain with a spaghetti fork and report back to us what happens.

>> No.15420967

>>15420352
https://www.youtube.com/watch?v=TXCUenE0b5A