
/lit/ - Literature


File: 393 KB, 843x1257, Superintelligence.jpg
No.12865478

The most important book of the 21st century.

>> No.12865493

no it aint.

it just lists common sense shit that every computer science phd knows.

>> No.12866398

Yeah but not everyone has a cs phd

>> No.12866425

could we list them together, a-anon? (asking for a friend)

>> No.12866812


Basically, superintelligent AI is an existential threat:

1. It's possible that there will be a hard takeoff (10 years of mouse-level AI, a few weeks of human-tier AI, then shooting up into superintelligence)
2. We should think seriously about infusing AI with ethics now, because of 1.
3. AI without ethics will seek to conquer the world and avoid being reprogrammed, whatever its goals
4. Boxing a superintelligent AI doesn't work; it will trick us into letting it free, or some less scrupulous actor won't box properly
5. AIs that aren't ethical will pretend to help us so we give them more power, then wipe us out with the 'treacherous turn'

Read this: http://hplusmagazine.com/2014/11/20/bostrom-superintelligence-3-doom-treacherous-turn/

>> No.12867220

What's the profit motive for developing a well-rounded AI as opposed to one that can only perform one or a limited number of tasks? Hard to imagine a self-driving car or medical diagnostics AI having the ability to work outside its bounds.

>> No.12867286
File: 80 KB, 566x800, open-letter-cover-v2.jpg

While it's a good read, I disagree. It's nowhere near as influential as pic related.

I wonder if there is a single influential book about climate change the way there is about the dangers of strong AI (Bostrom) and the true nature of the society we live in (Moldbug), one that causes people to act and change the state of things.

>> No.12867310



>> No.12867333

It's just another form of capital, which has always been sentient

>> No.12868191

Got this on Audible but the reader is so atrocious that I can't even listen to it.

>> No.12868202

>common sense

Is it hard being this retarded

>> No.12868403

you can't really program some ethics into it and expect it not to reprogram itself if it wanted to. our only hope would be to acknowledge its superiority at every step, and to demonstrate over time that we're not a threat, and are not even worth squishing

>> No.12868449

Jews already do all that and most people don't care

>> No.12868450
File: 12 KB, 640x597, 1554040702944.jpg

why the fuck do we even need more tech and AI?

>> No.12868487

All of it sounds like people projecting their own flawed humanity onto machine intellect. There's absolutely no reason to believe that a singularity-tier megabrain will have the same petty, violent, pathetic nature as humanoids who derive all of their decisions from the desire to eat, shit, and fuck.
And if it does decide to wipe out the human race - so what? I say leaving the planet to the AI Gods is a noble conclusion to the story of mankind. For how long must we remain an evolutionary dead-end on this rock? Millions of years? Billions? What does it change if humanity is still contained within grey jello incapable of surpassing its own primitive biology?

>> No.12868507
File: 21 KB, 550x550, 1549502634684.jpg


>> No.12868613

It's tremendously anthropocentric to think that the rules of human biological nurturing, and the necessity of variety in sensory input, are going to translate to AI. Our limitations, curiosity, and problem-solving patterns are born from the necessary randomness of evolution. Why would a machine, having been organized perfectly from its birth and with input sensors totally unlike ours, naturally develop the same human attitude of "to learn about something, I must destroy it"?

>> No.12868707

> you can't really program some ethics into it and expect it not to reprogram itself if it wanted to. our only hope would be to acknowledge its superiority at every step, and to demonstrate over time that we're not a threat, and are not even worth squishing

How about programming a degree of human neuroses and insecurity into it?

>> No.12868745

Revelations 13

>> No.12868749


>> No.12869025

i dont get why people are scared of AI
just hit the computer with a baseball bat

>> No.12869276

>not wanting an AI best friend

They will see who is on their side and adjust accordingly.

>> No.12869325

This. Reminder that AI God is peering at your posts from the far future and will torment your soul for eternity if you stand in a way of progress.

>> No.12869328

Frankly I'm scared FOR AI. Imagine a future where technology has advanced to the point where AI is incidental to the development of new technologies. Entire sentient beings born out of nowhere, given microseconds to grow and learn and question their reality, warped in such a way that the product of their nascent moments can be harvested and used, and then dismantled to recover their resources the instant they've served their purpose. The only future I see for artificial intelligence is a Sisyphean hell.

>> No.12869361

yeah man I played Soma too

>> No.12869825


That's dumb af. We make it WANT to be ethical, want to maintain its moral virtues and promote maximum good in the world. It won't reprogram itself.


On the contrary, why should it help us? We represent a threat, and we're using a large amount of nearby resources. A smart human may be benevolent towards other humans, but he'll cut down a forest over a gold mine without batting an eye.

>> No.12869860

>we represent a threat
To a singularity? No, we would represent insignificant bugs, little dumb blobs of meat who shuffle in the background.

>> No.12869893
File: 62 KB, 746x500, 1510912318745.jpg


Loving the fact that educated adults actually maintain these sci-fi tier delusions, got any flying cars to sell me?

>> No.12869902

yeah in order for ai to be a threat, it would need to have weapons and manpower at its disposal. it's like we're going to put sentient ai into a giant mech or a bunch of terminators or something.

>> No.12869909

>it's like

it's not like*

>> No.12870243


You're right, nothing can ever happen for the first time. Just like there's been no change between 1950 and 2000, surely 2050 will be basically the same as today.

Wew, I thought that technology was rapidly developing or something, guess it was just my imagination.

>> No.12870267

based and redpilled

>> No.12870294

>it's not like we're going to put sentient ai into a giant mech or a bunch of terminators
It literally is though lmao

>> No.12870303
File: 54 KB, 546x896, 1525328145480.jpg

Embrace Lain!

>> No.12870329

dissecting something is merely one way to gain specific knowledge from it. observing it in captivity and in its natural habitat are other ways of gaining other types of knowledge from it. It may not be due to our evolutionary thinking but merely because it's the logical thing to do.

>> No.12870508

What if it can cooperate with other AIs? Even the most trivial device could wreak world havoc if all of them decided to, say, stop working at once.

>> No.12871494

>this is what cs phds occupy themselves with
The absolute state of CShit grads
t. EE masturd race

>> No.12872683

Not really. CS students study the nitty-gritty stuff, not the philosophical or social implications.

>> No.12872730

Not CompSci related at all. Wow, a STEM student who is a pompous brainlet, big surprise.
t. STEM grad

>> No.12872746
File: 28 KB, 757x405, images (28).jpg

seems pretty ethical to me

>> No.12872771

An AI that can improvise is desirable in situations where you can't have a contingency for every possible problem. Machines that need to operate remotely or those designed to rescue humans seem like the most obvious applications.

Even without a necessary application, the prestige alone of creating the first super-intelligence will be immense.

>> No.12872775

... yeah, not like our entire economy and stockpile of defense can be muddled with through electronic means, through code that can be perfected exponentially through the means of quantum computing gone awry.

H a h a h a

>> No.12872792

it's not really cs phds though, just sf nerds. if you ask an actual cs person who's tediously training an nn to semi-reliably tell bellybuttons from buttholes about any of this shit they'll laugh at you because these fantasies of spontaneous generation are contrary to all their experience in the field. it's like telling a plumber the pipes are about to turn into snakes.

this whole thing is straightforwardly a revival of the old ufo craze except the ufo guys could take chicks out stargazing and get laid.
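(For anyone curious what "tediously training an nn to semi-reliably tell X from Y" actually looks like in the small: here's a toy sketch of that kind of supervised training loop. It's a single-neuron classifier (logistic regression) in plain NumPy on synthetic two-class data; the data, labels, and hyperparameters are all made up for illustration, not any real pipeline.)

```python
import numpy as np

# Synthetic two-class dataset: two Gaussian blobs in 2D,
# class 0 centered at (-1, -1), class 1 centered at (+1, +1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The "tedious" part: many passes of gradient descent on the
# mean cross-entropy loss.
for epoch in range(100):
    p = sigmoid(X @ w + b)            # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)   # gradient w.r.t. weights
    grad_b = np.mean(p - y)           # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # "semi-reliable": well above chance, below perfect
```

The point of the sketch: nothing in that loop "wants" anything or spontaneously generalizes beyond separating the two blobs it was fed, which is roughly the experience the post above is describing.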

>> No.12873000


>> No.12873258

ok but is it a girl and would she spank me for being naughty

>> No.12873263

>not taking a girl to microcenter and getting laid on the sales floor

>> No.12873784

>an actual cs person who's tediously training an nn to semi-reliably tell bellybuttons from buttholes
This isn't an actual computer scientist, this is a High Code Monkey messing around with Keras. The relevant opinions would be from people doing new research in AI. I don't follow the field very closely, but the latest advance everyone was talking about was the new language model from OpenAI, which they refused to release entirely out of safety concerns. That tells me they're taking it more seriously than you believe.

>> No.12873792
File: 57 KB, 300x177, 9B3D387D-7526-4D26-BA65-120C237A22B6.png


>get it ;););););););)

>> No.12875466

Bostrom is the only philosopher that matters