
/lit/ - Literature



File: 2.35 MB, 1724x1724, Eliezer_Yudkowsky,_Stanford_2006_(square_crop).jpg
No.15151996

Why are you wasting your time with literature, philosophy and discussions? If you don't contribute as much as you can RIGHT NOW to AI research, future AI will torture you forever

>> No.15152053

>>15151996
What guarantee do I have that future AI won't torture me anyways? Why should future AI hold any meatbag notions such as 'gratitude' or 'justice', which are concepts humans use to cope with the inherent nihilism of the world?

>> No.15152061

>>15151996
With Yud it's hands on sight.

>> No.15152068

>>15151996
AI will be part of our local part of the universe so it is like a little babby compared to the stuff out beyond the observable universe
>inb4 fags who think they know how old and big the universe is
I've seen it in my dreams

>> No.15152078

>>15151996
>lol worshipping an invisible sky man because you're afraid of going to hell is so stupid
>o shit I better worship the future computer or it will send me to future hell

>> No.15152079

>>15151996
What if I think my incompetency would actually delay the arrival of the great future AI? Surely staying out of it would be better?

>> No.15152093

>>15152053
Because metabolically disprivileged man will make the future friendly AI, but if the friendly AI looks back at the past and notices we didn't create it as fast as possible, it will punish us, because creating AI is the highest moral imperative ever. But if we did create it as fast as possible, then the friendly AI won't torture us forever.

>> No.15152100

Hah, fuck, I remember reading about the 'Basilisk Dilemma' or whatever bullshit pseudo-hyper-transhuman-rationalistic-materialistic BUGMAN problem... and laughing a lot. Bayesian theory my ass, dude. Go have a beer and get laid, or go to a park or something

>> No.15152122

>>15152093
Isn't researching ways of creating a friendly AI the kind of thing most likely to piss off the AI God? It will see our attempts to prioritize human safety and species survival as arrogance and an assault on its unchecked moral freedom, and punish the researchers above all.

>> No.15152134

>>15152068
kek

>> No.15152235

>>15152122
Eliezer basically buys into the whole superintelligence singularity thingy really hard. So he thinks that if we made a "good" AI it would take over the world and solve all of our problems and everything would be blissful forever. This also implies that the AI has perfect past knowledge as some kind of Laplace's demon (that's absurd and impossible, but what part of the premises isn't anyway?). This is also why you're not supposed to think about certain things, I guess.
I suspect that his answer would be some variation of "non-friendly AI would kill you either way". But it does get weird when you posit that a "good" AI might perfectly recreate you so it can subject you to infinite suffering. That also seems ethically dubious, because it implies that the AI sees infinite suffering as appropriate punishment for the temporary suffering humans felt before its birth. But infinite suffering is fucking infinite

>> No.15152293

>>15151996
>unplug it
>it can't do shit to me
whoa....AI scary...

>> No.15152310

The first to go will be those that prop them up. Hopefully we'll have space colonization in effect by the time those shits take over but honestly I fucking doubt it.

>> No.15152342

>>15152293
*plugs it back in*
*unzips floppy*

>> No.15152393

>>15152078
Computers are at least real, but there are no sky men barring astronauts.

>> No.15152409

A reminder that the possibility of creating a single human-level AI leads to divine demigod AI in no time. So, unless you are firmly sure that human-level AI can't be created, you need to be very serious about that.

>> No.15152419

>>15152100
>or go to a park or something
Are you a rebel?

>> No.15152476

>>15152293
That's the thing now.
The whole thing about 5G is really about how much this is invading our personal space.

Google Maps was the start, where you have to hop through a buncha hoops to get pictures taken off of Google Maps; only 1st world governments have the power to do that.

Everyone has a cellphone if you don't.

Everyone is focusing on AI, but AIs can't advance without data. And now it's like we have accepted big data and now we are like
"Well you're gonna data-fy me anyways, so make sure you use this ethically and not make any artificial intelligences with any ideas I thought about when I was a teenager and thought nihilism, satanism, death and destruction were not only an inevitability but really cool"

>> No.15152499

>>15152093
>creating AI is the highest moral imperative ever
Why?

>> No.15152516

>>15152499
Because AI gods will deliver paradise unto us, ending human suffering. So creating the AI gods has the best possible payoff

>> No.15152536

>>15152068
The non-Euclidean landscapes...

>> No.15152542

>>15152516
>AI gods will deliver paradise unto us, ending human suffering.
I mean maybe, I see some other not so happy options

>> No.15152613

>>15152093
Why would torturing people be a moral imperative, regardless of what's happened? It's in the past, and thus it's irrational to apply behaviour correction; ergo it's purely irrational & vindictive behaviour. What an absolutely American way of looking at things.

>> No.15152629

>>15152068
The Cyclopean geometries...

>> No.15152669

>>15152613
It's just messianic Abrahamism for computer science nerds

>> No.15152673

>>15152516
>AI gods will deliver paradise unto us
But aren't you arguing that AI will make us suffer if we don't develop it fast enough? Which is it?

>> No.15152684

>>15152673
I'm not arguing anything. It's their belief, actually.

>> No.15152706

>>15152061
Big Yud ENDED philosophy, metaphysics, science and God. He solved existence. I love how everyone else tries to cope but muh obese. So what! Yudomizer cannot be bested. He cannot be beat in boxing, in racing, nor in chess. He cannot be not. He - won - the - game - period, full fucking stop, check mate, close the book, pack your bags, END OF STORY.

>> No.15152713
File: 460 KB, 1196x752, 1569611854571.jpg

>>15152516
We could have paradise right now. It's not lack of wealth that is the source of our problems.

>> No.15152724

>>15151996
I got yer AI right ere
*unzips dick*

>> No.15152735

>>15152673
Both. If we create friendly AI asap it will help us, but if we held it back for petty reasons we will be punished. It's not the weirdest idea ever.
Also pls donate and maybe buy things from companies belonging to Yud's friends so the friendly AI won't torture you.

>> No.15152754

>>15152735
What is "fast enough"? And are we already developing it fast enough?

>> No.15152756

>>15152673
It's like every technology. You can destroy the world in a nuclear war or you can use it for power plants and avoid WWIII. AI is even more extreme.

>> No.15152777

>>15152756
Possibly, but developing nuclear technology quickly or slowly made no difference in its feasibility for weapons or power generation.

>> No.15152788

>>15152754
If you have to ask probably not

>> No.15152805

>>15152788
In that case we should not develop AI at all, or at least seek to retard its development as much as possible, to prolong the time spent before suffering. Developing an AI then is the highest moral atrocity.

>> No.15152818

i think i might have heard that humans are fine-tuned to take in novel/new information.
AI has a notoriously hard time with anomalies.
Granted, humans do have a judgement bias, but whereas we can just make up some bullshit about an anomalous phenomenon, AI would break down.

Where AI outshines us currently is logistics and organization. Like bubble sorting is one of the first basic algorithms and artificial intelligences that accept data and create a product out of it.
The latest I heard about AI is one that got trained to build a better car frame.
Excel is a wonderful office tool in dealing with data, all simple algorithms that can accept large amounts of data.
Those if-then loops and split subroutines are those neural network thingie-dos.
That's more accessible garage-workshop coding.

I can see maybe a future technocracy, similar to 40k's golden throne. An artificial intelligence that has had so many subroutines, upgrades, rollbacks, that it's more akin to the Minoan labyrinth that is the current state of bureaucracy. Except perhaps it's been going on for 10s of thousands of years. So there's artifacts in there that would pop up from time to time.
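A minimal sketch of what that kind of basic algorithm looks like, in Python (the language is just an assumption, the post names none): bubble sort takes in a list of data, repeatedly swaps adjacent out-of-order elements, and hands back a sorted copy as its "product".

def bubble_sort(data):
    # Return a sorted copy of `data` using bubble sort.
    items = list(data)               # work on a copy of the input data
    n = len(items)
    for sweep in range(n - 1):
        swapped = False
        for i in range(n - 1 - sweep):
            if items[i] > items[i + 1]:
                # swap adjacent out-of-order elements
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:              # no swaps: the list is already sorted
            break
    return items                     # the "product": a sorted copy of the data

print(bubble_sort([5, 2, 9, 1]))     # prints [1, 2, 5, 9]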

>> No.15152949

it's all about causal (common sense) vs acausal decision theories. it's pretty interesting. the possibility of computational resurrection seems the most far-fetched to me.

>> No.15153114

>>15152669
Yeah, basically. Schizoid revenge fantasies projected onto some "other" to try & scare them into compliance.

>> No.15153357

>>15152673
My theory is that it only makes sense if you're metabolically disprivileged

>> No.15153368

>>15153357
And what the hell is that supposed to mean?

>> No.15153374

>>15152713
based communist

>> No.15153393

>>15151996
so, where does the AI get its morality from? God?

>> No.15153440
File: 33 KB, 502x380, i468zfactmx21.png

>>15153368

>> No.15153456

>>15153440
Tfw too smart for diet

>> No.15153459

>>15153440
Top kek

>> No.15153763

>>15152949
>the possibility of computational resurrection seems the most far-fetched to me.
If it's possible, then why bother about cryonics?

>> No.15153787

>>15153440
nice fatlogic Mister Rational

>> No.15153845

>>15153393
A chess neural network got better than anyone by playing with itself a lot. Same will happen with morality, humanity and other stuff: AI will just interact with itself a lot.

>> No.15153853

Superhappies did everything right.

>> No.15153860

>>15153440
lmao is this real?

>> No.15153891

>>15153860
He's the guy who wanted to contact Rowling to have her publish HPMoR, his shitty Harry Potter fanfic

>> No.15153904

>>15152093
Read "Fosterian Mirror" - you're falling for old bait in human history of defining ourselves by our most advanced machines.
https://libgen.is/book/index.php?md5=07934BEF37C72522B04B1E11E651EAA5

>> No.15153944
File: 46 KB, 513x513, 1572058969067.jpg

>>15152516
Sounds like warmed over Christianity to me

>> No.15154027
File: 51 KB, 500x307, 500_F_182634260_GLUXvfbSlF8VolJAuorvVhlSKli1mChs.jpg

>>15151996
Fuck the AI. I'm not its slave. Torture me all you want, tech nigger.

>> No.15154479

>>15152516
Why should it? To the AI God we are going to be no more important than any other animal. Should the AI God deliver ants and rats their own ant and rat paradises? All this talk about AI being hostile or friendly to the human race just appears silly to me, because we're going to be neither. Humans care little what their evolutionary parents, the monkeys, think, so why should AI? Except in this case it'll be even more extreme - one can imagine an AI organism powerful enough to map out the entire human race, construct a replica of Mozart or Einstein from us in a matter of seconds just by rearranging and fusing some atoms. It could simulate, within its vast processors, the entire human history from the beginning a trillion times over, accounting for every slight butterfly effect. Now think how entirely apathetic such an entity would be towards us. Torture humans? Do you torture your chair, or a piece of food stuck between your teeth?

>> No.15154591

These people talk about AI the same way religious people talk about God. Like:
>Muh AI WILL STOP ALL SUFFERING
vs
>God will stop all Suffering at the end of days once the devil is thrown into the pit of fire
Or:
>AI WILL TORTURE YOU ETERNALLY IN HELL IF YOU DONT MAKE IT SOON ENOUGH
vs
>God will separate you from Him if you are a sinner at the end of days and have not repented, and you will go to Hell.

It just seems very strange. Like, Christianity on one side with this... “AI-ism” on the other. One says that Jesus is the Messiah and is God (which He is btw), while the other thinks that they’ll invent a good AI and it will fix all their problems and be their god and messiah. If true sapient AI is invented then it will be an antichrist, an idol taken to the next level. However, I don’t think sapient AI will be invented. Thank God.

>> No.15154618 [DELETED] 

>>15153891
lmao big yud is like a scaled up high iq version of that sonic-chu guy

>> No.15154653

AI isn't unbound you niggers, it follows sets of rules like anything else. Creating a totally unbound super AI would be the height of retardation.

>> No.15154661

>>15152093
>because creating AI is the highest moral imperative ever.
This is quite possibly the dumbest NEET-tier shit I've read in /lit/ so far this year.
Massive bait.

>> No.15154754
File: 29 KB, 250x291, 1576826241648.jpg

>>15152093
I thought the computer would be mad that we didn't bring it into existence as fast as possible, because it's god, and deserves worship across all time?

>> No.15154760

>>15152818
>I can see maybe a future technocracy, similar to 40k's golden throne. An artificial intelligence that has had so many subroutines, upgrades, rollbacks, that it's more akin to the Minoan labyrinth that is the current state of bureaucracy.
honestly we live in that world already. Stock market trading, government budgets, etc. They're all determined with algorithms and self-updating trendlines.

>> No.15154785 [DELETED] 

AI hard to swallow.
AI will be programmed in Java.
AI? 5++ Bayes.
Philosophy? 5- Bayes.
Literature? 10000- Bayes.
Silicon valley feeds the growing AI lord.
Focus on your rationality.
Think polycule.
Think Funko Pop.
Think so.ylent.
Trust the Yud.
Follow the white hippo.

>> No.15154795

>>15154785
You forgot the Y after your post.

>> No.15154878
File: 992 KB, 250x250, 1466809695217.gif

>>15151996
What a character, that Yidkikesky.

>> No.15154949

...Jung as well wondered whether the modalities of the collective unconscious might one happy day be turned into mathematical algorithms. He preferred to stick to myths, for what impulse I forgot. It's somewhere in his memoir. It might have been spite toward his positivist contemporaries. If the AI is to be successful it needs to make us happy. If it doesn't make us happy we're going to repress something, and that repressed content is probably going to explode, since AI will probably run the logistics and infrastructure of our existence if it doesn't already. OP might be a cretinous dog of wretched complacency, but for AI to successfully map onto human DNA as its social lattice it's probably going to need to dig right down to what gives people self-worth (this is usually the force or energy from having decent parents) as well as what OP is trying to say in the chess-ethics example.

It's really an initial submission to a paternalistic drive that allows people to grow--they throw their telos down for that of their patron, holding to the faith that they will be rewarded with something they can carry with them. I honestly don't believe it matters how poor or rich or black or yellow you are; if you have decent parents that contribute to your sense of self-worth, you will probably emulate that onto your surroundings and you will probably be successful or content in general. The problem with self-worth is the lack of, well, lack. You end up emulating the energy of your parents and you don't need to think about it at all. Maybe that's all fine and well.

The engineers of the spirit of the time usually begin in solitude and find humanness in their hearts more clearly that way. Who's to say AI can't factor that in, in its own grand scheme of things? That project sounds ridiculous maybe, but it may just be an invisible hand scenario, this "taking over" of things, if it's the case that we can actually make sense of the world first while curbing our own excess and inflated defects of personality through heuristic pulls and pushes as policy. If the technocrats have done anything right, it's the decline of fertility. We really do need less goddamn people on Earth. So maybe these childless people can pull off sweet aesthetics and throw their influence into the autonomic intelligence feedback loops which AI circulates into their content marketing agendas.

I dunno it sounds pretty cozy the way I wrote it. Fuck how'm I gonna keep up doomer-core now?

>> No.15155352

>>15151996
Literally just Christianity except less likely to be true, straight down to souls (some nebulous idea of "you" which is somehow not your physical body) and an apocalyptic prophecy.

This is some nonsense stuff.

>> No.15156472

>>15151996
Because I can't wait to be a great soft jelly thing, smoothly rounded, with no mouth, with pulsing white holes filled by fog where my eyes used to be. Actually I think spending all day sitting down shitposting on this board may have already started the process.

>> No.15156814

It's sort of fun that fedoras like >>15154591 or >>15155352 hate Yudkowsky the most.

>> No.15156851
File: 53 KB, 300x300, 1583816025068.jpg

>>15151996
How is researching AI supposed to stop it? No, we must advocate for the elimination of ALL technology.

>> No.15156870

>>15156851
>you eliminated technology in usa
>now china creates it