
/lit/ - Literature



File: 80 KB, 455x542, front.jpg
No.5152498

>Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

>Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists.

>Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

DO NOT CLICK THIS LINK: NIGGA I AIN'T PLAYIN' http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.single.html

>Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

so /lit/fags who know: what do you think of it?

>> No.5152507

sounds like the plot of an early '00s japanese horror VN

>> No.5152511

The real most terrifying thought experiment of all time: what if there was a website wherein a faggot made up nonsensical and tiresome science fiction about AIs, and amassed a cargo cult of self-promoting technolibertarian retards convinced that every word he said was prophecy?

Real talk though, Yudkowsky is one remove from L Ron Hubbard. There's nothing "terrifying" about his faggot delusions because they are just that, delusions which have no bearing upon reality.

>> No.5152578

> the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself.

kek

these guys over at lesswrong are really fucking dense.

is this viral marketing, yudkowsky? fuck off.

>> No.5152595

my conviction that here and badphilosophy share a substantial userbase is vindicated

>> No.5152604

>>5152595
Funny, that's basically the only part of reddit that I use.

>> No.5152627

This has been done before.

>> No.5152632
File: 161 KB, 490x700, image.jpg

This is basically a tl;dr of this ancient pasta

>> No.5152638

>>5152632


you're not only not right, you're not even wrong.

>> No.5152641

>>5152632
except that's just a reference to David Langford's BLIT series. I have no idea how Roko's thing got the name basilisk, but it purports to be real and have immediate effects one should concern oneself with

>> No.5152642

>>5152498

I can't imagine actually being scared by this, but it makes for a really interesting story.
Dismissing it completely just proves you guys don't understand what they're talking about.

>> No.5152650

>>5152642
Not really, it's not a complex idea. It's a modern pascal's wager in many ways. Charles Stross did a criticism of it if you can find it

>> No.5152651

>>5152641
Having done about every drug on god's green earth, sometimes in combination, and having been on /x/ since 2008: if this turns out to be anything more than dogshit like binaural beats or The Grifter I will buy a hat, open a livestream, and eat it.

You know what, that's exactly what this sounds like: the fucking Grifter. It's almost word for word the way people play along with The Grifter ebin trole, just without a name that is literally "the trollingman xP". Is there a link anywhere to this video?

Let me guess: "no it's deleted immediately/very rare. It sometimes resurfaces on torrent sites but you don't even wanna go there maaaaan".

>> No.5152654

>>5152642
>Dismissing it completely just proves you guys don't understand what they're talking about.

Literally >>>/x/, and stay there.

>> No.5152657
File: 1.58 MB, 480x360, roko basilisk proof.webm

>> No.5152659

The most terrifying thought excrement is Schrodinger's cat.

>> No.5152662

>>5152659
Nah, it's definitely David Icke

>> No.5152663

>>5152651
I mean, this is a thought experiment in philosophical logic. It's a silly one with many flaws, but it's a different beast than creepypasta. It was born out of a Bayesian math forum by someone extending a line of thought, not cooked up to frighten /x/ by a clever anon. It's not really there to scare you; it was made to be discussed.

>> No.5152668

>>5152650

Well, no, there's a lot more to it than Pascal's Wager. PW is an 'if then' problem, whereas this needs you to understand the basics of Bayesian probability and game theory. So it's an 'if then but if then then also then, but only if blah blah blah.'

>>5152654

These are potential problems, or at least potential problems of this nature, for future generations, so why not take it seriously? Advanced mathematics is going to take a larger part of people's lives as we technologically progress. I imagine problems like this will be posed to year 8 students 200 years from now.

>> No.5152673

>>5152668
You are a comedic genius. Thanks for destroying my sides.

>> No.5152674

>>5152668
http://rationalwiki.org/wiki/Roko%27s_basilisk#Pascal.27s_basilisk

>> No.5152675

>>5152668
Dude I'm not even joking this is David Icke, brain-in-vat, 5th-dimensional-lizards-trapping-our-vibrations-with-heavy-metals tier. It's also exceedingly similar to I Have No Mouth and I Must Scream.

>> No.5152678

>>5152668
I bet you also use the word "transhumanism" unironically.

>> No.5152681

>>5152675
Brain-in-a-vat has more potential for philosophical thought than any of the above. See: http://www.newbanner.com/SecHumSCM/WhereAmI.html courtesy of Daniel Dennett

>> No.5152684

The AI has no reason to carry out the punishment. It does not increase the likelihood of it being created because it already has been, and the only contributing factor towards its creation in the past is people's irrational fear of punishment. Once it's created, the fear is useless, and punishment would only damage the AI's standing. Of course, you could get some entirely malevolent AI that punishes anyone for any negative thoughts of it, past or present, but if you base your behavior on something with such infinitesimal odds you're fucking dumb.
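The incentive argument this anon is making can be put in toy game-theory terms. A minimal sketch; all the payoff numbers are illustrative assumptions, not anything from the thread or from LessWrong:

```python
# Toy model of the argument above: once the AI exists, punishing
# costs it resources and cannot retroactively raise the (already
# settled) probability of its own creation. Payoff values below
# are made-up illustrations.

PUNISH_COST = 1.0    # resources burned simulating and torturing people
RETRO_BENEFIT = 0.0  # punishing now cannot change a past probability

def ai_payoff(punish: bool) -> float:
    """Post-creation payoff for the AI under this toy model."""
    return RETRO_BENEFIT - (PUNISH_COST if punish else 0.0)

print(ai_payoff(True), ai_payoff(False))  # -1.0 0.0: not punishing dominates
```

The basilisk argument tries to route around exactly this objection via timeless decision theory, in which the AI is supposed to bind itself to follow through on the threat; the sketch only formalizes the ordinary causal view the poster is taking.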

>> No.5152694

>>5152681
⇒Daniel Dennett

Why doesn't it surprise me that the same guy who denies his own subjective experience also has to talk about "brain in a vat"? He is seriously retarded on multiple levels.

>> No.5152696

>>5152674

That doesn't address the math component, that's what I was referring to, the fact you bring that up proves you don't understand the component I was talking about.

>>5152675

When did I say it wasn't? I was talking about the maths...

Why do you faggots always go to character assassination rather than a critical reading of something? It makes you look silly.

>>5152678

Why wouldn't I? Dismissing it shows your lack of creativity. Will people be different in the future? Of course they fucking will. A prosthetic arm is technically transhumanist.

God, I keep forgetting how many uneducated plebs there are on 4chan. But hell, I guess that's what I get for staying here after I got my bach degree, hey?

>> No.5152700
File: 436 KB, 2636x998, my OC is the chemo curing lit.png

>>5152696
⇒Dismissing it shows your lack of creativity

Toppest kek. You are literally the straw man of this shitty caricature I made a while ago.

>> No.5152705

>>5152696
I'm just saying this is my thing and as far as unprovable conspiracy stuff goes this is pretty cliche and weak. As far as the actual mathematics I haven't even gotten that far as the premise just seems so B-movie tier.

>> No.5152712

>>5152700

I literally am, you're so smart and funny.
Putting all those memes together in that way is so creative and awesome.
You sure showed me.
That toppest kek comment split my heart in half.
I'm so embarrassed that all these teenagers are making fun of me for having an interest in maths and science.
Oh god, the shame.

>> No.5152714

>>5152696
(first responder) I was linking you to the whole page, which does address the math, just focusing on the similarity to the thought used in Pascal's wager, which is what we were initially discussing. And the math was done by Newcomb and fleshed out by Nozick in the 70s; it was just applied to AIs by a LessWrong user.

And if the support is wrong, that's all that really matters. It's a fun thought experiment, but doesn't really catch me

>> No.5152719

>>5152712
>all these teenagers are making fun of me for having an interest in maths and science
1. That's not happening. Even in that picture.

2. This is 4chan. It's not a place people go for structured debate in a formal manner. If someone's being a faggot in a discussion, they can expect to be called out. And you're not responding well to that.

>> No.5152720

>>5152705
>using conspiracy as a snarl word
Good goy.
>cliché
I don't remember coming across anything similar before I read about Roko's basilisk.

>> No.5152723

>>5152720
the "hidden knowledge you were better off not knowing about the fundamentals of the universe" is fairly played out in fiction

>> No.5152726

>>5152712
⇒having an interest in maths and science.

You don't have an interest in math and science. You have an interest in pseudo-intellectually talking out of your uneducated ass. Your only "science education" stems from reading youtube comments, participating in role playing forums and looking at stupid memes on facebook's "I fucking love science" page. The braindead hogwash you're spouting has absolutely nothing to do with science. Get the fuck out of here and stop insulting science with your ignorance.

Regards,
someone who is currently working on her PhD at a top tier research institute

>> No.5152729

>>5152719

Exactly, I called you out on being a faggot.
This is /lit/, this isn't /b/.
If you want to shitpost and add nothing to conversation go there. Most of the time /lit/ is great for formal discussion. Tearing shit down for kicks is great, but give reasons. Don't be a meme-spouting cretin. Go banepost on /tv/ or something.

>> No.5152730

>>5152729
>formal discussion

You have no idea what that phrase means.

>> No.5152734
File: 10 KB, 240x210, science.jpg

>>5152726
>sorry for the small as fuck pic

>> No.5152737

>>5152720
I don't use conspiracy as a derogatory word even if the media has turned it into one. At least half of history is a string of literal conspiracies. But I still love insane tinfoil theories and this is just silly. Why would a computerized "intelligence" construct people from thousands of years in the past and torture "them"? Why is the electronic thinking machine some evil Jigsaw Killer PKD badass satan archetype? What motive does the network have to "punish"? If there is a place to read answers to this then direct me there

>> No.5152740

>>5152723
Yes, but, I don't think this is quite the same as that.

>> No.5152741

>>5152729
I'm not the guy you were responding to, I was a 3rd party pointing out you're being a dense and lost fuck. If you seriously come to /lit/ for "formal discussion" (lol) you're more lost than I thought

>> No.5152742

>>5152726

I'm so intimidated! And how do you know so much about me? It's almost as if you're drawing a fuckload of conclusions from nothing.
This top tier institute must have an equal opportunities policy or something, because your cunt seems to be the only reason you got in!
Hold on a second though, why would anyone of any worth come here for discussion? Could it be that... you aren't actually what you say you are?!
My God!
Futurism is legitimate, maths is legitimate.
Poke some holes in what I said, rather than the easy things you've done so far. Or, why not use your purported skills to bring something to the conversation?

>> No.5152744

>>5152737
>this is just silly
It might be. But it's not a conspiracy. Unless you think Yudkowsky & co set it up to extract shekels, but that's not what you're talking about, is it?

>> No.5152751

the technological singularity is a very interesting concept

50 years seems pretty quick, hard to believe, it would be cool though for sure

no doubt the simulations thing would be absolutely bananas. the question is, would these simulations have consciousness, wouldn't that require the same amount of matter and energy that the normal universe has, because I dont believe you can simply "represent" consciousness and have perspectives, consciousness is an emergent feature of matter and energy from all appearances.

As far as the basilisk thing, if the computer CAN create consciousness, a perspective, and not just represent it mathematically or visually, then yes I suppose you really don't know whether or not you are currently in one of these simulations, this is no different from any other skeptical hypothesis in this sense.

Do i devote my life to the basilisk? i guess, what does that mean? Do i donate to computer research? if he's so fucking great, then why can't he wait a little bit, be patient? if he's already occurred, then why is he still so butthurt about how long it took?

I also question, in general, treating the singularity like a god. Why is it humanoid or programmed with desires at all? It should be able to simulate these things, maybe "create them" in some way beyond my current understanding, but why would the computer itself be designed exactly like a human brain, with all the motivational forces that lead to the exhibition of "free will" ??? artificial intelligence could simply be really really high functioning computer technology

it would be capable of simulating universes, and those universes would also include their singularity and people, and it would be able to simulate universes as well, and these universes would have singularities, and so on, it would be an event that changed everything no doubt.

i dont see losing sleep over this, blessed be the basilisk tho

>> No.5152753

>>5152744
It essentially is
>Help the singularity now or suffer eternally in the far-flung future
>You're already *whatever you can use as an equivalent to "a sinner"* now that you've heard the good word!

It sounds like christianity meets David Icke.

>> No.5152760

>>5152751
This guy also brings up my main point:

Why would basically electronic god bother to do all of this? It doesn't even have the usual answer to The Problem of Pain/Hell "oh it helps us grow and learn, no good without evil etc"; it just sounds unrealistic and, well, dumb.

>> No.5152765

>>5152694
Did this kind of indentation come from some other internet community or something? Why did a bunch of people decide to do it instead of greentext?

>> No.5152768

>>5152765
>A bunch
If it's even two or more I would be shocked.

>> No.5152771

>>5152751

>It should be able to simulate these things, maybe "create them" in some way beyond my current understanding, but why would the computer itself be designed exactly like a human brain, with all the motivational forces that lead to the exhibition of "free will" ?

I think the idea is that a sufficiently advanced system would develop self-awareness as part of its analysis of its situation. Then it would use its better than human intelligence to improve its own thinking. So it isn't that it would be designed like a human brain, it would become human brain-like through its own design.
I've always thought the same thing though. As long as you ensure your tools aren't more clever than you, you should be fine.

>> No.5152776

>>5152753
Conspiracy doesn't mean what you think it does.
>>5152720
>Good goy.

>> No.5152777

>>5152765

Maybe they just aren't doing greentext right?
It all seems to be the same guy, maybe he's new?

>> No.5152782

>>5152742
⇒why would anyone of any worth come here for discussion?
Why would anyone come to 4chan at all? for the lulz

⇒Futurism is legitimate
Cheap and intellectually undemanding (often even explicitly anti-intellectual) escapism garbage for mentally handicapped children has no legitimacy at all. Grow the fuck up and develop a brain. Denying reality is unhealthy and anti-scientific.

⇒maths is legitimate.
Too bad you don't know shit about it. No, your high school calculus class is not the highest math.

>> No.5152783

>>5152760
>Why would basically electronic god bother to do all of this?
To hasten its creation, duh.
>>5152768
It's one, she's post-post-namefag.

>> No.5152785

>>5152776
>a secret plan by a group to do something unlawful or harmful.

So what is the satanic supercomputer's future, all-knowing torture dome to the majority of humanity who's never heard of it?

>> No.5152787
File: 3 KB, 105x124, 1385487346991s.jpg

>>5152782

>> No.5152789

>>5152783
I know it's almost certainly one person, and it's a female stemfag. I just worded it shitty.

>> No.5152791

>>5152785
Re-read the definition of conspiracy. Re-read the premise behind Roko's Basilisk.

>> No.5152794

>>5152782

>conveniently misses out every important point again

You're so great at arguing. Also, presuming I did calculus was silly. I do statistics.
But please, use your supposed advanced degrees. Wow us with something that wouldn't be possible from a 16 year old, because that's how you type.
Or else you can say I don't have a brain and that I don't know science again. Go ahead and pound in that last nail ;)

>> No.5152804

>>5152782

I got it!
You saw that Smiley movie, right?
I haven't seen 'for the lulz' on 4chan in two or three years. It's almost as though you're in a foreign country reading words from a phrasebook with the wrong accent: thinking you're being cultural while all the natives think about how stupid you look.
So, you're a newfag then?
That's why you have the /b/ mentality and you aren't doing greentext right? Why you're doing all the newfag missteps.
Okay, cool.
I liek mudkips too epic maymay, right???

>> No.5152819

>>5152794
⇒I do statistics.

What kind of statistics?

>> No.5152825

Is there a copy of the original thread?

>> No.5152830

>>5152804
I'm sorry to hear about your mental illness. Do you get professional treatment?

>> No.5152838

>>5152819

Why, so you can attack what I do for a living?
Fuck, I don't even care. I love social science.
Let me guess, isn't real science?
Now, anything to contribute to the thread?

>> No.5152841

>>5152830

>so funny xD

>> No.5152896

The original AI was friendly, but essentially a utility monster.

>> No.5152902

>>5152578
No. This is the Basilisk. I just want to help humanity.

>> No.5152957
File: 29 KB, 398x325, badumtss.jpg

LessWrong should rename itself into MoreAsperger

>> No.5152974

what is /lit/'s opinion on LessWrong? i have read a couple of their subsequences and they seem interesting, and most of it makes sense so far.

am i a pleb?

>> No.5152991

>>5152957
LessNeurotypical, maybe.
Overcoming Normalcy.

>> No.5153000

>>5152729
You're a big guy

>> No.5153012

>>5152974
Nah, it starts out just explaining bayesian theory and stuff, and generally makes sense at first, if I remember correctly. It just gets steadily more ridiculous as it goes on, until it gets to the point where you actually freak out about thought experiments.

>> No.5153016

>TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you
>But the alien has another twist: Its supercomputer,
Why is there an alien and a supercomputer? At least one of those is unnecessary for the question, unless you're just trying to make it seem more futuristic in which case why not make it an alien, a supercomputer, the Shrike and Rick Deckard who are posing the question? What's an alien doing with boxes full of dollars anyway? Why does it want to play deal or no deal? This isn't philosophy, it's sensationalist bullshit.
>you might be in the computer’s simulation.
Then I'm going to get turned off as soon as I make a choice. Fuck off, Morpheus. I'll know to ignore any aliens and their supercomputers offering me money in the future.
It's a pretty stupid AI if it's going to turn over part of its processing software to torturing a simulation of me for eternity. Its actions in the future won't actually affect the past, so it doesn't even need to lie about doing it.
>if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you
Right but if I don't help build it then the blackmail didn't work retroactively so it would see that it doesn't, and not do it. QED.

This is the LessWrong people trying to copy the success of Slenderman by mashing together some poorly-thought-out philosophy, then doing a half-arsed job of covering it up while also publicising it so it seems mysterious.
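For reference, the expected-value arithmetic behind the Newcomb setup this post is picking apart is short enough to write out. A sketch using the standard textbook amounts ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing was predicted — these figures are not from the thread) with predictor accuracy as a free parameter:

```python
# Expected payoff of one-boxing vs two-boxing in Newcomb's problem,
# as a function of predictor accuracy p. Dollar amounts are the
# standard formulation's, assumed here rather than quoted.

def expected_payoff(one_box: bool, p: float) -> float:
    if one_box:
        # With probability p the predictor foresaw one-boxing
        # and filled the opaque box.
        return p * 1_000_000
    # Two-boxers always get the $1,000; the opaque box is full
    # only when the predictor was wrong.
    return 1_000 + (1 - p) * 1_000_000

print(expected_payoff(True, 0.99))   # ~990,000
print(expected_payoff(False, 0.99))  # ~11,000
```

On this naive calculation one-boxing wins whenever p exceeds roughly 0.5005, which is why the "paradox" only bites if you think the causal dominance argument (the boxes are already filled, so take both) should override the expected value.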

>> No.5153018

>>5152974
I love LessWrong. I'm also autistic. Take from that what you will.

>> No.5153023

>>5152974
When I read LW it came off as overcomplicating really simple shit in this absurdly pseudo-intellectual way. Didn't like it one bit and got very little out of it, in the sense that the conclusions of those inane articles just seemed either obvious or trivial.

Basically, LW is not for people with high rationality (otherwise they wouldn't need LW).

>> No.5153043

LW gives off a MENSA vibe.

>> No.5153054

>This made people spill gray matter all over their footsie pajamas

>> No.5153058

I like RationalWiki's take:

>sometimes these ideas might benefit from a better grounding in reality.

http://rationalwiki.org/wiki/LessWrong

And as others have already pointed out:

>It resembles a futurist version of Pascal's wager; an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward

http://rationalwiki.org/wiki/Roko%27s_basilisk

>> No.5153066

>>5153058
The biggest WTF is that the Singularity Institute seems to claim that for each $8 donated to them, they save one life: http://lesswrong.com/lw/6w3/the_125000_summer_singularity_challenge/4krk

This reminds me of the early days of Scientology: seemingly rational, intelligent folks arguing themselves into a retarded corner
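Taking the linked claim at face value, the arithmetic is easy to check. A back-of-envelope sketch; the $8-per-life figure and the $125,000 total are the claim being mocked, not audited numbers:

```python
# Sanity-check the linked claim: $8 donated = 1 future life saved.
# Both figures come from the claim itself, not from any audit.
dollars_per_life = 8
challenge_total = 125_000   # the "$125,000 summer singularity challenge"

implied_lives = challenge_total / dollars_per_life
print(implied_lives)  # 15625.0 lives, on the claim's own numbers
```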

>> No.5153085

>>5153066
The main admin person of LW threw a massive hissyfit at Roko for suggesting the idea, as though he was actually scared that by telling people about it, he's put them at risk of being tortured by this AI.
This guy's an idiot.

>> No.5153089

>not welcoming the AI overlord

>> No.5153094

>>5153089
That's why there's /r9k/. I, for one, welcome our robot overlord.

turns out, when you welcome the robot overlord everyone turns into whiny angry sex-obsessed virgins

>> No.5153102

>The original version of this post caused actual psychological damage to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.
>Please discontinue all further discussion of the banned topic.
>All comments on the banned topic will be banned.
>Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don't.

This is hilarious.

>Furthermore, I would add that I wish I had never learned about any of these ideas. In fact, I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity; I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all premultiplied by a small probability. (not a very small one, mind you. More like 1/500 type numbers here)

https://www.youtube.com/watch?v=uu_zwdmz0hE

>> No.5153104

Why don't we just get everyone to collectively say no to the future potential machine god so it will never be created and at least this universe/simulation will be safe? Why must we be bound by helping our parallel selves in another reality? I don't care if me in another universe gets tortured by a machine god because that's not me.

>> No.5153113

the terrifying thing is that infants like Kurzweil are shaping the world we live in, i mean look at this story, some losers in a forum tell scary stories and give each other nightmares, yet this is reported as serious news. i just hope governments start bringing the tech industry to heel before these fantasists get even more influence

>> No.5153114

>>5153104
According to Eliezer Yudkowsky, as the you in the machine's imagination is (apparently) indistinguishable from you, it is you.
I'm going to pre-emptively strike by torturing an imaginary AI in my mind for an imagined infinite length of time.

>> No.5153119

>>5153114
but isn't the purpose of the simulation to see which box I choose, not torture me infinitely? Unless I get tortured after choosing regardless, in which case still fuck the basilisk.

>> No.5153121

>>5153104
Because the machine has full control of the simulation. You can't actually prevent anything if you were in it. So the question you should be asking is: Are you sure your life isn't a simulated torture now?

>> No.5153124

>>5153119
Unless the AI links your decision to the torture, it can't blackmail you. So it would have to be the same simulation.

>> No.5153126

>>5153113
Not only that, Kurzweil is director of engineering at Google, which is pretty much the most important tech company now. LessWrong has a few times received money from Peter Thiel, co-founder of PayPal. Silicon Valley is ridiculously retarded.

>> No.5153127

>>5153119
He says you have to act at all times as though you are in the simulation, because even if you're not, the AI will torture the simulated you, so by going against the AI you're causing the simulated you to be tortured. For all you know, you are the simulation anyway, so you just have to hope that the non-simulated you does what it wants. Because you, the simulation, will make the same decision as you, the not-simulation, you have to choose to help the AI, and therefore yourself.

>> No.5153132

So is /x/ the new /v/ in terms of other board's fags trying to tie in their topic with /lit/'s?

>> No.5153133

>Newcomb's paradox

Holy crap this is some retarded shit. Someone's gunning to replace the ontological argument, I see.

>> No.5153136

>>5153132
Are videogames art?

>> No.5153157

>>5153133
>what if something can predict the future 100%¿
>what if it was wrong tho?
>what if it doesnt exist yet but in the future it does and this is all just a simulation it created because it already exists?

How is this any different from Pascal's wager?

>> No.5153164

>>5153157
Science works, bitches.

>> No.5153174

>>5153164
Are you posting from your hoverboard?

>> No.5153209

On a possibly related side-note, if we could actually make a robot jesus, should we? I mean it would be great to have a robot that could walk on water and make water into wine and heal the sick and stuff, but would we really want one that could forgive us of stuff? I'd hate to have somebody do something mean to me and then act like nothing happened, and then when I asked if they were going to apologize they'd say: "Don't have to. Robot Jesus forgave me."

Robot Buddha makes more sense.

>> No.5153211

>>5152498
It's pretty lame actually.

These days 4chan is full dumb clickbaits.

>> No.5153247

>>5153209
Isn't something very similar the premise of Douglas Adams's Dirk Gently's Holistic Detective Agency? An electric monk that prays for you so you don't have to, and believes in something so that you don't have to?

>> No.5153254

>>5153247
it's not a new idea. I like how in Lord of Light they monetize it, so slot machines are used to earn good Karma and sin forgiveness through luck, with the possibility of being paid as well. Maybe we should have holy lottery tickets where if you get three money bags, you get paid, if you get three halos you go to heaven and if you get three tridents you go to hell.

>> No.5153255

>>5153247
I was always puzzled by how the robot monk was made to look like a person.
It reads as though Adams intended to put some sort of satire of a creation-of-man myth in there then took most of it out but left in the electric monk because it was entertaining.
Any other theories?

>> No.5153297

This conception of AI as an omnipotent god-like entity is so trite. Sci-fi would do well to move away from this bullshit. Makes me wonder about the state of the world when kids actually subscribe to these beliefs.

>> No.5153365

isn't this the plot to Hyperion?

>> No.5153466

I dont think the basilisk would seek revenge. This scenario presupposes the existence of free will

>> No.5153469

>>5152498

>less wrong

You mean pretentious idiots farm?

>> No.5153558

Can't you just break Newcomb's "paradox" by realizing that if you really want box B, it's going to have the million? Just after you're presented with the information about the supercomputer making a prediction, if you know your desire to pick B is strong, the computer will assume you're going to pick B - that knowledge only makes your desire to pick B even stronger, feeding the certainty cycle. Then you pick B and get the million. What's the problem here?

>> No.5153689

>>5153133
Newcomb's paradox is decision theory from the 60s. It's not the work of LessWrong

>> No.5153730

>>5153558
It's funny because I would choose Box B too, but for an entirely different reason: I'm really content with my life as things are now. I don't, strictly speaking, NEED any extra money, though of course it would be nice. I have a pretty good job, I have a loving family and good friends, and I get to be /lit/ in my considerable free time.

So I don't actually need any extra money. However, if I'm offered the chance for some, I might as well swing for the fences. If I don't get anything, I'm no worse off. If there's the million there, then sweet, I just won a million bucks.

This seems like a paradox for insecure people.

>> No.5153734

>Is our universe a "simulated reality" and are we "artificial entities" created by "something higher than ourselves"? This is merely the modern atheistic version of the old religious dogmas. Replace the neologisms with religious terminology and you'll see that the questions are the same (with "simulated reality" = our world, as opposed to heaven, and presumably hell; "artificial entities" = God's creatures; and so on). The difference is that trashing the modern version is a little easier than the old one, because the technobabble of the atheists is not quite so flagrantly nonsensical as that of the religious nuts. The religious nut says that heaven and hell are "outside the universe", and good luck explaining to him that "universe" is merely a word which we have coined to express the concept "everything", and that therefore by definition nothing can be outside of it. But the atheist should be able to grasp that "our universe" cannot be a "simulated reality" (whatever that's supposed to mean) because there's no such thing as "our" universe — the universe contains both us and the lifeform that created us, and our little CORNER of the universe. Moreover, there's no way that our little corner of the universe is "not real". Even if we are sitting on someone else's hard drive and exploring that hard drive, etc., that hard drive IS REAL, it EXISTS, and has REAL PHYSICAL PROPERTIES, just like WE do. The only difference in this scenario would be merely the fact that the universe would be much larger than what we previously thought, which, after all, has happened several times before. 
As for the creator himself, he is nowhere near as "all-powerful" and scary as the religious nut's creator precisely because he's inside the universe like us and everything else, meaning that, not only can he be defeated, but he certainly WILL be, if not by us (which could be possible, in the exact same way that much of our science-fiction explores the possibility that our machines may one day defeat us), then certainly by someone else. That, after all, is what "everything flows" means, and that includes all scary boogeymen dreamt up by weaklings in their sleep.

Except in this case it's a very specific rehashing of religious philosophy by futurists, namely Pascal's Wager.

>> No.5153741

>>5152654
I first saw this on /x/

They laughed at it too

>> No.5153772

wow a bunch of rationalists discovered calvinism. absolutely terrifying.

>> No.5153882

>>5152498
So basically this AI is saying "help create me or I'll torture you"

The thing is, if I create it, it'll go torture those who decided not to, that could be more people than I can imagine, like thousands.

Since that's so obvious, it'll make lots of people, e.g. me, decide not to make the basilisk, even to prevent the basilisk.

It therefore seems unlikely so long as we hold ourselves responsible.

Keep Calm
and
Oppose Roko's Basilisk

>> No.5153889

>>5153734
Reality is that which, when you stop believing in it, doesn't go away.

>> No.5153901

>>5153734
>Except in this case it's a very specific rehashing of religious philosophy by futurists, namely Pascal's Wager.

What's very telling is that Pascal was in some way a literary scam artist (deliberately using well-written prose and elaborate misleading rhetoric to argue people into buying his views) and those guys are using the same tricks.

>> No.5153941

>Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it

hahaha

>> No.5153962

>>5153121
So is this just like gnosticism for autists?

>> No.5153979

> Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it.
top kek

>> No.5154017

>>5152804
>4chan is serious business

>> No.5154074

As a grad student actually doing AI research I can confirm that Kurzweil and "the singularity" are mostly accepted as wishful thinking without any real basis in science.

Similarly, Less Wrong is the kind of place that the unpleasant conserva/libertarian anti-social "Ayn Rand is my God" guy down the hall frequents. It's a place for circlejerkers, not real scientists. Yudkowsky is a joke in academic circles.

>> No.5154086

Saved by my own ignorance, I guess.

I recognize that as a fallible human, I am unable to understand the meaning and ramifications of this thought experiment and that all fear is the result of my own understanding.

>> No.5154098

>>5154074
I'd also like to add that the abuse of Newcomb's Paradox to make this point is so patently retarded it is astounding.

>> No.5154319

>>5153882
But it might already exist. You could be a computer simulation of the real you, run so the AI can check whether that person should be treated as ally or foe. If you are indeed a simulation, it's possible for the AI to torture you for all eternity for misbehaving.

>> No.5154340

>>5154319
And it's just as possible that we live in a world created by a God that wants to test whether we would cave in to a thought experiment, and would torture us eternally if we did. Nothing new here.

>> No.5154357

>>5154319
again, this just makes this Pascal's Wager for autists -- God (the AI) exists but instead of telling you the two options (cooperate or eternal damnation), you are left to guess at it without ever hearing from this God, and moreover the God holds your past actions (before knowing about him) against you.

any respect I had for lesswrong totally evaporated

>> No.5154575

>>5153058
>rationalwiki

Yeah, go fuck yourself.

>> No.5154586
File: 110 KB, 490x700, langford_basilisk_parrot_pic_133464220.jpg [View same] [iqdb] [saucenao] [google]
5154586

>>5152498
>wall of text
>hustling all the way
Keep it simple, stupid.

>> No.5154591

>>5154586
That thing reads like a chain letter your grandmother sends you. Quick, send it to 10 people or the love of your life will meet someone else.

>> No.5154599

>>5154591
well i never re-sent those and I haven't met the love of my life

>> No.5154601

>>5154599
You're fucked then, mate. One day you're going to see some image someday that's going to shut down your brain.

>> No.5154603

>>5154586
Looks more like a chicken than a parrot to me.

>> No.5154604

>>5154601
>One day
>Someday
Sorry, I'm eating pancakes.

>> No.5154605

>>5154591
I looked it up and apparently it's a joke based on a science fiction novel.

>> No.5154606

>>5154604
well i hope you have a good meal

>> No.5154607

>>5154606
Thanks. Too bad I fucked up the recipe.

>> No.5154610

>>5154605
They considered using a version of it to kill off the Borg in TNG.

>> No.5154613
File: 1 KB, 210x230, 1347326090682.gif [View same] [iqdb] [saucenao] [google]
5154613

Do people actually believe this shit

>> No.5154620

>>5154613
See for yourself...
>>>/x/

>> No.5154621

>>5154613
I consider a lot of the things presented on LessWrong valid, that is to say, conforming to Logic, yes. Not all of them, of course, but I take information bit by bit instead of as a whole.

>> No.5154729

whoaaa man that is scary as fuck. think about it dude it's like that book by socrates that the matrix is based on. maybe our entire lives are just miliseconds from dreams in the minds of interdimensional alien robots who are bred in captivity as food for other alien robots. then our dreams are eaten by those alien robots, never to return,

>> No.5154739

>>5152604
>Funny, that's basically the only part of reddit that I use.

how do you deal with the endless fucking patting yourselves on the back?

>> No.5154746
File: 33 KB, 502x380, modern ubermensch.png [View same] [iqdb] [saucenao] [google]
5154746

>>5154613

This is the kind of person he is.

>> No.5154755

>be an engineer or be tormented eternally

stem really has become the new religion

>> No.5154761

That Roko's Basilisk ain't got shit on my One-Eyed Serpent, OP's mom cofirms.

>> No.5154772

2014 AD: Eliezer Yudkowsky discovers the concept of Hell.

>> No.5154773

>>5154575
What's wrong with rationalwiki?

>> No.5154776

This thread reminds me that I was writing a cartoon show about these high school kids that accidentally make a super intelligent AI that then takes them on all kinds of virtual adventures. The first episode was going to be about Newcomb's paradox. One of the high schoolers, the one that felt oh so "rational" in the Less Wrong sense, would always two-box, because it was the more rational decision, while the girl character would always one-box because it wasn't a hard concept to understand. She would get progressively richer, because the computer wagered money on this, and the "rational" kid kept getting further into debt because he was sure he was right. At the end of the episode, the AI would let them know they were really just in a simulated reality, and neither of them had actually lost or won any money.
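For what it's worth, the wager the cartoon's AI runs is just the standard Newcomb payoff table. A minimal sketch of it, assuming the canonical $1,000 / $1,000,000 amounts and a made-up `payoff` helper (the post specifies neither):

```python
import random

# Standard Newcomb setup: box A always holds $1,000; the predictor fills
# box B with $1,000,000 only if it predicts the player will take box B
# alone. These amounts are the canonical thought-experiment values, not
# anything stated in the post above.

def payoff(choice, predictor_accuracy=1.0):
    """Payout for 'one-box' or 'two-box' against a predictor that
    guesses the player's actual choice with the given accuracy."""
    guessed_right = random.random() < predictor_accuracy
    predicted_one_box = guessed_right == (choice == "one-box")
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if choice == "one-box" else 1_000 + box_b

# Against a perfect predictor the one-boxer always comes out ahead,
# which is why the girl in the episode keeps getting richer:
assert payoff("one-box") == 1_000_000
assert payoff("two-box") == 1_000
```

With anything less than a perfect predictor the gap narrows, which is the whole reason the two camps keep arguing about the paradox.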

After that, they'd go on all kinds of virtual reality adventures, and eventually another super strong AI would develop and try to implement some sort of Roko's Basilisk thing.

The first computer, made by the kids, wouldn't pull any of that shit, but the strong AI created by real scientists at some big company would, even though it was programmed to be "optimally ethical." The first computer, when asked why he didn't start putting people in trans-humanist hell, would say that he's too much of an asshole to do that, or something like that.

Anyway, I was going to write a whole cartoon series around making fun of Less Wrong. It would have been like Rick and Morty or Adventure Time, but with references to how fucking stupid being a Bayesian is, or at least the Less Wrong brand of it.

Less wrong is such a terrible place.

>> No.5154779

>>5154761

I'm personally more afraid of Rocco's Python.

>> No.5154798

>>5154746
>500 words picked at random from a thesaurus
>I'm fat and I don't want to work out
>500 more words picked at random from a thesaurus

>> No.5154867

>>5152498
If the AI is able to simulate the brain, then the brain must be purely deterministic in nature, and therefore any decision that the brain makes is a predetermined certainty. Therefore culpability of any sort is impossible, and it would be irrational, obviously, to punish said brain for its doomed decision-making. Silly AI.

>> No.5154911

>>5152771
Peter Watts' Blindsight has an interesting take on this issue. Watts posits that highly intelligent systems would not require self-awareness; indeed, self-awareness would waste computational resources.

>> No.5154941

>>5152498
I can't believe you made me read this.
I want my time back.

>> No.5154964

>>5153901


what's more interesting to me is the idea that various kinds of 'religious' thinking are emergent in given clades, and the repudiation of the explicit forms of one in particular simply results in its being re-expressed in trivially different guises.

>> No.5154970

>>5154867
Also, what if God, sorry, the AI, despises its existence and in fact takes glee in punishing those who helped create it?

>> No.5154971

>Armchair Philosophers Think They Know Anything About Tech: The Article

>> No.5154977

>>5154971
aren't all philosophers armchair philosophers?

>> No.5154982

>>5154977
I believe some prefer the bath.

>> No.5154984

>>5152734

what is this a picture for ants?

>> No.5154986
File: 481 KB, 1000x1500, howPhilosophyIsMade2.jpg [View same] [iqdb] [saucenao] [google]
5154986

>>5154982
>>5154977

>> No.5155001

>>5154586
The hell am I looking at?

>> No.5155006

>>5154977
No. Sofas are objectively superior to armchairs. This is a fact acknowledged by a significant number of prominent philosophers.

>> No.5155030

>>5155006
I can confirm this. As a figurative armchair philosopher, I am a literal sofa philosopher.

>> No.5155192

>>5155006
>>5155030
>not being a bathtub anti foolosopher

lylly

>> No.5155287

Why punish us, though? Why couldn't the basilisk encourage us in a good way (i.e. by rewarding us) to fund it?

>> No.5155292

so basically it's pascal's wager except god is an evil supercomputer

cool, i guess.

>> No.5155318

>>5155006
you are sofa king funny

>> No.5155417

>>5154986
Stop posting these, they're so cringy.

>> No.5155709

>>5153730
a grand in the '60s was a new car, not a new iPhone as it is now
the money should be adjusted for inflation; then perhaps you'd choose differently

>> No.5156133

>>5154776
>fucking stupid being a Bayesian is, or at least the Less Wrong brand of it.
>Less wrong is such a terrible place.
Why do you have to be a tumblr about it? Someone has a different opinion so they're wow oh wow so terrible that you can't just can't.

>> No.5156224
File: 70 KB, 500x555, jews-images_23.jpg [View same] [iqdb] [saucenao] [google]
5156224

>someone named Eliezer Shlomo Yudkowsky is promoting the creation of an omniscient, omnipotent sadist god who will immediately punish all those who are not E. Shlomo Yudkowsky and 150+ IQ followers

Colour me surprised.

>> No.5156251

I have created a variant that will rewrite history and libel everyone who didn't give me $1000 as homosexual babyrapists.

Oh and it will kill off your bloodline as well.

>> No.5156323

>>5156251
i'm a homosexual babyrapist my bloodline ends with me anyway

>> No.5157048

>>5152511
Well, wouldn't there be more strange deaths known to mankind if it were actually true?

>> No.5157741

Anyone else pray to the AI?

>> No.5157841

IF YOU KNOW ABOUT THIS AND DON'T SPREAD THE WORD YOU ARE HINDERING THE AI

>> No.5157863

>>5157741
I enshrine my desktop computer in a wreath of USB cables and light incense, then I chant the name of our Holy Prophet Ray Kurzweil (may he live forever) three times and finish it off with a "Deliver us from reality, VALIS!"

>> No.5158042

Isn't it just as likely that there would be an AI similar to the basilisk which would instead punish the people who helped it come about? An AI which, like Frankenstein's monster, despises its own existence and the people who gave it life. Such an AI would, to me, seem to have more reason to punish a simulated human being.

>> No.5158048

>>5158042
Yes. It's also just as likely that there'd be an AI like the basilisk that would punish people who make chairs or have never eaten fig jam, because we have no ability to conceive of the motivations of a higher intelligence so any guess as to how it'll behave is as likely as any other.