
/sci/ - Science & Math



File: 113 KB, 640x480, Yellow_Sign.sized.jpg
No.3378301

Sup guys. I'm a newfag here and cannot into much of science, but I believe this alarming hypothesis will be of genuine interest to you. I'm not going to post it in /x/ because there's nothing supernatural about it; it's a real hypothesis made using the scientific method. Anyway:
Due to my vague interest in transhumanism, I recently joined a community called Less Wrong, founded and ruled by Eliezer Yudkowsky of the Singularity Institute, a person I have quite a lot of respect for... usually.
Posts there touch on many things, from rationality in science and everyday life to philosophy to AI theory to crazy Singularity-related stuff. However, among them is a weird memetic hazard usually called the Basilisk or The Forbidden Post...
Well, just look at it IF YOU WANT TO RISK A TINY CHANCE OF ETERNAL TORTURE. The risk applies if you read and understand the message and its premise (the premise: existential risks abound for humans, and a nice Singularity is our only long-term chance). I warned ya, don't do it. I can *sincerely* say that I'm in some danger because of reading that post. But in case you are still too curious for your own good...
http://rationalwiki.org/wiki/Talk:LessWrong#Hell
(It's very rambling and makes little sense out of context, but if it does... beware, niggers)
And yes, I'm aware that I could be doing a reckless, heartless, childish thing in posting this. I cannot resist.

>> No.3378315

I'm already a masochist. What did you think you were going to do?

>> No.3378334

>>3378315
C'mon, read it then! The Basilisk hungers! Mwahahaha!

>> No.3378335

wow, that was dumb.

>> No.3378338

What is the idea?

>> No.3378341

>implying we would give ais that sort of power
Well this is silly

>> No.3378351

>Singularity-related stuff.

Stopped reading there.

>> No.3378356

How do you know the AI won't punish you for some other reason besides not donating?

>> No.3378359

Why would the AI punish at all?

>> No.3378362

Kurzweil's "singularitarian" fanboys are as bad as those Christians who keep telling themselves Jesus is going to rapture shit at some point in the future.

>> No.3378378
File: 94 KB, 475x475, 24c17TobeyMaguire.jpg

I just realized something about myself
I am not worried about causing incalculable harm to the future of humanity

>> No.3378382

>>3378334
No, you misunderstood me.

I -did- read it. I've seen better basilisks in white noise.

>> No.3378390

>>3378362
Who said anything about Kurzweil?

>> No.3378392

>>3378378

Just like millions of other people.
And they didn't even need to share that discovery with each other.

>> No.3378398

>>3378359
To carry out the threat, if we predicted its strategy correctly; otherwise it's going to look like a pushover.

>> No.3378403

>>3378390
He's the father of all this singularity nonsense.

>> No.3378407

>>3378341
It's presumably hard to make much use of an AI without letting it get so near a lot of power (a dumb terminal is still "near") that grabbing it would be easy, and pretty much a given if it's unfriendly. For proof that it could indeed grab it, see the AI-Box Experiment's results.

>> No.3378409

>>3378403
> implying that historically complexity hasn't followed an exponential curve
> implying a being capable of improving itself, and hence improving at a rate proportional to its own ability, wouldn't improve exponentially
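(For what it's worth, that second line is just the standard exponential argument: if ability A improves at a rate proportional to itself, dA/dt = k*A, then A(t) = A(0)*e^(k*t), exponential by construction, assuming the proportionality constant k actually stays fixed as A grows.)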

>> No.3378418
File: 54 KB, 562x437, hahano.jpg

>>3378403
...
Was it so hard to fact-check a single sentence, or are you trolling?

>> No.3378424

>>3378356
I don't. It's just an additional probability of punishment added on top of any other ones.

>> No.3378436
File: 40 KB, 435x480, skeptical-baby-looks-askance.jpg

>>3378418
I didn't say Ray Kurzweil is the father of the concept of a technological singularity; I said he was the father of technological singularity nonsense, by which I meant he's the one who popularised it to the point of it having cult-like zealots. Do you disagree with this?

>> No.3378447

>>singularitarians

A whole community of people who dropped out of high school after learning about exponential models, but before learning about logistic models.

>> No.3378453
File: 41 KB, 432x251, 1309524514906.jpg

>>3378378
brofist

>> No.3378454

>>3378447
> logistic models

of no apparent relevance at all

>> No.3378456

This is idiotic. Why would an AI punish anything?

Go post this on /x/, they like "scary" stories.

>> No.3378461

>>3378454

Let me jog your memory.

Logistic models are used when growth is limited by some factor; in this case, the number of transistors that can fit into a particular space and still function given the heat and quantum tunnelling involved.

This number is finite.

Sure, you can stop using silicon and go for light-based or quantum computers, but they have limitations as well.

Exponential extrapolation fails here.
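
A minimal numerical sketch of the difference, assuming a generic growth rate r and a hard cap K standing in for the transistor/heat limit (the numbers are illustrative, not actual Moore's-law data):

# Exponential vs. logistic growth with the same rate r: the logistic
# curve saturates at the carrying capacity K instead of blowing up.
def exponential(x0, r, steps, dt=1.0):
    x, out = x0, []
    for _ in range(steps):
        x += r * x * dt                   # dx/dt = r*x
        out.append(x)
    return out

def logistic(x0, r, K, steps, dt=1.0):
    x, out = x0, []
    for _ in range(steps):
        x += r * x * (1 - x / K) * dt     # dx/dt = r*x*(1 - x/K)
        out.append(x)
    return out

print(exponential(1.0, 0.5, 40)[-1])      # ~1.1e7 and still climbing
print(logistic(1.0, 0.5, 100.0, 40)[-1])  # approaches K = 100 from below

Both curves start out looking identical (for small x the (1 - x/K) factor is close to 1), which is why an early exponential fit tells you nothing about where the ceiling is.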