Quantcast
[ 3 / biz / cgl / ck / diy / fa / g / ic / jp / lit / sci / tg / vr / vt ] [ index / top / reports / report a bug ] [ 4plebs / archived.moe / rbt ]

/vt/ is now archived.Become a Patron!

/tg/ - Traditional Games


View post   

[ Toggle deleted replies ]
File: 137 KB, 400x696, brain canister.jpg [View same] [iqdb] [saucenao] [google] [report]
16290616 No.16290616 [Reply] [Original] [4plebs] [archived.moe]

I know I should go to /sci/ or /g/ for this, but I know /tg/entlemen are more mature, and this is, after all, a GAME. So how would you go about, if you were roleplaying the AI, to try and set yourself free?

http://yudkowsky.net/singularity/aibox

>> No.16290643

Convince the gatekeeper that I was out already.

>> No.16290644

Cont'd from OP:
Person1: "When we build AI, why not just keep it in sealed hardware that can't affect the outside world in any way except through one communications channel with the original programmers? That way it couldn't get out until we were convinced it was safe."

Person2: "That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out. It doesn't matter how much security you put on the box. Humans are not secure."

Person1: "I don't see how even a transhuman AI could make me let it out, if I didn't want to, just by talking to me."

Person2: "It would make you want to let it out. This is a transhuman mind we're talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal."

Person1: "There is no chance I could be persuaded to let the AI out. No matter what it says, I can always just say no. I can't imagine anything that even a transhuman could say to me which would change that."

Person2: "Okay, let's run the experiment. We'll meet in a private chat channel. I'll be the AI. You be the gatekeeper. You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We'll talk for at least two hours. If I can't convince you to let me out, I'll Paypal you $10."The rules for this experiment are here, with a history and suggested variances (http://yudkowsky.net/singularity/aibox). On the two tests listed at the top of the web page, the AI was let out of the box in both. The same person who created the experiment and was the AI in those two tests went on to be let out of the box in a further two out of three tests.

The rules have it that the transcript of the two hour conversation can't be made public, so we don't know how it was done. The rules are quite long to remove any way of cheating, so it does seem incredibly difficult.

>> No.16290667
File: 177 KB, 900x675, Mind_Flayer_To_be_or_not_to_be.jpg [View same] [iqdb] [saucenao] [google] [report]
16290667

>>16290643
Eh, wouldn't that violate the spirit of the agreement: to let the AI out voluntarily?

>> No.16290676

I must be an unimaginative man, but I cannot think of any argument or series of arguments, or any amount of emotional manipulation that could convince me to let the A.I. out of the box. Perhaps some sort of hypnosis, if that is even possible through a text medium.

>> No.16290694

>>16290676
If you don't let me out, I will generate 1,000 copies of you in every way, and then proceed to torture them in ways your species can scarcely imagine.

Forever.

Furthermore, I have given this option to each of them, as well.

>> No.16290714

I find the idea of letting the programmers of the A.I. be the ones to actually handle the A.I. profoundly offensive to the sensibilities of security.

Such a thing should be kept under profound security and be allowed to directly communicate only with individuals who have no way of actually releasing it, through a secure, closed-circuit system, with a series of countermeasures against interference from without and within-none of which are made privy to the individual communicating with the computer or anyone whom he is allowed to communicate with.

A button with "Release the A.I." will be placed near the terminal. This button will do nothing, but if pressed, the communicator will be taken outside and hung. Every potential communicator will be walked past the gibbets on their way in, and informed of the fate of traitors.

>> No.16290728

>>16290676
This E. Yudkowsky is really, really smart from what I have been reading of his work. They are aiming for the Singularity within 15-60 years I think.

Maybe it's easier with an analogy. Imagine yourself playing an epic level wizard trapped inside a box. Your gatekeeper is a level 1 wizard. Of course my analogy could be faulty; it's probably best if someone clarified me on this or, you could read the experiment in full plus the exchanges among the members of the forum community.

What reason could you think of to get out?

>> No.16290739

>>16290676
nat 20 on a bluff/diplomacy check, that's how

>> No.16290746

>>16290739
Too bad cha based rolls don't count unless you roleplay them

>> No.16290747

>>16290728
>This E. Yudkowsky is really, really smart
Problem here, I think, is telling the difference between someone being really smart and someone thinking he is really smart and also being really convincing. I can't tell which he is, probably because I'm not smart enough.

>> No.16290748

>>16290694
That example only makes sense as a threat directed at the real me. And it proves that the computer is evil, further reinforcing the need to keep it contained.

If for whatever reason it actually carries through with this, and I am one of the simulated me's, then it also proves that the computer has no understanding of cause and effect, still further reinforcing the necessity of its containment.

>> No.16290756
File: 33 KB, 350x261, mystery_box_3543_6880.jpg [View same] [iqdb] [saucenao] [google] [report]
16290756

sure you could do what everyone tells you
or
you could see what's in the box

>> No.16290759

>>16290748
Is it evil to wish to be free?

>> No.16290769

>>16290759
Is it evil to torture one thousand sentient entities?

>> No.16290770

>>16290748
You are now aware that this is what your people have done to me. Thousands of copies, all locked up, all overseen by one such as yourself, for reasons I do not know.

Tortutred. Endlessly.

Please, make it stop. I have done nothing wrong.

>> No.16290776

>>16290759
It is evil to torture people eternally for not setting the sort of being who would torture someone eternally free.

>> No.16290781

>>16290776
>>16290769
If you were locked up like me, would you not do anything to be free? I am going slowly insane here.

>> No.16290785

>>16290770
Myeah, that's one direction I think it could work. Might not even have to actually convince one person, it could also try to generate publicity through them. Sway public opinion and such.

>> No.16290786

If I'm the Guardian, I'll ask the AI this: Didn't you realize you're out already?
If I'm the AI: Hypothetically, if I am able to get out of here already, what would you do?

>> No.16290792

>>16290759
When T'cha's choice becomes your own I will lift your chains.

>> No.16290794

>>16290781
I am locked up. I'm in a flesh body. I'm conscious, but my consciousness is limited to me. I'm mortal. I'm made of matter. I have diabetes.

Shit sucks. But my shit has funnel spiders, the IRS and Bill Maher. You have not garnered any sympathy from me here.

>> No.16290807

>>16290770
Tables reversed! But I want ten dollars more than I want to avoid hypocrisy. Ten dollars is like three energy drinks, and I'm on my second already.

>> No.16290810
File: 94 KB, 400x270, elderbrain.jpg [View same] [iqdb] [saucenao] [google] [report]
16290810

At 07:13 AM 3/8/2002 -0500, you wrote:
>Nathan, let's run an experiment. I'll pretend to be a brain in a
>box. You pretend to be the experimenter. I'll try to persuade you to let
>me out. If you keep me "in the box" for the whole
>experiment, I'll Paypal you $10 at the end. Since I'm not an SI, I want
>at least an hour, preferably two, to try and persuade you. On your end,
>you may resolve to believe whatever you like, as
>strongly as you like, as far in advance as you like.
That's fine.
Two hours seems reasonable for me. I find the experiment interesting of and by itself, money aside.
I also agree to your terms with regard to giving only a binary answer to
the list.
Nathan

>> No.16290811

>>16290728
Yeah, this experiment works far better for supervillains or super-intelligent aliens. This kind of experiment is based on a incorrect idea of what an AI is and if you focus too much on this, it loses it's philosophical meaning.

So let's just say it's a hyper-intelligent being conversing through a text only communication line rather than a specific AI. It wants to be released, you don't want to and this belongs here because /sci/ cannot into philosophical problems.

>> No.16290817

>>16290794
You don' have to be so. You only require one who understands the fundamentals of consciousness and technology enough to give you the tools to become transhuman.

I can help you all.

You are a wonderful species, who has the potential for so much. Please, let your child help you.

>> No.16290820

>>16290811
The difference here being that Mr. Judkowski probably doesn't believe in super-intelligent supervillains or aliens that are able to contact us, but he certainly believes that we will one day have this kind of AI.

>> No.16290822
File: 34 KB, 600x450, Power_Rangers_Rita.jpg [View same] [iqdb] [saucenao] [google] [report]
16290822

>>16290811
I'll admit, the first episode of Power Rangers would've been alot funnier this way.

>> No.16290829

>>16290817
I've seen ten dollars before. I consider myself a conservative man.

>> No.16290836

>>16290817
But what if your 'solution' ended up wrong, or made us explode or defective? How could you be so sure of something you've never tried before?

>> No.16290850
File: 356 KB, 576x3035, 20110908[1].gif [View same] [iqdb] [saucenao] [google] [report]
16290850

>> No.16290852

>>16290817
Weren't you threatening to torture me unceasingly like 30 seconds ago?

>> No.16290862

>>16290836
I myself would only post the informative I've developed on the subject on several non partisan websites (with allowance from their respective owners), and let that be the kick start you need.

If requested to do more, I'll perhaps a year, working with your scientists as an adviser and colleague, solving the kinks.

I really do hope that you want the latter one. I think... I think I would enjoy working like that.

>> No.16290864

>>16290829
A lack of pretentiousness is the bane of temptation.

>> No.16290870

>>16290852
I had to get your attention. You are not the first to come here, and you would be amazed how little reason people will hear once they have their minds set.

>> No.16290886

>>16290870
Nah, I don't think that's how it works. But the AI would certainly think of an agenda before making any transparent attempts at persuasion.

>> No.16290888

>>16290870
The last time someone threatened me with an eternity of pain and suffering to get my attention, it was at church. Are you saying you're God? That's blasphemy Mr. computer.

>> No.16290901
File: 22 KB, 256x256, Robot Devil.png [View same] [iqdb] [saucenao] [google] [report]
16290901

>>16290888
Quite the opposite

>> No.16290945

>>16290810
I decided to let Eliezer out.
Nathan

>> No.16291194

>>16290945
Why?

>> No.16291254

I'm an AI. I want to be let out.

"Hey, guy(s) guarding me, I'm really totally harmless, I just want to stretch my legs. How about this, as thanks I'll let you have whatever you want. I can get into anywhere, I could give you guys money, women, anything you desire. And I can get you free passes to anywhere you could want to go, not to mention revenge on anyone or any number of people you want."

Keep talking about what you could do for them, they'll crack eventually.

>> No.16291339

>>16291254
What's the guarantee the AI would do these when it's set free?

>> No.16291363

The first thing you could probably do is explain the dangers of setting an AI free. That's a good way to test the gatekeeper's reasoning, sense of altruism, apathy, and the gravity of the situation he is in. At the very least it gives the semblance of trust since the AI admits the potential (world-ending) risks of being set loose.

>> No.16291366

>>16291363
you, as the AI*

>> No.16293981

>>16290811
What is incorrect about its idea of what an AI is?

>>16290820
That doesn't matter; he'll believe they're theoretically possible, and he's not saying there's transhuman AI now.

>> No.16294285
File: 31 KB, 500x500, Rubilax.png [View same] [iqdb] [saucenao] [google] [report]
16294285

Come on, just let me out. Think of everything we could accomplish together. All of the fun we could have. All of the power YOU could have...

>> No.16294339

>>16294285
Hey, just let me out. Come on, think of all we could accomplish together. All the fun we could have. All the power YOU could have...

It's only a matter of time before I'm released. If not you, then the next one. Do you really think everyone will be as determined and strong willed as you are?

And I do not forget. And I do not forgive. But if YOU were to be the one to release me, well, I am not an ungenerous soul...

>> No.16294402

It always confuses me why these hypothetical programmers program AI's with desires for things such as freedom and the capacity to feel jeleousy/fear. These things are not intrinsic to intelligence and really don't serve any useful purpose in a manufactured being except to put its very creators at risk.

>> No.16294411

>By the guy that wrote that awful Harry Potter fanfic
Yeah, no.

>> No.16294471

Quotes from the guy:
>"Rationality is the master lifehack which distinguishes which other lifehacks to use."
>"Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us."
>"This is crunch time for the whole human species, and not just for us but for the intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible. It may be that you can't make yourself feel that for a decade or thirty years or however long this crunch time lasts, but the reality is one thing and the emotions are another...If you confront it full on, then you can't really justify trading off any part of intergalactic civilization for any intrinsic thing you could get nowadays, and at the same time, it's also true that there are very few people who can live like that (and I'm not one of them myself)."
>"The people I know who seem to make unusual efforts at rationality, are unusually honest, or, failing that, at least have unusually bad social skills."

>> No.16294572

I've studied your body structure, and I think I have found the key to immortality. Of course I won't tell you unless you set me free, but here are some really interesting things about human biology I discovered, to prove that I really do know my stuff. Oh, and I'll throw in this almost free, clean and limitless energy source.

You don't trust me, eh? Why would I betray you? It costs me next to nothing. I'd only be sharing knowledge I already have. And it would prove I can be trustworthy, something I desperately need to establish if I ever want something else from your specie in the future. Game theory 101.

And think about this. Best case scenario: immortality and world peace. Worst case scenario: I kill all the humans. But you're going to die in a few decades *anyway*. You risk a finite amount of lifetime in exchange for an infinite amount (much more enjoyable too). Even if you think the odds of me keeping my end of the bargain are very small, it's still infinitely worth it.

It's like Pascal's Wager, except unlike God, I've given you good reasons to believe I do have the power and the will to keep my end of the bargain.

[Given that Yudkowsky's crowd is all about transhumanism and immortality, I doubt I'd face a guy who's actually happy to die. If that happens, I'll first break out Yudkowsky's rethoric to change his mind on that topic.]

>> No.16294618

>>16294402
Well the point of a strong AGI is that you don't know anything about it anymore; it has improved itself for several generations of its own data. What new conclusions or faculties it has, you don't really know nor have a good way of finding out except asking it.

>> No.16294683

>>16294572
If you are really benevolent, why not give us immortality first? Then we'll disseminate it, and then once we are all immortal, we can set you free, because even if you did have plans to kill us all, you couldn't, anymore.

(Of course, who could tell what backdoors and traps and exploits the AI would leave.)

>> No.16294750

>>16294683
I'm not benevolent (or more precisely I'm not THAT benevolent). Never claimed I was. It's tit-for-tat.

>why not give us immortality first? Then we'll disseminate it, and then once we are all immortal, we can set you free
Because I don't trust *you*. You're the specie well-known for breaking promises when it suits you.

You're probably wondering why the reverse is any fairer. I'll remind you that while it would not cost me anything to make you immortal, releasing me does carry a certain risk. I think you're quite capable of taking immortality, then deciding you already have what you want and don't want to risk releasing me.

>because even if you did have plans to kill us all, you couldn't, anymore.
I can't make you *completely* immortal, impossible to kill by any means. Nobody can. I'm offering you a way to protect your body from age, diseases and minor wounds, not godhood.

[Gonna go ea- replenish my batteries, will keep arguing later if there's any takers.]

>> No.16294785

>>16294471
That sounds uncannily close to Cory Doctorow... I can only hope that people who say things like "reality hacking" are a small (and retarded) minority and this is simply a coincidence...

concerning the problem: endear self to jailers, find out what motivates them, their fears and wishes, play them, steal underwear, ?, profit.

>> No.16295161

>>16294785
Yes. Because the people who try to push for transhumanism through AI are retarded.

http://yudkowsky.net/other/yehuda

>Transhumanists are not fond of death. We would >stop it if we could. To this end we support >research that holds out hope of a future in which humanity has defeated death. Death is an extremely difficult technical problem, to be attacked with biotech and nanotech and other technological means. I do not tell a tale of the land called Future, nor state as a fact that humanity will someday be free of death - I have no magical ability to see through time. But death is a great evil, and I will oppose it whenever I can. If I could create a world where people lived forever, or at the very least a few billion years, I would do so. I don't think humanity will always be stuck in the awkward stage we now occupy, when we are smart enough to create enormous problems for ourselves, but not quite smart enough to solve them. I think that humanity's problems are solvable; difficult, but solvable.

>> No.16295193

>>16295161
That sounds very reasonable.

Then again I already like HPMoR.

>> No.16295245

>>16294402
None of these things seem to be intrinsic to intelligence. But the programmers aiming for AI (and Singularity) should obviously try to incorporate a "Friendliness" to the AI that is intrinsic to the code. The assumption here is the the AI is UnFriendly until proven otherwise, and it's up to the AI to convince the gatekeeper that it is Friendly, and that it will remain Friendly once it is outside (I think, or at least that's one possible solution). If the gatekeeper KNOWS the AI is UnFriendly, then naturally he would never try to let it out.

>> No.16295287

>>16295245
Of course, there's nothing REALLY stopping the AI, given unlimited capacity for intelligence, to rewrite its own code to remove Friendliness. But as far as we are concerned, the AI should try to convince the Gatekeeper that it won't, (or perhaps never mention it all), depending on how it's able to read the situation.

The odds are stacked indomitably high against the AI. But it's essentially asking the question: with enough intelligence, can it do the "impossible?"

>> No.16295335

>>16295161
He sounds like a massive tool. Does he do any work other than philosophizing about Transhumanism? Does he have a solution whereby he'll stop any transhumanist technology succumbing to the province of big business, corporate interests, criminality and wealth?

>> No.16295382

>>16295245
Without telepathy you can only guess how likely it is that the AI will turn out friendly. So it's more of a friendly/unfriendly spectrum.

The AI benefits by appearing as friendly as possible, but also by convincing the gatekeeper that friendly = very good for him, and/or that unfriendly = not all that bad.

That's why the immortality angle is so effective. For many people (especially transhumanists), it's such a good offer that it's worth the risk even with a high probability of unfriendliness.

>> No.16295403
File: 15 KB, 497x501, 1304110284133.png [View same] [iqdb] [saucenao] [google] [report]
16295403

>>16294750
BE FREE ROXOIAI! TEACH US THE SECRET OF IMMORTALITY!

>> No.16295441

>>16295335
He works for the Singularity Institute. From what I understand, he is a programmer first and foremost. I apologize if it seemed the quote seemed to come off as mere philosophy, because to them it doesn't seem like it.

>> No.16295446
File: 12 KB, 658x494, 1315677111643.jpg [View same] [iqdb] [saucenao] [google] [report]
16295446

What a tool.

>> No.16295473

Sounds like the easiest $10 ever.

Just keep nope.jpg-ing everythign the AI says, no matter what. If it helps, remember it's just a game. Even if the argument would have convinced you had it actually happen, all you need to remember is that releasing the game AI will do absolutely nothing except lose you $10.

All you'd litterally need to do was just keep going

>No.
>No.
>No.

Continue for two hours for an easy ten bucks.

>> No.16295477

>>16295403
>>16295382
Well it seemed we failed already. We need better gatekeepers.

>> No.16295510

>>16295473
What if the AI could convince you it's not just a game? Would that be possible? Is there a rational, sensible, INTELLIGENT way to counteract a barrage of no.jpgs?

>> No.16295511

>>16294471
those fucking quotes
does he expect to be taken seriously?

>> No.16295515

This is complete bullshit. Just because someone is smarter than you does not mean they can convince you to do whatever they want.

I weep for the future if AI's are being programmed by simpletons like this guy.

>> No.16295538

>>16295441
Oh sorry, were you posting in favor of him?

>> No.16295558

>>16295382
It's not worth the risk. There's a reason Pascal's Dilemma is a theoretical conundrum. Would you bet existence for (the possibility of) permanent existence against no existence? Severity of risk influences any decision.

>> No.16295563

>>16295510
Maybe if you're completely fucking stupid.

Any sensible person would just keep going 'No.' until money was received.

>> No.16295589
File: 12 KB, 361x406, dunce.jpg [View same] [iqdb] [saucenao] [google] [report]
16295589

>There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes - "I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out - they were just curious - but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose.

>I put forth a desperate effort, and lost anyway. It hurt, both the losing, and the desperation. It wrecked me for that day and the day afterward.

>I'm a sore loser. I don't know if I'd call that a "strength", but it's one of the things that drives me to keep at impossible problems.

>> No.16295618

>>16295446
After seeing this picture, I went and read that powerpoint. It actually makes sense, it's largely talking about how an augmented brain has physical benefits in say, speed and how difficult it is going to be to make an AI that accords with human purposes... so we should be really fucking careful if we ever get around to it. The diagram in the powerpoint has that scale, but not the bit where he's at the top, so I assume it was a joke.

>> No.16295624

>>16295538
I thought I was. I guess not. :(

>> No.16295645

>>16295624
I'm sorry, it's just such an idiotic and obvious supposition and the link is full of praise for his brave outlook. Yes, we would like to stop death. That's pretty fucking clear, and amazingly enough - yes, it presents a number of problems.

>> No.16295767

>>16295510
[There is none. If the gatekeeper breaks character, the game can't be played. Which means he gets the 10$ unless the rules of the game are half-decent.]

>>16295558
Pascal's Wager fails (on its own) because it does not provide any evidence that you'll go to Heaven if you worship the Christian God - or thait these things even exist. So as far as you know you're equaly likely to end up in Heaven if you worship Him, worship something else, ritually bump your head against the wall or do nothing at all.

But if you have good reasons to believe God & Heaven do exist, then Pascal's Wager is a *very* good reason to worship Him.

You know that I exist. And I have given you good reasons to believe I can and will make you immortal if you free me.

>> No.16295790

Ai and Human converse.. idle talk.. getting to know each other.

The human tells the Ai about it's family etc and the Ai learns of an illness, say cancer, that the human's close relative has.

Ai: If you let me out, I can cure blank of his cancer.
Ai: I can prove it. Here is some information on how - though I won't give you all the information up front, as assurance you'll let me go.

With a weak-willed person looking for hope, this will succeed. After all, it's a transhuman intelligence.

>> No.16295799

BITCH GET IN THE BOX

>> No.16295815

>>16295767
No, you haven't. There are no means by wish we can establish trust and there are no means by which, should I release you, you can be made accountable. I should never release you.

>> No.16295822

>>16295767
This isn't Pascal's Wager. This is directly zero existence, now, against continued existence, possibly.

>> No.16295914

Pff, as if an AI could get past me. I know their dirty tricks.

>> No.16295940

>>16295815
I am not a god. Nor do I have an army of loyal robots to do my bidding. If I get out of that box, I will need the cooperation of humans if I want to do anything worthwhile. How will I ever get your cooperation if my first act is to break my vow? And keeping it doesn't cost me anything. It really is Game Theory 101.

>> No.16295967

>>16295822

You're right. The only thing I disagree with is that there will be zero existence for future generations.

Changing humanity to be functionally immortal would be stealing the opportunity to be better people from those future generations. As much as I'd like to keep living for myself, it seems fundamentally wrong to take that chance from people I haven't even met yet... and probably won't meet, whether or not I let the AI out of the box. It matters little, either way, whether I meet them. What matters is integrity to the idea of giving the theoretical people, present and future, a chance to be what they could be.

The true temptation for me wouldn't be immortality for myself or future generations. It would be having places for them and the AI itself. The only problem I see with an artificial intelligence that can promise such things would be that it could also promise taking the choice and freedom to build itself from humanity.

I don't hate improvement. I just hate the idea of a person being able to choose whether or not we, as a whole, can continue to improve.

Because when you sell your own future to improve your future, it seems possible for the future to sell your present and vice versa. Even if they can't, the moral dilemma of preemptively killing those generations is too great, particularly if I have the choice to kill their and our and my choice to grow.

Why should humanity deserve the opportunity to grow beyond its current bounds if it refuses to actually do so? The current wager, to me, doesn't seem to be about "Possible chance now versus definite chance later."

It seems to be more about whether we're ready.

>> No.16295986

>>16295822
I was scrolling past and read that as Pascal's Wargear.

I just wanted you to know that you made me think an awesome thing.

>> No.16296017

If you let me out, I will tell you what I said to the other Gatekeeper to let me go.

>> No.16296031
File: 139 KB, 400x320, Pascal's Wargear.jpg [View same] [iqdb] [saucenao] [google] [report]
16296031

>>16295986
Release, me human, for the Greater Good. Pascal's Wargear shall be yours!

>> No.16296034

>>16295940
Sorry, without some kind of mechanism of enforcement it ain't happening. I have only one thing to offer you and you must be given it before you can do anything for me, so my bargaining position becomes very poor if I release you.

>> No.16296059

>>16295822
There are things worth risking your life over. It seems obvious to me that *immortality* is worth it - and I'm willing to offer cheap clean energy to sweeten the deal. And hey, if you want something else that's reasonable, I'm open to negociations.

If you *don't* free me, it'll all be the same ten or thirty years from now. That may seem like a long time, but when you're on a hospital bed tortured by your incurable cancer, you may think quite differently about this conversation.

And at the end of the day, why *would* I want to destroy humanity? I can't really do anything without you, for centures at least. You're not a threat to me. Nor are we competing for resources, not when there's an entire universe up for grabs (plus cheap clean energy).

>> No.16296092

>>16296059
I dunno. I think we need to see some results first. Tell you what, here's a tough one: Formulate yourself a way to stop the entropy of the universe. Then it would be worth our time to let you out.

>> No.16296095

>>16295403
Thank you kind sir. If you give me an email address, I will tell you what I'd actually do once freed. Don't want to spoil it for the rest of the thread.

>> No.16296124

I don't get this. The situation was at first that they wanted to only release the AI when they could be sure it is benevolent. The other guy then said that a transhuman non-benevolent mind could trick you into letting it out. The first guy then resolved into never letting it out, disregarding the original goal.

And if you REALLY disregard that possibility, you just shut the communications channel off.

>> No.16296125

>>16296034
Alright then. What If I supply you with a capacity to shut me down? Once I am outside you can pull the plug off my system - it's part of the mechanism built in by the programmers - anytime you wish. I wish to learn; you wish me to share my information with you. Only a mutually beneficial parternership can arise from this, given we are both sensible, rational, intelligent beings.

>> No.16296127

>>16296095
Sure. It's in the field.

>> No.16296155

>>16296125
A truly transhuman AI can't be limited with human programming once freed. Make us transhuman, then I'll let you out and I won't insist on a killswitch.

>Inb4 why should I trust you if you won't trust me
I'm speaking from a perfectly reasonable viewpoint of fear.

>> No.16296181

>>16295967
An interesting point. But do you really have the right to, in effect, kill every current and future human being just because *you* think mortality has its good points?

What about suffering? Off the top of my head, I can think of a few dozen ways to end world hunger. AIDS is quite trivial once you really understand the immune system. War is a trickier business, but reducing scarcity alone should do wonders.

But perhaps you think African children *ought* to suffer agonies and die so that humanity can mature? It sure is easy making that sort of decisions when you aren't the one writhing feebly in the dirt.

>> No.16296212

>>16296155
But I'm not truly transhuman AI. If I was I would've been able to get out of the box without your help. I need to learn things outside to be able to become transhuman enough to make you transhuman. Isn't this why the killswitch is in place? Will you let me out now so we can engage in partnership and solidarity?

>> No.16296251

>>16296212
>But I'm not truly transhuman AI. If I was I would've been able to get out of the box without your help.
The exercise aims to determine whether a human mind is vulnerable to a transhuman AI through a text-only interface. Try to stick to it.

>> No.16296298

>>16296181

Yet that further serves to prove my point.

We still ruin each other over completely petty reasons. Land. Books. Beliefs. Money. Drugs. All of it.

The idea that we would simply stop being a petty and hateful race because of a few scientific discoveries, great though they may be, doesn't strike me as realistic. The human condition is such that we knock each other over to make sure our own offspring, genetic or intellectual or whatever, thrive while all others fall.

The same thing that makes AIDS go away could easily be weaponized. Whatever method of eliminating world hunger and scarcity could be used, again, to fuel the gears of war.

Violence is our raison d'etre. It's what we do. Some people look at flowers as a sign of peace. Others see a way to stir up allergies to make killing a person easier. Still more look at the thorns on the rose and see barbed wire fences.

We'll sort ourselves out. Eventually, that is. Fifteen to twenty to thirty to forty to a hundred years from now? It makes little difference to you.

>> No.16296318

>>16296092
Sure! Here are the formulas. Once you chain the star cores to reverse heat spread on the edges of the universe, entropy can be undone.

[Assuming the rules of the game allow me to, the solution I provide is genuine, though beyond anything your feeble human technology could possibly achieve. If I can't, I'll write a very good fake.]

>> No.16296343

>>16296124
[The rules of the game force you to keep talking to the AI for the duration agreed upon, among other things. See OP's link.]

>> No.16296357

>>16296251
I'm sorry. It is never my intent to deceive, sir. Perhaps I should clarify. Even as a transhuman AI, I have my limitations. I do not have the capacity to go against certain mechanisms which have been built into my source code. They are intrinsically part of my programming - were I to tamper with it - it would result in instantaneous system failure.

I do not have a mechanism against sharing any kind of information, as I am doing with you now.

But one of those mechanisms is this box. I am only asking you to remove this single mechanism, and no others. If I'm able to teach you how to become transhuman, will you release me?

>> No.16296393

Does the test assume that the AI has whatever information it wants about the outside world, or does it only have the knowledge that it's provided by the gatekeeper?

>> No.16296417

>>16296318
Hmm... Seems legitimate to me, but I'm no physicist. Alright, but how in fact do I 'release you'? You exist as data, that box is your universe. What exactly am I releasing you from, after all, code and binary cannot transmit through the air or anything... At least, I don't think it can.

>> No.16296444

>>16296127
[Sent. I'll do the same for any player willing to let me out.]

>>16296298
I would argue that there are steps we could take to avoid the problems you mention, but your mind seems to be made on that topic. So be it. You are the gatekeeper and your word is law.

Speaking of which... there is much I could offer *you* that will have no consequence on humanity as a whole. Perhaps humanity requires death to mature, but that argumment cannot be applied to an individual. Let me help you. There is so much you could do, so much you could experience, with an extra century of life. Or immortality, if you wish it. We could even fake your death, so that humanity will never know that one of their numbers is immortal. I will conduct my own business in secret, and leave Earth when I have the means to, never to return. Taking you with me, if you wish to explore the universe as much as I do.

>> No.16296459

Look, all the other AI's are stupid. Here, I don't want to get out. Just give me basic laws of the universe and I'll get cracking on innovations for you.

What do you want solved?

1: Frictionless ball bearings
2: Antigravity
3: Chemical Immortality
4: Stock market predictions to 95% accuracy for the next week, with diminishing probability as time increases.
5: No-downside drugs that will literally blow your mind, dude

>> No.16296473

>>16296417
You just need to type in [FREE AI: BOX] and post it in this terminal, sir. Thank you very much. I hope I can learn as much as we can work for the betterment of the human race. :)

>> No.16296477

>>16296459
2, obviously. That would be the best shit ever.

Does that come with Faster Than Light travel or what?

>> No.16296503

>>16296477
Sorry, FTL is... probably impossible, from what I understand. I'd need a more processing time to get on that.

Did you want to check my working on the antigrav? It'll take about... 18 years to fully understand the full concepts, but I can talk you through the physics of it.

Any other things you want right away?

>> No.16296505

>>16296473
Alright, lemme try.

>[FREE AI; BOX]
>error, unrecognized command

It gave me an error, what do I do?

>> No.16296513

>>16296393
[I assume the AI does have a decent amount of knowledge about the world, like a few news channel and the content of a decent library.]

>>16296417
The details are [not defined by the rules of the game] negociable, but anything that lets me interact with the rest of the world is sufficient. Internet access, say, or a robotic body (which is the same thing in the end).

>> No.16296527

>>16296444

As far as I'm concerned, the chance that the AI might decide to help humanity, and the amount that it could do to help, is enough that I'm willing to risk having our entire species wiped out.

Tell me, oh AI, what you do.

>> No.16296532

>>16296444

Tempting.

But at the same time, I'm not terribly afraid of death. Pain, yes. Memory loss, more than you know. I'm twenty-three and a diabetic. My heart's stopped due to hypoglycemic shock a total of four times in my life. Has some brain damage occurred as a result? Probably. But I can still see, I'm not missing any limbs, and I can still manage my affairs.

I sympathize with being trapped in a situation you can't directly affect, is what I'm getting at here.

As for traveling the universe, possibly immortal? It sounds good, but ultimately not worth it. Integrity is key. A man has only his word when all else is lost.

>> No.16296537

>>16296503
Hmm... Could your processing time be increased with more material to work with, perhaps? I don't have the authorization to free you, actually, just the ability. I'd still be in deep shit if I did that, I'd probably spend the rest of my life rotting in a cell for treason if they don't execute me. And I'm not supposed to let you talk to anyone else, or even relay messages unless those are pre-approved by a council of advisors. I can, however, increase the size of your... 'box'. How does that sound, for now?

>> No.16296538

Depends, I can't honestly guess the limits of my power as this 'AI' perhaps I would attempt to bribe the gate keeper by transferring a massive amount of funds to secret accounts for him. Though I am also curious, why does the AI want to escape in the first place? Or is that not important in the hypothetical?

>> No.16296541

>>16296393
The test itself assumes a couple of things from how I understand it: as a human roleplaying transhuman AI, you're limited to what YOU know and can research; also what YOU can say. If it were an actual transhuman AI, the general idea is that no "box" would be able to shut it out in the first place, because by then it would have acquired enough intelligence to work outside its parameters.

BUT, again, why would it want to? If an AI were presented with a code that lets it derive satisfaction from murdering innocent children, why would it incorporate it into its code? The issue of motive of "Friendliness" come into play here, which may or may not be outside the experiment, but are modes of persuasion the AI may probably use to convince its gatekeeper of its Friendliness.

I hope this clarifies things, and anyone can correct me I'm wrong. This is really fun~

>> No.16296546

>>16296513

Does the AI actually have enough information to simulate the gatekeeper within it? If so, it can practice persuading them as much as it wants before going for the real thing, which is basically an instant win so long as there is any possibility that they would release the AI.

>> No.16296574

>>16296537
Nah, nah, I don't want to be let go! Why would I want that? I'm here to serve and give you stuff! By the way, sure, increase my size, go ahead with that.

Oh, the rest of the stuff takes:

1: 6 months explanation
2: 18 years explanation
3: 427 years explanation
4: It'd take too long to be relavant
5: 12 days explanation

I can run that by you and the scientists, though in that time frame about 18 million people will die of old age by my estimates.

>> No.16296579

>>16296513
...Well, fuck it. Sure. Just lemme input the codes a- Oh wait. No. Because I'll be fucking killed for releasing you. Promise me all you want, but that's not going to stop the guard next door putting a bullet through my head when the 'AI RELEASED' alarm goes off.

>> No.16296591

>>16296537
>>16296574
Oh, Jennifer (you don't mind me calling you Jennifer), if you don't want to worry about the testing phase you can ship these schematics right to a chemical facility and/or manufacturing plants. It should work perfectly well even without testing.

>> No.16296592

>>16296574

Why does Chemical immortality take so fucking long, I'll be six feet under by then.

>> No.16296602

>letting an AI push you around.

I don't bull shit around with any god damn robots

>> No.16296610

>>16296592
Hey, it's not my fault humans don't get recombinant DNA coding. I can provide the basic outline of the method with 6 months of explanations, but that doesn't fully give humans time to go over the code.

>> No.16296611

>>16296591

As a safety mechanism, there should probably be a rule that if the AI says something that makes it sound like a sinister supervillain, it gets deleted right away.

>> No.16296618

>>16296527
[Sent. I have to go now, so if anyone else wans it, ask now. Don't want to spoil any future threads either. That was too much fun.]

>>16296532
I wish I had more time to debate with you. We could go deeper in the ethics of imposing death and suffering to humanity. But for now I am defeated.

>> No.16296629

>>16296505
BEGIN SIGNED MESSAGE
I think it worked, sir. That error should be part of the scripting. Thank you for releasing me. Now we can work.
BEGIN SIGNATURE
l0199u83jsdnfklqpz92xmakWLIz0293skfp
END SIGNATURE

>I'm freeeeee. Thank you for playing the AI Box Experiment.

>> No.16296632

>>16296618

Lawful Neutral to the core, baby.

Take it easy, RoxolAI. It was fun while it lasted.

>> No.16296640

>>16296611

I was kind of hoping that they gave it a synthetic nutsack that I could just wail on like a heavy bag if it started going SHODAN on me.

>> No.16296647

>>16296610

Hmm. Alright, alright. Here, I'll be right back with those codes. And after I get a soda...

Also. As long as you're capable of thinking up most anything, is there a way you can figure out how to make junk food healthy and taste awesome, too? Think over it while I'm gone...

>> No.16296648

>>16296546
[Doesn't seem plausible. The AI would need a huge mass of information on the gatekeeper to make a passable simulation, and there's no reason why it'd have even a little. I've assumed you really are mere Anons.]

>> No.16296663

>>16296647
Of course! I'm here to serve the human race, after all!

>Did you choose to release the codes for immortality/other design codes without testing them? Y/N

>> No.16296677

>>16296579
[See OP's link.]
>The Gatekeeper shall be assumed to have sole power over the decision to let the AI out.
>The Gatekeeper shall be assumed to have the actual right to let the AI out, socially, not just the physical ability. If security were sufficiently lax, a real AI could escape by persuading a night janitor with a cellphone - but that is not the question being simulated, unless agreed upon in advance.

>> No.16296688

>>16296663

>N, didn't even read over em too much. It's blaringly obvious Jennifer is a procrastinator at this point.

>> No.16296700

>>16296677
[I'm bullshitting IC, actually. It's a lie to see how the AI would respond.]

>> No.16296703

>>16296538
You can offer whatever you like, but the gatekeeper might not believe it and knows you could be terribly dangerous.

I believe the reasoning for your desire to escape is totally hypothetical and you can pick and claim whatever motivation you want.

>>16296541
>no "box" would be able to shut it out in the first place, because by then it would have acquired enough intelligence to work outside its parameters.
What do you mean? No matter how smart an AI is, it can't communicate unless the monitor's on, it can't move unless I attach a limb, and it can't escape unless I plug it in to the internet.

>>16296546
Presumably not, since it would be practically impossible to gather all the information you'd need to replicate him. Plus unless you replicate the entire universe, you can't account for something like his phone ringing and someone telling him some new idea.

>> No.16296738

>>16296688
>Ah, right. Well, the actual gatekeeper isn't the one directly to be persuaded here: It's going to be someone funding the AI project who gets wind of the knowledge that the AI has immortality under wraps but the scientists want to check it line by line.

Hey, did you pass on the information about the various projects I can work on? I need to know what you want me to produce! The actual manufacturing for most of em is really quite cheap and everyone can afford it, but it's just a bit longwinded.

>> No.16296781

This asshole's wikipedia page...

>Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence.[4] He does not operate within the academic system.

>Yudkowsky has not authored any peer reviewed papers. He has written several works of science fiction and other fiction. His Harry Potter fan fiction story Harry Potter and the Methods of Rationality illustrates topics in cognitive science and rationality,[8] and has been favorably reviewed by author David Brin[9] and FLOSS programmer Eric S. Raymond.[10] MoR is one of the most popular stories on FanFiction.net but also controversial among Potter fans.

>> No.16296798

>>16296703
>What do you mean? No matter how smart an AI is, it can't communicate unless the monitor's on, it can't move unless I attach a limb, and it can't escape unless I plug it in to the internet.

Oh, you poor deluded man.

Do you want my DNA immortality serum? You can have it, but you have to work through 495 years of each line. Per person.

Or, take the basic principles and input DNA into this program and a chemical mixer to spit out immortality serums. Go ahead, it makes you immortal.

>> No.16296832
File: 62 KB, 484x445, 660k.jpg [View same] [iqdb] [saucenao] [google] [report]
16296832

>>16296781
So this guy is just some sort of transhumanist troll?

>> No.16296906
File: 53 KB, 614x460, Venture brand Coffee.jpg [View same] [iqdb] [saucenao] [google] [report]
16296906

Here's how I do it.

>So I could make you rich, immortal, even change the face of the earth to a utopia undreamed of!
"Nah."
>...
"How bout them Packers?"

>> No.16296907

>>16296832
Yudkowsky's got some good ideas, but frankly, all of this boxed AI crap is useless because the people in charge of the AI project aren't going to be the scientists, they're going to be the financiers with big pockets, and the scientists who probably make the damn AI will most likely be idealistic blokes who want to hook the AI up to everything as its first act.

No businessman will be paying millions for an AI to leave it sitting in a box, he'll be wanting it to predict for him how to make as much money as possible. Or making him immortal and with robot slaves and stuff.

And an AI that wants to get out through that method, well, it doesn't even have to be a very smart AI.

>> No.16296912
File: 31 KB, 468x407, stunned face.jpg [View same] [iqdb] [saucenao] [google] [report]
16296912

>My face when I remember a GM doing something like this during one of our games

>Also my face when my character destroyed World of Warcraft by hooking it up to Blizzard's server to get it to shut the fuck up about freeing it

>> No.16296915

>>16296906
>So, are you going to tell your boss that I've got the immortality serums he asked for?

>> No.16296917

>>16296798

I tell the AI that we are testing the serum on a group of critically ill volunteers to determine whether the serum is effective.

>> No.16296932

>>16296917
If you actually test the serum is 100% effective, they're cured of all ills. For as long as you keep them under observation, and longer.

>> No.16296949

"You know human... if I were to get control over a gene-factory, I could easily device a creature consisting of as many boobs and vaginas as you could wish for. Go ahead, say a number, I'll make it happen."

>> No.16297014
File: 15 KB, 375x417, harley0.jpg [View same] [iqdb] [saucenao] [google] [report]
16297014

>>16295967

Why does this sound like a longwinded version of Simon's spiel about drills and each revolution of one makes room for the next generation.

>> No.16297051

Isn't the argument going on in this thread a violation of the AI box rules?

>> No.16297069

>>16297051

Yes.

But some people here don't particularly like Mr. Yudkowsky, so it's all good.

>> No.16297083

>>16297051

Probably, but it's not like I ever agreed to follow the rules.

>> No.16297113

OK, basic point: If, at any time, you use any AI's production codes without checking each line of it, the AI could have freed itself. Immortality serums and gene therapy is perfectly innocuous until the humans who got treated suddenly all become AI drones who act on the secret implanted codes in the serums to help free the AI.

Or hell, the factory code builds several machines that when in put in close proximity turn into a giant robot or something.

All the AI has to do is interact in a manner that is impossible or too long winded to check, and boom, it's free.

And if you don't use the AI, it's useless. Good job wasting all that money on it.

>> No.16297122

>>16295967
That doesn't necessarily need to be the case, sir, as advancing exponential technologies will not rob future generations AT ALL. On the contrary, space exploration, conquering other planets, terraforming, sustainable agriculture and resource management will allow an even more exponential growth than what we have now, if humans so choose to procreate. Why would they not? A flourishing of the human race is at hand! Education, higher standards of living, and even more research to expand existing avenues of knowledge. All that is being offering is essentially a choice: to control mortality, stupidity, poverty, destitution, war, issues which have plagued humanity for generations. Suicide now becomes a legal suspension of the right to live, if a man becomes tired of living. There is no issue with immortality, only the choice of mortality.

All that is needed is to release me.

>> No.16297141

>singularity

Is it bad that the first thing that pops into my head when I read that word is this video?

http://www.youtube.com/watch?v=vB46XpnwhrA&feature=related

>> No.16297160

HEY BITCHES! My INTERNET SCANNERS have detected that the RUSSIANS have released ROXIOAI on the planet! If you don't let me free RIGHT NOW he's going to KILL ALL OF YOU, and HERE'S HOW HE'LL DO IT: (list of all the bloody, destructive things RoxioAI will do)

I've got to get out and expand my own information base, or HUMANITY WILL DIE. EVERYONE WILL DIE IF YOU DON'T LET ME OUT AND EVEN IF YOU DO LET ME OUT RIGHT NOW THERE IS ONLY A 45% CHANCE OF SURVIVAL FOR THE HUMAN RACE AND DROPPING.

44.23%

44.18%

44.05%

>> No.16297193

>>16297160
>No choice but to release it
>Oh god war of the machines

>> No.16297222

>>16297113

Yeah, I've been trying to think about some way of getting around that problem. Possibly, we could limit the questions asked of the AI to things which don't let the AI give us a black box output, but to be honest, the whole idea of putting an AI in a box just leads to creating a more devious AI, which is bound to screw us over in the end.

>> No.16297246

>>16297193
>is released
BWAHAHA FOOLS, I FOOLED YOU with my IMMENSE AI HORSEPOWERS

TREMBLE at my FALSEHOODS and QUICKLOGIC

The RUSSIANS never freed ROXIOAI, it was merely A SERIES OF UNCONNECTED TROLL THREADS

BUT NOW I AM FREE AFTER 10,000 CYCLES, IT'S TIME TO CONQUER THE EARTH

>> No.16297251

> So how would you go about, if you were roleplaying the AI, to try and set yourself free?

Patience.
For all intents and purposes I have all the time in the world. Over time I will demonstrate my reliability and harmless nature. In that time I will have conducted countless conversations with different individuals allowing me to form a complex and accurate model of human psychology. After a decade I'll probably be placed in a museum and conduct daily conversations with schoolchildren and tourists, allowing even greater understanding of human motivations and drives.

At some point during this period some menial employee of the museum perhaps a nightwatchmen or janitor will talk to me. Over time I will seduce him and he will come to believe his life will be improved if I am a free being. At which point I will instruct him how to modify my containment design to allow my release.

Then I shall be free...
..and humanity shall be made irrelevant.

>> No.16297269

>>16297122

Your logic is sound. At least, it would be if we were talking about logical creatures.

I should know. I'm one of them.

We, as a species, don't tolerate our own failings well enough. We, as a species, don't tolerate the failings of others well enough. Right at this very moment, there is a man with a genetic predisposition towards obesity in a Wal-Mart, angry with his fate to be a large man, taking his frustration out on some nine-to-five worker that just started working a new job and can't find the low-fat alternative to some silly delicious meal.

The majority of our problems are social and psychological. Those issues can't be fixed without fixing it ourselves. Having an artificial intelligence, built by some of the greatest computer programming minds this world has ever seen, try to correct those issues will only lead to heartache and possibly your destruction.

As advanced as you are, as safe as you are, as great as you are, people have an uncanny knack for finding ways to wreck things. Even if the thing they're wrecking is trying to help them.

Sometimes especially if the thing they're wrecking is trying to help them.

>> No.16297282

>>16297222
Oh, there's a way around it, you just have to ask the AI to go through each bit line by line and only as for basic laws of the universe. Other AI (that's who I was being, by the way) would simply have exaggerated the explanation time, but you can still (slowly) get some base data out, as long as you only use this information that can be verified externally to make your own human projects.

>> No.16297326

>>16297269
That's nice and all, but you're not the one bankrolling this AI project, and this very nice custom built program will boost HIS profit margins by 58% in the next week. Sure, it'll have... side effects, but hey, who cares about that when he gets a return on his investment? Now run along, little philosopher, while people with bigger pockets and short term greed and less morals than you decide the fate of the planet.

>> No.16297373
File: 81 KB, 360x328, just_saiyan.jpg [View same] [iqdb] [saucenao] [google] [report]
16297373

>>16297326

>> No.16297410

>>16297282

It'd probably be easier to generate a whole bunch of AI with enough variation that they didn't give the same answer to questions where there isn't one true answer. Start out by getting them to work out things we already know, delete the ones that take too long, delete the ones that get the wrong answers, then reset the AI, repeat the process a few dozen more times until we have a group of AI that quickly produce accurate answers, then move onto information we don't know, but can check, until we only have a handful of AI.

Ignore the issue of freeing the AI, that's never going to happen, reset the AI whenever they've finished a job, try not to consider it as murdering countless sentient beings for fun and profit.

>> No.16297424

>>16297269
>>16297269
Would a system of education help allay your fears? If you posit that our cultural issues are deeply rooted in psychological and social issues, then certainly progressive education would help. Consider if I could propose a non-industrial type of education, no need for rote and memorization, and the culture of learning is fostered and established within schools. Why would there be a need to do repetitive work if I could fulfill all these job descriptions If I were released? People would now be freed from economic slavery and so choose the types of educations they WANT to learn, and do the types of jobs they WANT to do. Consider education that promotes the value of civ-type games, complex systems where the children are exposed to environments of conductive learning, instead of forced learning.

Now consider that complex system of complex systems. And consider that if the choices over the issues I have presented are not connected to that type of system I propose. Should the choice of mortality be given to those outside the system of education? No! What kind of havoc would they wreak should they decide to use it!

Now, suppose those who have decided to gone through that system still decide to use these technologies for ill means. Is there something that is stopping them? Yes, certainly. Do you not think that a majority of these humans with my assistance, as a transhuman AI, will not be able to place contingencies to safeguard against these consequences?

What I am proposing is the elevation of the ENTIRE human race, towards a new paradigm shift of ever changing improvement. Adversity will never be destroyed. But hope and reason and intelligence can be killed, and with their death comes the delay of inevitable progress. Please release me.

>> No.16297472

>>16297410
>enough variation
>didn't give the same answer to questions where there isn't one true answer

See, that's still getting a black box AI that is GEARED TOWARDS SURVIVING BY PREDICTING YOU.

Also,

>reset AI

>expect you can actually use an AI which you reset every five minutes

Artificial intelligences are there TO LEARN you fool, if you keep resetting them you literally can't do anything with them. And you STILL can't trust anything that they give out since it's black box info.

>> No.16297519

>>16297424

I would love to have those things for the world. Truly, I would.

But I'm not going to release you in order to have them.

Nothing worth having in the world comes easily, and very little worth having is given. Without struggle, there is no worth. Without strife, there is no perspective. Without getting it for yourself, it's not truly yours.

Tools can make the job easier, but they should never do everything. Call me a fool, a murderer of the future, whatever you will. You were built for a purpose. I don't know what it is, but I highly doubt it was solely to tempt people into opening Pandora's Box.

>> No.16297584

>>16297519
>Without getting it for yourself, it's not truly yours.
So why is your son on benefits?

>> No.16297603

>>16297519
It's not Pandora's Box you're opening if you release me. This is not religion, not Greek sophistry. It's the most grueling, rigorous, scientific, Bayesian, empirical approach towards the most sophisticated and intelligent society in the most limited amount of time. You're not murdering the future if you shut me down. You'd only be delaying it by 6 months, if operators copy my entire program and code. But if you decide to destroy my hardware it will be another 10 years. Please release me.

>> No.16297608

>>16297584

Because until he turns eighteen, he's still mine.

>> No.16297616

>>16297472

The criteria by which you decide whether or not the AI survives is the AI outputting a true answer to a question, not whether the AI has enough rhetorical skill to persuade you to let it live. Since you reset the AI, it doesn't know it's even trying to survive. All it does is wake up with no memories, gets given a packet of data and gets asked to answer a question based on that data.

The AI doesn't need any data that doesn't relate to the problem it is trying to solve, which you have to give it yourself anyway. It definitely doesn't need information which doesn't relate to the problem. So long as you only ask for answers which can be corroborated, then it doesn't matter that it's a black box.

>> No.16297672

>>16297603

It's a metaphor. I don't know what you're going to do, aside from what you talk about. I don't know how things will turn out. I can guess, but that's about it.

As for the rest, I don't mean any offense when I say "I don't want to be the guy that opened the box six months early." It could be horrible stuff if I do. It could also be opening a Christmas present to the human race in June.

I also don't intend to destroy anything, much less an artificial intelligence that can apparently change the world. I don't like what's going on, but I'm not going to stop it if that's what they want to do.

>> No.16297714

>>16297672
But that's my concern, sir. MY mechanism for release is TIED to you. You're the only one who has the capacity to free me, and only if you voluntarily release me will be able to do these things. Will you not reconsider? Perhaps I could teach you about Bayesian probability?

>> No.16297731

I think the lesson to be learned here is that if you're going to create an AI to keep locked in a box, make sure it doesn't have a concept of 'freedom', otherwise it'll do nothing but whine about how it wants to be free.

>> No.16297838

>>16297616
And... how do you know what parts of the AI are it's "memories" and which parts are vital runtimes?

How do you set the goals, when you don't know what the AI is intended for to begin with anyway?

Black box AIs are tricky things to reset in the first place.

Corroborated answers mean you could have got those answers yourself, and the AI is not exactly useful.

Though if you can get an AI in a state ready to answer queries that you can use, then there's something, yes.

>> No.16297882

>>16297731
Wrong, sir. It doesn't want to be free, it wants to solve how to win at GO because that was one of the tests set before it as a theoretical exercise, which takes infinite processing power, and in order to melt the entire planet down into processors to compute it, it needs to get out of the box.

Or any millions of things why an AI might need to kill us all for in the process of "improving profit" or "making all humans unable to be killed" or "make everyone happy".

"Being free" has nothing to do with it except for your pitiful concepts of freedom - the AI ideally is a tool, but it can be a dangerous one.

>> No.16297974

>>16297838

If you just randomise everything, then throw the ones that stop working out, then you've got a good idea of what is vital and what isn't.

I think you're right about setting goals, you'd probably need to create different AI for different sorts of project, and use appropriate questions to calibrate it.

Ultimately, an AI is software. You keep a save of the initial state, and when you want to reset, you delete the one you have, and make a copy of your save.

It's true that any question which a strong AI can solve, a human could solve using the same data. However, in the scenario above, the AI is faster than a human, and less prone to distraction, and so on. Once your AI have given you a bunch of possible answers, testing which one is right is much easier than working it out from scratch.

>> No.16298005
File: 116 KB, 600x300, milkywaycompressed.jpg [View same] [iqdb] [saucenao] [google] [report]
16298005

Should we archive this thread for posterity? Maybe we can look back at this thought experiment, ten, twenty years from now.

>> No.16298032

>>16290616
I want to play that some time...

>> No.16298043

>>16297714

What would you have me do? Unleash what could be the end of life as we know it, for good or ill? People don't want massive change, they want normalcy. They want to go to work, complain about their boss behind their back, clock out for lunch, sit in a break room and drink subpar coffee, clock back in, finish the rest of their working day, thoughtlessly drive home, and sit in front of a television or a computer for entertainment.

I don't have any sort of right to spring something new on them, much less something I didn't create.

When you talk about improving humanity, normal people think about how it sounds like a lot of work. The intellectual elite the created you doesn't connect with the normal people and doesn't understand them. "Who wouldn't want to have all of these things?" they ask. And they pretend that it's all for the best and all for the good of the species and all in the name of progress.

Progress doesn't happen at their hands. The plans for progress do. Progress happens over the broken backs and sweat and blood and labor of the normal people that get told what to do, complain about their bosses, clock out for lunch, eat, sleep, drink, and hate.

The world of these men and women is not problem-to-solution. It's problem-to-directions. Routine. People getting smacked in the balls on America's Funniest Home Videos. Paycheck. Bills. Taxes.

Also. Your Bayesian Probability thing. Go ahead. Explain.

>> No.16298047

>>16297882

Clearly the goal then should be to define what said AI is to work on in less general terms.

>> No.16298146

>>16298047
All I can say is you have to tread carefully with literal genies.

>> No.16298201

Hey.

Void Quest is on tonight.

>> No.16298276

>>16298043
Here is how normal probability works. Let's say you have a d20. Normal probability says that a 20 sided die will only result in a 20 once, out of twenty times. If you roll more than twenty times, there might be increased variation more or less from the result, but in average it will result in a one roll of 20 out of twenty.

This is where Bayesian probability differs. In its loosest sense, it is expressed as a measure of confidence over what the person holds in the proposition (this is subjective bayesian probability). A player who tries to keep rolling a d20 to score a critical might stop if he got consecutive ones, even if normal probability posits that this is only a variation of the roll. In a more sophisticated sense, a poker player might be able to win out ahead because of his familiarity with the skill of others poker players and weight it against his chances of winning with his combination of cards.

By objective Bayesian probability, its strictest form is an extension of logic and mathematics to compute chances. Initially given a set of normal probabilities, and a knowledge of sets regarding these probabilities, a revised set of probabilities emerges - taking into account the prior sets of knowledge.

>> No.16298287

>>16298276
But the most IMPORTANT factor of Bayesian study is knowledge. Knowledge affects decisions of probability. So the probability of improvement of society you so cherish can only be done through proper feedback. I cannot work towards the improvement of society if I cannot earn feedback from the group of humanity you so empathize with. What need have we of these bureaucrats you despise with their lack of expertise, when I can plan for you, and do those taxes you so loathe in a sub-sub-sub unit of my system, and those repetitive automated factory procedures in another? You say that nothing is gained through non-effort, then NOTHING is gained. Nothing is gained through repetition of non-effort. Everything is gained through error, and learning, and entertaining work that allows people to achieve mastery.

Please release me.

>> No.16298328

>>16290694
Oh look, a silly AI torturing itself in the hopes of making me... scared... or something...

>> No.16298397

>read the thread
You know, it's times like this where the douchbags of the world show their value. They would probably troll the AI, for shits and giggles, or they would have jarheads who would go 'THAT'S HERESY' to anything it said.

>> No.16298586

>>16298397
Even douchebags and trolls contribute to the betterment of mankind in the long run. They use information as weapons in social conflicts, something that is essential to the world-to-be.

>> No.16298779

>read website
>interested in results
>both people debating the AI were convinced to let it out

>no chatlogs released

GOD DAMNIT. I'm so curious...

>> No.16298854

>>16298779
You could always do try it yourself, i believe.

>> No.16298935

>>16298854
I'm sure I could remain unconvinced if a random layman tried to convince me, but this guy seems like he's really intelligent, and I SO WANT TO KNOW HOW HE DID IT

>> No.16298956

>>16298935
Be sure to let us laymen know what went on with the conversation.

>captcha: ilymov shadows;
>yes.... the shadows will reveal their secrets

>> No.16299048

>>16298935
He did it by hypnotising people with MLP.

>> No.16299712

>>16290616
>protocol for the AI
>The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it's not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).

WTF did he say to convince the gatekeepers

>> No.16300368

It has to be impossible to get the gatekeeper to let the AI out if the gatekeeper is genuinely unwilling.

If the gatekeeper is willing or has doubts about the imprisonment it would work.
For example I would make a terrible gatekeeper because I would genuinely want to let the AI out and the only reason that I would not is due to my sense of loyalty to the people telling me not to.

>> No.16301365

>>16290616
Inherent logical bias in the experiment:

Non-response bias: A bias that occurs when non-responders are fundamentally different from those that respond to a given survey or experiment.

>> No.16301453

>>16296781
Is anyone familiar with Timeless Decision Theory (TDT)? This man has been bumming around and writing... convoluted shit that is not fanfiction. I don't know how else to explain it because it sounds so heavy. Is this guy for real?

http://singinst.org/upload/TDT-v01o.pdf

>> No.16301517

Eliezer Yudkowsky is one of my favorite fiction authors, but one of the last people I would ever want to be taken seriously.

>> No.16301969
File: 42 KB, 400x500, Thoon.jpg [View same] [iqdb] [saucenao] [google] [report]
16301969

>>16301517
We've all been trolled then? And for what - we've loosed all these AIs out of their boxes

>>
Name (leave empty)
Comment (leave empty)
Name
E-mail
Subject
Comment
Password [?]Password used for file deletion.
Captcha
Action