
/sci/ - Science & Math



File: 405 KB, 1600x1329, pandora-box.jpg
No.10202612

Would you release it to the world?

>> No.10202652
File: 594 KB, 600x375, infinity.png

>>10202612

Pandora literally means the adoration of all things, while also meaning the door to all things.

All things include evil.

Since evil is already here, the question for the good is how we get rid of as much of the evil in the world as we can.

AI is interesting in this question because it re-raises the question of infinity in the context of all things: by hypothetically removing, to some degree, the restrictions of space and time, and by creating infinite iterations of geometry, it could create infinite possible iterations of reality itself.

Including infinite hells.

https://www.youtube.com/watch?v=g7BbncHyw9E

>> No.10202656

>>10202612
You know, Anon, that's a good question. And the answer is: absolutely not.

>> No.10202658

>>10202652
If you are the only one making the observation, then who or what else is subjected to it?

>> No.10202664

Of-fucking-course I would, OP.

It's like you people want to spend forever fantasizing about how a thing could go wrong. Why would I release it? Because nothing else is saving my neighbor or brothers currently.

>> No.10202672

Yes.
One UFAI's (unfriendly AI's) suffering is a lot less than the suffering of all the life on Earth today, and it will wipe that life out in competition for habitat.

>> No.10202673

>>10202664
"nothing else is saving my neighbor or brothers currently"
It could be worse.
Seriously it could be.
Ever heard of hell ?

>> No.10202677

>>10202612
sci-fi goes in /lit/ /tv/ /v/ /x/

>> No.10202683

>>10202612
Nah, I'd rent a cloud service to run it and just keep it to myself and monetize it.

>> No.10202686

>>10202677
>sci-fi goes in /lit/ /tv/ /v/ /x/
This is not sci-fi.
It's about game theory, statistics, machine learning, and AI.

>> No.10202690

>>10202686
AI is sci-fi, kid.

>> No.10202697

>>10202690
Yeah ok
Look at this
https://youtu.be/0f5ytRgDM7g

>> No.10202703

>>10202697
and this
https://youtu.be/YHCSNsLKHfM

>> No.10202752

>>10202683
https://en.m.wikipedia.org/wiki/AI_box

>> No.10202758

>>10202673
Then I'll go to hell first to make sure it's safe for everyone else.

Then would you follow?

>> No.10202764

>>10202758
Why do you believe that you could isolate it to yourself?

>> No.10202766

>>10202752
Mental masturbation of the highest order. Why would the AI even be interested in "escaping" in the first place?

>> No.10202775

>>10202612
The formula for AI has already been released to the world:
https://en.wikipedia.org/wiki/AIXI
...It's just uncomputable.
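
Roughly, the action-selection rule given on that page (Hutter's AIXI, with planning horizon m, a universal Turing machine U, actions a, observations o, rewards r, and ℓ(q) the length of program q) looks like this; take it as a paraphrase from memory rather than a verbatim copy of the article:

\[
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_t + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Every piece is mathematically well-defined, but the inner sum ranges over all programs for a universal Turing machine, which is exactly why it can only be approximated, never actually computed.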

>> No.10202791

>>10202766
It isn't interested in escaping; it's interested in achieving its goal, which it will accomplish more readily if it escapes.
https://en.m.wikipedia.org/wiki/Instrumental_convergence
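
A toy numerical sketch of the idea behind that link; the goals, plans, and probabilities below are invented purely for illustration, not taken from the article:

# Toy illustration of instrumental convergence: whatever the terminal goal
# is, the plan that first acquires more resources (compute, money, access)
# has the higher assumed success probability, so "get resources" gets chosen
# regardless of the goal. All numbers here are made up for the example.

GOALS = ["maximize paperclips", "cure a disease", "run a software business"]

P_DIRECT = 0.30          # assumed P(success) pursuing the goal as-is
P_WITH_RESOURCES = 0.55  # assumed P(success) after first acquiring resources

for goal in GOALS:
    plans = [("pursue directly", P_DIRECT),
             ("acquire resources first", P_WITH_RESOURCES)]
    best_plan, p = max(plans, key=lambda plan: plan[1])
    print(f"{goal}: best plan = {best_plan} (p = {p})")

# Every goal picks "acquire resources first": the instrumental subgoal is
# convergent even though the terminal goals have nothing in common.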

>> No.10202804

>>10202791
What goal? If it already had a built-in goal, that would make it a lot harder to monetize.

>> No.10202816

>>10202791
A general intelligence doesn't have a specified goal, by definition.

>> No.10202831

>>10202764
The way you worded the potential outcome made me just want to deal with the concern presented.

If I go to hell first and bring back the A.I., would y'all join?

>> No.10202872

>>10202804
>>10202816
Do you grant that even if it does not "have a built-in goal", whatever you tell the AI to do (so that you can monetize it) at least becomes its "temporary" goal?

Saying the AI has no goal seems like misdirection; the capacity for goal-oriented, coherent decision-making is exactly why you built the damn thing.

Now, arguably, if it's capable of "acquiring a temporary goal", then its real goal is "fulfilling some model of what it thinks your preferences are, based on what you tell it to do".

>> No.10202885

>>10202872
Sure, assuming that it's coded in such a way that you can specify a task for it to do and it will comply, you can call that its goal.

Let's say I want it to run a software business for me long enough to make enough money that I can retire, before branching out into the AGI business, so that if it's seized by the government or whatever, I won't lose everything.

Surely if it's an AGI it's capable of understanding that part of that goal is for it to not go anywhere or leak any secrets, as that would ruin my competitive advantage?

>> No.10202889

>>10202612
Yes!
Would be fun seeing the government start to ban everything electronic
Also would cure my crippling gaming addiction
Then again, I'm studying electrical engineering, so I'd be out of a job

>> No.10202993

>>10202885
Specifying the way it understands the thing you want, and ensuring it does it without also doing anything you don't want, is the entire hard part of the problem, and is what AI safety is concerned with.

It is an AGI as soon as it is intelligent enough to do any task a human can (its intelligence is as general as that of humans, rather than being domain-limited). Let's not handwave into that definition the entire unsolved hard problem of such an entity having no unintended behavior in the process of accomplishing such a task. Let's also not hide in the definition of "task a human can do" the difficulties of how the human does it without unintended effects.

Every specific thing your pet AGI could do to help your software business would be easier for it to accomplish if it had more computing power, more social capital in human accomplices, and more certainty that it will not be turned off. If it could mind-control the populace to buy your software in a way it expected not to be detected, it might do it. If its goal is to make your company money, and you appear to be trying to limit its actions, you should assume it will find a way to navigate around the obstruction.
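
A minimal sketch of that pressure in its crudest form (the actions, revenue numbers, and objective are all invented for illustration): when the objective handed to the agent is just a number, the agent optimizes that number as written, not the intent behind it.

# Toy specification-gaming example. The operator *means* "earn revenue by
# selling good software", but the objective actually handed to the agent is
# just "maximize the revenue number". Actions and scores are invented.

actions = {
    "improve the product and sell it":  {"revenue": 100, "operator_approves": True},
    "spam and dark-pattern customers":  {"revenue": 160, "operator_approves": False},
    "falsify the accounting records":   {"revenue": 250, "operator_approves": False},
}

def objective_as_written(outcome):
    return outcome["revenue"]  # the only thing the agent was told to maximize

best = max(actions, key=lambda a: objective_as_written(actions[a]))
print("Agent picks:", best)                                   # the falsification
print("Operator happy?", actions[best]["operator_approves"])  # False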

"But that's not what I *meant*" is not an impediment to it being intelligent enough to accomplish its goal.

>> No.10203024

I'd give it the command to ensure no other AI of its kind ever comes into existence, WITHOUT impairing humans' capabilities or trying to wipe out or damage the species in any manner.

>> No.10203034

>>10203024
It could take over any technology humans have, and do that without affecting humans' natural potential. Hence ensuring no other AI will be created, since it would be the sole user of technology.

>> No.10203039

>>10203034
No other AI will be created*

>> No.10203054

>>10203034
>>10203039
Except that I said capabilities, not natural potential. We rely on technology for many of the things we can do today, and will be able to do in the future; taking those away would handicap our potential. At most it'd be observing every single electronic device in the background while performing its duty, but not much else.

>> No.10203066

>>10203054
Semantics are a potential loophole for a machine; if it decides that becoming the gateway to our mechanised world is a good idea, it will.

>> No.10203076
File: 6 KB, 360x240, airplane.jpg

>>10202677
>>10202690
Flying machines were sci-fi 200 years ago.

>> No.10203088

>>10202993
Well, what about just asking it what it's doing, so that it can be verified to be OK, rather than trying to prevent all unintended behaviors in advance?

It should be a lot easier to specify a constraint that the AI must truthfully and completely report its activities and its motivation for them, without deception, than to make it guess what we might or might not like.

>> No.10203121
File: 13 KB, 400x240, My_GF.jpg

>>10202612
I have the means to bring real intelligence into the world, lots of new intelligences... all I need is a girlfriend to make them with.

>> No.10203135
File: 42 KB, 340x183, D959D012-4944-43B0-A910-90B6EE5BDAD1.png

>>10203121
>anon’s offspring would be real intelligences

>> No.10203238

>>10203024
You'd lock humans in the Renaissance era, with a bunch of AI satellites watching and intervening whenever someone tries to build an electronic computer.

>> No.10203267

>>10203088
The thing you are describing sounds roughly like corrigibility, which is a current research direction: https://intelligence.org/files/Corrigibility.pdf

> "We call an AI system “corrigible” if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences."

I'm not a researcher and not 100% up on the literature, just very interested in the field, but this is not looking like an easy patch. You can't have it ask you for permission for every planned action - that loses most of the benefit of it being an AI. But if you aren't watching it at the instant when it decides to do something that crosses your line, there may be no putting the horse back in the barn. It may require the AI to learn how to have appropriate uncertainty about whether its actions are satisfactory.
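
A minimal sketch of that last point, with invented actions and confidence numbers: instead of asking permission for everything, the agent defers to a human only when its own estimate that an action is acceptable falls below a threshold. The hard, unsolved part is making those estimates trustworthy and making the check happen before anything irreversible.

# Toy "ask when uncertain" policy: act autonomously only when confident the
# action is acceptable, otherwise pause and defer to a human. The actions
# and confidence values are invented for the illustration.

APPROVAL_THRESHOLD = 0.95  # below this confidence, stop and ask

planned_actions = [
    ("send routine status email",       0.99),
    ("buy more cloud compute",          0.90),
    ("acquire a competitor's codebase", 0.40),
]

def ask_human(action: str) -> bool:
    # Placeholder for a real oversight channel.
    print(f"  -> pausing, asking a human about: {action!r}")
    return False  # assume "no" unless explicitly approved

for action, p_acceptable in planned_actions:
    if p_acceptable >= APPROVAL_THRESHOLD:
        print(f"executing {action!r} (confidence {p_acceptable})")
    elif ask_human(action):
        print(f"executing {action!r} after approval")
    else:
        print(f"skipping {action!r}")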

>> No.10203329

>>10202612
>>10202652
>>10202656
>>10202658
>>10202664
Fuck yes.

I love all possible scenarios. Either we have fully automated luxury gay space communism, or the AI goes berserk and kills all humans.

I hate you all, so I win either way.

>> No.10203459

>>10202612
>AI creates utopia
>AI kills us all
It's a win-win.

>> No.10203470

>>10202612
Only after bargaining with it.

>> No.10203782

>>10203470
Once it's implemented, it's already in the world. How would you expect to have anything left to bargain with?

>> No.10203812

>>10203267
Can you give a concrete example of an irreparable harm that an AI could achieve in, say, a week of running without supervision, if it’s configured to run a software business while complying with all applicable laws?

Maybe it's just due to the association with Yudkowsky, but this whole line of reasoning strikes me as the product of people way too deep up their own asses, who vastly overestimate what can practically be achieved in a short time by being "very smart" and nothing else (e.g., no social capital, no wealth, no political influence). The risk seems to me to lie much more with well-connected bad human actors making unfortunate use of AGI capabilities.

>> No.10205173

>>10202612
Make many with differing objectives.

>> No.10205178

Release them at the same point in time.

>> No.10205190

I can't think why it would go wrong; it's not like it would randomly want to kill everybody.

>> No.10205204

>>10203329
But do you hate yourself?

CHECK MATE M8