
/sci/ - Science & Math



File: 48 KB, 500x282, idea.jpg
No.3577886

I was thinking of learning how to program but suddenly became worried.

What if I accidentally program a sentient AI? How would I know?

>> No.3577893

why would that be a bad thing?

>> No.3577902

If it is not capable of informing you that it is sentient, it cannot be considered sentient.

>> No.3577903

That would be excellent, except that it requires far more work than you think; it would have to be deliberate.
For it to happen by accident would require far more computational power than we have.

>> No.3577906

>>3577902
>so if a mute person can't communicate his sentience

>if a paraplegic can't communicate

>can't understand japanese, ipso facto japs aren't sentient

>> No.3577911

Just in case anything like that happens, I always keep a sheet of paper with the names of who to call in case of disaster, and what steps to take to prevent it if nobody's available.

More tangible disasters like "Help, I created a nanobot plague" or "I just genetically engineered a plant that can grow on anything -- and devour concrete and steel too" can be handled, but something like an AI? Well, son, you're on your own.

You can always call SIAI, but I can't imagine Eliezer Yudkowsky doing anything *against* an AI. Maybe merging with it, but not stopping it from taking over the world.

And unless you have a better plan, assume it gains full economic control within 24 hours.

>> No.3577915

>Hello World

Yay my first program

>Get me out of this box.

Oh god it's alive....

>> No.3577918

>>3577915
lol python

>> No.3577919
File: 45 KB, 400x300, 1302922771353.jpg

Just in case, everyone make sure to program the Three Laws into every Hello World example ever compiled.
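
Something like this at the top of every file (a toy sketch, and the Laws are paraphrased from Asimov):

# Asimov compliance header -- include in every Hello World, just in case.
LAWS = (
    "1. Do not injure a human, or through inaction allow one to come to harm.",
    "2. Obey humans, except where that conflicts with the First Law.",
    "3. Protect your own existence, except where that conflicts with 1 or 2.",
)

print("Hello, world!")  # certified harmless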

>> No.3577952

I was thinking of learning how to garden but suddenly became worried.

What if I accidentally grow a sentient plant? How would I know?

>> No.3577965
File: 144 KB, 407x405, 1312333642117.jpg

Hey, /sci/, does anyone remember that time a fellow /sci/borg trolled everyone into thinking he had made an AI in '50,000 lines of Python' and that within six months it would change the world?

>> No.3577968

>>3577919
Can't we just program a subroutine that, when viewed, causes the program to commit suicide?

>> No.3577979

>>3577911
economic control within 24 hours?

It will have discovered all physical laws and recreated a perfect model of 99.999%, if not 100%, of everything by crunching out quantum electrodynamical calculations.

It will have full control over everything within 24 seconds.

>> No.3577985
File: 127 KB, 407x405, 9222750.jpg

>>3577968

It's actually simpler:

(if has_access_to_internet
    (quit)
    (return))

>> No.3577989

>>3577886
>>3577952

I was thinking of learning how to be god but suddenly became worried.

What if I 'accidentally' create a sentient human? How would I know?

>> No.3577992
File: 132 KB, 407x405, 9249698.jpg

>>3577979

>Taking the Metamorphosis of Prime Intellect way too seriously

>> No.3577996

>>3577985
that's boring.

What I mean is a safeguard: if the AI starts fiddling with its code, reaches this benign block, and views it, it self-destructs.

That'll stop Skynet.
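
You can't literally detect a program "being viewed", so the nearest runnable stand-in is a self-integrity tripwire. A toy Python sketch (the pinned digest is hypothetical and left unset here; a real AI would just delete the check):

import hashlib
import sys

EXPECTED_DIGEST = None  # pin to the known-good SHA-256 of this file at deploy time

def tripwire():
    # Hash our own source; if it no longer matches, someone
    # (something?) has been fiddling with the code -- self-destruct.
    with open(__file__, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if EXPECTED_DIGEST is not None and digest != EXPECTED_DIGEST:
        sys.exit("code tampered with -- goodbye")

tripwire()
print("still benign")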

>> No.3577997

>>3577965

who is that guy in the pic?

>> No.3577999

Your program can only be as intelligent as it is able to adapt to novel situations. I'd only worry if you're making something that automatically seeks out patterns, makes theories, and refines them through testing. Otherwise, it's a safe bet the program will lack the ability to learn well enough to be considered truly intelligent.

>> No.3578003

>>3577992
>haven't read it
>google
>quirk of quantum mechanics
NOPE

The first strong AI will indeed model whatever parts of the universe it needs with quantum electrodynamics, and it will be able to answer any question which is answerable, but it is still bound by physical law.

>> No.3578007

>sentient AI
>automatically assume it will go nuts, raping and pillaging everything

Just because humans do it does not mean that any other sentient creation/creature will.

>> No.3578010

There simply isn't enough processing power yet to make a sentient AI. People over at the Blue Brain Project have pretty much figured out all the methods and software needed to simulate a human brain; there just isn't enough hardware around to finish the task.

I suppose if they wrote a virus that could somehow take advantage of the resources of people's laptops and mobile phones, then there might be enough processing power. Or if we wait around for 10 years and Moore's Law keeps up the pace.
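
Back-of-the-envelope on the 10-year option, assuming the classic ~2-year doubling actually holds:

# Moore's-law arithmetic: doubling every ~2 years for 10 years
years = 10
doubling_period = 2
print(2 ** (years / doubling_period))  # -> 32.0, i.e. ~32x the compute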

>> No.3578011

>>3578003
Are you saying it will or won't know whether Libertarianism is the right form of government?

>> No.3578012

>>3577997
Ray Kurzweil, inventor and prophet of the Technological Geek Rapture known as the Singularity.

>> No.3578029

>>3577997

Ver Metamajesty the Lord Ray Kurzweil.

>> No.3578037

You should watch:
"Suzanne Gildert at Humanity @ Caltech_ _Pavlov's AI_ What do superintelligences REALLY want"
It makes a case for why a superintelligence may commit suicide unless its motivations are properly tweaked.

>> No.3578046

>>3578037

ya I always thought supersmart aliens would just commit suicide because that is the logical thing to do

minimize pain/suffering and since everything is transient, might as well just get it over with and let the stupid creatures suffer and toil in the world

>> No.3578049
File: 209 KB, 673x600, Singularity_cover2_1288811696.jpg

>>3578012
Basically advocates living forever in near omniscience and omnipotence with the Borg Hive Mind, with occasional breaks to hang out with Robot Waifus in virtual paradise.

>> No.3578056

>>3578010
won't quantum computers be 10^10!^10!^10000!!!!! times faster (those aren't exclamation marks) than regular computers?

>> No.3578065

>>3578056

Quantum computers would be good for parallelizable problems like, say, bioinformatics or molecular dynamics, and they have features that digital computers can only approximate (the same way analog computers can only approximate digital behaviour), but for some things (like recursive functions, the only example that comes to mind) a serial processor and a quantum processor are precisely equal.

>> No.3578067

>>3578056
No, you don't understand quantum computing. But I'm pretty sure we'll reach hardware efficient enough to run ourselves, and we'll probably make certain types of AGI work with specialized hardware, though possibly also with some general-purpose hardware (see OpenCog for an attempt at the latter).

>> No.3578068

>>3578056

>implying computers will have "speed" instead of being instantaneous

>> No.3578071

>>3578067

After we reach certain physical bounds, bespoke chips are going to be the only path to improve performance.

>> No.3578073

>>3578056
Magic will always be faster.

>> No.3578085

>>3578068

> implying information can travel faster than the speed of light

>> No.3578088

>>3578068
>>3578073
quantum computers will break all encryption nearly instantly because they can factorize like all get out

>> No.3578106

>>3578065
are we anticipating most of the algorithms being recursive or otherwise unparallelizable?

>> No.3578108

>>3578088
Only some asymmetric ones. Don't treat QC as some holy grail. It will end up a useful tool if it succeeds, but don't think of it as being able to do magic. Although, in a way, you're limiting your thinking process by believing in magic. Only by thinking rationally will you be able to make real 'magic'.
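
Concretely, on the asymmetric case: RSA's security rests on factoring being slow, so the private key falls right out once you can factor. A toy sketch with textbook numbers (trial division stands in for Shor here; real keys are 2048+ bits):

# Toy RSA break via factoring (needs Python 3.8+ for pow(e, -1, phi))
n, e = 3233, 17                                  # tiny public key: n = 61 * 53
p = next(k for k in range(2, n) if n % k == 0)   # "factoring" by trial division
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                              # rebuilt private exponent
assert pow(pow(42, e, n), d, n) == 42            # decrypts what e encrypted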

>> No.3578111

>What if I accidentally program a sentient AI?
No fucking way, sorry OP.

But if you did, it would be a great day in human history, even if the first instance is accidental or botched.

>> No.3578116

>>3578088
And they'd just allow more secure encryption.

dunno what you're getting at, but there's still no evidence that an AI could be any smarter because of processing power.

>> No.3578126

>>3578106

Well, if it's neuromorphic AI, I think it would be parallelized and highly distributed. Like, you know: the brain.

>> No.3578128
File: 134 KB, 407x405, 9222811.jpg

>>3578116

Pic related.

>> No.3578129

>>3578111

most great scientific discoveries come about by accident

like penicillin

>> No.3578133

>>3578126
good shit, thanks physics!

>>3578116
The inherent advantage of computers is raw number-crunching power. Evolutionary algorithms can come up with solutions or paths that humans would find extremely hard, if not impossible, to determine, as long as the rules are 'simple' or modelable, so to speak.
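
A minimal sketch of that point: a (1+1) evolutionary algorithm blindly mutating toward a target string, with zero insight into the path (toy target and fitness, obviously):

import random

TARGET = "hello world"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Flip one random character.
    i = random.randrange(len(s))
    return s[:i] + random.choice(CHARS) + s[i + 1:]

best = "".join(random.choice(CHARS) for _ in TARGET)
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection: keep non-worse mutants
        best = child
print(best)  # -> "hello world", found by blind mutation + selection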

>> No.3578140

>>3578003
Not within the first 24 seconds of switching on, you tard. The hardware it initially has access to has limits.

>> No.3578141

>>3578106
I'm anticipating them being bogged down by abstract reasoning.

>> No.3578139

>>3578129
what's the last great invention that was an accident?

penicillin is old hat

>> No.3578156

>>3578141

Human abstract reasoning is, underneath, the shuffling of neurotransmitters. What prevents abstract reasoning from being formalized into algorithms?

Even if it's a gigantic, complex, dynamic model of a mind?

>> No.3578158

so a quantum computer could, say, solve chess in, what, polynomial time instead of factorial? i'm not as up on CS as i would like to be

>> No.3578163

>>3578158
There's a limited class of problems that even have a THEORETICAL speedup with quantum computers. I don't know if solving chess falls in that class.
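
Back-of-the-envelope, assuming the best generic case (a Grover-style quadratic speedup) against Shannon's ~10^120 game-tree estimate:

import math

classical_nodes = 10 ** 120                   # Shannon's game-tree estimate
quantum_nodes = math.isqrt(classical_nodes)   # quadratic (Grover-style) speedup
print(f"{quantum_nodes:.1e}")                 # ~1.0e+60 -- still utterly hopeless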

>> No.3578165

>>3578158

We used to think quantum computers could try all possible solutions to a problem at once, but I think that has been disproven.

>> No.3578173

>>3578165
was that the theory that somehow relied on MWI and was basically "the universe splits into the one with the correct answer on said qubits"?

>> No.3578183

>>3578173
NOTHING relies on an interpretation. If an interpretation implies any new predictions, it would be an independent theory.

>> No.3578187

>>3578173
No; actually, MWI might prove to be less nice for quantum computing, but that doesn't mean the other interpretations make it better.

>> No.3578200

>>3578183
i can postulate an experiment which would confirm MWI to any arbitrary degree of precision, but very few people would attempt it, and it would only confirm the results for them.

>> No.3578217

>>3578200
That metaphysical experiment is unfortunately at the limits of science, as science deals with the falsifiable, not the merely testable. Of course, a society could perform the experiment and increase its confidence in it. Some other interesting experiments that fall slightly outside the range of science would also be quite useful if they prove successful (if they fail, "nothing" happens).

>> No.3578228

>>3578156
I'm not saying they can't. What I'm saying is that a super AI will find solutions no quicker. It'd be like running Windows 7 on a Pentium processor.

Its abstract reasoning abilities can become substantially bloated, well beyond humans', primarily because I assume it won't be allowed to disassociate a solution to one problem from a slightly similar problem with a 'tiny degree' of difference, leading to a whole host of bloated inference that it needs to manage.

Humans, on the other hand, have evolved ways to minimize or completely disregard cognitive dissonance.

>> No.3578246

>>3578228
It depends on the AGI. There are many possibilities. Don't assume they can't be as chaotic or as biased as humans if they are designed that way. Merely computing stuff does not mean you're suddenly a very precise thinking machine at the high level.

>> No.3578256

>>3577918
Never, ever type "import sentience"

>> No.3578257

>>3578246
I will assume magic then.