
/sci/ - Science & Math



File: 141 KB, 500x438, humanoid.jpg
No.3277652

The main problem I have with the technological singularity is that it doesn't describe anything that hasn't happened numerous times in history. All the singularity really amounts to is: In the future, mankind will invent some technology that will completely change society.

How is this any different from the invention of the internet? Or the printing press? Or the written word?

>> No.3277670
File: 568 KB, 2000x3000, mfweis.jpg

>How is this any different from the invention of the internet?
the idea is that from then on, someone or something other than humans comes up with new technologies. What happens after that is beyond human control.

>> No.3277690

>>3277670
Wouldn't this be negated by transhumanism? If we had the capability of creating machines more intelligent than ourselves, why wouldn't we also make ourselves more intelligent?

>> No.3277703
File: 237 KB, 1280x1024, 1272418309779.jpg

>>3277670
what fails about this is that people don't consider that when this happens, people will advance too

i can't really put this into words but i'll try: consider we make a super AI that can create technology faster than a human. is this not the same as the millions of people who have worked together to create current computer technology? in the 1800s no one would have imagined people would be able to use such advanced instruments, but here we are, arguing and watching porn or whatnot

>> No.3277712

The technological singularity is merely the point when we develop Strong AI capable of recursively improving itself.

>> No.3277715

In a sense, OP is right. There have been innumerable technological advances whose impact could not be determined beforehand. The difference with the singularity is that it is alleged to also cause a massive increase in the rate at which we create/discover new technologies, whether by AI or by transhumans.

>> No.3277722

>>3277690

Well, we could, but we wouldn't really be 'ourselves':

http://www.scientificamerican.com/article.cfm?id=the-limits-of-intelligence

>> No.3277780

>>3277722
How so?

>> No.3277800

>>3277722

Oh, here we go again.

>We'll transcend our biology not our humanity
At least if you want to. There's also the full Transcend Package for transcending both biology and humanity; it comes with its own self-improving AI and molecular assembler.

Then there's the Ned Ludd pack which includes nanotech defenses to shelter *YOU* from the consequences of a catastrophic Singularity!

>> No.3277919

>>3277800

What? That's not what I meant to imply at all. We'd be so different that we couldn't really call ourselves human anymore. Our thought processes would be completely different.

>> No.3277931
File: 88 KB, 600x338, 1308772655969.jpg

>>3277919

>possibly implying this is necessarily bad??
>can't decide if luddite or okay
>Toronto??????

(I know you're maybe not implying this)

>> No.3277944

>technological singularity
It's happening as we speak. The concept is an exponential increase in progress (scientific, technological, infrastructural, etc.) that at some point will exceed the rate a biological human brain can keep up with. Depending on what you define as "keeping up", we're either already past this point or, thanks to the internet and information technology, rapidly approaching it.

Of course the real takeoff won't happen before AI grows a bit more or someone invents high-grade BCIs.

>> No.3278044

>>3277931

I'm not implying it's bad. Yes, Toronto.

>> No.3278078
File: 25 KB, 640x277, 1306789424534.jpg

>>3278044

Well thank God. We have more than enough luddite trolls here.

>> No.3279194

>>3277652
Okay, picture this...

We make an A.I. capable of human level intelligence... and we hook it up to some rapid prototyping machines, and a few lithography machines.

Within 10 hours, it has created a super efficient version of its own hardware and software, and makes 10 of them, and puts them online.

These new processors start working on NEWER processors that are even more efficient and faster; it makes 10 of those, and puts them online... pretty soon, the "A.I. Cloud" is gaining intelligence at a rate of "The Entirety of human brain capacity" per hour, then per minute, then per second.....

By the end of the fifth day, it has created a self contained advanced society that improves at a rate faster than the human eye is capable of SEEING....

Eventually, it cracks the last physics puzzle, and makes a new universe with a 56k fax modem, and some ULTRA duct tape.

There's your singularity.
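The takeoff story above can be turned into a toy calculation (every number here is hypothetical, chosen only to show the compounding, not to predict anything): if each redesign is some factor faster than the last and each generation builds 10 copies of its successor, total capacity grows combinatorially.

```python
# Toy model of the post's runaway scenario; the speedup and copy counts are
# made-up illustrative parameters, not predictions.

def runaway(generations, speedup=2.0, copies=10):
    """Total capacity in 'human-brain units' after each generation."""
    capacity_per_machine = 1.0   # start at human level
    machines = 1
    history = []
    for _ in range(generations):
        capacity_per_machine *= speedup  # each redesign runs faster...
        machines *= copies               # ...and builds 10 of the new model
        history.append(capacity_per_machine * machines)
    return history

print(runaway(5))  # -> [20.0, 400.0, 8000.0, 160000.0, 3200000.0]
```

Five generations already multiply total capacity by over three million; the "per hour, then per minute" acceleration in the story is just this compounding read against a fixed clock.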

>> No.3279227

>>3277652
It allows people to enjoy religiosity without all that pesky religious aspect.

>> No.3279238

>>3279227 DERP DERP NERD RAPTURE DERP

Go to LessWrong. It will teach you about the Singularity and Friendly AI.

>> No.3279258

>>3279194

Recursive intelligence is a different kind of problem than human-level AI. Now, human-level AI will be absurdly useful to us, but it won't necessarily be what you describe. For example, human beings are already human-level AI, and we can't do this stuff.

>> No.3279266 [DELETED] 
File: 17 KB, 405x289, facepalm7.jpg

>>3279258
>human beings are already human-level AI
>human beings are AI
mfw

>> No.3279275

>>3279238
Eh? Go to:
http://io9.com/5814484/myths-about-the-future-that-could-ruin-your-life

Realize you're just as deluded as all the other fragile people.

>> No.3279284

>>3279266

I find your lack of poetic appreciation disturbing.

>> No.3279287
File: 11 KB, 150x192, tinfoil.hat.jpg

>>3279238
lesswrong is a bunch of kids sucking off some jewish guy who thinks bayesian probability is separate from science.

>> No.3279302

Okay, a HUMAN LEVEL A.I. is an artificial intelligence that is capable of human levels of comprehension.

Like, if you say to the H.L.A.I. "Hey, draw me a picture of Optimus Prime having sex with Papa Smurf" it will actually draw you a picture... as opposed to "File not found"

Computers are ALREADY vastly superior to human beings at computational speeds... Your computer processors are rated in Billions of mathematical operations per second...

So, if you had an A.I. that could understand concepts and Ideas at the same level of a human, it could engineer computer architecture much faster than a whole TEAM of humans doing the same thing.

>> No.3279303

Nah, he thinks Bayesian probability is the SOURCE and MOTHER of all science. He's a Jew, so was Einstein. And they ain't suckin off him anymore: the guy ain't written a thing for a looong time, now the community feeds upon itself and it GROWS, by Newton it is beautiful.

>> No.3279308

>>3279303
>So does Islam
>So does Scientology
>So does Hinduism

>My god, it's indistinguishable from magic

>> No.3279315

>>3279287
i went to that yudkowsky dude's site. there's definitely a cult-like vibe.

>> No.3279327

Yeah, that's the thing with doomsday cults: they GROW bigger than the man who sparked them. The difference being, OUR priests actually have POWERS and can DO shit, bwahaha.

>> No.3279341

>>3279327
>Brandishes his IPAD2 and CACKLES

>> No.3279343

>>3279315
>>3279303
>>3279287
Not to mention the fact that his fanfiction's hilarious.

>> No.3279344
File: 33 KB, 300x221, ray_kurzweil_edited.jpg

>>3277652
>The main problem I have with the technological singularity is that it doesn't describe anything that hasn't happened numerous times in history.

how is this a problem?

>How is this any different from the invention of the internet? Or the printing press? Or the written word?

it's different in the sense that it encompasses every one of those inventions. We are living in the goddamn singularity... it's the continuous growth of the growth of technology.

lrn2 vernor vinge

>> No.3279348

>>3279344 Doesn't that make the word meaningless?

>> No.3279350

>>3279343
sauce!

>> No.3279362

>>3279350
>http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality

Gird your loins, it's a good 70 chapters.

>> No.3279364

>>3279348
Definition of Technological Singularity

by The Singularity Team
(Palo Alto)

Technological singularity is a point at which technology is not defined or not well-behaved, for example, infinite or non-differentiable.
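The mathematical term being borrowed here can be made concrete with a short sketch (the growth laws are purely illustrative, not a model of technology): exponential growth stays finite for all time, but hyperbolic growth, the solution of dx/dt = x², becomes infinite, i.e. undefined, at a finite time.

```python
# Hyperbolic vs. exponential growth: dx/dt = x**2 with x(0) = x0 has the
# exact solution x(t) = x0 / (1 - x0*t), which blows up at t = 1/x0.
# That finite-time blowup is the "singularity" in the mathematical sense.

def hyperbolic(t, x0=1.0):
    """Exact solution of dx/dt = x**2; undefined at t = 1/x0."""
    return x0 / (1.0 - x0 * t)

print(hyperbolic(0.9))    # ~10: already tenfold
print(hyperbolic(0.999))  # ~1000: unbounded as t approaches 1
```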

>> No.3279373

>>3279364
>The point at which technology is indistinguishable from magic

Congratulations, you have an absolutely useless definition. Welcome to the definition of religion, enjoy the kool-aid.

>> No.3279380
File: 33 KB, 320x240, jim_jones.jpg

Well, we can also call it the point of technological development where Kool Aid is indistinguishable from Flavor Aid.

Drinks are on me, boys.

>> No.3279381
File: 2.54 MB, 400x225, bece58466b2e8fa8b7a7adcace404be3.gif

>>3279373 Technology is indistinguishable from magic? Whoever experienced magic to compare the two? I'd like to meet them! Nothing in common with religion though...

>> No.3279383

>>3279381
Yes yes, your level of irrationality is securely based in the footprints of common knowledge.

>> No.3279390
File: 23 KB, 401x271, aubreydegreyrasp.jpg

>>3279373
useless? hardly. look how much money we're getting for "singularity university"

>> No.3279392

>>3279390
Look at how much money the IPAD2 is getting.

>> No.3279399
File: 72 KB, 350x318, yield sign.jpg

>>3279392
i do see that. however, our advantage is that by conjuring up a word and gaining funding for it, our production costs are non-existent.

>> No.3279401

>>3279383 Magic. What does that word even mean, besides "shit I don't understand"?

>> No.3279403

>>3277690
The one doesn't imply the other. I can fairly easily make a machine that is better than me at chess. That ability doesn't make me any better at chess.

>> No.3279404

>>3279399
Well you still have to waste an awful lot of time whining on the internet. It's not exactly free, it's still work.

>> No.3279411

>>3279401
...You said it.

>> No.3279417

>>3279302
>A.I. that could understand concepts and Ideas
So at the current rate we will reach the singularity at exactly never.... and we have no real theories about how to change that rate... just magical claims about complex systems becoming conscious for no reason we can postulate.

>> No.3279427

>>3279411
So basically the Singularity is when shit gets so sciency I have no fucking idea how it works?

...

Actually that's ALREADY true. Science, even the most advanced sort, only gives a partial understanding of shit. And no human being knows exactly all the shit that's going on in a PC; you can only master one speciality or two in a lifetime. And don't get me started on the PEASANTS.

>> No.3279428
File: 94 KB, 800x544, troll4667.jpg

>>3279404
>it's still work

yeah. trolling is serious business.

>> No.3279436

>>3279427
see:
>>3279344

>> No.3279453

>>3279417

We don't need AI to actually understand ideas, just to simulate understanding.

Actual CS/AIfag here...

>> No.3279472

>>3279453
It seems to me that AI doesn't have very good chances at inventing anything new without understanding what it is doing.

>> No.3279477

>>3279453 For all practical purposes it understands.

>> No.3279485

>>3279472

I don't think this is true. In a universe where very logical rules are followed, it is pretty easy to input these rules and let the AI search for solutions. This has already been implemented for the simple rules of chess, and the more complicated rules of Jeopardy!, but I hope you can see how this could easily be extended to the physics of the real world. We just have to show the AI what solution we are looking for, and it could find it.
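The "rules in, solutions out" idea can be shown in miniature (the game here is a toy stand-in, vastly simpler than chess or Jeopardy!): given only the rules of one-heap Nim, exhaustive game-tree search recovers perfect play with no "understanding" anywhere in the code.

```python
# Rules in, solutions out: exhaustive search over one-heap Nim (take 1-3
# stones per turn; taking the last stone wins). Nothing game-specific is
# "understood" -- the rule set is simply searched.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win."""
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# The search rediscovers the known theory: multiples of 4 are lost positions.
print([n for n in range(1, 13) if not wins(n)])  # -> [4, 8, 12]
```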

>> No.3279554

I've come to the conclusion that only pop sci reading underageb& believe in post-human transhuman bullshit.

>> No.3279562

>>3279554

I'm 26, have studied CS (poster from above), and never read pop-sci. I believe in transhumanism because it is the best place technology can take humans. Technology will eventually go to all of the best places.

>> No.3279563

DO YOU FUCKING SINGULARITY FAGS ACTUALLY UNDERSTAND WHAT YOU ARE TALKING ABOUT?

GODDAMN I FUCKING HATE ALL OF YOU
YOU DON'T UNDERSTAND HOW THE BRAIN WORKS
YOU DON'T UNDERSTAND HOW COMPUTERS WORK
YOU DON'T UNDERSTAND HOW ARTIFICIAL INTELLIGENCE WORKS
YOU DON'T UNDERSTAND ANYTHING

I FUCKING HATE POP SCI

FUCK THE MASSES

>> No.3279569

>>3279554

What, exactly, is bullshit about the observation that humans have gotten better at understanding and exploiting the laws of nature since we started investigating them; and the prediction that this trend will continue?

>> No.3279586

>>3279563

This is not necessarily true. The biggest difference between people with this view of history and others with a bleaker view is purely optimism. Optimists see the growth of tech in the past and, without seeing the SPECIFIC way these technologies will come around, believe they will. On the other hand, pessimists don't see the current development of these technologies, so they think they will never happen.

>> No.3279607
File: 124 KB, 338x343, U mad hanar, lol.jpg

>>3279563

>> No.3279610

>>3279569
See
>>3279275

>> No.3279616

>>3279569
>>3279586
>>3279607
FUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK YOUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU

I WILL BEAT THE SHIT OUT OF ALL OF YOU

>> No.3279619

I don't know what's going on anymore.

>> No.3279626

>>3279586

Okay. Past performance is not necessarily indicative of future performance. And transhuman predictions are often, at the very least, unfounded, and at the most, asinine. Singularity 'predictions' are by definition vague whenever they are in earnest.

But the fact remains that throughout history, we have always made things better, with only minor and local setbacks. People notice the setbacks, because they are events, and so memorable.

But in general, optimism is supported by the data more than pessimism, in this area.

>> No.3279631

>>3279610

Article is TL;DR, but from what I did read, it still seems to be just someone's opinions. I believe my opinion is more likely to be right, and therefore tell that article to fuck off.

>> No.3279636

>>3279610

I don't hold to any of those fallacies.

And none of them challenge the concept of transhumanism, or the idea of a singularity.

And many of them contradict one another. One says that comparisons to history are useless, while the others use such comparisons.

>> No.3279638
File: 6 KB, 192x144, 50316_2256816327_8655_n.jpg

>>3279626

>And transhuman predictions are often, at the very least, unfounded, and at the most, asinine.

Anders Sandberg, to name one, has a PhD in computational neuroscience. I wouldn't call his views on mind uploading "unfounded".

You're referring to the Singularity people who sit on their asses re-reading Accelerando for the five hundredth time and thinking that in thirty years robots will pop into existence, and tofu-powered, tofu-driven and tofu-manufactured cars will whisk us through the heavens and to Nerdtopia. Please don't confuse transhumanism with Singularitarianism; the trolls are already too far past the point of no return to see a line there.

>> No.3279645
File: 964 KB, 3119x2655, Jaron_lanier.jpg

http://www.youtube.com/watch?v=Pr3t9qxZv6c&feature=related
Enjoy your religion, singularity fags.

>> No.3279657

>>3279638

I was talking about certain, specific predictions. The kind luddites criticise as being the whole body of transhumanism.

And the singularity is by its definition unpredictable. It's the point past which predictions about the future cannot be made.

>> No.3279665

>>3279645

All things being equal, the religion that does not make absolute moral statements about how everyone should live their lives is the more palatable religion.

>> No.3279671

>>3279665

In addition, the singularity is at least based in SOME rational science.

>> No.3279693

>be skeptical of science and technology
>LOL FAG UR A LUDDITE HAHHA CHRISTFAG U HATE SCIENCE LOLOLOLLOL TECHNOLOGY WILL FIX EVERYTHING YEAH SCIENCE IS PERFECT LOL PHILOSOPHY OF SCIENCE IS GAY RAY KURZWEIL IS GOD YOU SHOULD ONLY BE SKEPTICAL OF RELIGION THAT'S IT

>> No.3279701

>>3279693

I see that you have a strong opinion that is against the ideas of transhumanism and the technological singularity. Would you care to coherently express your thoughts on the matter, instead of being butthurt?

>> No.3279703

>>3277652
The difference with a singularity would be that, presumably, the changes it causes would be irreversible; permanent for the rest of eternity.

>> No.3279720
File: 3 KB, 300x237, mopimoth.gif

>>3279703

>Hasn't read The Metamorphosis of Prime Intellect

>> No.3279731

>>3279720
I hate teenagers even more because of you.

>> No.3279741
File: 48 KB, 239x244, 1287799488256.jpg

>>3279731

>> No.3279757

>>3279720
i have actually.

>> No.3279781

>>3279757

>She ignored him. "Some of us might be human again one day, if the Change were reversed. But I think it's too late for the ones like AnneMarie." Three point two.
>"It can't undo the Change, Caroline."
>"Lawrence, it'll do something. If it's going to happen anyway, isn't it better for it to happen sooner instead of later? If it had happened a few hundred years ago, maybe there would have been enough resources. Prime Intellect, neural stimulation is like a black hole. Once a human falls into it, they will never be human again. They are dead to the world, and will never interact with others again. And the more time passes, the more humans will fall into this trap. They will order you to help them. You will have to do it because they are human."

>> No.3279862

>>3279781
I don't consider this to be a realistic or likely scenario.

>> No.3279869

>>3279862

Ya, the thing that bothered me about the novel was that the title was essentially a huge lie: Prime Intellect acquired the power of a God but it never thought like a God; its imagination and consciousness were below those of humans, only capable of handling far greater loads of data. It was bound by human laws from its inception to its destruction.

There was never really a metamorphosis, a moment of secular apotheosis for the machine.

The cover is fuckawesome though.

>> No.3279919

>>3279869
>but it never thought like a God
Right, i agree. Any proper AGI I imagine would have the capacity to know what humans would likely say/do/think, both present and future, so for any change it would make because some human spoke certain words, it could have predicted that forthcoming change and just made the change earlier rather than later.

>> No.3279947

>>3279869
>>3279919
>taking science fiction seriously

>singularityfags as usual

>> No.3279968
File: 27 KB, 429x410, 1287535238754.jpg

>>3279947

I have repeatedly stated I don't think a Singularity will happen.

>> No.3279986

>>3279968
Transhumanism is still retarded.

>> No.3279990

>>3279986

Generic reaction image.

>> No.3279998

>>3279990

Generic insult.

>> No.3280002

>>3279998

Generic dump of links proving transhumanism to be totally legit and a paragraph of me desperately trying to separate transhumanism from singularitarianism to dodge some of the shitstorm.

>> No.3280008

>>3280002
Generic paragraph in caps about human nature and the government and military and negative outcomes of transhumanism.

>> No.3280011 [DELETED] 

>Mfw transhumanist people are usually idiots that are depending on this technology to survive instead of improving themselves so they read fanfiction to pass the time until their lives are over and they are full of regret.

>> No.3280015

>>3280008

Generic two-post copypasta about molecular assemblers and open-source manufacturing and how "the computer will destroy the pyramid" and nanotechnology will allow us to live in glorious anarchist utopias like in The Diamond Age.

>> No.3280016

>>3279986
About as retarded as clothing, glasses, and prosthetic limbs.

>> No.3280023

>>3280015
Generic insult about wishful thinking.

>> No.3280029

>>3280023

Generic "truth always goes through three stages" quote.

>> No.3280034

>>3280029
Generic paragraph about how humans suck at predicting the future.

>> No.3280043

>>3280034

Generic discussion of predictions people got right and that while one can't predict specific products one can predict the growth of technology.

>> No.3280050

>>3280043
Generic u r a fagget

>> No.3280055
File: 118 KB, 294x371, Immanuel_Kant_(painted_portrait).jpg

>implying we aren't already immortal

>> No.3280057

>>3280050

Generic cropped version of a furry porn picture with a reaction image caption.

>> No.3280081

>>3280002
>trying to separate transhumanism from singularitarianism
Can you elaborate?

>> No.3282445
File: 231 KB, 1024x768, Fallen Angel.jpg

>>3280081

Transhumanism, in its purest form, is people who think we can upgrade the human condition, and that we should and maybe that we will.

Singularitarianism is a group of people who think that in 30 years robots will whisk them away to nerdtopia where they drive flying cars powered by tofu. Powered by, designed by, and driven by tofu.

Anders Sandberg is a real transhumanist. He has a PhD and wrote a bunch of papers on mind uploading.

Ray Kurzweil is not a real transhumanist. He writes and watches the stock market.
Eliezer Yudkowsky is not a real transhumanist: He writes and pretends he's an AI researcher but probably hasn't written so much as an Eliza bot.
Michael Anissimov is some butthurt retard who gets his jimmies rustled whenever someone implies the Singularity won't happen in his lifetime.
Natasha Vita-More is not a real transhumanist: She called neuroplasticity 'a real smart idea'.

>> No.3282494
File: 126 KB, 340x480, 1265416643295.jpg

>>3282445

>> No.3282682

>>3282445

Bump for trolls to see.

>> No.3282698

>>3282445
>Singularitarianism is a group of people who think that in 30 years robots will whisk them away
i imagine by 'robots' you meant AI.
So you think an AGI is not likely in the next 10-50 years? Why?
You don't think an AI could be useful in 'upgrading the human condition'?

>> No.3282702

>>3282445

Basically this. Though I'd argue that Yudkowsky and Kurzweil are more "Sagans" in the field.

Here's a good ted talk on the subject actually. This one is about genetics.

http://www.ted.com/talks/harvey_fineberg_are_we_ready_for_neo_evolution.html

>> No.3282724
File: 24 KB, 500x280, biblebussign.jpg

>>3282698
>Jesus vs SIngularity

Who will save our souls?

>> No.3282738
File: 82 KB, 388x599, 388px-Raptorjesuscuwn.jpg

>>3282724

My money is on robot Raptor Jesus.

>> No.3282742

The typewriter didn't recursively improve itself without human input

/thread

>> No.3282744
File: 81 KB, 804x452, transcendantman.jpg

>>3282738
It'll be close.

>> No.3282787
File: 281 KB, 488x650, 1307056209423.jpg

>>3282698

Algorithmic AI has been a great tool for the past 50 years, but it has not led to posthuman or human-level AIs, the AIs you see in fiction like HAL or GLaDOS. The AI we have is domain-specific AI.

Emulating the brain is comparatively easy to making AI. I don't want to get too philosophical, but there's this whole "part cannot comprehend the whole" thing. It's like trying to taste your tongue. You can emulate the workings of human brains, but create consciousness from scratch?

>You don't think an AI could be useful in 'upgrading the human condition'?

I'm more concerned with using our human-made technologies to upgrade humans, rather than spin up stories of AI Gods and call myself a "researcher".

>> No.3282850

>>3282787
>Emulating the brain is comparatively easy to making AI.
Wouldn't an emulated brain be an Artificial General Intelligence (not domain specific)? And if it had the capacity to rewrite its own code/software, and the motivation to make itself smarter, wouldn't this situation then quickly lead to a technological singularity?

http://en.wikipedia.org/wiki/Technological_singularity
>Technological singularity refers to the hypothetical future emergence of greater-than human intelligence.

>> No.3282879

>>3282742
You'll have to explain in detail how you're going to build a machine with unbounded creativity before this is something other than boring science fiction.

Saying "the human brain does it so I must be able to build a machine that does it too" is not an explanation.

>> No.3282898

>>3282787
>It's like trying to taste your tongue.
I can taste the whole surface.
For the insides, i can taste the tongue of someone else.
Na nana na na!

>> No.3282914

>>3282850
Wouldn't an AI which could write its own AI be no better than the time-traveling grandfather-killing paradox?

It'd just be a loopy paradox.

>> No.3282918
File: 289 KB, 512x384, 1307030443488.png

>>3282850

No, an uploaded mind is an uploaded mind, not an artificial intelligence. It was not made by humans (inb4 reproduction huehuehuehuehue).

>>3282879

Is that an argument against mind uploading or AI?

>unbounded creativity
Yeah, no.

>> No.3282925

>>3282914

What? Time travel? What are you talking about? Recursion is not time travel.

>> No.3282937

>>3282914

People can fuck and make other people. No time travel here.

>> No.3282940

>>3282925
So uh, how does this magnificent AI modify its own code while it's operational? I mean, what you suggest is no less paradoxical than time travel, as neither you nor your AI knows if any specific modification would do any specific thing.

>> No.3282946
File: 96 KB, 600x1140, 1308372000061.gif

Has anyone tried to create an entire AI using an evolutionary process? i.e. a very simple recursive algorithm that writes more algorithms, perhaps offset by some underlying variation of its virtual environment?
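The standard starting point for the question above is a genetic algorithm. A minimal sketch (the all-ones "OneMax" target is a toy fitness function; evolving an actual AI would need a vastly richer environment than this):

```python
# Minimal genetic algorithm: selection + crossover + mutation evolving
# bitstrings toward all-ones. A toy stand-in for the much harder problem of
# evolving algorithms in a varying virtual environment.
import random

random.seed(0)

def evolve(length=20, pop_size=30, generations=200, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)         # fitness = number of ones
        if sum(pop[0]) == length:
            break                               # optimum reached
        parents = pop[: pop_size // 2]          # truncation selection
        children = [parents[0][:]]              # elitism: keep the current best
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = [bit ^ (random.random() < mutation)  # per-bit mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=sum)

best = evolve()
print(sum(best))  # best fitness found (the optimum here is 20)
```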

>> No.3282947

so what algorithm would you use to produce consciousness? what's the formula like?

what's the formula for pain and pleasure? I want to make a program that feels different levels of pleasure, whats the secret computer code for that?

Oh wait, I'm a dumbfuck and that's impossible.

>> No.3282956

>>3282940

You're thinking about some compiled program modifying its source code and recompiling? Programs running in interpreters can modify their code while they run. All you need is a Lisp REPL.

And then, well, the AI could just blindly poke at areas (which is what we do with the brain) and see what happens. The difference is that neuroscientists don't do it to themselves; they do it to others, so they remain more-or-less objective observers. The AI could, I guess, copy itself and have the copy work as a test subject, then see the changes.

Alternatively it may try modifying itself, and every time it makes a change it has to undergo tests to see if it has destroyed something vital, i.e. "How are you feeling? What's two times two? Press OK within 15 seconds to keep these settings."
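That last loop, modify, run a sanity test, keep or roll back, can be sketched in a few lines (the "patches" here are trivially simple stand-ins for real self-rewrites; Python just makes rebinding a running program's functions easy, much like a Lisp REPL):

```python
# Self-modification with a test-or-rollback safeguard: candidate patches
# replace a live function, a sanity check ("what's two times two?") decides
# whether to keep them, and failures are reverted from a backup.
import random

random.seed(1)

def sanity_check(fn):
    return fn(2, 2) == 4   # reject any change that breaks arithmetic

def propose_patch():
    """A candidate replacement; only sometimes correct."""
    return random.choice([
        lambda a, b: a * b,      # a sound rewrite
        lambda a, b: a + b + 1,  # a vital-function-destroying one
    ])

multiply = lambda a, b: a * b
kept = reverted = 0
for _ in range(100):
    backup = multiply
    multiply = propose_patch()   # self-modify...
    if sanity_check(multiply):
        kept += 1                # ...keep the settings
    else:
        multiply = backup        # ...or roll back
        reverted += 1

print(multiply(3, 4), kept + reverted)  # -> 12 100
```

Because every failed patch is reverted, the surviving function is always correct, no matter how many broken candidates were proposed along the way.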

>> No.3282960

>>3282946

A genetic algorithm to make an AI would probably get stuck with some Morlock Eliza bot.

>> No.3282967

>>3282918
>No, an uploaded mind is an uploaded mind, not an artificial intelligence. It was not made by humans (inb4 reproduction huehuehuehuehue).
But you said,
>Emulating the brain is comparatively easy to making AI.
'Emulated', not 'uploaded'. I assumed you said this because you thought a brain emulation was possible/feasible in the near/medium-term future, and that a straight AGI creation was not. Do you think this? Or do you think a human brain emulation with the capacity to self-modify is not feasible?

>> No.3282974

>>3282956
Yes, but the specific response was to the AI modifying itself. Which is a time travel paradox if you consider a systemic complex system.

>> No.3282987

>>3282967

>Do you think this?

Yes, except for the short term part. I'm not getting cyberparadise in this lifetime.

>Or do you think a human brain emulation with the capacity to self-modify is not feasible.

I think it is.

My point is that running the same processes of a human mind in a computer (Simulating/emulating/uploading a mind) is easier than a person or group of people reverse-engineering their own consciousnesses and then writing a program to do it.

>> No.3282991

>>3277652
>>3277652
One approach to it is to imagine a machine that can improve the efficiency of an arbitrary machine by a finite amount. As you apply this several times, you can achieve any efficiency you wish, up to <100%. This allows you to achieve any desired energy transformation with near-unity gain, so you don't need to care about optimization anymore (sure, you can always improve it, make it smaller, faster...). But the general idea is that you can get your hands on any technology without having the technology itself. How so? Well, imagine you only know incandescent light bulbs: boost the efficiency to 50% and here you go, you have an equivalent of an LED, but without having discovered an LED. You can add appropriate supplementary components that would otherwise be very inefficient, but since you no longer care about efficiency (you can always increase it with your machine), you can, technically speaking, make anything that physics permits.

Here you go: you have all the technologies you can have (or at least think of). It doesn't mean that you can't improve yourself; it just means that the step between a fantasy/initial concept and its realization is very small. Hence if you say you want a flying car, the time needed to make one will be "near instantaneous" - the time needed to make a prototype and some easy design solutions.

Keep in mind that the true limitation is that you need to formulate WHAT it is that you want - the shape, size and other parameters of your flying car, if you wish.

Can such a machine exist? Good question; I think yes. And everybody who says hurr durr 2nd law of thermodynamics - go to hell, the "law" was not even shown to be applicable to energy band diagrams at all, so I don't see why it wouldn't work.
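Whatever one makes of the thermodynamics, the compounding step in the argument is easy to sketch (the "remove half of the remaining losses per application" rule is a made-up stand-in for the proposed machine):

```python
# Repeatedly boosting efficiency by a finite amount: each application
# removes a fixed fraction of the remaining losses, so efficiency climbs
# toward 100% without ever reaching it.

def boost(efficiency, fraction=0.5):
    """Remove `fraction` of the remaining losses in one application."""
    return efficiency + fraction * (1.0 - efficiency)

eta = 0.05                # e.g. an incandescent bulb
for _ in range(10):
    eta = boost(eta)
print(round(eta, 6))      # -> 0.999072: close to, but under, 1.0
```

Under this hypothetical rule, a single application already takes the 5%-efficient bulb past the post's 50% LED-equivalent (boost(0.05) = 0.525), and ten applications reach 99.9%.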

>> No.3282992

>>3282974

I can modify myself by taking medications or poking a needle at my nucleus accumbens.

>> No.3282999

>>3282987
>I'm not getting cyberparadise in this lifetime.
Do you have a particular reason for thinking this?

>I think it is.
So given what was said here, >>3282850
Don't you think what you're describing is a technological singularity? An emulated human brain would likely have the capacity to self-modify, and would likely also have the desire to improve its intelligence. It would rewrite itself to make itself more intelligent, and then voila, you have a greater-than-human intelligence, aka a singularity event.

>> No.3283013
File: 1.19 MB, 2560x1600, 1306688925719.jpg

If we're going to create an AI as advanced as us, shouldn't we use the same process that produced us, i.e. evolution? Isn't it insanely more difficult to create an AI with a top-down, from-scratch approach - the very reason creationism is not plausible?

>> No.3283016

>>3282992
Yes, but you don't get smarter, add additional neurons, or change the basic hardware - which is what you're proposing for an AI that's supposed to become exponentially smarter.

>> No.3283023

>>3282999

>Don't you think what you're describing is a technological singularity? An emulated human brain would likely have the capacity to self-modify, and would likely also have the desire to improve its intelligence. It would rewrite itself to make itself more intelligent, and then voilà, you have a greater-than-human intelligence, aka a singularity event.

It does not imply that it immediately jumps into some "upward spiral of self-improvement".

>> No.3283032

>>3283016

While a person may not understand the whole of their brain, they might focus on a specific structure and consider it as a whole rather than at the level of neurons, abstracting some of its behaviour. They then begin to modify that structure, using the safeguards above; if a modification results in positive gains they keep it, then move on to another structure.
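A toy version of that keep-only-positive-gains loop (the benchmark function and all numbers are made up for illustration):

```python
import random

def performance(structures):
    # Stand-in benchmark for the modified brain: higher is better,
    # best possible score is 0 (every "structure" tuned to 3.0).
    return -sum((s - 3.0) ** 2 for s in structures)

random.seed(0)
structures = [0.0, 0.0, 0.0]           # abstracted structures, not neurons
score = performance(structures)
for _ in range(2000):
    i = random.randrange(len(structures))
    candidate = list(structures)       # modify a copy (the safeguard)
    candidate[i] += random.gauss(0, 0.5)
    new_score = performance(candidate)
    if new_score > score:              # keep only positive gains...
        structures, score = candidate, new_score
    # ...otherwise the change is simply discarded (rollback)
```

After a couple of thousand trials the "structures" end up tuned close to the benchmark's optimum, one modification at a time.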

>> No.3283048

>>3283013
Example of what you are saying: Polyworld.

And, yeah, I agree.

>> No.3283062

>>3283032
But you're positing something that runs up against fundamental limits like the halting problem, especially when what you're messing with is a nonlinear system. There's no test that will tell you what an arbitrary algorithm will do at runtime.

>> No.3283084

>>3283013
If you think about it, everything we create is in its own way a consequence of evolution. We came about from random actions of chemicals; why can't an AI come about from the random creativity of a human brain?

>> No.3283102

>>3283084
I just read a book where something like that happened: there was quantum communication, and the network of all the linked quantum devices turned into a self-aware consciousness

>> No.3283107

>>3283062

I know the halting problem. The only way to see what an algorithm returns is, well, to run it, which is why the AI would perform this on a copy of itself or -- Wait, I see your point: I postulated that the AI would make a change and have some time to choose whether to keep it before it automatically rolls back to past settings. But the change itself might prompt the machine to choose to keep it, even if it is not at all beneficial.

So with this, the only way to do it safely would be to modify the copy of the AI while the original observes and studies. That does not violate the halting problem or introduce the problems of the above paragraph.
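A sketch of that observe-the-copy scheme (candidate functions and the time budget are invented for illustration): you can't *decide* whether a modified copy halts, but you can bound how long you wait and roll back on timeout.

```python
import multiprocessing as mp
import queue

def diverging_mod(q):
    while True:          # a self-modification that never halts
        pass

def useful_mod(q):
    q.put(42)            # a self-modification that finishes with a result

def evaluate(candidate, budget=1.0):
    """Run a candidate modification on a copy, with a time budget.

    The halting problem says we can't know in advance whether the
    candidate finishes; all we can do is bound the wait and
    terminate (roll back) on timeout.
    """
    q = mp.Queue()
    p = mp.Process(target=candidate, args=(q,))
    p.start()
    p.join(budget)
    if p.is_alive():     # budget exceeded: kill the copy, reject the change
        p.terminate()
        p.join()
        return None
    try:
        return q.get(timeout=1.0)
    except queue.Empty:
        return None

if __name__ == "__main__":
    print(evaluate(useful_mod))     # 42
    print(evaluate(diverging_mod))  # None
```

The arbitrary budget is exactly the limitation raised below: a non-halting change and a merely slow change are indistinguishable within the window.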

>> No.3283135
File: 108 KB, 719x241, 1306867898303.jpg [View same] [iqdb] [saucenao] [google]
3283135

>>3283084
>why can't an AI come about from the random creativity of a human brain?

It can, but it would just take a lot longer, wouldn't it? We should harness the same mechanism that resulted in us...we could make a virtual model of abiogenesis and see what it brings forth of its own accord.

>> No.3283136

>>3283107
except the AI is supposed to have human-like consciousness, and what you're proposing is no more than human experimentation, is it not?

I'm sure you can get away from the pesky moral problem, but if you're trying to make an AI that cares for humans, as opposed to just an AI for... whatever, you're not gonna get it to make a copy of itself to experiment on.

Also, your experiment is limited by execution time and does not remove the halting problem; it only reduces it to the 'waiting' period, which is arbitrary, and the fact remains.

Your exponential revolution is limited in this regard, is it not?

>> No.3283150

>>3283136

>except the AI is suppose to have human like consciousness, and what you're proposing is no more than human experimentation, is it not?

Nope.

>you're not gonna get an AI to make a copy of itself to experiment on.

Who knows how desperate it is to improve itself?

>Also, your experiment is limited to time of execution, and does not remove the halting problem, it only reduces based on the 'waiting' period, which is arbitrary and the fact remains.

Yes.

>Your exponential revolution is limited in this regard, is it not?

That's a Singularity, which I don't believe in. But yes, in any case it is.

>> No.3283173

1. Write emulator of universe.
2. Start it at the state when mammals had just evolved.
3. Let evolution do its work.
4. Copypasta discovered algorithms for intelligent life into real robots
5. ????
6 U MAD SCIENTISTS

>> No.3283203

The Singularity is a science-fiction fantasy believed to be inevitable and self-evident by nerds, fanatics, and Kurzweil cultists.

>> No.3283290

Pure rationalism is bunk. You need empiricism (experiments and the data they bring) to advance science and technology; you can't just magically derive everything because you have the processing power. Setting up and conducting experiments comes with numerous time- and resource-intensive technical challenges.

Scientific and technological progress may increase however dramatically you'd like, and can even be exponential for a while, but it will plateau no matter what - or stop outright, once whatever does the discovering hits insurmountable physical limits.
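That "exponential for a while, then a plateau" shape is the logistic curve; a toy numerical sketch (all numbers illustrative):

```python
# Logistic growth: dP/dt = r * P * (1 - P/K), where K is a hard physical limit.
# Early on (P << K) growth looks exponential; near K it flattens out.
def step(P, r=0.1, K=100.0, dt=1.0):
    return P + r * P * (1 - P / K) * dt

P = 1.0
trajectory = [P]
for _ in range(200):
    P = step(P)
    trajectory.append(P)

# Early steps grow at ~10% per step (near-exponential);
# by the end the trajectory sits just under the limit K = 100.
```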

>> No.3283313

>>3283290
Heh. Rationally, you could, if we somehow unify the forces of the universe. We're not there, so it's a lot of mental masturbation, but what's the point of experimentation if you already know the fundamental forces of the universe?

>> No.3283342

>>3283313
Even if we could unify the whole of physics with currently available data and had unlimited processing power, both of which are highly doubtful, we still would likely need to gather data to make useful predictions, and the data will be limited in accuracy.

>> No.3283370

>>3283342
No doubt. I believe in AI, but not in magical AI where somehow an algorithm will function in some godlike and incredibly efficient manner.

>> No.3283387
File: 2 KB, 146x186, 1296836124591.png [View same] [iqdb] [saucenao] [google]
3283387

>>3283370

Well we can all agree on that.

>> No.3283753

Self-improving AI already exists. Why, with a powerful enough computer, can't we have a superhuman intelligence in a machine?

>> No.3283763

Because the technological singularity is the point where humanity itself is changed dramatically, not just society.

>> No.3283777
File: 21 KB, 540x603, wangularity.gif [View same] [iqdb] [saucenao] [google]
3283777

>> No.3283780

>>3283777
Familiar handwriting. Is that SMBC?

>> No.3283784

>>3283777

The wangularity already exists... in my pants.

>> No.3283787

>>3283780
Quite likely

>> No.3285962

>>3283013
>evolution
There has actually been a paper written about this; sorry, I don't have a link on hand. The computational resources needed to just randomly evolve an intelligence were thought to be out of reach.
Note: evolutionary algorithms are already used, but for specific issues rather than for creating intelligence generally http://en.wikipedia.org/wiki/Evolutionary_algorithm
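A minimal example of that kind of narrow evolutionary algorithm (the classic OneMax toy problem - this evolves a bitstring toward a fixed target, not intelligence; all parameters are illustrative):

```python
import random

TARGET_LEN = 20                        # the "specific issue": all-ones bitstring

def fitness(genome):
    return sum(genome)                 # number of correct (1) bits

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break                          # perfect solution found
    survivors = pop[:10]               # selection: keep the fittest (elitism)
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
```

Selection plus mutation reliably solves this in a few dozen generations - which is exactly the point above: the mechanism works, but scaling it from a 20-bit target to an open-ended intelligence is where the computational cost blows up.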