
/sci/ - Science & Math



File: 706 KB, 1680x1050, 1281495037120.jpg
No.2326906

Ask me any questions about the technological singularity, and I will answer them.

Also singularity thread

>> No.2326910

Stop, OP. Your entire premise is wrong.

How about this revision?
>Ask me any questions about the ideology and projections associated with the technological singularity, and I will answer them.

Don't treat this as a topic about which you can report facts. It is speculation on the future course of events.

>> No.2326917

in b4 OP's trying to explain how he can predict the future

>> No.2326930

>>2326910
That is actually what I meant. I just want to clarify things for people that don't understand the ideas behind the singularity.

And I enjoy discussing and speculating on it.

>>2326917
Not what I meant at all >_>

>> No.2326948

>>2326930
First off, to see if faggotry can be avoided, tell me:

Do you consider it possible that Strong AI will not be developed within your lifetime?

>> No.2326958
File: 50 KB, 734x703, lucifer (33).jpg

What is the technological singularity?

>> No.2326965

>>2326958
The idea that the apparently exponential growth of technology will continue to a point where the pace of growth approaches infinity. After that point, just about anything is possible. A race of machines wipe us out? We become gods? Who knows.

To avoid the problem of human minds being limited, the most common mechanic invoked is that of Seed AI: An AI that will design a more intelligent AI, which will design a more intelligent AI, and then BAM: singularity.
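The seed AI story can be caricatured in a few lines of Python. This is a toy sketch resting on one enormous assumption - that each generation multiplies capability by a fixed factor - and the whole argument turns on whether that factor stays above 1:

```python
# Toy caricature of recursive self-improvement: each generation designs
# a successor whose capability is its own, scaled by a fixed factor.
# The factor is a pure assumption; with factor > 1 capability compounds
# without bound, with factor <= 1 the process never takes off.

def seed_ai_capability(initial: float, factor: float, generations: int) -> float:
    """Capability after `generations` rounds of self-redesign."""
    capability = initial
    for _ in range(generations):
        capability *= factor  # each successor is `factor` times as capable
    return capability

print(seed_ai_capability(1.0, 1.1, 10))  # compounding growth
print(seed_ai_capability(1.0, 0.9, 10))  # diminishing returns: no takeoff
```

Nothing in the sketch says where the improvement factor comes from - which is exactly the part singularity arguments tend to assume.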

>> No.2326968

First off I don't come to this board often (at all) so I'm not sure if this is a common topic that pisses people off or not, I'm just here for discussion.

>>2326948
I'm sure it's possible, and most likely, that strong AI will NOT be developed in my lifetime. But I look forward to the possibility, and to the fact that my college studies and career will be working towards creating strong AI.

>> No.2326973

>>2326958
When are we gonna upload our minds into computers?

>> No.2326977
File: 126 KB, 672x352, shot000004.jpg

>> No.2326979

>>2326965
(cont)
I have two main problems with this idea.

1. All attempts to create an AI algorithmically have failed, and we are a long way from any other method (such as faithful neuron-level brain simulation). Even this would be less efficient than a real human brain, but it would prove useful for research.

2. Seed AI may be impossible, at least as characterized. How can you make something that is more intelligent than you are? Stronger, faster, with certain abilities, sure, but actually smarter? Not just faster-thinking, but better-thinking? I think the best a seed AI could do is also the best we can do: blind variation on a working model, applying artificial selection and educated guessing along the way.

We don't even have the "working model" yet.
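For what "blind variation on a working model" plus artificial selection looks like in the small, here's a toy Python sketch: random one-letter mutations, kept only when a fitness test scores them no worse. The target string and fitness function are arbitrary illustrations; the point is that the selector only needs to *evaluate* candidates, not be smarter than them.

```python
import random

# Toy "blind variation + artificial selection": mutate one random
# letter, keep the mutant only if it scores no worse. The target and
# fitness function are arbitrary illustrations; the selector never
# needs to be smarter than the thing it is selecting.

TARGET = "intelligence"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(seed: int = 0, max_steps: int = 50000) -> str:
    rng = random.Random(seed)
    current = "".join(rng.choice(LETTERS) for _ in TARGET)
    for _ in range(max_steps):
        if current == TARGET:
            break
        i = rng.randrange(len(TARGET))            # blind mutation site
        mutant = current[:i] + rng.choice(LETTERS) + current[i + 1:]
        if fitness(mutant) >= fitness(current):   # artificial selection
            current = mutant
    return current

print(evolve())  # converges on the target by selection alone
```

The catch, as the post says, is the fitness function: for a toy string it's trivial, but for "a smarter mind" we don't even have the working model to score against.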

>> No.2326986

>>2326968
>I'm sure its possible, and most likely, that strong AI will NOT be developed in my lifetime. But I look forward to the possibility and the fact that my college and career will be working towards creating strong AI.
I'm very glad to hear this. It means you're not one of the zealot acolytes of Ray Kurzweil (though you may like some of his ideas). They treat these ideas as dogma, with little understanding of what strong AI would require. Truth is that no one knows how to make it work.

>> No.2326989

>>2326979
What is the field of expertise that focuses on this, Computer Science?

>> No.2326992

What is the most advanced AI up to this point?

>> No.2326994
File: 51 KB, 614x734, horse-head-21220.jpg

>>2326906
>technological singularity

what is this shit?

>> No.2327003

I think strong AI research is the 21st century alchemy. Not fruitless, or even impossible, but fundamentally misguided.

The alchemists wanted to "transmute" lead (or other base metals) into gold. After centuries of scientific inquiry, it turns out to be possible, through neutron bombardment. It's just not worth it, and would be a dumb idea anyway (if you make gold plentiful, it loses value). HOWEVER, we got Chemistry, and a good portion of the foundation of modern science, out of the effort.

Comparing this to AI research, it may turn out to be possible, but horribly inefficient and fairly pointless to make human-level AI artificially.

So my question is: Where is the "chemistry" in strong AI research? Genetic engineering of the human brain? Self-correcting control systems for complex environments (a facet of intelligence, but not the whole of it)? Data mining? Helper AI?

>> No.2327007

>>2326992
Depends. What kind of ability are you going for? Intelligently handling a complex problem, or carrying on what looks like an intelligent conversation, or something else?

>> No.2327021

>>2326989
CS is close. AI is something of a discipline unto itself, though it's usually considered a sub-discipline of CS.
http://en.wikipedia.org/wiki/Artificial_intelligence

I'm not sure that intelligence is algorithmic at all, though - with the possible exception of an algorithm that simulated the nonlinear, analog dynamics of a physical brain. At that point, the advantages of algorithmic formulation would be gone.

>> No.2327024

>>2326973
Shortly after your brain has irretrievably decomposed.

>> No.2327042

>>2327007
As for simply carrying on conversation, you've got "contextual learning" AI which, so far, are really only capable of repeating stored questions and answers to compile responses.

>>2327003
I think biomedical engineering would get a HUGE boost out of it. Technologically enhanced longevity through highly-functional prostheses and sensor modules (like artificial eyes, or even new/enhanced senses) would be attainable, for one. Depending on how researchers approach the creation of AI, different realms of science would be affected. For example, if researchers realize that (as has been said in this thread already) our technology can't exactly hold a "brain" because it's just too simple at the moment, they are going to focus on the creation of quantum or atomic computers, or even carbon-based computers (not terribly unlike the human brain).

So physics and biology would both definitely get a kickstart.

>> No.2327054

>>2326973
That's not necessarily what it's about. While some singularitarians see "transcendence" as the goal of the Singularity, this is simply the natural idea of evolution using a new tool; a paradigm shift, if you will.

>> No.2327063

>>2327007
Same type of intelligence that humans possess

>> No.2327072

>>2327063
Too hard. Nothing comes even close, right now, and the best systems in different applications are hard to compare on a shared metric.

For autonomous navigation of the natural world? Retarded cockroach level. For carrying on a conversation? Well, I don't know if it says more about the AI or about most human conversations, but chatbots are good enough to be amusing. They never pass tests that require thought, or often even memory.

I think the Turing test is a flawed standard.

>> No.2327074

>>2326986
Exactly. Some singularitarians are nothing more than zealots of another bullshit religion.

>> No.2327083

>>2327072
Chatbots simply copy answers to pre-designed questions and add them to a database they draw from, to create "context" to understand speech.

Ever wonder why they all tell you that you're a robot? It's because people always try to tell them that.
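A minimal Python sketch of the kind of chatbot described here: a table of taught prompt/reply pairs plus a crude word-overlap similarity. The training pairs are made up; note there is no conversation state at all, which is why such bots fail any "remember this for later" test.

```python
# Minimal sketch of a "contextual" chatbot: it has no understanding,
# just a table of prompts it has been taught and the replies humans
# gave. It answers with the stored reply for the most similar prompt.
# Training pairs below are made-up illustrations.

def similarity(a: str, b: str) -> float:
    """Word-overlap (Jaccard) score between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class ChatBot:
    def __init__(self):
        self.memory = []  # (prompt, reply) pairs learned from users

    def teach(self, prompt: str, reply: str) -> None:
        self.memory.append((prompt, reply))

    def respond(self, prompt: str) -> str:
        if not self.memory:
            return "..."
        _, best_reply = max(self.memory,
                            key=lambda pair: similarity(pair[0], prompt))
        return best_reply

bot = ChatBot()
bot.teach("hello", "Hi there!")
bot.teach("are you a robot", "No, YOU are a robot.")  # the usual taunt, parroted back
print(bot.respond("hello bot"))        # -> Hi there!
print(bot.respond("you are a robot"))  # -> No, YOU are a robot.
```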

>> No.2327087

>>2327072
However, I would add that expert systems are fairly impressive. They're just large classification schemes. They're good at playing 20 questions, using data provided and compiled with the help of humans.
http://us.akinator.com/
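An expert system really is just a classification scheme, and a hand-built decision tree playing a miniature 20 questions makes that concrete. The animals and questions below are toy examples; all the "expertise" lives in the human-compiled tree, not in the code that walks it.

```python
# An expert system in miniature: a hand-compiled decision tree plays a
# tiny game of 20 questions. All the "expertise" is in the tree humans
# built; the program just walks it. Animals/questions are toy examples.

# A node is either a final guess (str) or (question, yes_branch, no_branch).
TREE = ("Does it live in water?",
        ("Does it have scales?", "fish", "whale"),
        ("Does it have wings?", "bird", "dog"))

def classify(node, answers: dict) -> str:
    if isinstance(node, str):   # leaf: the system's guess
        return node
    question, yes_branch, no_branch = node
    return classify(yes_branch if answers[question] else no_branch, answers)

print(classify(TREE, {"Does it live in water?": False,
                      "Does it have wings?": True}))   # -> bird
```

Akinator-style systems add statistics and a huge crowd-sourced database, but the skeleton is the same: narrow down a classification one question at a time.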

>> No.2327095

>>2327072
Also, this article on autonomous trading algorithms on the stock market is fascinating. Stealth traders, predatory traders, flash traders, prop traders - all using different algorithmic strategies to identify opportunities for profit. It's an artificial ecosystem, and they already constitute the majority of trade volume at Wall Street.
http://arstechnica.com/tech-policy/news/2011/01/algorithms-take-control-of-wall-street.ars

>> No.2327102

>>2327087
A good example of contextual AI.

A bad example would be Cleverbot.com
They basically built its shell and then, in a decision made by someone with way too much faith in humanity, offered it up to be "taught" by the internet.

I'd love to get ahold of that engine and talk to it by myself long enough for it to develop correctly. I think it would be an interesting system.

>> No.2327104

>>2327083
I've very rarely found a chatbot that could pass the "remember a word for me, then recall it later" test. They don't have any train of thought, or short-term memory specific to a conversation. There's no "understanding" that there's even a continued thing called a "conversation".

I'm not sure how to resolve the Chinese Room problem though. I've had different thoughts at different times. I don't believe it constitutes intelligence, as it lacks adaptability.
http://en.wikipedia.org/wiki/Chinese_room

>> No.2327106

Do you think technological singularity is the ultimate goal of evolution?

>> No.2327115

>>2327106
"Goal" implies long-term conscious intention, which is a trait exclusive to sapient beings. "Evolution" has no goals. It has probable outcomes, certainly, but there is no "goal".

Humanity has goals, inasmuch as there is a consensus. And I've got high hopes for the future of humanity - and our goals.

>> No.2327125
File: 159 KB, 547x511, 1292675096416.png

>>2326906
Which version of the singularity OP?
1: The creation of true AI
2:the point at which technology can no longer logically improve.
3:the point at which robots become advanced enough to improve upon themselves without human aid
4:the point at which technology allows for the lines between gender,race,appearance,nationality to be blurred/unimportant.
5:the point at which the line between what is human and what is machine is indistinguishable.
6:the point at which humanity is overtaken by machines and rendered obsolete.
7:the point at which humanity is made into gods through technology.
I've heard all of these referred to as "the singularity"
which flavor do you represent OP?

>> No.2327128

>>2327106
Not a goal, just a tool.

Look at evolution as a "march towards complexity" starting with bacteria, simple organisms that they are, and further towards multi-cellular organisms. Rather than continuing to advance our brainpower or physical prowess, humanity "chose" (randomly) to use and study tools. Throughout history, we created more and more powerful tools. Evolution used blind selection as its tool before, until it gained a "better" tool with which to impose change.

Other civilizations in the universe could just as easily have a mental singularity. Many reports of alien abductees claim that the aliens did not speak with mouths, but broadcasted thoughts and ideas "into their minds." Could they have, through selective breeding or meditation or whatever, advanced the evolution of their brains instead of their tools?

By the way, not implying that Evolution is, in any way, intelligent. Just a metaphor.

>> No.2327144

>>2327125
Not OP, but here are my reactions:

>1: The creation of true AI
A momentous day. Not much changes, though, until AI can become significantly more intelligent than current humans. This might take a long time.
>2:the point at which technology can no longer logically improve.
Then we become gods.
>3:the point at which robots become advanced enough to improve upon themselves without human aid
This could easily happen before 1. It's just evolution in an artificial system. It can even be artificial selection and directed mutation.
>4:the point at which technology allows for the lines between gender,race,appearance,nationality to be blurred/unimportant.
Will probably happen in a few centuries, or certainly within the next millennium - with or without amazing technology.
>5:the point at which the line between what is human and what is machine is indistinguishable.
I think this transition characterizes many of the ideas of the singularity very well. Not sure that exponential tech growth will continue. Growth, sure.
>6:the point at which humanity is overtaken by machines and rendered obsolete.
I think this also qualifies, more strongly than 5.
>7:the point at which humanity is made into gods through technology.
See 2.

>> No.2327150

>>2327125
All of them are facets of the singularity seen through different eyes. I suppose it's simply one facet of trans-humanism.

Humanity is a word that is usually thrown around as a synonym for sapience, which it is not. Sapient beings all deserve care and respect.

Also, carbon-based life is not the only "true" life. Artificial intelligence, if it IS true AI, can love and feel to the same degree as a human, and as such is sapient. Some would even say it is "human."

>> No.2327155

>>2327128
I don't know much about the subject, but this wikipedia page (http://en.wikipedia.org/wiki/Evolution_of_complexity) reads:

>Although there has been an increase in the maximum level of complexity over the history of life, there has always been a large majority of small and simple organisms and the most common level of complexity (the mode) has remained constant.

Is it only the maximum level that counts?

>> No.2327159

>>2327128
>Many reports of alien abductees claim that the aliens did not speak with mouths, but broadcasted thoughts and ideas "into their minds." Could they have, through selective breeding or meditation or whatever, advanced the evolution of their brains instead of their tools?
This is far better explained by typical dream experiences. You're not telling me you believe them, are you? Occam's Razor applies here.

>> No.2327163

>>2327150
Well I suppose then I am a mix of 4 and 5?
I still see it as this, today. What does it matter what someone is made of? Black/white, male/female, carbon/copper. If one can think and feel and truly appreciate the "gift" of life then it is alive.

Hence my problem with Creationism.

>> No.2327166

>>2327155
>Is it only the maximum level that counts?
I'd say it's the most important. That's why human desires are more important (to us) than what rats want.

The problem arises when something smarter, of a higher organization than us, comes about. I hope I would have the courage to defer to what they want. Because, hey, they're smarter than me. By a lot. I suppose I'm assuming that intelligence correlates with cooperation and benevolence - which seems to hold up, from human history. So I'm presupposing a lot about the nature of intelligence and ethics - namely, that ethics are not drastically different for more intelligent beings.

Just a hypothetical scenario.

>> No.2327171

>>2327155
The minimum isn't becoming any simpler.
I mean, yes, the mode can stay the same, but maximum complexity continues to rise.

>>2327159
Oh I simply used it as an example but you're right. I don't discount the idea that there are other beings in the universe, I was just trying to explain a mental singularity with commonly understood examples.

>> No.2327178

>>2327166
I'd think ethics WOULD be different, depending on exactly how different these "higher" intelligences are. For example, if such a being cannot feel pain or sorrow, it will not have feelings like fear or love, at least not in the same way humans do, and would not understand the "human condition" as we know it.

>> No.2327181

>>2327171
We should note that evolution does NOT always produce more complexity or order, and that "devolution" is a misnomer. Just so we're all on the same page.
http://en.wikipedia.org/wiki/Devolution_%28biological%29
http://www.scientificamerican.com/article.cfm?id=is-the-human-race-evolvin

>> No.2327191

>>2326906
Sounds like bullshit. Do you ever stop watching anime?

>> No.2327197

>>2327178
Fear and sorrow are pretty basic to intelligence, IMO, and not just biologically. Better words might be "apprehension" or "anxiety" instead of fear, and "disappointment" instead of sorrow.

For any sapient being, it is necessary to form goals and monitor whether current actions are producing results that further those goals. Consciousness of a goal with high value but low probability of success is "fear" (anxiety, apprehension). Consciousness of a goal with high value that has failed to be met is "sorrow" (disappointment). Similarly, to "love" something is simply to value its inclusion in your environment, or to value it highly as a goal.

The subjective experience may be very different, but some analog to these emotions would have to be present in any sapient being that pursues goals.
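This goal-monitoring account can be written down almost directly as code. The thresholds below are arbitrary illustrations, not claims about real minds; the point is only that "fear" and "sorrow" fall out of a goal's value, its success probability, and whether it has failed.

```python
# The goal-monitoring account of emotion, written down directly: label
# an agent's state toward a goal from its value, its estimated success
# probability, and whether it has already failed. The 0.5 and 0.3
# thresholds are arbitrary illustrations.

def appraise(value: float, p_success: float, failed: bool) -> str:
    if value < 0.5:
        return "indifference"  # low-value goals barely register
    if failed:
        return "sorrow"        # high-value goal that has failed
    if p_success < 0.3:
        return "fear"          # high value, low odds of success
    return "hope"

print(appraise(value=0.9, p_success=0.1, failed=False))  # -> fear
print(appraise(value=0.9, p_success=0.0, failed=True))   # -> sorrow
```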

>> No.2327223

>>2327181
I figured that was assumed. Sorry >_>

>>2327197
Just simple explanations. Thank you for clarifying them, I agree totally.

>>2327191
I'm not a faggot, I don't watch anime.


I am going to eat lunch now. If this thread is still here when I get back then I'll jump back in with a refreshed mind and actually contribute something interesting, otherwise, enjoy your discussion and good day.

>> No.2327231

>>2327223
>I'm going to go watch anime now

Fixed that for you, OP

>> No.2327241

>>2327231
No one really likes you

>> No.2327249

>>2327197
Things like this give me hope that we'll be able to relate to aliens in some way. And AI - which might as well be alien. Although there is some very good thought in fiction about encounters with intelligent life where this is a fairly insurmountable barrier to communication.
http://en.wikipedia.org/wiki/Solaris_%28novel%29

>> No.2327251
File: 8 KB, 267x189, district 9 christopher..jpg

I feel like a hypocrite saying that this subject is all speculative and so far in the future that there is no point in talking about it.
...
Primarily because my friends call me a retard for thinking about interstellar travel and the prospect of humans colonizing other worlds.

>> No.2327256

>>2327241
Oh dear, you forgot your name AND your tripcode!

>> No.2327268

>>2327256
>>2327231
Please stop. I'm begging you. Does every /sci/ thread have to be shitposted?

>> No.2327276

>>2327256
umad

>>2327268
you are correct sir

>> No.2327312

>>2326906

It's the second time I've heard about the singularity and the first time I actually heard what was meant by it.

As I see it, a strong AI plays a very important role in the singularity.

So my question is:

If P =/= NP,
won't the mathematical and physical limitations of AI computation prevent a singularity from happening?

I mean, if there is a physical limit - or rather a threshold - to how fast computers can be, and there is no mathematical way to improve computations beyond a certain threshold... won't a strong AI be bound to its own (certainly higher than human, but still existent) limitations?
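The worry can be made concrete: brute-force search on an NP-complete problem like subset-sum checks 2**n subsets, so doubling machine speed buys exactly one more element of problem size. A minimal Python illustration:

```python
from itertools import combinations

# Brute-force subset-sum, an NP-complete problem: the search space is
# 2**n subsets, so doubling hardware speed buys only one more element
# of problem size. A faster AI moves the wall; it doesn't remove it.

def subset_sum(nums, target) -> bool:
    """Exhaustive search: does any subset of nums sum to target?"""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False (no subset fits)
```

Clever algorithms (dynamic programming, SAT solvers) beat brute force on many instances, but unless P = NP the worst case stays exponential for any computer, silicon or otherwise.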

>> No.2327322

>>2327249
>relate with aliens
>Solaris
I've only seen the movie, but I wouldn't want to relate with that planet. More like being experimented on than relating with it.

>> No.2327333

>>2327312
>won't the mathematical and physical limitations of AI computation prevent a singularity from happening?
I think so. It takes some blind assumptions to say that such restrictions (NP-complete problems) can be sidestepped somehow.

Still, this would be a fundamental limit on what is possible. Doesn't mean that progress couldn't continue - perhaps even exponentially for a while. It's just that these problems won't lay down and die. They'll still be expensive to solve, compared to other problems.

>> No.2327343

>>2327322
Interactions with the intent to learn. It's hard to draw a line - I suppose, whether you consider the other party to be benevolent, or at least not totally indifferent to your desires. That said, I would be fairly apprehensive about studying that planet too.

>> No.2327421

>>2327181
>We should note that evolution does NOT always produce more complexity or order
For that statement to be true, you would need to show an example of evolution that produced less complexity.

Actually, extrapolating from the current theories of evolution, we should expect it to try to minimize complexity, which it obviously doesn't do.

>> No.2327426

>>2327421
>For that statement to be true, you would need to show an example of evolution which provided less complexity.
Loss of eyes in cave-dwelling species.

>> No.2327430

I am 20 years old.

Can you guesstimate the probability that I will see it in my lifetime?

>> No.2327444

>>2327430
Assuming a generous life span... huh.
I'm no expert, but I give you 10%.