
/sci/ - Science & Math



File: 155 KB, 1210x899, PPTSuperComputersPRINT[1].jpg
No.2358580

What does /sci/ think about The Singularity? For those of you who have not been enlightened yet, The Singularity refers to the moment in the future when computer technology becomes more capable than the human brain itself. At that point, computers will begin improving themselves at such a fast rate that we, with our inferior human speed, won't be able to stop them. If a computer's logic is advanced enough, it may even think "Hey, a robot world would be a lot better than a human world" and wipe us out.

Of course, there are precautions to be taken (such as Asimov's three laws of robotics), but The Singularity is an inevitable moment in humankind's history. This moment will change our lifestyles forever. From that point on, and for the rest of the universe's existence, humans will no longer be the most advanced, forward-thinking beings on the planet; we will have created things that are far more capable than we will ever be. The future of technology after The Singularity will be unbelievable. Rather than us coming up with technologies ourselves, the computers will do it for us. They will be smarter than us and capable of ANYTHING we can only dream of doing.

I strongly believe that it will happen within our lifetimes, too. What's your take on this, /sci/?

>> No.2358648

Sounds fun.

>> No.2358667

OP, you forgot to mention the bigger point of the singularity. The singularity is not just the point where we design a functional A.I. comparable to ourselves. It is where we can do that, and then begin to make ourselves into that A.I.
We all become cyborgs with brain backups.

>> No.2358684

>>2358667

I thought singularity was the point at which a computer would be able to design a computer better than itself (which would in turn be able to design a computer better than itself, etc...)

>> No.2358698

>>2358667
>>2358684
All the things you mentioned are definite implications. But to be honest, I don't see anything regarding "neural uploading and downloading" being possible within our lifetime, especially not before The Singularity.

>> No.2358712

>>2358698

It definitely seems possible to me. We're already starting to interface technology with animal brains, which is the first step.

>> No.2358723

No no, you're all wrong!

Technological development appears to be following an exponential plot towards infinity. Singularity is the point at which infinity is reached on that plot.

It's a point at which we can all collectively go "Well what the fuck happens then?"

Someone earlier related a nice analogy: if you go back 150,000 years into the past and ask someone "What will fire be used for, 150,000 years from now?", they couldn't give you a full answer. "Cooking," they might say - but they would never guess that you could use fire to melt special kinds of rock to create weapons, armor, or building material.

It's kinda like that, except instead of a span of 150,000 years, it's more like 150 years.

To give you an idea of our technological progress today: in 50-odd years, we went from not being able to fly at all to manned spaceflight and landing on the moon.

One of the big inventions that may be the turning point for us is the development of recursively self-improving artificial intelligence. We make something smarter than us, it's smart enough to improve itself, and once it does so, it's now even smarter and improves itself again, and again, and again...
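That "improve itself again, and again" loop can be sketched as a toy model. This is purely illustrative - the starting ability, improvement rate, and step count are arbitrary assumptions, not predictions:

```python
# Toy model of recursive self-improvement: each generation the AI's
# capacity to improve itself scales with its current ability, so the
# trajectory is exponential. All numbers are arbitrary placeholders.

def self_improve(ability, rate, steps):
    """Each step the AI redesigns itself, gaining rate * ability."""
    history = [ability]
    for _ in range(steps):
        ability += rate * ability   # improvement proportional to ability
        history.append(ability)
    return history

trajectory = self_improve(ability=1.0, rate=0.5, steps=10)
# after n steps the ability is (1 + rate) ** n -- pure exponential growth
```

With improvement proportional to current ability you get an exponential; with super-linear returns the same loop blows up even faster, which is the intuition behind the "can't stop it" claim.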

>> No.2358737

>>2358684
>the point at which a computer would be able to design a computer better than itself

This is the correct answer.

As for its plausibility, it's highly speculative. Although Ray Kurzweil has admittedly been quite accurate with the vast majority of his predictions, most of those were rather obvious anyway, and the more important ones (like the stability of the markets) failed to materialize. The idea of The Singularity is far more speculative than predicting a stable stock market, so it's too early to tell whether the theory is right. He also predicted driverless cars becoming common. Despite the technology being here, we're decades away from widespread adoption because people, and the politicians that guide them, are blind to the immense safety, convenience and economic benefits they offer.

How will we ever get to human level AI with such a sloppy administrative infrastructure?

Having said this, Henry Markram and others are actually building towards these predictions, but it's hard to bridge the mental gap between simulating a few neurons in a computer and in 40 years having computers that are vastly more intelligent than the human race.

When I look at how the world has changed over the last 10 years, there really don't seem to be any tangible results. All of the old diseases are still as difficult to treat and everything appears to be in perpetual research.

>> No.2358744

A power cord dude - you just unplug the infernal thing if it gets too smart.

I'm thinking of a Star Trek episode where some people were trapped in the holodeck and there was no way to shut it down.

Seriously, how stupid are people going to be in the future? Put a big manual disconnect switch on the power supply to the holodeck - get an EE to spec one out.

>> No.2358763

I happen to be friends with a kid who is 17 and getting his PhD at MIT in cognitive neuroscience, and I asked him about the timeline for uploading consciousness. These graphs completely ignore the inefficiencies of emulating something. For example, it takes way more processing power to EMULATE a PS3 game on a computer than to play it on the PS3. He says brain-machine interfaces will be perfected in about 90 years, and that it will take until 2300 before we can program computers as powerful as humans to do our bidding. While I am skeptical of that number, my point stands: it's gonna take WAYYYYYY longer to get to that point than you think. The people who make these graphs have agendas.

>> No.2358794

>>2358737
Diseases will be a much smaller problem in coming decades once biotech gets big. There is already a family of antibiotics that can theoretically be made for any disease; they have been tested on mice with incredible rates of efficacy. It's basically an organic chemical that bonds itself to bacteria and holds up a big biochemical sign to white blood cells that says "eat me".

>> No.2358805

Guys, have you ever read the Culture series? Granted, it's soft sci-fi, but there is an interesting idea in it. The computers there are mind-blowingly advanced, but they didn't wipe out the humans, because if they did there would be no point to their existence.

>> No.2358818
File: 159 KB, 912x624, rapture-for-nerds01.jpg

>> No.2358820
File: 6 KB, 89x126, AnotherDayAnotherDollar.jpg

>>2358744

>I'm thinking of a Star Trek episode where some people were rapped in the holodeck and there was no way to shut it down.

Oh, THAT episode...

>> No.2358848

>>2358744

I'm pretty sure it was plausible. Maybe the holodeck made a holo generator for itself or something like that. No, really, it could work!

>> No.2358854

>>2358763
>it will take until 2300 before we can program computers as powerful as humans

Tell your friend that he can make those predictions once he actually gets some real academic credentials.

"It is not impossible to build a human brain and we can do it in 10 years," Henry Markram PhD, director of the Blue Brain Project said in 2009 at the TED conference in Oxford. In a BBC World Service interview he said: "If we build it correctly it should speak and have an intelligence and behave very much as a human does."

>> No.2358861

>>2358854

The problem with this is that it's only achievable if they get the money for it, along with some other mundane things.

>> No.2358869

>>2358861
>if they get the money for that

Errr they already have the money because they've been doing it for a year.

>> No.2358870

>>2358854
Building a human brain =/= running a programmable emulation of a brain on a computer. Also, he got his undergrad at UC Berkeley and has an IQ of 200. I am inclined to believe him somewhat more than those in the field who talk publicly about the matter, since they have an agenda. He is most likely overshooting it, but the point is, it'll take a long-ass time.

>> No.2358872

Hardware is outpacing software. I can't wait to see Watson in action and how it will close the gap.

>> No.2358881
File: 7 KB, 248x169, Christopher-Michael-Langan.jpg

>>2358870

IQ of 200

>MFW your friend is Chris Langan

>> No.2358889

>Of course, there are precautions to be taken (such as Asimov's three laws of robotics)

In nearly every story where the Three Laws were used, a key point was their circumvention. They don't have the best track record. One of the main problems is that they rely on interpretation by the entity on which they are binding. A Three Laws robot with human-level intelligence, which most of Asimov's robots were, was quite capable of finding gray areas and ways around the laws.

>> No.2358892

Simple, OP. Just remove "wants" from robots.
If their sole objective is to make themselves more efficient, they would have no reason to attack people.

>> No.2358898

>>2358870
>IQ 200
Actually, that would be a bad sign if it were true.
Very high IQ people are usually complete lunatics and under-achievers.

>> No.2358903

>>2358892
Maybe they find that the most efficient solution to problems is to remove humans from the equation?

>> No.2358917

>>2358889
Yeah, the whole point behind Asimov's stories was the ways that robots GOT AROUND the 3 laws of robotics.

The three laws make for good entertainment, but in reality it's not nearly so simple.

>> No.2358920
File: 257 KB, 1440x900, 1279474104671.jpg

>>2358903
>He doesn't know about the zeroth law

>> No.2358929

MY BODY IS READY

>> No.2358936

>>2358889
Why not teach the robots morals and ethics and let them decide for themselves, just as we humans do?

>> No.2358946

>>2358936
Whose version of right and wrong will we teach them? What about robots trained by Nazis?

>> No.2358949

>>2358936
Because that's logical and makes sense.

>> No.2358955
File: 112 KB, 912x624, rapture-for-nerds02.jpg

singularity-logic
> put bacteria culture in petri dish
> Hey guys, this thing doubles its population every 20 minutes!
> Soon the entire earth will be covered in bacteria three miles thick!
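For what it's worth, the arithmetic behind that jab checks out. A quick back-of-envelope (using rough assumed figures: a ~1e-12 g bacterium and an Earth mass of ~6e27 g) shows why naive exponential extrapolation breaks down almost immediately:

```python
import math

# One ~1e-12 g bacterium doubling every 20 minutes would, extrapolated
# naively, outweigh the Earth (~6e27 g) in under two days. Resource
# limits (the "petri dish") end the exponential long before that.
doublings = math.ceil(math.log2(6e27 / 1e-12))  # mass ratio, in doublings
hours = doublings * 20 / 60                     # 20 minutes per doubling
# doublings == 133, hours ~ 44.3
```

That is the whole objection in one line: an exponential curve tells you nothing about where the constraints kick in.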

>> No.2358956

>>2358936
All robots with human-level or greater intelligence?

>> No.2358964

>>2358936
I think morals and ethics might be extremely hard to teach robots, and it would be seen as pointless for what we'll be using them for. Plus, we consider ourselves ethical and whatnot, but we still kill anything that's "lower" than us on the food chain.

>> No.2358970

>>2358898

Yes, but usually because they are incredibly good at a few specific kinds of tasks that just happen to be the ones tested by IQ tests, while their intelligence is only really equivalent to that of the average grad student at Oxbridge or an Ivy League uni. A high IQ score will obviously give such people unrealistic expectations, unwarranted attention from others and a superiority complex. These are the classic ingredients for making a crank.

>> No.2358972

>>2358881
Uh... no. His name is Evan Ehrenberg. You can look him up and you'll see how legit he is.

>> No.2358969
File: 375 KB, 769x312, empbomb.png

Well, if a sufficiently advanced AI DOES start a war with us, at least we have the ultimate weapon to use against them.

Pic related.

>> No.2358981

>>2358955
Well, if the analogy you're trying to make is that the size of the petri dish corresponds to the total possible knowledge we can attain, then how exactly do you know the size of the petri dish?

Maybe we've barely scratched the surface, maybe the petri dish really is big enough to cover the earth 3 miles thick in bacteria.

>> No.2358982

No matter how intelligent a robot is, if it doesn't have a reward system it will never have a reason to do anything other than its preprogrammed purpose.

>> No.2358983

>>2358969
A fleshlight?

>> No.2358987

>>2358969
God help you if they have Faraday cages.

>> No.2358990
File: 74 KB, 640x804, 1294889695798.jpg

>>2358744


>>2358892
But if computers begin to evolve faster than we can improve them (because we allow them to do so), would self-propagation not become an intrinsic quality of the whole process? If a computer designs a new and better computer, processes are going to be implemented at some point for the sole purpose of copying its own genes more efficiently.

>> No.2358997

>>2358970
Dude, the kid is 17 and getting his PhD at MIT. He is definitely deserving of that 200 IQ.

>> No.2358999

>>2358981
Bro, that is the single most intelligent response I have ever had to that comment. It made me pause for a moment and consider my position. The analogy was not about how far intelligence can get, though. It's more a question of literal resources.

>> No.2359004

>>2358982
If it's intelligent enough though it might just create its own reward system to give itself more of a purpose

>> No.2359054

>>2359004
>>2358982
Durka durka. If the point is to emulate humans we would use our own reward system. You dopewhatImine?

>> No.2359069

eh, i'll worry about it when something smarter than retard-bot comes along.
i'm still not convinced genuine ai is possible

>> No.2359097

I am a massive skeptic when it comes to the promises of futurists, specifically with respect to computing power. But recently I saw that a go bot using distributed computing finally managed to be stronger than me. (I am a weak dan at KGS.) I think it used 86 computers.

That's really impressive for 19x19 go.

>> No.2359101

>>2358990
>But if computers begin to evolve faster than we can improve them (because we allow them to do so), would self-propagation not become an intrinsic quality to the whole process?

If we program computers to serve us, I see no reason why they should deviate from this. Nobody said designing better computers would mean building millions of computers, and I still can't see a reason why those computers would do anything other than serve humanity. When better AI comes along, older models would terminate themselves.
>If a computer designs a new and better computer, processes are going to be implemented at some point for the sole purpose of copying its own genes more efficiently.

I see no reason to believe this. AI will have no desire to copy its "genes", whatever those are. AI will have no desires at all (unless we give them a reward system - which would be stupid).

>>2359004
>If it's intelligent enough though it might just create its own reward system to give itself more of a purpose
 
Why would it decide to give itself the illusion of a purpose?

>> No.2359114

>>2359054

Is the point to emulate humans, or is it to serve humanity?

I suppose a few experimental robots with reward systems could be built, just to prove it can be done, but they'd have to be kept in a very controlled environment, without amazing intelligence, and there shouldn't be many of them either...

>> No.2359156

What's all this talk about a reward system? There doesn't need to be one; the robots are being built by us. There is no need for rewards. They are programmed for a purpose, and once that purpose is complete, they ask what to do next.

>>2359101
And of COURSE artificial intelligence will have desires - those are whatever we tell them to do. We're not making humans, people. We're making robots. There's a difference.

>> No.2359164

>>2359156
I can imagine God telling the Angels something vastly similar. Your arrogance is epic in proportion to that mythology.

>> No.2359168

>>2359164
what

>> No.2359226
File: 34 KB, 345x369, 1289787513873.png

>>2358580
>"Required for Human Brain Neural Simulation for Uploading (2025)"
>2025
>In my lifetime

>> No.2359240

>>2359156
>And of COURSE artificial intelligence will have desires, those are what we tell them to do.

They wouldn't be emotional desires surely.
Does a laptop have desires?

>> No.2359252

>>2359240
No, because a laptop isn't intelligent. However, it can be seen as having desires when we do something like say "search for x". Its desire, at that point, is to find your file. What is it getting in return? Nothing. This is an extremely watered-down example, but how would it be any different for robots?

>> No.2359265

>>2359252

Well, a sentient laptop could have a primal desire to comply with your orders: if you say "search for x", then it knows that doing it in a very efficient way will increase its chances of the owner not throwing it away, increasing the chances of survival for the sophont laptop.

What the hell brain... What the hell...

>> No.2359276

>>2359265
But that's not a desire of the computer, that's a desire of the people who coded it. Being in it for the money, they want to sell their product, so they make it work. It all comes back to humans.

As I said before, we're not making people, we're making robots.

>> No.2359309

>>2359276
Motivational machinery does not care where it comes from.
Humans' motivations are sex, food and sleep (and some related ones); they all have their specialized neurotransmitters and hormones. These are evolved motivations, but nothing says you can't program whatever motivations you want into certain types of AIs, either directly or indirectly.
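As an illustration of that point - a motivation is just an objective the system maximizes, and it can be anything at all. This is a made-up minimal sketch, not any real AI architecture:

```python
# A "programmed motivation" in miniature: the agent's only drive is to
# keep a value near a setpoint, thermostat-style. The drive function is
# something we chose; it owes nothing to evolved human motivations.

def choose_action(state, setpoint, actions):
    """Pick the action that best satisfies the programmed drive."""
    def drive(s):
        return -abs(s - setpoint)   # reward = closeness to the setpoint
    return max(actions, key=lambda a: drive(state + a))

# An agent that "wants" the value to be 10:
best = choose_action(state=7, setpoint=10, actions=[-1, 0, 1, 2])
# best == 2, since 7 + 2 lands closest to 10
```

Swap in a different `drive` and the same machinery "wants" something else entirely, which is the directly-programmed case; the indirect case would be learning the drive from feedback.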

>> No.2359333

>singularity/AI skeptics think that once we have an AI it'll be hooked up to our nukes or able to control anything outside of itself for that matter

>my face when

>> No.2359361
File: 12 KB, 470x457, 1288077802245.png

>>2359265

>it knows that if it's done in a very efficient way, it will increase its chances of the owner not throwing it away

>> No.2359362

>>2359333
I keep seeing this on this board: posting a "mfw" reply with no face. What does it mean? Does it mean you actually think the implied statement is true, or that it's so absurd there isn't even a face to describe it? Which is it?

>> No.2359374

more flops=/=a machine actually being intelligent

>> No.2359375

>eastern culture
robots will do my laundry and walk my dog!
fuck yeah!
>western culture
Robots will do my laundry and walk my dog!
Oh no!

>> No.2359382

It means you are an aspie.

>> No.2359419
File: 26 KB, 604x343, 1294515594270.jpg

>>2359375
>mfw a robot fucks my wife for me
>mfw I have to fuck a robot

>> No.2359424
File: 20 KB, 229x252, 1292288186663.jpg

So what's going to happen when we have 40% unemployment once robots start filling all of our manufacturing and menial jobs?

>> No.2359435

>>2359424
Isn't the answer obvious?
Communism.
The only reason we have capitalism is to make people work. If robots do the work, then we can just chill and do whatever we want.

>> No.2360161

>>2359226
Regarding the dates, I think the picture in the OP is a bit misleading. You don't need supercomputers or CPUs to simulate brains. They're actually a terrible choice, even if a popular one. Using a supercomputer will get you a brain which is thousands of times slower and consumes many orders of magnitude more power than what you would get by using the right hardware for the job. CPUs are sequential. Brains are parallel (same as the laws of physics). Electronics are parallel (again, laws of physics).

The right solution is to build specialized chips to run neural networks. This gives a relatively equivalent architecture to the one in the brain (except digital, and the models will probably have to drop some less important details found in biological neurons, but as long as we keep the important ones, it should be fine). However, not just any chip will do: for example, an ASIC baked with fixed synapses/neuron bodies would be stupid, as it limits a lot of things. Instead, platforms similar to FPGAs should be invented, aiming for the maximum density they can achieve. Lower speeds aren't a problem (neurons are many orders of magnitude slower than your average ASIC chip's speed). Minor defects can be tolerated, just like dead pixels in LCDs are; in fact brains can tolerate far more dead neurons than we can tolerate dead pixels.

For improving density and power requirements, newer technologies should be used where possible: the memristor is one such potential technology. It has some speed-related disadvantages, but luckily biological neurons are a lot slower, so that's a non-issue.
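To make the "simplified neuron model" idea concrete, here is roughly the kind of stripped-down unit such chips would implement in massive parallel: a leaky integrate-and-fire neuron. This is a standard textbook simplification; the parameters here are illustrative, not taken from any particular chip or project:

```python
# Leaky integrate-and-fire: the membrane potential leaks toward zero,
# integrates incoming current, and emits a spike (then resets) when it
# crosses a threshold. Most biological detail is dropped; the basic
# integrate-and-spike dynamics are kept.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)        # spike...
            v = 0.0                 # ...and reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_neuron([0.5, 0.5, 0.5, 0.0, 0.5, 0.6])
```

The unit is trivially cheap, which is the whole argument: bake millions of these side by side in silicon instead of time-slicing them through a sequential CPU.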

>> No.2360168

>>2360161
> continued
DARPA and HP are currently making neuromorphic hardware which will be a few orders of magnitude faster than biological neural nets (it will also eat a few orders of magnitude more power, but compared to how much power a supercomputer doing the same task would eat, this is incredibly little, and manageable even for normal people). Network a few hundred of these chips and you have something the size of the neocortex. Expected to be done in 5-10 years. This is a much better choice than wasting power on supercomputers.

Still, this will only give you some form of mammalian intelligence, and eventually human-level intelligence if successful. It should be incredibly useful to humanity. I wouldn't call it the singularity, but human-level (and beyond) AI is something that would clearly benefit mankind and society in general.

Actual brain scanning and uploading will likely be much harder to achieve. I have a strong belief that human-level AI will be reached within my lifetime; I have no such belief for brain uploading. Why? Because scanning technologies are still quite primitive, and the effort required to replicate all the biological features of the brain is much higher. You'd also need to engineer yourself a body with compatible senses, have something to trace the neural network in your body or reverse engineer it in some way (it's unique to each human), and so on. Biological systems are way too chaotic and complicated. AI may be achieved by distilling our intelligence (a model of the neocortex + thalamus + a few other circuits) and improving upon that, but actually replicating all the intricate details of a human is a much more difficult task. Maybe you could task your future "super AI" with spending its precious cycles reverse engineering complex and chaotic human physiology for you.

>> No.2360216

>>2359424

You don't need to worry about that.

Singularity goes hand in hand with the end of scarcity.


You see, joblessness is only meaningful when the things you need to survive are scarce. When everyone can survive without depriving anyone else of the means to survive, unemployment is irrelevant.


You say robots will put people out of work. I say robots will let people out of work.

In the future, careers will be more like hobbies, and everyone will get to do whatever one they most like the sound of. This will lead to a massive increase in creativity and production, as those who are the most dedicated and skilled in a particular area really invest themselves in it. Every Einstein or Hawking that currently spends their time managing a Barnes and Noble will be able to stop worrying about money and just do what they want.

>> No.2360223

>>2358580
This doesn't seem very likely when there really doesn't seem to be any possibility of true AI, ever. Computers will probably never be truly self-aware.

Right now we can't really even make a learning computer. Sure, we can tell it "if you see this, do that; if you see that, do this", etc., but this is just plain old regular programming. Sorry to say, we won't see any AIs for hundreds of years.

>> No.2360235

>>2360168
Is there even the slightest indication yet that we will be able to make any of these networks actually think? I'm pretty sure there isn't. I don't really see the point of all this research, except to use them as pretty fast regular computers.

>> No.2360263

>becomes
>inevitable
>believe
>happen

That's the kind of language that makes people passive.
There's nothing inevitable about it. If you would like to see this happen, then work on it.
If everyone just waits around for those smart scientists to build stuff, nothing will get done.

>> No.2360271

>>2359424

What do you think happens anytime you get a large enough mass of desperate people?

Eat the rich.

>> No.2360325

>>2358580
My take is that you are leaving out the advancement of neuroscience and how it plays into the singularity.

Computers will not become more forward thinking than us; instead, we will become the computers as we continue to integrate technology with ourselves.
There are currently projects seeking to map brains on to computers, their main limitation is the size of the computers they have to use.
As this becomes less of an issue, we will continue to learn more about how the brain works and I have no doubt that we will also find a way to download a consciousness eventually.

Even before that occurs, however, we will reach a point where we can install on-board computers in the human body, not only to correct malfunctions (as in pacemakers and smart prosthetics) but to improve natural functionality.
It may become possible to integrate electronic memory devices with the brain, and so forth.

This, I suggest, is the more natural progression of things.
It is hard to make computers that process better than a brain, so before we get to the point where computers that outperform humans are prevalent, we are more likely to take advantage of the vast processing power each of us already has and use computer electronics to enhance it.

If you think a super computer is awesome, try to imagine what 1000 networked brains could do.

>> No.2360349

>>2360325
>Computers will not become more forward thinking than us
Computer science major here
Yes they will

>> No.2360351

>>2358723
Please take note: any time someone declares that everyone is wrong without profuse evidence to support their claim, that person is usually the one who is mistaken, and simply too egotistical to see it.

A technological singularity is a hypothetical event occurring when technological progress becomes so rapid that it makes the future after the singularity qualitatively different and harder to predict. -http://en.wikipedia.org/wiki/Technological_singularity

>> No.2360369

For now, my friends, don't contemplate that. It will happen, and by then I will already have put a bullet in my head (and you should too).

>> No.2360383

>>2360349
Computer science major here.
No, they won't.

Stop being a bag of dicks and provide a counter-argument.

Stating something as fact with no supporting evidence only proves your incompetence.
Instead, try stating that you disagree and explaining why you think computers will outperform humans, even in light of the possibility that we will integrate computer electronics with our brains, thus accessing all currently available computing power in addition to our own innate computing power.

>> No.2360399

>>2360325
>we will become the computers as we continue to integrate technology with ourselves.
You just summed up my thoughts on this entire subject. Thanks bro.

>> No.2360413

>>2360223
You don't understand that the brain is not a static program.
Computers are just an implementation of a turing-complete architecture. Traditional programs are just optimized to perform for specific tasks fast and well. There is just information processing there, no real general intelligence as humans have. That said I'm fairly certain that the intelligence that humans have can be modeled mathematically and implemented in both software and hardware (as complex neural networks, with a special purpose model for the neuron), and such models can of course be implemented in a turing-complete machine. Thus computers could technically behave indistinguishable from humans. Well, at least in theory. The reason they don't is because the brain is highly parallel, and you can't emulate something of the size of the human brain on sequential CPUs without it being many orders of magnitude slower (possible with enough cores/CPU, but it would still be a power hungry beast eating up incredible amounts of power compared to the brain and performing slower than it). Oh, and if you studied at least a bit about how our brain works, you'll know that all the intelligence and behaviour is also partially because we have a constant stream of coherent (although very noisy) senses coming in, and we integrate them with the internal state and generate behaviour from them (as well as updating the internal state/learning). You won't be able to do that without some emulated environment or a robotic body, "disembodied brain" without senses probably would never be able to learn or understand human language the same way as we do. Thus we need a suitable architecture for running such neural networks in real time as well as (real or virtual) sensors to connect it to. (I would prefer real, since the real world is a lot more consistent than VR's we could cook up, and besides, it requires a lot less processing power to provide a real reality).

>> No.2360422

>>2360413
> continued
Suitable hardware could be cleverly designed ASIC/FPGA-like chips which combine memory and computation in small units similar to neurons, or possibly minicolumns (neocortex-like ones). Make such chips networkable/interconnectable so the size can be scaled easily. Such neuromorphic hardware is being developed at a few labs around the world right now.

>> No.2360424

>>2360422
> continued

>>2360235
> are there even the slightlest indication yet that we will be able to make any of these networks actually think? I'm pretty sure there isn't.
You need to understand what thinking, consciousness and memory are, and how the human neocortex works. In neuroscience there are varying models, some better, some worse. Personally I subscribe to one model that, as I see it, explains most of the mysteries of human intelligence. I find it quite consistent; I can understand my own behaviour quite well within it, along with a large part of my mental processes. It's only a mystery if you don't try to understand how "thinking" works. Even if you don't want to understand that, at least study the mathematical models of various neural networks to understand their low-level properties.

I can see why people like you are confused and puzzled when they think of metaphysical consciousness and wonder how it can arise out of the brain. But while thought processes can be explained, and just about everything else in the brain can be (or eventually will be), the one thing you won't be able to explain is the existence of qualia - and frankly, you don't have to: there is no indication in the human brain that qualia should exist at all. Leave the hard problem of consciousness to philosophers and worry about behaviour. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. Such should be the test for human-level intelligence.

>> No.2360429

>>2360424
> continued

> I don't really see the point of all this research, except to use as pretty fast regular computers.

Neuromorphic hardware won't give you faster computers. It has a completely different architecture and different aims than sequential computers. You'll get faster computers from advances in the semiconductor industry, which will soon grind to a halt as we approach the inevitable physical limits (you can't easily represent information with less than an atom, or more practically a few atoms). I think you should study a bit of electronics and maybe some digital hardware design to actually understand how your CPUs are built, and to understand that CPUs are hardly the end-all solution. They're just what's practical and easy for human-made special-purpose programs.

The brain is also just another special-purpose biological architecture for implementing one type of intelligence.

>> No.2360604

I thought it was simply when computers are better at handling things than us... no AI destroying us or anything like that, although that is a possibility. Just computers improving themselves while we live in a 'garden of eden' type utopia (keeping /b/ alive to destroy any A.I. that might crop up).

>> No.2360614

Oh yeah, and "happen within our lifetimes"...

No, you're kidding yourself. It's not happening for A FEW MORE GENERATIONS AT THE VERY LEAST, SAY 200-300 YEARS (sorry for caps).

>> No.2360646

>>2358580
Technically, we can do that already - not through self-improving AI, but with simple evolution over trillions of computations.

There was this one program where programmers gave a wireframe representing a human a set of simple movements. They set up random (okay, pseudorandom) movements for each wireframe, with the goal of having the wireframe "walk"; by selecting from each generation the wireframe movements that most closely resembled "walking", the computer eventually produced a wireframe that could run.
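That generate-score-select loop is easy to sketch. Below is a minimal stand-in, evolving a bit-string toward all 1s instead of a walking wireframe; the population size, mutation rate and fitness function are arbitrary choices for illustration:

```python
import random

# Bare-bones evolutionary loop: random genomes, a fitness score, keep
# the best half, mutate copies of the winners, repeat. The algorithm
# never "understands" walking (or anything else); selection pressure
# alone produces the solution.

def evolve(genome_len=16, pop_size=30, generations=60, seed=42):
    rng = random.Random(seed)
    fitness = sum                       # fitness = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]   # selection: keep the best half
        children = [[g ^ 1 if rng.random() < 0.05 else g for g in p]
                    for p in parents]   # mutation: flip ~5% of bits
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The wireframe experiment swaps "count the 1-bits" for "score how walk-like the motion is"; the loop itself is the same, which is the poster's point about brute evolution versus designed intelligence.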

>> No.2360688

It would be good, because humans would stop thinking of themselves as the top "species" that has special rights or something.

>> No.2360776
File: 241 KB, 1000x706, 1276927753259.jpg

I don't give a shit about the Singularity. I just want my damn robot body.

A sexy robot body.