
/sci/ - Science & Math


File: 37 KB, 600x600, fembot.jpg
No.3573915

The singularity.

What's the best estimate of when it will occur?
How will it happen? Will it be apparent quickly or will it take time for our society to reflect it?
What kind of technology will emerge?
What are you hoping for?

Let's discuss this, shall we?

>> No.3573933

bamp

>> No.3573940

>>3573915
i'm expecting augmented reality, or becoming JC.

>> No.3573960

>>3573940
>augmented reality
This already exists.

>> No.3573965

>>3573960

Not in Africa.

>> No.3573966

>>3573915

1. 2050-ish, but my bet's on the binary millennium (2048).
2. The combination of intelligence amplification/merging with technology and establishment of general artificial intelligence. The lead up to it will be like any other technological adaptation, but it will likely be a 'hard takeoff' scenario.
3. F'in awesome technology, my friend.
4. That the weakly-godlike intelligences some of us become and/or create are willing to stick around for a couple years and help the rest of us.

>> No.3573975

>>3573966
>likely be a 'hard takeoff' scenario
What do you mean?

>> No.3573990

>>3573975
Where it happens over weeks or hours instead of years or months.

>> No.3573991

>What's the best estimate of when it will occur?
It ranges from 2012 to 2045. I'm expecting to be disappointed or at least die before it happens.

>How will it happen? Will it be apparent quickly or will it take time for our society to reflect it?
Probably with the first A.I. that can pass the Turing test. People will panic. Movies will come out about how technology will end the world. But the public won't even notice it being integrated into their everyday technology. Soon, they'll start offering brain enhancements and people won't complain (except the "all natural" people).

>What are you hoping for?
Humanoid companions. ;_;

>> No.3574024

This thread got pushed off the front page by 3 troll religion threads. nice.

>> No.3574072
File: 141 KB, 1020x870, ss08_exponential_growth_large.jpg

>> No.3574087
File: 1.65 MB, 3900x2194, 1305813121395.jpg

>What's the best estimate of when it will occur?
2040 - 2060
>How will it happen? Will it be apparent quickly or will it take time for our society to reflect it?
In my opinion it would take about 5 years with much of the population bewildered by what the fuck is going on.
>What kind of technology will emerge?
It's called the singularity because we don't know. Past human-level intelligence, all predictions of what new shit will come about break down.
>What are you hoping for?
Full-body transhumanism, with a transition from brain-in-a-jar to dying neurons being replaced by artificial neurons that have been monitoring them for decades and copying the information inside. A gradual change should preserve the 'real' me.

>> No.3574095

>>3574072
That's depressing. I'll be dead before anything significant happens.

>> No.3574096

>>3573915
Best estimate: Never.

>> No.3574104
File: 70 KB, 450x450, 1297704056056.jpg

>>3574095
http://www.hplusmagazine.com/articles/forever-young/manhattan-beach-project-end-aging-2029

http://www.ted.com/themes/might_you_live_a_great_deal_longer.html

http://nextbigfuture.com/2011/07/sierra-sciences-working-towards.html

http://www.sens.org/sens-research/research-themes

http://video.google.com/videoplay?docid=-3329065877451441972#

http://www.nature.com/news/2010/101128/full/news.2010.635.html

http://www.guardian.co.uk/science/2010/nov/28/scientists-reverse-ageing-mice-humans

http://www.physorg.com/news/2011-06-biologists-yeast-cells-reverse-aging.html

http://www.physorg.com/news/2011-06-dna-reverse-premature-aging.html

>> No.3574103
File: 28 KB, 449x242, aotm_buzz4.jpg

>>3574024
Yeah, it's a shame when religion threads are fighting.

>> No.3574109

>>3574104
the last two were my contribution! :3

>> No.3574110

>>3574104
>implying you or I will ever get to use age reversing medication

>> No.3574114
File: 54 KB, 361x500, d5b4e23f61a5ac2a3109766099d54d0e-dq797e.jpg

>>3574109

>> No.3574115
File: 8 KB, 184x184, 1262534347152.jpg

>>3574110
>implying we won't
Don't even start with IT WILL BE FOR RICH PEOPLE

>> No.3574120

>>3574087
>artificial neurons
>possible

>> No.3574121

>>3574115
I'm just saying, what's the likelihood that they'll have something developed and out for consumer consumption before we're old and falling apart?

>> No.3574131
File: 27 KB, 477x387, 1264130112814.png

>>3574120
>neurons
>possible

>>3574121
Very high, as that would be one of the hottest products any pharma company could possibly sell. And if they make it affordable to as many people as possible, they'd earn much more than selling 4,000 cures at $1,000,000 each.
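The revenue argument is simple arithmetic; a toy comparison (all prices and market sizes below are made-up illustrative numbers, not real market data):

```python
# Toy revenue comparison: luxury pricing vs. mass-market pricing.
# Both scenarios use hypothetical numbers purely for illustration.
luxury_revenue = 4_000 * 1_000_000   # 4,000 cures at $1,000,000 each
mass_revenue = 100_000_000 * 100     # 100 million customers at $100 each

print(luxury_revenue)  # 4000000000  -> $4 billion
print(mass_revenue)    # 10000000000 -> $10 billion
```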

>> No.3574137

>>3573915
ITT: religious wackos exclaim that their version of the future will come true.

>> No.3574140

we already have a device that can store memory and let you access it, giving you unlimited memory. doesn't that count as artificial neurons as it simulates the connections of neurons?

>> No.3574142

>>3574137
Point those posts out or you're full of shit.

0/10
making me reply

>> No.3574144

It's wishful thinking; it won't happen. Not because the technology isn't there or can't be created, but because there is far too much socio-economic evolution that needs to take place before the technology will be allowed to be used or made widely available.

>> No.3574147
File: 4 KB, 146x159, 1262708779329.jpg

>>3574144
Must not turn into TRS thread,

Must not turn into TRS thread,

Must not turn into TRS thread,

Must not turn into TRS thread

>>3574140
If the internet is our neurons, that would be analogous to the telegraph. Very early technology; it still has a long way to go.

>> No.3574151

>>3574140
and before anyone thinks I meant books or some shit, I mean a chip you implant into your skull which gives you photographic memory until the chip is switched off and you lose those memories.

>> No.3574155

>>3574151
source? I'm interested.

>> No.3574161
File: 69 KB, 325x324, transhumanDPp.jpg

>>3574155
http://gizmodo.com/5813821/scientists-create-first-memory-expansion-for-brain

>> No.3574170

>>3574147
Well, yes, there's TRS, but since you haven't done that yet I can't count on it. For the foreseeable future we're stuck with a culture where religious fanatics would run around murdering everyone who's had life extension or any enhancements made.

>> No.3574180

>>3574161
>rats
I'm tired of hearing about rats. In the end, they'll say "well these are incredibly different from humans. It'll take another 10-20 years of development before testing on humans begins"

sigh..

>> No.3574183

Best-case scenarios never work out; I'm betting on 2130s-era tech being worth calling unpredictable,

though obviously we'll see most of the things scientists are working on pan out, or at least produce something useful, before then.

>> No.3574205

>>3574087
>A gradual change should preserve the 'real' me.
That's an illusion, and you should be ashamed of coddling your emotionally-charged myths about self.

>> No.3574213

>>3574205
I remember hearing 30% of the brain rewires itself in 6 months. Don't get your nips in a twist.

>> No.3574220

>>3574170
>religious fanatics would run around murdering everyone who's had life extension or any enhancements made.
Seriously? If it ever came to that, it would immediately be a war. At worst there would be isolated attacks.

>> No.3574229

>>3574220
'Isolated' attacks are not a minor problem.

>> No.3574232

The singularity will be gradual. Sure, there will be spikes of progress and integration, but overall we probably won't really notice. No one's going to wake up one day and be like "Oh shit, the singularity is here". You'll probably go in for some synthetic organs due to cancer or some other shit, and your doctors will explain to you that your chances of survival are actually not low, and to top it off, once the procedure is completed your life span should be greatly extended, and that you may want to consider memory enhancements to go with your additional 75 years.

>> No.3574235

>>3574229
Nothing the gay community hasn't dealt with already, though. Not to mention black people.

>> No.3574245

>>3574232
The big issue isn't life extension. It's superhuman intelligence. THAT shit will rock your monkeybrain to its core, and you will never catch up with the perpetual culture shock without becoming transhuman.

Unless the new overlords make a little nature preserve for unaugmented humans, I suppose. It might even be really nice.

>> No.3574246

>>3574220
>it would immediately be a war
nah, unless it was a nation declaring war against those who were "unholy."
Anyway, those religious fanatics would just decrease the credibility of their religion. I would say that they'd try to establish colonies for the "pure." But I don't see it lasting too long without financial support from some government. And let's be honest, the government will adopt technology to the fullest. So it'd be ironic that these colonies rely on the "unholy" to survive and fight them at the same time.

>> No.3574253

>>3574235
I was talking more like abortion clinic bombings and eco-terrorism. Imagine being in surgery and all of a sudden the power goes out because a faggot decides to cut the trunk line.

>> No.3574254

>>3574246
The proposed scenario was "religious fanatics would run around murdering everyone who's had life extension or any enhancements made." This implies ineffective or absent police protection.

You would have a war, even if it is isolated to one city and ends up being called a long riot.

But I don't think the scenario is plausible anyway. It won't get any worse than violence against blacks or gays in the US has been, which is still pretty horrible, but there was hardly a widespread extermination campaign.

>> No.3574256

>>3574253
Yeah, that might happen a few times.

>> No.3574347

>>3574245
>superhuman intelligence
>intelligence
>you understand intelligence
>u want more but you don't know what it is/how much it costs the brain

>> No.3574357

I'm sorry, but if the singularity causes a divide between the people who embrace transhumanism and the people who don't, and the transhumans can't win it, then we don't deserve a singularity.

>> No.3574372

>>3574347
What are you saying, exactly? That being more intelligent would be prohibitively expensive, metabolically? Because last I checked, the US has a problem with eating TOO MUCH food.

>> No.3574377

>>3574357
Well, yeah. But I can't imagine a hardcore Luddite movement actually turning back the clock anytime soon.

>> No.3574379

>>3574357
It's very difficult to win against people who should be labeled as clinically insane without just outright killing them.

>> No.3574380

Technological growth will never become exponential. However, many of the technologies being talked about today (stem cells, cybernetics, intelligence enhancement, etc.) will probably be mature no later than 2161.

>> No.3574383

>>3574380
>Technological growth will never become exponential.
It already is, in many metrics. That's the point. What are you talking about?

>> No.3574385

>>3574380
>Technological growth will never become exponential

It's called a microprocessor.

As for the rest of your post, the thing most people don't realize is that technology is more limited more by funding than anything
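For what it's worth, the microprocessor case is exponential by construction: a fixed doubling period gives exponential growth. A toy sketch (the 1971 Intel 4004 starting point and the two-year doubling cadence are the usual rough textbook figures, not exact laws):

```python
# Toy Moore's-law projection: transistor counts doubling every 2 years.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count, assuming a constant doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(transistors(1971)))  # 2300
print(round(transistors(2011)))  # ~2.4 billion, 20 doublings later
```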

>> No.3574387

>>3574385
more limited by*

>> No.3574428

>>3574372
I'm saying "more intelligence" will come at a greater cost for mental stability.

>> No.3574436

>>3574428
Hahahahah what?

You think there is a necessary tradeoff between intelligence and sanity? I'm sorry, but that's a pretty fucking weak myth you've bought here.

>> No.3574438

>>3574385
>more limited by funding than anything

Yes, this is very true.

Suppose, for an instant, that all the technological research had unlimited budgets from this point on.

We would already be colonizing the moon, computers would be a lot more advanced, and who knows what else.

However, it seems technology isn't profitable enough at this point to warrant a large investment from our overlords.

>> No.3574439

>>3574428

And upgrades to the brain designed by those with "more intelligence" can increase the brain's efficiency and restore stability.

Win win.

>> No.3574455
File: 287 KB, 1901x503, Self-aware spam.jpg

We already have independent artificial intellects.

>> No.3574458

>>3574455
I wish.

>> No.3574554

>>3574436
There is evidence that in the higher percentiles of intelligence score, the risk of mental illness and social dysfunctionality increases.
Read "Children above 180 IQ" by Hollingworth, which is a longitudinal study of gifted children. Also Lewis M. Terman's research, but take it with a grain of salt, because the statistics are misinterpreted to give the impression that there is no unusual problem with giftedness. There's an analysis of his statistical data here: http://prometheussociety.org/articles/Outsiders.html

>>3574439
Upgrade what, exactly?

>> No.3574577
File: 81 KB, 804x452, 1309019893370.jpg

I'm not counting on a singularity happening, but there's this great site called The Uncertain Future:

http://theuncertainfuture.com/

You put in some data and it calculates the likelihood of a Singularity happening by year, within this century.

I've taken it several times but it always shows the Singularity happening next year so tread carefully.
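The site's actual model aggregates several user-supplied probability distributions; the basic idea can be sketched with a much cruder assumption of a constant annual probability (the 2%-per-year figure below is purely illustrative):

```python
# Toy cumulative-probability model: if each year independently has
# probability p of the event occurring, the chance it has happened
# within n years is 1 - (1 - p)**n.
def prob_by_year(annual_p, years):
    return 1 - (1 - annual_p) ** years

print(f"{prob_by_year(0.02, 50):.1%}")  # 63.6% chance within 50 years
```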

>> No.3574634

>>3574577
I got around 2050.

>> No.3574635

>>3574577
Huh?
Using the most optimistic numbers, I got a 79.8% chance of singularity by 2070.

>> No.3574654

>>3574554
Sure, I agree with that, but claiming this says anything about transhumanism is pretty silly.

I agree that exploring the space of "what minds are like" by tinkering with genetics or augmentation will have unintended consequences, and will introduce new mental disorders to our lexicon, but it will also produce people who are superior in every way. There is no inherent tradeoff between intelligence and sanity once you can tinker with the substrate itself.

>> No.3574737

>>3574654
>There is no inherent tradeoff between intelligence and sanity once you can tinker with the substrate itself.
What is the substrate of intelligence?
How can it be tinkered with without creating more mental dysfunctionality?

I'm curious about the technical details, how would this be done and how it would work.

>> No.3574749

>>3573915
Not in your lifetime, stop dreaming.

>> No.3574905

>>3574737
I am not whom you argue with, but I would like to pose an answer to one of your questions.

The direct substrate of intelligence is some currently unknown, or maybe undefined, intermediary level of meaning found between the highest symbol level of thought, intelligence, consciousness etc., and the lowest level of meaning, the highly formalized system of neurons interacting with one another. Intelligence, thought and consciousness are all symbolically high-level systems, which are consequently "software." Neurons, at the base, form the biological goo which powerhouses the final level of interpretation; this can otherwise be thought of as hardware in a computer, although unlike the thought/software analogy, the neuron/hardware analogy does not produce a meaningful isomorphism, thus we are still in the dark as to how to recreate this base, foundation or substrate.

>> No.3574935

>>3574905
I'm sorry, the direct substrate is some unknown level directly below and implicitly connected with the high symbol level. The final substrate of the entire system is the neural level, or the rough analogy of computer hardware. The neural level obeys absolutely the laws of physics and logic -- so far as neurons will always behave as they are expected -- yet these messages get twisted and distorted as they pass through subsystems of interpretation (formed by decades of experience) until finally they reach our top level of thought, leaving enough room for error and human blunder.

This view of the brain is a very ironic understanding -- that the lowest level is completely rational with little knowable meaning, but the highest level with meaning is capable of being flawed. I guess that makes us human, separate from any computer or recursive formal system.

>> No.3577844

i was thinking about this today

it will occur when the first AI is born, which will be able to solve every problem we have, if they are possible to solve within the laws of physics, invent its own problems, and then solve them

it will do this as fast as its computer architecture allows it to do so, and if it can design its own architecture it will do it near instantly.

>> No.3577858
File: 22 KB, 113x116, zubrin-trollin-3.png

>>3577844

>it will occur when the first AI is born, which will be able to solve every problem we have, if they are possible to solve within the laws of physics, invent its own problems, and then solve them

Why.

Why does everyone assume the first AI will go on some kind of 'upward curve of self-improvement'?

You guys are counting on some imaginary posthuman ai that can (by the very definition of 'posthuman') not be described in sufficient detail to be falsifiable, i.e., a silicon God. And then you expect the ai to breastfeed you knowledge.

Couldn't it be, you know, just maybe, that the ai has its own personality and doesn't feel like dyson-sphering the entire universe?

>> No.3577867
File: 29 KB, 440x330, 1294558429318.jpg

>>3577858
If we put them into cats and give them lots of ear rubs they might become human-friendly.

>> No.3577870

>>3577858
because that's what intelligent humans do, but they are limited by the shittiness of the human brain

also

>implying personality is a function of intelligence and not a by product of evolution and evolutionary mutations

>> No.3577878

>>3577858

I mean, Christ, it's already bad enough when people think we'll have nanotechnology by 2020 (Even though we have to wait half an hour to get the results of a single operation with an AFM) or mind uploading by 2030 (herpity derpity the nanobots will destroy the physiology you want to scan) but this just goes beyond it. I wish transhumanists could stop obsessing over ai and the singularity and start being more down to Earth about the problems we can currently tackle.

>> No.3577879 [DELETED] 

Sure is singulitard circle jerking in here.

>> No.3577883

>>3577870

What if the ai has the goal of turning every object in the universe into a CD case?

You're not considering the possibility of the ai spiraling into an epileptic wreck or isolating itself.

>> No.3577889
File: 24 KB, 510x370, 236229_height370_width560.jpg

>>3573915
Best Estimate? World is full of Ace and Gary.

>> No.3577895

>>3577883
kill -9

>> No.3577990

I don't know, but I expect it in 40 years or so.
Instead of just thinking of "the singularity", which can happen anytime between 20 and 50 years (or more) from now, we should focus on individual problems:
First child-level AGI? First human-level AGI? First post-human-level AGI?
Molecular nanotech? Bootstrap from biology? Bootstrap by AFM or similar? Multiple stages or straight-to-"diamond"?
SIM(Structure Independent Minds) or "Mind Uploading"? Destructive or gradual? Interactive/side-loaded?
It should also be mentioned that any of these 3 technologies can lead to acceleration of the others.
For AGI: DARPA SyNAPSE might reach human-level in 10-15 years, but it is doubtful it will go much beyond. It might also be a good hardware platform for SIM if it succeeds. OpenCog and more alien designs that talk about high-level patterns of the mind directly are the ones most capable of self-improvement; soft or hard take-off would be possible with such types of systems (much harder to do that with neural-network-based ones). Estimates for good results with OpenCog (or similar) - 10-20 years. If anything, it will at least have some useful contributions to society (even if it stays below human-level for certain standards).
MNT could happen faster, but it's too underfunded. With proper funding and effort it could very well be done in 10-30 years, but currently I hold no particular estimated time to completion here. I do think that AGI or SIM success will result in MNT happening a lot faster, and MNT success will also result in AGI and SIM benefiting greatly as MNT makes manufacturing (especially of computational elements) much easier and the efficiency of whatever is manufactured depends only on its design (it's already hitting atomic limits).

So, I don't know anything about "singularity", but I expect progress in any of those fields to eventually help the others, and that will greatly change (hopefully for the better) human society.

>> No.3578018
File: 100 KB, 1024x1024, dimer.png

>>3577990

>MNT could happen faster, but it's too underfunded. With proper funding and effort it could very well be done in 10-30 years, but currently I hold no particular estimated time to completion here.

I wouldn't be so sure. We've spent countless CPU cycles *simulating* it, but until recently we've not had the technology to demonstrate it.

Sure, you could raise some money for the IMM -- But what will they do with it? Continue simulating, or actually go for an experimental trial?

In 2003 Silicon mechanosynthesis was sort of demonstrated. Again in 2008. Last year DARPA+Zyvex demonstrated atomically-precise removal of Hydrogen from a Silicon surface, then a CVD equivalent for Silicon that grew a nanostructure on the depassivated area. It's all very impressive, but how long until we get positional control? AFM tips still have limited lifetime, and they're too slow. Positional control is too slow... And parallelizing the thing doesn't get you anywhere if the manipulators hit each other constantly. And how well does all of this progress translate to diamond and the other covalently-bonded crystals?

I'm all up for Drexlerian type of nanotech -- but let's be honest, the only mechanochemistry we've done has been on Silicon, not Carbon, and it has been done with AFM, not the tools envisioned by Drexler and Merkle.

Pic related: a QuteMol render of a mechanosynthesis dimer.

>> No.3578237
File: 41 KB, 246x381, ConsiderPhlebas.gif

Sentient Beings with exponentially greater levels of cognition will replicate easily. Humans and Drones with human-level AIs who choose not to upgrade will probably reside in Habitats that cater to their chosen way of life.

Pic Related

>> No.3578263

I think the entire premise of seed AI is badly flawed. I mean, you're an intelligence. You're an example of the best class of intelligences we are aware of (humans). Can YOU design a better human brain?

Just like being made of neurochemistry does not give you an innate understanding of neurochemistry, an AI will have no fucking clue how to improve itself. At least, not any more than we do. It will take some kind of semi-blind experimentation searching through the parameter space, and stumbling in the direction of "smarter", by whatever metric we use.

Essentially the same as trying to improve human brains, really. Until we gain some fundamental understanding of how intelligence works, there won't be any explosion.
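That "semi-blind search through parameter space" can be sketched as a random hill-climb. Everything here (the made-up quadratic "smartness" scorer, the step size, the iteration budget) is an illustrative stand-in, not a claim about how real AI research works:

```python
import random

# Toy semi-blind search: perturb parameters at random, keep a change
# only when it scores better on some fitness metric.
def score(params):
    # Made-up scorer: peak fitness at params == [1.0, -2.0, 0.5].
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(steps=5000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    params = [0.0, 0.0, 0.0]
    best = score(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, step_size) for p in params]
        s = score(candidate)
        if s > best:  # keep only strict improvements
            params, best = candidate, s
    return params, best

params, best = hill_climb()
print(best)  # close to 0.0, the maximum
```

The searcher never "understands" the scorer; it just stumbles toward higher fitness, which is the point being made about an AI improving itself without insight.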

>> No.3578267

>>3578237
>and I will still never reproduce

>> No.3578331

>>3574554
There's inherently a value judgment in this approach; I'd set my social functionality to 0 if I could get an equivalent increase in intelligence.

>> No.3578358

>>3578263
This. I think how singularity futurists treat the topic of seed AI really separates the science from the religion in this topic.

>> No.3578360
File: 47 KB, 294x475, ChildhoodsEnd.jpg

>>3578267
no need. not on the human level anyway. the age of homo sapiens has been essentially eclipsed and transcended by a higher order of experience and existence.

humans will probably still hang around in some form or another though.

>> No.3578364

>>3577878
>I wish transhumanists could stop obsessing over ai and the singularity and start being more down to Earth about the problems we can currently tackle.

Like what? Are we finally going to genocide the mouth-breathers that are hindering progress daily?