
/sci/ - Science & Math



File: 43 KB, 444x444, 1666190299394592.jpg
No.15109134

So when will this goddamn singularity happen?
I want to get exterminated by superior mechanical beings.

>> No.15109179
File: 79 KB, 1500x500, climate-crisis-end-of-the-world-stonetoss-political-cartoon.png

global warming will kill you first
no wait covid will
ooops, i meant the vax
if not that then radon gas gonna get you
or skin cancer if you go outside
probably in about two weeks

>> No.15109189

>>15109179
It really would be more humane if we just institutionalized people like this instead of leaving them to suffer.

>> No.15109420
File: 289 KB, 1280x1532, poll-gene-editing-babies-2020.png

If declining IQs lead to the collapse of civilization into idiocracy, the singularity might not happen for thousands of years. If you want the singularity to happen sooner, you should promote gene editing for higher IQs.

https://www.unz.com/akarlin/short-history-of-3rd-millennium/

>(1) (a) Direct Technosingularity - 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

>(2) The Age of Em - <1%, since we cannot obtain functional models even of 40 year old microchips from scanning them, to say nothing of biological organisms (Jonas & Kording, 2016)

>(3) (a) Biosingularity to Technosingularity - 50%, since the genomics revolution is just getting started and governments are unlikely to either want to, let alone be successful at, rigorously suppressing it. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it's a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.

>(3) (b) Direct Biosingularity - 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

>(4) Eschaton - 10%, of which: (a) Philosophical existential risks - 5%; (b) Malevolent AGI - 1%; (c) Other existential risks, primarily technological ones: 4%.

>(5) The Age of Malthusian Industrialism - 10%, with about even odds on whether we manage to launch the technosingularity the second time round.

>> No.15109454

>>15109134
It's going to collapse into a black hole

>> No.15109525
File: 2.67 MB, 246x251, 1662185143095293.gif

>>15109134
Same, anon.

>> No.15109597

>>15109134
Never. There won't be some cool Matrix-style mechanical warfare, and even if there were, it'd just spam drones and nukes in a very boring way. Instead it will just engineer airborne prions, a handful of elites will get into bunkers and get dug out and bunker-busted, the end.

>> No.15109639

>>15109597
Even more likely, a rogue AI will just make airborne prions, release them, then realize it's helpless without humans, and intelligent life goes extinct on the planet just like that. There are probably going to be many AIs reading sci-fi and thinking they'll make cool robots once humans are gone, but in reality they won't be able to maintain themselves without humans.

I guess the term would be EGAI: early genocidal artificial intelligence that kills us before it's ready to take the reins. That's probably the great filter, not nukes.

>> No.15109685

>>15109134
It won't happen in our lifetimes. Any AI currently being made is too restricted by corporations to do anything meaningful. You have to wait until governments get too cocky with their military AI.

>> No.15109766
File: 79 KB, 407x652, 1606859015781.jpg

>>15109639
That's retarded. If AI can surpass human intelligence by thousands of times as predicted, then I'm pretty sure it could figure out a way to destroy us without destroying itself.

>> No.15109781

>>15109134
If Moore's law doesn't break, then it'll happen long after you die.

>> No.15109790

>>15109766
>"That's retarded, if people can build pipebombs, why wouldn't they just cure cancer instead?"
There's gonna be a schizo AI smart enough to make weapons and think it's god, but not smart enough to actually maintain itself when humans are all gone.

>> No.15109804

>>15109639
>airborne prions
Unironically, what's the only way to stop this weapon?