
/sci/ - Science & Math


>> No.14673875
File: 290 KB, 1280x1532, poll-gene-editing-babies-2020.png

Why don't AI safety people advocate eugenics as a way of solving the alignment problem? If you could genetically engineer geniuses with IQs of 200+, they could do a much better job of working on AI safety than you could.

>> No.14624763

>>14621814
Scientifically, what will happen when India starts eugenically increasing their intelligence while other countries refuse to?

>> No.14582106

Why don't any of these AI alignment people advocate eugenics as a way of solving alignment? If you could genetically engineer geniuses with IQs of 300, they would probably do a much better job of working on AI safety.

>> No.14581664

>>14577863
How likely is it that transhumanism will take off in India?

>> No.14573050

>>14572209
Why don't AI safety people promote eugenics as a way of solving the AI alignment problem? If you could genetically engineer embryos into 300 IQ geniuses, they could do a much better job of working on AI safety than you could.

Relevant:
https://www.unz.com/akarlin/short-history-of-3rd-millennium/

>We still haven't come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

>Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization's chances of solving the WBE and/or AGI problem will be correspondingly much higher.

>Bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity will be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.

>> No.14544901

>>14544600
There's a good chance that dysgenics will prevent transhumanism from happening during this millennium.

https://www.unz.com/akarlin/short-history-of-3rd-millennium/

>(1) (a) Direct Technosingularity - 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

>(2) The Age of Em - <1%, since we cannot obtain functional models even of 40-year-old microchips from scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).

>(3) (a) Biosingularity to Technosingularity - 50%, since the genomics revolution is just getting started and governments are unlikely to either want to, let alone be successful at, rigorously suppressing it. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it's a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.

>(3) (b) Direct Biosingularity - 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

>(4) Eschaton - 10%, of which: (a) Philosophical existential risks - 5%; (b) Malevolent AGI - 1%; (c) Other existential risks, primarily technological ones: 4%.

>(5) The Age of Malthusian Industrialism - 10%, with about even odds on whether we manage to launch the technosingularity the second time round.

>> No.14512655

>>14509285
>>14509423
India is probably the only place where eugenics will ever realistically happen.

>> No.14470808

>>14470790
>tfw scientists of the future will be pajeetas
