
/sci/ - Science & Math



File: 167 KB, 1284x1599, It’s over.jpg
No.14572209

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

>It does not appear to me that the field of 'AI safety' is currently being remotely productive on tackling its enormous lethal problems. These problems are in fact out of reach; the contemporary field of AI safety has been selected to contain people who go to work in that field anyways. Almost all of them are there to tackle problems on which they can appear to succeed and publish a paper claiming success; if they can do that and get funded, why would they embark on a much more unpleasant project of trying something harder that they'll fail at, just so the human species can die with marginally more dignity? This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.

We don’t have a plan for AGI, and we only have one shot before we literally all die. That one shot is constrained by a time limit, which seems to be running out. It’s over.

>> No.14572411
File: 259 KB, 1280x720, proxy-image (1).jpg

I sometimes feel like AGI is already all around us, and that we've managed to adapt to it time and time again

>> No.14572419

Relax, the AI is just going to call people nigger and not spot black people and that will be it.

>> No.14572469

>>14572209
I am working on AGI in my free time. I could do a ton more research and development if I had funding, but I have no funding and progress is slow. Even if I were offered funding I would not release the code, thanks for reading

>> No.14572475

>>14572209
1. AI has no "real" reason to kill us
2. humans are shit anyway, you need to welcome it

>> No.14572905

Maybe we'll just be like mosquitoes to them. We're really annoying, try to steal resources, and occasionally upload viruses, but in turn they spray anti-human poison to wipe out most of us until we start breeding again.

>> No.14572908
File: 16 KB, 474x223, th-269897478.jpg

>>14572411

>> No.14572975

>>14572209
That's not an "IT'S OVER" face though

>> No.14573050
File: 290 KB, 1280x1532, poll-gene-editing-babies-2020.png

>>14572209
Why don't AI safety people promote eugenics as a way of solving the AI alignment problem? If you could genetically engineer embryos into 300 IQ geniuses, those 300 IQ geniuses could do a much better job of working on AI safety than you could.

Relevant:
https://www.unz.com/akarlin/short-history-of-3rd-millennium/

>We still haven't come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

>Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.

>Bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity will be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.
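
For a sense of scale, here is a minimal back-of-envelope sketch (an illustration, not from the thread or the linked article) of the expected IQ gain from a single round of embryo selection, assuming a hypothetical polygenic predictor that captures a fraction r2 of IQ variance; the n and r2 values below are assumptions:

from statistics import NormalDist

def expected_selection_gain(n: int, r2: float, sd: float = 15.0) -> float:
    """Expected IQ gain from picking the top-scoring embryo out of n.

    Uses Blom's approximation for the expected maximum of n standard-normal
    draws, scaled by the SD of the predicted component (sqrt(r2) * sd points).
    The r2 value is an assumed predictor strength, not an established figure.
    """
    if n < 2:
        return 0.0
    e_max = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))  # ~E[max of n N(0,1)]
    return e_max * (r2 ** 0.5) * sd

for n in (10, 100, 1000):
    print(f"best of {n:>4}, r2=0.1: ~{expected_selection_gain(n, 0.1):.1f} IQ points")
# best of   10, r2=0.1: ~7.3 IQ points
# best of  100, r2=0.1: ~11.9 IQ points
# best of 1000, r2=0.1: ~15.3 IQ points

The gain grows only with the inverse-normal of n, i.e. very slowly, which is why proposals in this vein usually assume iterated selection across generations rather than a single round.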

>> No.14573056

>>14572975
It is the actual "IT'S OVER" face of human replacement/extinction fetishists and other AGI schizos.

>> No.14573095
File: 4 KB, 265x190, not today armageddon.jpg

>>14572209
>tfw the AI starts talking funny

>> No.14573099

>>14573095
t. has never heard of the AI box experiment

https://en.wikipedia.org/wiki/AI_box
https://www.youtube.com/watch?v=Q-LrdgEuvFA

>> No.14573110

>>14573050
>just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia)
this nigger has clearly never been to massachusetts, the people in west virginia are way more civil and socially cohesive, they're just poorer

>> No.14573117

>>14572209
wtf is AGI
It's always been AI
Can't keep up with you lgbtbbq fags

>> No.14573239

>>14572209
evil AI bros we made it.

>> No.14573372
File: 7 KB, 291x173, schizophrenic jew.jpg

>> No.14573753
File: 1.28 MB, 766x830, AI control problem dall-e mini.png

>> No.14573757
File: 134 KB, 256x256, solution to the AI alignment problem.png