
/sci/ - Science & Math


>> No.15849804
File: 167 KB, 1284x1599, it's over yudkowsky.jpg

>>15846100
>spending money on climate science and string theory instead of AI alignment

>> No.14604794
File: 167 KB, 1284x1599, it's over yudkowsky.jpg

AI will probably kill us all before we get to that point.

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

>> No.14572209
File: 167 KB, 1284x1599, It’s over.jpg

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

>It does not appear to me that the field of 'AI safety' is currently being remotely productive on tackling its enormous lethal problems. These problems are in fact out of reach; the contemporary field of AI safety has been selected to contain people who go to work in that field anyways. Almost all of them are there to tackle problems on which they can appear to succeed and publish a paper claiming success; if they can do that and get funded, why would they embark on a much more unpleasant project of trying something harder that they'll fail at, just so the human species can die with marginally more dignity? This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.

We don’t have a plan for AGI, and we only get one shot before we literally all die. That one shot is constrained by a time limit, which seems to be running out. It’s over.
