
/lit/ - Literature


>> No.18900955
File: 709 KB, 684x2928, https___bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com_public_images_00895a86-389c-454f-8026-7736e15f1605_684x2928.png

>>18900887
That's a feature of working safety systems, but it's not the whole story. For Challenger, NASA got an explicit no-go the night before the launch, on account of the O-rings, and pressured the contractor, Morton Thiokol, into taking it back. And the Beirut ammonium nitrate stockpile was reported to the authorities repeatedly: to whom did it still need to be escalated?

I've been reading a book called Engineering a Safer World (Nancy Leveson), because I'm very worried about something I see as an incipient Chernobyl: AI. The methods recommended there (STAMP-style hazard analysis) seem to work, but how do you get the relevant leaders to give a damn about safety and actually implement them? And can they work for making sure scientific research of any kind, not just AI, doesn't wind up killing us?
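To make "the methods recommended there" a bit more concrete: STPA, the hazard-analysis method in the book, works by taking each control action in a system and asking whether it becomes hazardous when not provided, when provided in the wrong context, when provided too early or too late, or when applied too long or stopped too soon. Below is a minimal Python sketch of that kind of unsafe-control-action check, applied to a toy go/no-go launch decision. The class names, thresholds, and scenario encoding are my own illustration (the temperature figures are roughly the ones from the public Challenger record), not code from the book.

# Minimal sketch of an STPA-style unsafe-control-action (UCA) check,
# in the spirit of Leveson's Engineering a Safer World. All names and
# the go/no-go scenario here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ControlAction:
    controller: str   # who issues the action (e.g. "launch director")
    action: str       # the command itself (e.g. "proceed with launch")
    context: dict     # system state at the time the action is issued

# STPA asks, for each control action, whether it is hazardous when:
# (1) not provided, (2) provided in this context, (3) provided too
# early/too late, (4) applied too long or stopped too soon. This toy
# check only covers case (2), "provided in a hazardous context".
def unsafe_control_actions(action: ControlAction) -> list[str]:
    hazards = []
    if action.action == "proceed with launch":
        # ~53 F was the lowest previously flown joint temperature;
        # launch morning was ~36 F (figures from the public record).
        if action.context.get("joint_temp_f", 99) < 53:
            hazards.append(
                "UCA: 'proceed' provided while O-ring joint temperature "
                "is below the lowest tested value (hazardous context)."
            )
        if action.context.get("contractor_recommendation") == "no-go":
            hazards.append(
                "UCA: 'proceed' provided against an explicit no-go "
                "recommendation (feedback in the control structure ignored)."
            )
    return hazards

challenger_morning = ControlAction(
    controller="launch director",
    action="proceed with launch",
    context={"joint_temp_f": 36, "contractor_recommendation": "no-go"},
)
for h in unsafe_control_actions(challenger_morning):
    print(h)

The point of the exercise isn't the code, it's that the method forces you to enumerate the contexts in which a "go" is unsafe before anyone is under schedule pressure, which is exactly the step that got skipped (or overridden) in both cases above.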

Can we stop pic related?
