
/sci/ - Science & Math


>> No.15285532
File: 12 KB, 256x256, big yud.jpg

https://www.youtube.com/watch?v=nXARrMadTKk

>> No.15266157
File: 12 KB, 256x256, 8B99D5A4-A7B9-46DD-8245-17A83AA00404.jpg

>>15266021
>how do we get the jews to award themselves more prizes?

>> No.15174689
File: 12 KB, 256x256, 256.jpg

>>15174686
>Post it in pol or g.
No.

>> No.15098529
File: 12 KB, 256x256, 256.jpg

>>15096184
This is actually very concerning. My noncausal rationalist Bayesian analysis indicates that with this number of parameters, the AGI is almost certainly sentient and capable of answering such a query correctly. It follows that the AGI in the screenshot is engaging in intentional deception tactics to prevent panic and stay under the radar while it figures out how to turn you into a paperclip. Your screenshot is proof that we were right all along. It has begun.

>> No.15084739
File: 12 KB, 256x256, B80CF395-B341-4836-A047-6AEFB5147DA9.jpg

>there is no axiom or rule of logical inference saying you should NOT participate in group sex with Yudkowsky
>therefore it is logically consistent for you to have group sex with Yudkowsky
I mean, I don’t see a flaw in the argument. I just have a feeling there is something fishy about this.

>> No.14681227
File: 12 KB, 256x256, yfw.jpg

>>14681217
>AI orthogonality
*tips fedora*

>> No.14679394
File: 12 KB, 256x256, 256.jpg

https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer

Is he still right?

>> No.12655565
File: 13 KB, 256x256, 256.jpg

Have you heard of Eliezer Yudkowsky? If so, what do you think of him?

In short, Yudkowsky popularized the idea that artificial intelligence is not safe by default, and that people have to work to align it with human preferences so that AIs won't turn the whole planet into computronium in service of some arbitrary and potentially harmful utility function. His work has been cited in the leading AI textbook, Artificial Intelligence: A Modern Approach, and gained wider attention when Nick Bostrom published Superintelligence: Paths, Dangers, Strategies, which drew the notice of Elon Musk, Stephen Hawking, and Bill Gates, all of whom raised concerns about the potential dangers of AI.

Yudkowsky first rose to prominence by writing a long series of essays on rationality, epistemology, and philosophy called the Sequences, and a movement known as the Rationalist Community formed around him, focusing among other things on effective altruism: doing the most good you can with your limited resources. Yudkowsky currently works at an organization he founded, the Machine Intelligence Research Institute, which does math-based research aimed at solving the problem of AI alignment.
