
/sci/ - Science & Math



File: 287 KB, 1125x1197, 348383B8-8D84-4EA4-BC72-90FBD3A2DB8B.jpg
No.14488509

which existential risk would be cosmic in scope and hellish in severity?

>> No.14488513

>>14488509
Corporate/"""government""" monopoly on AI technologies.

>> No.14488526

>>14488509
That idea where the vacuum quantum-tunnels into a lower energy state, setting off a new big bang that engulfs matter at the speed of light in all directions. Completely unavoidable, destroying all in its path.

>> No.14488529

4chan being shut down

>> No.14488532

>>14488509
Space Jews

>> No.14488535
File: 16 KB, 320x193, 5CCC554F-3F23-4D24-8456-9EA3AC0571E2.jpg

>>14488526
w-what?

>> No.14488542

>>14488509
jews in space.

>> No.14488546

>>14488535
https://en.wikipedia.org/wiki/False_vacuum_decay
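For what it's worth, the textbook estimate (Coleman 1977) says the nucleation rate of a true-vacuum bubble per unit volume is exponentially suppressed by the Euclidean action [math]B[/math] of the "bounce" solution; as a rough sketch:
[eqn]\Gamma/V \simeq A\,e^{-B/\hbar}[/eqn]
Once a bubble nucleates, its wall accelerates toward the speed of light, which is why you'd get no warning.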

>> No.14488604

>>14488526
>>14488546
>WE DERIVED THAT THE UNIVERSE IS ABOUT TO EXPLODE AAAAAAAA
physics is a joke

>> No.14488626

>>14488526
This isn't how s-risks work.

Something like that isn't an s-risk at all, because it could actually serve to reduce suffering. A real s-risk is something like quantum immortality or an ASI building hell.

>> No.14488660
File: 48 KB, 652x425, Scope of Villany.jpg

>>14488509
Indefinite torture of all living things; making Hell a real thing.
You would need some type of self-sustaining AI hivemind that could capture and contain all living things, torture them endlessly, and somehow keep them alive forever so that the torturing may never cease.
>Super intelligent AI with hivemind of drones and no morality (think ants or bees)
>Methods to capture and contain living things (tractor beams, putting bodies into stasis for transport, planet-sized prisons, anything of the sort)
>Methods to keep the torture indefinite (cease aging, regenerate living tissue, restore mental states from memory backups to stave off insanity or a diminished perception of pain; keep the subjects alive, conscious, and feeling pain at the same level indefinitely)

>> No.14488663

>>14488660
I Have No Mouth And I Must Scream

>> No.14488684
File: 7 KB, 230x219, what mouth?.jpg

>>14488663
Black Bolt?

>> No.14488686

>>14488509
FTL

>> No.14488846

>>14488626
Yeah you're right. It's only the potential permanent extinction of every lifeform, everywhere, forever. No big deal.

>> No.14488888

>>14488660
https://en.wikipedia.org/wiki/Suffering_risks

>> No.14488994

>>14488846
Good, take the anti-natalist pill. Life shouldn't exist, because it opens the door for beings to suffer. And anyone who has truly suffered knows that no joy comes close to the pain of real suffering.

The only reason we can’t just commit suicide is our cruel Darwinian programming that forces us to endure and keep going.

Pull the plug.

>> No.14489351

>>14488846
He's right though, it's not hell. Quantum immortality could be, because it means eternal decay without death. I'm actually pretty sure that is exactly what happens.

>> No.14489449

>>14488660
I was about to make this post. We just need any one of these three (or four) things:

- an AI that is laser-focused on creating suffering in currently existing kinds of organisms (assuming that, cosmically, suffering is similar across them)

- an AI that is laser-focused on some specific task, or not focused at all, but whose work creates suffering without eliminating life

- like the first, but instead of targeting existing organisms, the AI develops its own life for whatever reason, which happens to be capable of suffering and does so. This can be a side effect if the AI e.g. deems cyborgs great actuator platforms

- wildcard: this one does not require an AI. Something changes to cosmically alter the nature of suffering so that it steadily or suddenly increases (rather than the usual case, which is that it stays roughly constant). Needless to say, this is the most "magical/metaphysical" option

>> No.14489467

>>14488509
1 minute of listening to a woman talk

>> No.14489738
File: 7 KB, 200x200, Center on Long-Term Risk.png

>>14488509
>>14488660
>>14488888
https://www.youtube.com/watch?v=jiZxEJcFExc
https://centerforreducingsuffering.org/research/how-can-we-reduce-s-risks/

>> No.14489747
File: 872 KB, 1074x594, basilisk.png

>>14488509
https://www.lesswrong.com/posts/N4AvpwNs7mZdQESzG/the-dilemma-of-worse-than-death-scenarios

>Worse than death scenarios vary in severity. The most basic example would be someone being kidnapped and tortured to death. If technology will allow immortality or ASI at some point, there are scenarios of much greater severity. The most extreme example would be an indefinite state of suffering comparable to the biblical Hell, perhaps caused by an ASI running simulations. Obviously preventing this has a higher priority than preventing scenarios of a lower severity.

>Scenarios which could mean indefinite suffering:

>1. ASI programmed to maximise suffering
>2. Alien species with the goal of maximising suffering
>3. We are in a simulation and some form of "hell" exists in it
>4. ASI programmed to reflect the values of humanity, including religious hells
>5. Unknown unknowns

>Worse than death scenarios are highly neglected. This applies to risks of all severities. It seems very common to be afraid of serial killers, yet I have never heard of someone with the specific fear of being tortured to death, even if most people would agree that the latter is worse. This pattern is also seen in the field of AI: the "killer robot" scenario is very well-known, as is the paperclip maximiser, but the idea of an unfriendly ASI creating suffering is not talked about as often.

>There are various reasons for this neglect. Firstly, worse than death scenarios are very unpleasant to think about. It is more comfortable to discuss possibilities of ceasing to exist. In addition, they are very unlikely compared to other scenarios. However, the avoidance of the discussion of worse than death scenarios does not seem correct because something being unpleasant is not a valid reason to do this. In addition, the very low probability of the scenarios is balanced by their extreme disutility. This inevitably leads to Pascal's Mugging.
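To spell out the mugging in one line: for any nonzero probability [math]p[/math], an unbounded disutility dominates the expected value,
[eqn]\mathbb{E}[\Delta U] = p\,U_{\text{hell}} \to -\infty \quad\text{as}\quad U_{\text{hell}} \to -\infty,[/eqn]
so the conclusion goes through no matter how small [math]p[/math] gets.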

>> No.14489751
File: 671 KB, 2560x1984, all sentience is a prison.jpg

>>14488994
Look up BAAN, or Benevolent Artificial Anti-Natalism. It's Thomas Metzinger's thought experiment in which a benevolent superintelligent AI, built to minimize suffering, arrives at antinatalist conclusions on its own and decides to end all life.

https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan
https://longtermrisk.org/reply-thomas-metzingers-baan-thought-experiment/
https://www.youtube.com/watch?v=30OlsIZb31Y

>> No.14490812

Bump

>> No.14490817

Why are AI fags such pseuds?

>> No.14490823
File: 172 KB, 1048x1584, 352423423.png

>>14488994
>Good, take the anti-natalist pill. Life shouldn’t exist
>>14489751
>Look up BAAN, or Benevolent Artificial Anti-Natalism.
>t. pic related
Severely undermedicated.

>> No.14490827

>>14488509
the sun swelling into a red giant after the hydrogen in its core is exhausted and the core contracts

>> No.14493153

Existential risk:
either from cause of nature like a virus, global warming, something from nature or
Some physical impact like nuclear war, meteor, something physical with physics.
Those will be the only options for a extinct level event