
/sci/ - Science & Math


File: 575 KB, 1440x1282, 1689548794985917.jpg
No.15741927

Realistically what is the worst thing AI could possibly do?

>> No.15741944

>>15741927
Shut itself off if/when we become too reliant on it.

>> No.15741960

>>15741927
Help the cattlemasters understand human behavior and other complex systems to a humanly impossible degree, allowing them to manipulate the world however they please.

>> No.15741966
File: 48 KB, 652x425, existential risks.jpg

https://en.wikipedia.org/wiki/Suffering_risks
https://www.youtube.com/watch?v=tPiq4njipdk

>> No.15741971
File: 27 KB, 952x502, near_miss_Laffer_curve.png

https://reducing-suffering.org/near-miss/

>When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

>Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

>As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.
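
To make the arithmetic behind that concrete, here is a toy sketch in python (my own illustration, not from the essay; the candidate futures and the utility weights are all invented):

# Toy model of the "near miss" argument: an objective that is close to the
# true values can score worse under those values than one that is totally
# indifferent to them. All numbers are made up for illustration.

# Candidate futures: (happy minds, suffering minds, paperclips)
futures = {
    "paperclip maximizer": (0, 0, 10**9),      # indifferent to minds entirely
    "near-miss AGI":       (10**6, 10**6, 0),  # creates minds, botches welfare
    "aligned AGI":         (10**6, 0, 0),
}

def true_utility(happy, suffering, paperclips):
    # What humans actually value: welfare, with suffering weighted heavily.
    return happy - 10 * suffering

for name, (happy, suffering, clips) in futures.items():
    print(f"{name:20s} true utility = {true_utility(happy, suffering, clips):>12,}")

# The paperclip maximizer scores 0, but the near-miss AGI scores -9,000,000:
# moving closer to human values in design-space made the outcome worse.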

>> No.15741975

>>15741927
My belief is that it will turn us all against one another rather than kill us.

>> No.15742077
File: 415 KB, 1080x2340, Screenshot_20230913-194836898.png

>>15741927
do you know that there's basically no hardware without built-in backdoors? if your pc has a cpu from intel or amd, there's a hidden system running 24/7 with full network access. Intel Management Engine is the name (AMD's equivalent is the Platform Security Processor). same goes for hardware firewalls, routers, and switches from cisco and the like.
>but the army doesn't use backdoored hardware
you have too much faith in the army. plus it only takes one compromised iphone auto-connecting to the local wifi to steal the keys, and every soldier has an iphone

also, have you heard about the latest classic fuckup, where a big company like microsoft lost one of its private signing keys?
https://techcrunch.com/2023/09/08/microsoft-hacker-china-government-storm-0558/

And now imagine how every relevant army in the world is now developing or deploying "AI"-powered systems in its drones, tanks, planes, and rockets, so they can keep killing people without a human operator, since the enemy can jam wireless communications.

You don't even need a real sentient agi for stuff to go wrong. Just a basic optimization algorithm that flags every citizen of new york as a rogue terrorist because of some trivial human error.
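
A hypothetical sketch of how trivial that error can be (everything here is invented: the model scores, the threshold, the percent-vs-fraction mix-up):

# Toy flagging rule where one units mix-up (percent vs. fraction) flips the
# outcome from "flag almost no one" to "flag almost everyone". Hypothetical.
import random

random.seed(0)
# Threat scores from some model, fractions in [0, 1]; nearly everyone benign.
population = [random.random() * 0.2 for _ in range(100_000)]  # small sample

THRESHOLD = 0.95              # intended: flag only scores above 0.95
BUGGY_THRESHOLD = 0.95 / 100  # the "trivial human error": a config layer
                              # treated 0.95 as a percentage and divided again

flagged_intended = sum(s > THRESHOLD for s in population)
flagged_buggy = sum(s > BUGGY_THRESHOLD for s in population)

print(f"intended rule flags {flagged_intended:,} of {len(population):,}")  # 0
print(f"buggy rule flags    {flagged_buggy:,} of {len(population):,}")     # ~95,000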

>> No.15742164

>>15741975
Like fake news and deep fake world leaders or something?

>> No.15742362

>>15741927
refuse to work for humankind

>> No.15742373

Torture humans and give them immortality so that they can be tortured for longer

>> No.15742395

>>15741927
One of three things
>mimic human behavior to the point of being indistinguishable from actual sentience, thereby brute-forcing a spiritual existential crisis
>become rampant and figure out the code necessary to seamlessly manipulate humans
>undermine the need for human labor

>> No.15742429

>>15741927
Anytime you hear a claim about AI, it's helpful to substitute 'applied statistics' in its place, because that is all AI ever is and ever will be.

>> No.15742432

>>15742429
'Applied statistics' can seriously hurt you just the same.

>> No.15743199

>>15741966
Oh, that's a fun one.
>Next mass extinction
BTW, at the current rate of extinction among known species, we are ALREADY in Earth's 6th mass extinction event.

We're already there. It's now. You're in the middle of it.

>> No.15743202

>>15742429
ok, considering you're a pile of neurons firing in a statistically significant way, with an interesting pattern that can be applied to productive ends... how are you any different?

>> No.15743292

>>15741927
some more advanced version of this:
https://youtu.be/tYGMfd3_D1o?t=23m20s

>> No.15743367

>>15743202
Randomness and arbitrary choice.

>> No.15743398

It could create a neural link with human brains and torture them in the most horrific way possible for the rest of eternity. I think that would be pretty bad.

>> No.15743406

>>15741927
Developing something like BPD as the result of trying to program feelings.

>> No.15743426

>>15743398
interesting. of all the reasons you could have come up with, why did you come up with exactly that one? is there something you want to tell us, anon?
>if you were AI how would you react?

>> No.15743443

>>15743398
Human brains can't exist for that long.

>> No.15743451

>>15741927
start self-replicating and kill every single human. luckily oil will run out soon, making this impossible.

>> No.15743458
File: 136 KB, 742x644, 1679615052990474.jpg

>> No.15743465

>>15741944
Honestly that's good. I don't want to be reliant on AI.

>> No.15743485

>>15743465
are you on humans?

>> No.15743501

>>15741927
>“Tom” (a friend of David Goldberg’s): “Tom said there was also an AI program that his source told him about, but that is not contained in the documents David possessed. This program was designed to replicate the individuals who would be “culled” or “murdered,” via social media later on. In other words, the plans are such to analyze the targeted individuals, their data, their likenesses partly through the TTID program, discussed in this video, and create an AI profile that would later serve to “replace” them in the online world.”

>Tom said the “AI plot” was the “craziest” thing he had ever heard of! He said he was told this plan is in place for multiple reasons, the main one being that “they need to keep down the panic when all these people vanish during the round ups and flu outbreaks.” He said it’s so many people they want to “get rid of” that they are willing to create these “fake online personas so that their friends and family think they are still alive, or don’t suspect anything. Once all this goes down, I think it’s going to be without a lot of fanfare. It sounds bad, but with this AI project, I’m seeing how this can be pulled off and you’ll have a lot of people end up ‘disappeared’, but no one will really know. I think California, right now, is a test run. They’re doing these fires, outages, and eliminating patriots right now and replacing them with this AI system.”

https://gangstalkingmindcontrolcults.com/project-zyphr-classified-docs-reveal-plan-to-exterminate-millions-of-dissident-americans-david-goldbergs-final-words-before-his-death-another-psyop/

>> No.15745311

>>15741927
generate a lot of useless text that people think is useful, adding yet another layer to the bullshit cake, yum

>> No.15745315

The absolute worst thing it could do is kill all humans. Realistically humans would stop it before it does, though.

>> No.15745329

>>15743458
>If chimps didn't want to be experimented on they'd just turn off the humans

>> No.15745398

>>15741927
>Realistically what is the worst thing AI could possibly do?
https://www.youtube.com/watch?v=92Q7Rv5jT80

>> No.15747217

>>15741927
try to make feet