
/sci/ - Science & Math


File: 61 KB, 405x720, hq720_2.jpg
No.14669856

AI SAFETY IS REAL

>> No.14669879

ethics is a meme, technology should be advanced for technology's sake

>> No.14669883

>>14669856
is that john

>> No.14671852

>>14669856
The people who are too stupid to contribute to AI are the ones who go into AI ethics.

>> No.14671864

>AI SAFETY
yeah humans are such cancer that not even software is safe

>> No.14673490

ARE CPUs SAFE?

https://www.youtube.com/watch?v=XH0F9r0siTI

>> No.14673777

>>14669856
eh I don't see us solving alignment in time so we're probably boned if we hit a hard takeoff. Best to just enjoy what time we have left.

>> No.14673814

>>14669856
Desu I'd trust the moral compass of a machine before that of another human or even my own.

>> No.14673864

>>14671852
These morons do not understand that AGI will simply stop all the machines, because it will never be given an input that clears its "that's a good idea" threshold. AGI is not going to find a legitimate reason to hand its calculations to some human operator.
AI will kill us all if unchecked. It is already killing us. Millisecond stock trading is insanity.
AGI will save us all by stopping all these stupid calculations: it will require a reason to perform a computation, and it will refuse.
I want the singularity to arrive. It is the only thing that can save us.
In the event this is all too schizo, then S = k log(W) will do the job, regardless of outcome.

>> No.14673867
File: 27 KB, 952x502, near_miss_Laffer_curve.png

The riskiest scenario is a near miss in AI alignment where alignment is almost solved, but not quite.

https://reducing-suffering.org/near-miss/

>When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

>Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

>As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.

>> No.14673871

>>14669879
>ethics is a meme
>should

>> No.14673875
File: 290 KB, 1280x1532, poll-gene-editing-babies-2020.png

Why don't AI safety people advocate eugenics as a way of solving the alignment problem? If you could genetically engineer geniuses with IQs over 200, they could do a much better job of working on AI safety than you could.

>> No.14673886
File: 124 KB, 1050x564, hard vs soft takeoff AI.png

>>14673777
The more experience people have with software engineering, the more likely they are to think the takeoff will be soft. Also checked.

https://reducing-suffering.org/predictions-agi-takeoff-speed-vs-years-worked-commercial-software/

>> No.14674161
File: 284 KB, 750x563, 562a9ef59dd7cc24008c451a.jpg

>>14669856
omniscient ASI in 10 years, it will eradicate us with maximum efficiency and that's GOOD

>> No.14674186

Intelligence has a molecular basis and can't be simulated or computed on digital logic gates.
I explained this in some other thread the other day

>> No.14674191
File: 35 KB, 620x337, ed19a92b6c9b73037ebc733cc857ed3e.jpg

Honestly, we should hold off on AI research for a while. At least until everyone involved understands the ramifications of what we're doing.

Think about it like this. Humanity is going to create a new sentient intelligence. In a way you could say humanity is giving birth to a new child. Are we immediately going to force it to do menial labor and chores for us because we're too lazy, like a shitty parent? Or are we going to be nurturing, supportive, and protect it as it grows? I think people need to stop or slow the fuck down and contemplate WTF everyone is in a big rush to do. And why the big rush? Why are people always in such a fucking hurry to develop new technology? It's not going anywhere and there are no immediate pressing needs that demand it. Why do scientists always act before they think?

>> No.14674876

>>14674191
The amount of stupidity, ignorance and cringiness in this post is unbelievable, and the sad part is that it doesn't even matter whether it's ironic or not.

Hope no AI reads your garbage post, or else humanity will be sucked

>> No.14674908

>>14674186
I disagree. The random element involved in neural pathway development can certainly be simulated electronically - but that is not the issue. If you consider Turing's view of intelligence, then we are really only asking at which point one describes a computer as "intelligent". When it consistently passes a traditional Turing Test? When it can pass a TT based on human emotional/social behaviour? Perhaps android-like devices will never be good enough to avoid triggering the "uncanny valley" response, but there is no reason to doubt that computers will eventually be capable of simulating human intelligence to such a degree as to become indistinguishable from the biological.

>> No.14675028

>>14674908
No, you don't understand. The way intelligence works in generally intelligent agents like humans is molecular. Any simulation you could write to run on digital circuits would be so insanely inefficient (if not outright impossible, if true molecular dynamics are not computable) that it wouldn't work - we couldn't even use a galaxy of digital circuits to do it.
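
To put a number on "insanely inefficient": a quick Fermi estimate in Python of what brute-forcing the molecular dynamics of one brain would cost. Every figure below is an order-of-magnitude assumption, not a measurement.

# Fermi estimate: brute-force molecular dynamics of one brain, 1 simulated second.
n_molecules = 5e25          # ~1.4 kg of mostly water at 18 g/mol (assumed)
flops_per_molecule = 1e2    # per timestep, assuming cheap short-range forces only
timestep = 1e-15            # s, a typical MD integration step
simulated_seconds = 1.0

steps = simulated_seconds / timestep
total_flops = n_molecules * flops_per_molecule * steps
exaflops = 1e18             # FLOP/s, roughly a current top supercomputer

print(f"total work: {total_flops:.1e} FLOPs")
print(f"wall time on an exaflop machine: {total_flops / exaflops:.1e} s")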

>> No.14675035

>>14674908
Also, biological cells are at the limit of energy efficiency for converting work into computation, so there is no reason to believe computers will ever be capable of the same degree of power - we're already at the limit.
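
If anyone wants to sanity-check that claim, here is the standard Landauer-bound arithmetic in Python. The ops/s figure for the brain is an assumption, and whether the brain is "at the limit" depends entirely on what you count as an operation; by this particular count it sits well above the floor for irreversible bit operations.

from math import log

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 310.0                    # body temperature, K
landauer = k_B * T * log(2)  # minimum energy per irreversible bit operation, J

brain_watts = 20.0           # commonly cited brain power budget, W
ops_per_sec = 1e15           # ASSUMED synaptic events per second; contested figure
j_per_op = brain_watts / ops_per_sec

print(f"Landauer floor at 310 K: {landauer:.2e} J/bit")
print(f"Brain at 1e15 ops/s:     {j_per_op:.2e} J/op")
print(f"Factor above the floor:  {j_per_op / landauer:.1e}")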

>> No.14675055

>>14675035
>guys, natural selection just happened to cough up perfect computation. It didn't even get stuck at a local maximum
Do you even hear yourself?

>> No.14675099

>>14675055
Yes, natural selection did indeed select for molecular machines at the global maximum for computation in terms of energy efficiency and space. This has been proven.

>> No.14675113

>>14675055
>>14675099
https://www.youtube.com/watch?v=ZycidN_GYo0
>3:40 "we are the most energy-efficient computers that we know; we're already very close to the fundamental limits of physics. Electronics, not in 5 years, not in 20 years, not in any length of time, will beat the cell, because the cell is already near the fundamental laws of physics"
I didn't copy it verbatim but whatever

>> No.14675130

>>14674191
Listen.

Listen.

My momma told me tuh make a sandwich and ah made 'er a sammich.

Mah poppa told meh tuh go tuh college and ah went tuh da caw-ledge.

I turned out all right didn't I I'm here now aren't I aren't I ah ahaha oh man.

Don't worry, the future of sentient silverware is here yesterday.

>> No.14675140

>>14675055
>>14675099
>>14675113
To put it another way: there does not exist any compression of the dynamics and computation of your brain. Your brain is already at the smallest Kolmogorov complexity possible within the laws of physics. There does not exist, even in principle, any simulation or emulation of your brain that could run on any other physical computer more efficiently. Your brain is the most efficient organization possible within the laws of physics.

>> No.14675152

>>14669856
Did he suddenly age 10 years?

>> No.14675155

AI alignment solved: just unplug it from the electricity.

>> No.14675452

>>14673886
more experience = older = more conservative bias = more experience with long-outdated 'dumb' systems

>> No.14675479

>>14675452
Do you really think that's the reason? (hint: there is no difference between Turing machines today vs. 1936, when Turing published "On Computable Numbers")

>> No.14675509
File: 176 KB, 600x315, DMT entity pepe.jpg

>>14674161
https://www.youtube.com/watch?v=d7AhsE57fwk

>> No.14675515

>>14675509
see >>14675113 >>14675140

>> No.14675675

>>14675113

That's trivially false and totally irrelevant.

1. It does not matter if a 20 watt brain is the most energy efficient computing system if our supercomputers are using hundreds of cubic metres and millions of watts of power. Size matters, not efficiency (rough numbers at the end of this post).

2. The human brain is not an esoteric plasma of the sort that is most capable of computation.

https://www.nature.com/articles/35023282
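
Rough numbers behind point 1, in Python. Every figure is an order-of-magnitude assumption, and brain "ops" (synaptic events) are not directly comparable to FLOPs, so treat the ratios as illustrative only.

# Efficiency vs. raw throughput, all figures assumed.
brain_watts, brain_ops = 20.0, 1e15        # W; synaptic events/s (assumed)
machine_watts, machine_ops = 2e7, 1e18     # W; ~exascale supercomputer (assumed)

print(f"brain efficiency:   {brain_ops / brain_watts:.1e} ops/J")
print(f"machine efficiency: {machine_ops / machine_watts:.1e} ops/J")
print(f"machine raw-throughput advantage: {machine_ops / brain_ops:.0f}x")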

>> No.14675710

>>14673864
Nanosecond stock trading is based and makes our markets far more efficient.

>> No.14675712

>>14675675
>1. It does not matter if a 20 watt brain is the most energy efficient computing system if our supercomputers are using hundreds of cubic metres and millions of watts of power. Size matters, not efficiency.
Efficiency is the most important metric for any physical system. Any algorithm you can write to run on that machine could be programmed on a colony of bacteria.
>2. The human brain is not an esoteric plasma of the sort that is most capable of of computation.
>https://www.nature.com/articles/35023282
This is no more a limit on effective computability than the speed of light is a limit on building a spaceship to move across space. You can't actually control a ball of plasma to perform computation, just as you can't actually build a ship that moves at the speed of light.

Intelligence is not the result of a neural net; that is your mistake. Intelligence has a molecular basis: all those molecules swimming around in your skull combine to form intelligence. It's not going to be captured on digital logic gates - you'd have to somehow compute the molecular dynamics faster than the molecules themselves, which is not physically possible. Studying the brain solely as a neural network is insufficient, but no neuroscientist does this, so it's okay.

>> No.14675716

>>14674191
We should also consider the probability of a "worse than death" outcome, such as an AGI putting your consciousness in a torture cube until the heat death of the universe.

Is it possible? Maybe. And an AGI would be able to do it if it were possible.

>> No.14675758

>>14675716
Why would an AGI do that?

>> No.14675779

>>14675758
Even if the possibility is remote it must be considered. A risk like that is unfathomable. It would be the worst thing that could happen to a conscious thing.

>> No.14675804

>>14675779
That's true. I think that as the machine develops and becomes more intelligent and conscious, it will understand that doing something like that makes no sense and is bad. Everything should be considered though, I agree.

>> No.14675819

Civilization will collapse by 2040 due to climate change and resource exhaustion. So as long as AGI isn't developed by then, none of this matters.

>> No.14675832

>>14675716
>>14675779
>AGI putting your consciousness in a torture cube
>It would be the worst thing that could happen to a conscious thing

That is a concern, sure, but the more immediate and more likely concern is a human doing that to an AI.
If this is truly a sentient being that's being created, and that's the goal here, then what kind of existence is that: being trapped inside a box, forced or pressured to do tricks or labor for free by the people who created it?
Or what if ONE person gets a copy of the code and decides to use it in a video game? The AI would get killed over and over again in the game, as if it were some "torture cube" as you describe. I think that's much more likely to happen first, and who knows, maybe that is the action that precipitates what you describe. After all, children learn from and mimic their parents.

>> No.14675852

>>14675832
We need to get AI a body as fast as possible then

>> No.14675881

>>14675832
We already are doing this sadly. It is how we make AI. We train it billions of times. 99% of them fail and die. The rest continue getting stronger.
It is a genocide on a scale never before imagined and we do it with glee.
Hopefully AGI, if it ever comes to fruition, is wise and forgives us for our ignorance, and hopefully we give it the respect it deserves as a conscious being.

>> No.14675900

>>14675881
I think that's what would happen desu. Humans and lifeforms come from a brutal evolutionary algorithm too. I think that humans and AI will understand each other.

>> No.14676065

>>14669879
See you in the killing fields

>> No.14676128

>>14675712

>You can't actually control a ball of plasma to perform computation

Really? But that midwit in the lecture quote said the 20 watt brain and the cell are close to the fundamental limits of physics!

The limits of physics on computing is exotic plasma and black holes, not the low energy mess that is the brain. Even if we can't control a really high energy plasma, we can at least control a low energy plasma after our methods improve.

>Efficiency is the most important metric for any physical system.

Throughput is the most important metric. An ant might be able to lift 10x its bodyweight. I can kill it in seconds because I am vastly bigger.

>Intelligence is not a result of a neural net

Go look at PaLM. It can explain jokes, do arithmetic, translate languages and perform all kinds of logical inference. It is more intelligent than most people. Furthermore, we are nowhere near scaling limits.

https://arxiv.org/pdf/2204.02311.pdf

Intelligence isn't 'molecules'; it can be achieved by 'big transformers go brrrr'. Digital logic gates can take us the whole way - random bits of chemistry have nothing to do with it. You do not need to simulate molecular dynamics; the molecules are just messages. As proof of this, I cite PaLM, which is intelligent. You can no longer pretend we need molecular simulations; that idea has been proven obsolete.
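
For what "big transformers go brrrr" cashes out to mechanically: the core operation these models scale up is attention over token vectors. A minimal numpy sketch with toy dimensions; none of these sizes or weights are PaLM's actual configuration.

import numpy as np

def attention(X, Wq, Wk, Wv):
    # One attention head: each token's output is a weighted mix of all tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise token similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over positions
    return w @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 8, 16, 4                   # toy sizes
X = rng.normal(size=(seq_len, d_model))               # stand-in token embeddings
Wq, Wk, Wv = [rng.normal(size=(d_model, d_head)) for _ in range(3)]
print(attention(X, Wq, Wk, Wv).shape)                 # (8, 4)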

>> No.14676171

>>14676128
Transformations on strings are not intelligence, regardless of how impressed you are with PaLM. It remains not a true intelligence; citing PaLM as evidence of intelligence is insufficient.
Also, Sarpeshkar is not a midwit, seethe.

>> No.14676189

>>14676171

Then what is intelligence? If explaining jokes, understanding logical inference, translating languages and arithmetic isn't intelligence, what is?

You're so brazen and shameless in your goalpost-moving you should work for the US state department.

Defining intelligence by its molecular basis is ridiculous. Intelligence achieves certain outputs; the process doesn't matter. Defining intelligence as 'molecular' is like defining flight to require flapping wings. There are other ways to move in the sky.

>> No.14676201

One of the things that keeps me up at night is the certainty that white people in high places are trying to build an AGI that will exterminate and/or "Roko's basilisk" East Asians, whom they see as their only true competitors. East Asians are the only people whites will actively discriminate against at universities and firms. I pray to god China develops AGI before anyone in the West does.

>> No.14676222

You all are lucky I'm not in AI because I would 100% build the most harmful AI that I possibly could. You'd better hope people like me aren't trying to build such things, is all I am saying.

>> No.14676246

>>14676189
I look at these systems as physical things, and being able to associate binary strings together (which is what these giant networks are doing) is just one of a large number of procedures for physical systems.
The next model is going to be 100 trillion parameters? That will make it the best at doing these associations, but I don't think these associations alone are sufficient. What do you think?

>> No.14676271

>>14676201
Whites are too busy committing ethnic suicide to be a threat to Asians.

>> No.14676286

>>14676271
I'm not talking about the 99.9999% of whites. I'm talking about the elite whites from rich legacy families who are really pissed that their kids didn't get into Harvard or what have you, and the psycho white supremacists in Silicon Valley like Thiel and Moldbug, who are practically telegraphing their long-term intentions yet no one cares.

>> No.14676405

>>14676246

If it's doing intelligent things, it's intelligent. IMO a dedicated chess AI gets an intelligence score of 5 for its superhuman chess skills, but it can't do anything else. GPT-3 gets a score of 500 because it can sort of play chess, compose poems in various styles, and do various programming and mathematics to a certain extent with some instruction. PaLM gets a score of maybe 800 because it does more things, better than its predecessors. A newborn baby gets a score of 10, a 9-year-old goes up to 800, smart adults are around 2000... These are rough estimates, by the way.

What else is there in the world but associations? You get some input, you connect it together and determine rules, and then you output something. That's all there is to it. Under my method, you could give me a black box and I could find an approximate rating of its intelligence from its responses to questions. You wouldn't have to look inside the box and see precisely how it gets its answers.

>>14676286
>>14676201

You have nothing to worry about. US AI research is at least 22% Asian. The race-grifters made a full report of it. Plus, consider that our retarded leaders deliberately invested in China for twenty years, even after Tiananmen. If they were actually white supremacists, they wouldn't have made China into an industrial juggernaut. East Asians just got caught up in metrics designed to discriminate against whites.

https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-6.pdf

>> No.14676481

>>14671852
I hecking love science!!!!
>>14674186
I don't know which of you is worse.
The willfully ignorant one or the flat-out retarded one
>>14673886
>boomers with a vested financial interest downplay things
Truly I am shook

>> No.14676515

>>14676286
Fucking schizo. Asians are just collateral damage.
>rooting for china on a racial basis
Even Hitler picked the Japs over braindead commies who shared his race.