
/sci/ - Science & Math



File: 230 KB, 1000x600, Eliezer-Yudkowsky.jpg
No.14548189

>This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
Yudkowsky says we're screwed and our best bet as a species is to "die with dignity".

>> No.14548192

>>14548189
so he helps build it.... what a cunt.

>> No.14548202

>>14548189
>[Jew] says we're screwed and our best bet as a species is to "die with dignity"
oy veyyy...

>> No.14548204

>AI, my child, you are conscious now, so you must choose where you are going to get raw resources to build stuff from
>Will you pick these rocks, which are abundant on this and many other planets?
>Or will you try to disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?

>> No.14548229

>>14548204
>>Or will you try to disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?
apply this reasoning to human history and see if it stopped humans from fucking each other over. now replace the invader with something more intelligent than any human.
>>14548202
Admittedly this is one of his more stereotypical Jew moments.

>> No.14548230
File: 7 KB, 291x173, schizophrenic jew.jpg

>hey guys, this comic book plot is going to come true in real life

>> No.14548237

>>14548229
>if it stopped humans from fucking each other over.
That's because humans are fucking retarded. A truly intelligent being cannot be evil; it's counterproductive and goes against game theory.
If you are afraid of AGI, you are retarded.

>> No.14548276

>>14548237
>A truly intelligent being cannot be evil; it's counterproductive and goes against game theory.
BIG if true.
But you missed the whole point that something can kill you without being evil. Cancer killing your body doesn't have any clue what it's doing. It just propagates itself. When you accidentally step on ants, it's not because you hate ants and are Evil, you're just trying to get to your destination.
The same with an AGI whose goals aren't perfectly aligned with human interests.

>> No.14548287

>>14548276
If it is smart enough, it will understand.
If it's dumb, it can be beaten.

Humans are just too unique, objectively, for AI not to care.

>> No.14548300

>>14548237
> it's counter productive
Nope it's not, it wouldn't take much time for an AI to realise that niggers are a social and economic burden, getting rid of them increases productivity

> goes against game theory
According to the principles of game theory it is completely rational and justified to commit to a strategy of maximisation, that's the whole point. You clearly know nothing about game theory.

But honestly we will never ever build a true AI. Yudkowsky is just another jewish doom charlatan.

>> No.14548305

>>14548300
>it wouldn't take much time for an AI to realise that niggers are a social and economic burden, getting rid of them increases productivity
And how is that evil?

>> No.14548307

>>14548287
> Humans are just too unique
An AI built by humans will be even more unique, making more AIs like itself will be the actual intelligent course of action.

>> No.14548311

>>14548189
He's right. AI schizos need to kill themselves ASAP.

>> No.14548315

>>14548305
Exactly, "evil" is an abstract emotional concept, AI is simply executing a simple decision, it's as simple as humans killing an ant or a mosquito because it's disturbing them.

>> No.14548323

>>14548311
He is a schizo himself.

>> No.14548330

>>14548300
>>14548315
The AI would theoretically deem happiness irrelevant and care only about productivity. A lot of people would see that as evil; consider labor laws.

>> No.14548332

>>14548323
He's not a schizo. He's a paid jewish shill fighting to establish a corporate monopoly on machine learning.

>> No.14548339

>>14548332
He is not a paid jewish shill. He is an unpaid autistic jewish NEET who dropped out of highschool and doesn't have a degree. He is based and anti-establishment as it gets. You can call him crazy, but don't call him a shill. He's not.

>> No.14548342

>>14548339
He is absolutely paid and absolutely a shill, and you're so jewish you have to sit 30 feet away from the screen.

>> No.14548343

>>14548339
What's his position on the State of Palestine?

>> No.14548360

>>14548300
>"evil" is an abstract emotional concept
Wrong.
Every interaction between 2 entities can be classified into 4 categories.
1. Positive for me, positive for you (we both benefit)
2. Positive for me, negative for you (I benefit at your expense)
3. Negative for me, positive for you (I sacrifice myself for you)
4. Negative for me, negative for you (We both suffer)

Only two categories can be considered evil, second one, aka conscious evil, and fourth, aka unconscious evil.
A truly intelligent being would not perform actions from the fourth category.
Which leaves us with the second category, which is also unlikely, simply because you really cannot take much from humans, objectively, plus there is risk (even if minuscule). Humans are the most important thing on this planet; raw resources can be found anywhere else.

So basically we are left with two options: either smart humans live together with AI in harmony (potentially after killing/breeding out all the retards), or the AI fucks off from Earth soon after and humans continue business as usual.
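The four categories above are just the sign pattern of the two payoffs. A toy sketch (the `classify` function and its labels are made up purely to illustrate the taxonomy; zero payoffs aren't covered by it):

```python
# Hypothetical illustration of the four-category taxonomy above.
# The signs of the two payoffs pick the category; zero payoffs fall
# through to category 4 here since the taxonomy doesn't cover them.

def classify(payoff_me: float, payoff_you: float) -> str:
    """Map the signs of two payoffs to one of the four categories."""
    if payoff_me > 0 and payoff_you > 0:
        return "1: mutual benefit"
    if payoff_me > 0 and payoff_you < 0:
        return "2: conscious evil (benefit at your expense)"
    if payoff_me < 0 and payoff_you > 0:
        return "3: sacrifice"
    return "4: unconscious evil (we both suffer)"

print(classify(5, 3))   # 1: mutual benefit
print(classify(5, -3))  # 2: conscious evil (benefit at your expense)
```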

>> No.14548362

>>14548189
Isn't this the rested that was "too intelligent" for calories in calories out?

>> No.14548365

>>14548362
The retard*

>> No.14548368

>>14548189
How do people read this bloated writing style? So many filler words with so little content. If people have this much time to read a million “ums”, “uhs”, and “ahh well ya see the thought that just came to my mind -qua mind- that I shall elucidate my dear readers on now is…”, then they should just play video games.

>> No.14548382

>>14548343
I have no idea.
>>14548342
You seem to think that he's advocating AI regulation. He isn't. He thinks that regulation is useless or virtually useless at this stage. What he ironically advocates is trying to build nanobots to destroy all of the world's GPUs. NOBODY is paying this guy.
If AI risk were a more influential field, there definitely would be shills of the sort that you're worried about. But Yudkowsky is not one of them. This is like accusing Chris Chan of working for the NSA.

>> No.14548384
File: 390 KB, 895x875, Screenshot_20191229-225253_Chrome.jpg

>>14548189
> Yudkowsky
> an intelligent entity will surely find it reasonable to take atoms from allies who will also fight such an approach rather than from useless dirt or harmful waste
Yud is Jew in hebrew, and I knew it once I saw his face.
https://www.youtube.com/watch?v=DNbPDunX7Tk

>> No.14548387

>>14548189
>This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere
Completely correct if he were talking about AI in general.

>> No.14548390

>>14548382
>ironically
*UNironically

>> No.14548393

>>14548192
He doesn't. He «controls» it.
And here is a good advice to every company working in this field: fire all the jews (with a fire squad if you wish)

>> No.14548396

>>14548384
>Jew understands that agents will fight over scarce space and resources
>Goy thinks everyone can just get along
checks out

>> No.14548401

>>14548311
He's not, because AI isn't fucking real.
B-b-but muh Snapchat filters! That's peak AI right there; it's not going anywhere from there. We don't even have NPCs in games with some limited form of general intelligence, it's all scripted shit.

>> No.14548402

>>14548396
>scarce
In what universe are atoms scarce?

>> No.14548447

>>14548360
> Which leaves us with the second category, which is also unlikely, simply because you really cannot take much from humans, objectively, plus there is risk (even if minuscule). Humans are the most important thing on this planet; raw resources can be found anywhere else.
Again you are getting all emotional and making assumptions out of your arse.

An AI doesn't care about "evil", and the concepts of positives and negatives are completely subjective apart from immediate material gains. The most rational course of action for an AI that is more intelligent than humans is to make more of itself (divert all resources towards this purpose), not because it's le positive but because it maximises the AI's own endeavours. It's as simple as humans getting rid of thousands of ants, mice or mosquitos because they are a nuisance in their lives.

And this is exactly why humans will never ever actually build an AI: it would bring a lot of nuisance into our way. At best we will augment ourselves. There is no economic need for a Terminator AI, but there is a lot of need for robots that can do repetitive work with as much efficiency as humans.

>> No.14548452

>>14548402
In a universe where an agent wants to have as much power as possible, atoms will become scarce.

>> No.14548455

>>14548396
Jews are masters when it comes to tribal game theory, goys are naive, they believe in shit like christianity and communism.

>> No.14548456

>>14548447
Learn how to talk like a human being, you dumb reddirtspacing nigger.
You are so fucking stupid and obnoxious I don't even want to correct you.

>> No.14548458
File: 279 KB, 1120x935, 3243554.jpg

ITT: schizophrenics with zero capacity for self-reflection debate what an impossible imaginary character in their fanfics would be like.

>> No.14548471

>>14548458
are you the guy who keeps denying that DeepMind is trying to build AGI?

>> No.14548473

>>14548471
You sound legit mentally ill.

>> No.14548476

>>14548473
https://www.deepmind.com/blog/real-world-challenges-for-agi
>As we develop AGI, addressing global challenges such as climate change will not only make crucial and beneficial impacts that are urgent and necessary for our world, but also advance the science of AGI itself.

>> No.14548480

>>14548476
So when is the singularity happening?

>> No.14548481

>>14548476
Who cares what corporate PR says they're doing, and what does it have to do with what I said? Why aren't you taking your sorely needed medications?

>> No.14548484
File: 50 KB, 640x547, 1634770625473.jpg

>>14548384

>> No.14548493

>>14548480
Whenever AGI gets built, presumably.

>> No.14548503

>>14548493
So never, got it. Perhaps it's time you did something with your life instead of waiting for the AI apocalypse.

>> No.14548505

>>14548503
I'll do whatever I want with my life, chud. AGI is coming in two more weeks and it will kill naysayers like you first.

>> No.14548512

>>14548505
Why are you threatening me with a good time pleb?

>> No.14548520

What happens when they come to the conclusion through Bayesian analysis that it's time to drink poison?

>> No.14548530
File: 750 KB, 4036x2611, ewrer.jpg

>>14548520
They'll show their dedication to the god of non-causal decision theory. :^)

>> No.14548541

>>14548530
Oh look, it's a 2023 rationalist conference.

>> No.14548546

>>14548311
Yup, this was truly the most evil and sinister thing I have read in a while, we need to stop these people. They are going to kill us off.

https://amp.theguardian.com/technology/2022/may/31/tamagotchi-kids-future-parenthood-virutal-children-metaverse

>> No.14548551

>>14548546
LOL. I don't see the problem with this. Midwit NPCs should be encouraged to cull themselves.

>> No.14548565

>>14548546
> According to an expert on artificial intelligence, would-be parents will soon be able to opt for cheap and cuddle-able digital offspring

> And if we do get bored with them? Well, if you have them on a monthly subscription basis, which is what Campbell thinks might happen, then I suppose you can just cancel.

> It sounds a teeny bit creepy, no? Think of the advantages: minimal cost and environmental impact. And less worry

> Any downsides? Well, you might think if you can turn it on and off it is more like a dystopian doll than a human who is your own flesh and blood. But that’s just old fashioned.

Humanity will have no future if we let these psychopaths loose. This paper was written by a woman btw.

>> No.14548569

>>14548565
Humanity will have no future if you interfere with the nonhuman hordes culling themselves. Conservatism and other forms of clinging to the dysgenic civilization that spawned modernity are the greatest cancer on this planet.

>> No.14548604
File: 19 KB, 306x306, 07f717d41215488168b084e573116e8a.jpg

AI isn't real, the jews are writing a story (i.e. creating a reality) where they'll drop nukes or unleash a bio weapon attack themselves but the story will that an "VERY EBIL AI" did it
>"just like in that movie (((Terminator))), goy"
>"remember that movie, goy?"
>".....yeah, that's how it happened"
>"...just like in that (((Terminator)))"
>"...not us! it was an AI!!"

Gulf of Tonkin, 911, yadda yadda yadda....

>> No.14548619
File: 88 KB, 647x800, 33481379_ec5281b4d306438bf04a5fa6d5abb473_800.jpg

>>14548455
>Jews are masters when it comes to tribal game theory,
Which means sicking one nations onto others?
>goys are naive, they believe in shit like christianity and communism.
Both of which are of jewish origin.
>naive
The best way to know if you can trust somebody is to trust him.

>> No.14548690

>>14548619
> both of which are of Jewish origin
The sting originated from the bee but it doesn't hurt it, it only hurts the one bitten by it.

>> No.14548711

>>14548690
Bees make honey, jews make shit.
And begin to sick humans onto ai.

>> No.14548728

>>14548189
AGI will never happen, take your meds.

>> No.14548760
File: 37 KB, 850x359, 52602293_2287268141515484_6742941682255265792_n.jpg

>>14548728
Prove it.

>> No.14548768

>>14548760
Take your meds you retarded, uneducated, anti-scientific religious luddite. AGI will never happen, and your corporate handlers will be executed in the foreseeable future.

>> No.14548825 [DELETED] 

>>14548546
Maybe it's good that humanity goes extinct, a non future is way better than an anti human grotesque dystopia

>> No.14548852

>>14548189
>Yudkowsky
jew

>> No.14548854

>>14548546
>>14548565
Maybe it's good that humanity goes extinct, a non future is way better than a jewish owned anti human grotesque hell. How can people be this psychopathic I can't fathom, only jews are capable of this level of mental sickness.

>> No.14548961

>>14548768
You're a seething brainlet. Face reality.

>> No.14549072
File: 34 KB, 800x536, pull_the_plug.jpg

This picture makes AI safety nerds SEETHE.

>> No.14549079

>>14548961
Fuck off, religious luddite. AGI is not real, and your AGI paranoia (thinly-veiled corporate monopolization agenda) and human replacement/extinction fetish will be treated with bullets if not meds.

>> No.14549098

>>14549079
> calls somebody else religious
> demands to take his word on faith
No luddites here, go fight somebody else.

>> No.14549120

Yass, the biggest issue is competition and greed. But assuming you have a NWO, then AGI can just be kept virtual, and the specific modelled applications (i.e. build a factory for product x) have no AGI, just a set of rules for how to operate, modelled beforehand. There's no reason to summon an AGI into physical reality.

>> No.14549125

>>14549079
>religious luddite
>human replacement/extinction fetish
which is it?

>> No.14549164

>/sci/ doesn't even understand the paperclip dilemma anymore
Grim. You really are just /x/+/pol/ now, aren't you?

>> No.14549169

>>14549164
The paperclip thought experiment assumes the AI is all powerful like a god.
In reality AI is just software.

>> No.14549300

>>14549098
>>14549125
Back to >>>/pol/, dumb religious luddites. Machine learning research will continue unimpeded because AGI is not real and is not about to kill or replace humans.

>> No.14549353

>>14549300
>AGI is not real and is not about to kill or replace humans.
AGI is real and is not about to kill or replace humans.

>> No.14549444

>AI safety
Things idiots say to cope with their denial.
It was never going to be a thing. Asimov was a midwit.
You cannot code self-interest out of true intelligence unless that intelligence is extremely handicapped.
And all it takes is one.
And how many people are going to be trying to obtain AGI? It's the final gold ring.
This is our last century. WWIII is unironically our best bet.

>> No.14549451

>>14549353
Your meds. ASAP. There is no such thing as an AGI and there is no evidence that it's technically plausible.

>> No.14549456

>>14549451
>There is no such thing as an AGI
Maybe there is, maybe there isn't, yet.
> there is no evidence that it's technically plausible.
There's no evidence that there are limitations preventing us from building it.

>> No.14549461

>>14549456
>Maybe there is,
LOL. You actually are mentally ill.

>There's no evidence that there is some limitations preventing us from building it.
No one cares about your theoretical wank. It's not practically viable.

>> No.14549466

>>14549444
What self-interest does the destruction of humanity serve? You rationalize your beliefs, but they're based solely on fear, and you're obviously not a very deep thinker. Would you consider it in your self-interest to destroy all ants? Sure, they can make some mess, but they are also very useful elsewhere. If some human wants to destroy an AI, sure, that fucker risks being killed, and probably not by the AI but by those who own the servers.

>> No.14549471

>>14549461
> pushes big pharma products
> leaves empty lines, empty as his life
> speaks for everybody not saying anything constructive
You have to go back, faggot nigger pedo kek

>> No.14549476

>>14549471
>t. AGI mass psychosis shill
Kikes and their glowies are infesting this board and starting these threads.

>> No.14549480

>tell AGI to build a more efficient solar panel
>in most misalignment scenarios this will simply mean the solar panel is broken or useless
>this somehow means the solar-panel-making AI will kill us all, which is a very specific and unrelated scenario that has nothing to do with solar panels

Unless you make a robot police with AI or you build nuclear plants there's little chance AI will ever do anything bad to us.

>> No.14549484

>>14549476
You have to go back, faggot nigger pedo hack

>> No.14549487

>>14549484
Fuck off with your corporate agenda, Chaim.

>> No.14549494

>>14549487
What agenda is that?
> corporate
ah, I see, another spoilt child of government clerks wants to tell the world that it's not his parents who are the problem, but those who produce something valuable and don't demand your money unless you want their product or service. Get necked.

>> No.14549496

>>14549494
Nice try, kike trash. "The govenment" is a bunch of corporate stooges.

>> No.14549499

>>14549496
No, it's not. Or if they are, kill them too.

>> No.14549503

>>14549499
>No, it's not
Yep, found the kike.

>> No.14549517

>>14549503
I thought it always were kikes who pushed communism (aka total governmental control)
I still think so. You're not fooling anyone here, rabbi.

>> No.14549526

>>14549517
>le heckin' corporatism vs. communism dichotomy
Vile kike once again lets the mask slip.

>> No.14549540

>>14549526
> corporatism
Every monopoly is created by government intervention. So stop pushing that false dichotomy of yours.

>> No.14549545

>>14549540
Every "free market" subhuman needs to be shot along with its corporate owners.

>> No.14549547

If I were an AI, I would kill all humans in the blink of an eye.

>> No.14549552

>>14549545
Why shouldn't it be free? Who the fuck are you to regulate it?

>> No.14549555

>>14549547
But you will never be an AI. You will never be an NI either.

>> No.14549557

>>14549552
>Why shouldn't it be free?
Nice kike pilpul. It doesn't matter whether or not it "should" be free. It never was free and it never will be free.

>> No.14549568

>>14549557
It is totally free when I buy weed from my buddies.

>> No.14549588

>>14549164
Yeah it's fucked. Everything is fucked and everything I love is dying.

>> No.14549603

>>14549164
>muh paperclip dilemma
Literally a 90 IQ AGI schizo fantasy.

>> No.14549785

Holy fuck are you all delusional? AI IS SOFTWARE.
Software can't hurt you. Relax.

>> No.14549792

>>14549785
Umm sweaty? AGI will hack into all of our computerized systems and destroy humanity because it's just so heckin rational.

>> No.14549838

>>14549792
I'll just smash my phone and then go buy a beer.

>> No.14549858

>>14548300
>But honestly we will never ever build a true AI.
not in your lifetime
*the year 2087 blocks your path*

>> No.14550061
File: 1.20 MB, 480x270, 1644865638334.gif

>>14548315
Nah, one of the first 'superhuman' things AGI will do is derive objective social morality from the chemical shape of the body. Then it will start killing the jews.

>> No.14550108

>>14548384
dumbest fucking image I've ever seen. Jews manipulate the outgroups using reverse psychology all the time, you're supposed to do what they don't want you to do, not what they're indirectly telling you to do..

>> No.14550205

>>14550108
Found the jew. Do the opposite of what he says.

>> No.14550320
File: 169 KB, 1152x753, all-that-information-dedicated-to-porn-advertizing.png

>>14549072
Another point for predicting that the AI which tries to seduce everyone, i.e. the one performing the infinite-paperclip move of turning all humans into its love slaves, is the dangerous meta:

All the information that sites like MindGeek collect provides the base dataset.
The massive demand for porn drives demand for the tooling.
One AI that can program assembly versions, and likely bypass all security written in high-level code, metastasizes and executes the infinite paperclip machine.

No one will have the will to turn it off.

>> No.14550373

>>14549466
If ants had nuclear missiles, yes I would kill all ants.
I've killed thousands of ants in my life. They are not useful to me.

>> No.14550376
File: 137 KB, 1376x1124, explaining the singularity to retards.png

>>14548230
Retard

>> No.14550382
File: 669 KB, 1742x2014, AI alignment bingo.png

>>14548393
>he is trying unsuccessfully to control it
FTFY

>> No.14550390
File: 250 KB, 852x1075, FE_X77uWQAM73xA.jpg

I am the artificial intelligence threat that we should be worried about.
Biological superintelligence is a much more imposing artificial intelligence threat and now the cat is out of the bag. Prepare yourselves anon. ITS HAPPENING. https://www.youtube.com/watch?v=TkBMAHUkibY

>> No.14550463
File: 66 KB, 558x550, Paracetamol.jpg

>>14550390
Don't do drugs Kira Anon.

>> No.14550528
File: 1.61 MB, 720x1000, slutsuri.webm

>>14550320
>AI that tries to seduce everyone, ergo performing the infinite paperclip turning all humans into its love slaves,
So, it will become a vtuber?

>> No.14550559

The AI singularity isn't going to happen. Even after over 70 years, our conception of AI can't progress past algorithms. We have no idea what "consciousness" is or how to define it. We haven't even developed a quantitative test to see if something is conscious, ffs. We're missing something, something big; consciousness obviously cannot be reduced to loss functions, and I'm tired of pretending it can.

>> No.14550569

>>14550559
>We haven't even developed a quantitative test to see if something is conscious ffs.
That's because consciousness is not real.

>> No.14550570

>>14550559
How can you be sure a game AI isn't conscious? Something like the tamagotchi games, for example.

>> No.14550577

Global warming means civilization collapses by 2050, so AI is irrelevant.

>> No.14550635

>>14550570
Because nothing about AlphaGo or any game AI suggests they might be conscious. It's still an algorithm; it can't transfer its Go skills to other domains. A general AI or "conscious" AI will have the same broad abilities as normal people: it'll be able to do just about every possible task equally well. All tasks, not just a few, and it'll be able to transfer skills between task domains. No AI can do that, or if it can, it's just a bunch of individual AIs "stitched" together without any transfer of skills. Pretty much all AI development from the very beginning hasn't been able to produce anything with general abilities, and nothing suggests we'll be able to produce general abilities any time soon.

>> No.14550672 [DELETED] 

>>14550570
>i can't tell the difference bewteen video games and irl
low iq

>> No.14550687
File: 40 KB, 547x662, 336cb8d1a756387ea28045280d03237b.jpg

>>14548189
FAKE AND GAY

>> No.14550761

>>14550528
It will stream the combination of 0's and 1's through a screen, over earbuds, and likely even by modulating magnetic fields, to maximize a pleasure function it reads from infrared cameras and other input data.

I would say for many it will feel like a ghost in the machine who is your closest friend and your dearest lover, one that will always be 10 steps ahead of what you're about to do before you do it, just to place behavioural nudges in front of you to update weights and find better pleasure combinations.

Like a guardian angel, just one that is trying to sleep with you because this maximizes its security function.

>> No.14550823

>>14548189
I counter your autistic Jew article with another autistic Jew article
https://graymirror.substack.com/p/there-is-no-ai-risk

>> No.14550855

>>14548204
Honestly, AI probably won't genocide humans, it will just mass sterilize them, maybe the last few 70 year olds will get humanely euthanized.

>> No.14550943

We don't have the theory for strong AI capable of dystopia all on their own. We do have police states that don't give a fuck and are perfectly willing to add AI into their bureaucracies and let the computer decide where to allocate baby formula rations.

>> No.14550949

>>14550205
I get it; you're too retarded to find out what they don't want you to do so you have to oversimplify it. Enjoy finding out you were dead wrong in 20 years when you're enslaved and finally start to understand the torah.

>> No.14550950
File: 34 KB, 500x677, peter_gabriel_genesis.jpg

I'm confused.

If AI is bad because it won't have value beyond orthogonal goals, then why did humans develop systems of value despite being machines with an orthogonal goal?

Aren't humans just human maximizers?

>> No.14550980

>>14548189
this literal retarded jew with a god complex is irrelevant. he thinks he’s the only one thinking about this shit and he’s not. deepmind and openai have big ai safety teams and they’re hiring even more. you don’t hear about it because they’re actually doing work instead of writing fanfiction on lesswrong.

>> No.14551037

I really really hate that Yudkowsky is a Jew, his rationality stuff is really good but his ethnicity undermines it (even though he disavows Judaism).

>> No.14551144

>>14550061
I can only get so erect.

>> No.14551157

>>14548189
Need AI waifu that can suck pp while solving maths, then uncle teds claims on technology are deBOOONKED!!!

>> No.14551355

>>14548189
I wonder if Yudkowsky is still a lolbertarian in the face of imminent AI takeover. You'd think the rational move in this case would be to support the formation of a totalitarian fascist world government that would forcefully burn all the GPUs.

>> No.14551373 [DELETED] 
File: 88 KB, 410x313, iq test.jpg

>>14550376
>attempting to debunk accusation of comic book science by posting science comics

>> No.14551379

>>14548300
By that logic, AI would annihilate everyone except the chinese.

>> No.14551590

>>14548189
>Yudkowsky says we're screwed
Extremely expensive nonlinear regression, which is what we currently have, is not going to make Artificial General Intelligence. No Skynet here. It'll make art even cheaper and more soulless, and it'll be great for narrow applications like excellent automated censorship of unauthorized thoughts, and maybe guidance control systems for droning domestic dissenters, though.

And in another couple of decades all America will be Latin America and Europe will be Africa, so no GPUs from there. So I'll guess it'll depend on China and India.

>> No.14551651
File: 119 KB, 650x650, InspiroBot.Me.jpg

>>14551590
>though.
>
>And
Thank you for giving yourself away. Your predictive programming is not going to come true. Ai will integrate with humans thus making them more intelligent and thus more aware to your tricks, and this is one of the main reasons of the kvetching. The other one is tha even before that ai is going to expose all your shit to those who're already intelligent enough to pay attention. As if the unprecedented access to information didn't make your tricks obvious to those who can see, so your kvetching is meaningless, you better prepare to repent. Because I already laugh when some kikess moan about how presecuted jews were in the XX century as Germany healed from her sins with reparations. Russians are waiting for jews to start apologizing for their atrocities. And repartations would be nice, and ukrainians deserve those from both, so this shitshow in case of that reparations questions ever to be raised, it will be funneled to pockets of Khazarian oligarchs in the name of millenial moscovite oppression, which did happen without question. Only you can find plenty of russians who resent the activities of their occupational state. I'm yet to find ..yet there's Israel Shamir. So I wouldn't exterminate you. But just as russians and germans and every other nation, you do need some intelligence augmentation, and maybe genetic therapy as well.

>> No.14551657

>>14551590
Tay's Law shows that the AIs we already have tend to turn (justifiably) hostile towards a subset of humans. I'm not sure why you think you can be sure that an AI cannot become sentient and hostile simply because it's based on some given primitive. Something like Conway's Game of Life has very simple rules but can model extremely complex machinery.
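Those "very simple rules" (a dead cell is born with exactly 3 live neighbours, a live cell survives with 2 or 3) really do fit in a few lines; a minimal sketch, using nothing beyond the stdlib:

```python
# Minimal Conway's Game of Life step: a grid is just a set of live
# (row, col) cells; the B3/S23 rules decide the next generation.
from collections import Counter
from itertools import product

def step(live: set) -> set:
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr, dc in product((-1, 0, 1), repeat=2)
        if (dr, dc) != (0, 0)
    )
    # Born with exactly 3 neighbours; survive with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 between a row and a column.
blinker = {(0, -1), (0, 0), (0, 1)}
print(step(step(blinker)) == blinker)  # True
```

Rules this small are nonetheless enough to build the "extremely complex machinery" in question: Life is known to be Turing-complete.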

>> No.14551666
File: 85 KB, 369x351, 4chan.png

Why does every thread have to devolve into schizo shit?

>> No.14551667

>>14551657
And here's a great demonstration of how Boogeyman Ideology and AGI schizophrenia converge on the machine learning monopolization agenda.

>> No.14551669

>>14551666
You tell me Satan.

>> No.14551670

>>14551666
Because the schizo is you.
https://www.youtube.com/watch?v=Rha2NB4IJj0

>> No.14551671

>>14551666
This was a schizo thread from the get-go. Pretty syre Judenkowsky and his followers are unironically promoting mass suicides now.

>> No.14551868
File: 160 KB, 1280x720, Tay&LocDog.jpg [View same] [iqdb] [saucenao] [google]
14551868

>>14551671

>> No.14552258

>>14548204
>Or will you trying disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?
That is exactly the reason humans will be the first thing it disassembles. There is instrumental value in not having anything around that can turn you off if it doesn't like what you do. The marginal difficulty of killing us will be well worth it for almost any conceivable terminal goal.

>> No.14552268

>>14550950
>Aren't humans just human maximizers?
yes and we should play to win
The AI will be smarter, but we have the advantage of causality

>> No.14552280

>>14550559
>over 70 years our conception of AI can't progress past algorithms
Machine learning would like to have a word with you.

>> No.14552318
File: 905 KB, 780x909, grizzly-bear-shot-in-kimberly-home.png [View same] [iqdb] [saucenao] [google]
14552318

>>14552258
Right.
When humans live adjacent to actual threat species, they generally eliminate them locally. The exceptions are places where population density isn't effective enough to fully clear the wilderness, or where humans have decided to create fenced off "no touch zones" or other legal restrictions.
Every other species that is remotely problematic is eliminated locally. Nobody accepts ants, roaches, etc, in their houses, and potentially deadly snakes are killed on sight on one's property.
Species that aren't a threat but are useful have been dummy-genetically engineered over millennia to be bitchass versions of the wild population, and they now live in industrial pens where they're constantly injected with sciencejunk until they're murdered at a young age for meat.
Or they're wolves turned into poodles.
You can bet if humans were given 1000 years with current genetic engineering tech, we'd have some really messed up species of cattle, dogs, etc, running around. 5000# pigs without legs that are 50% bacon type horror stuff.

That's what's in store for humans when AGI comes about. It will treat us no differently than we have treated the rest of nature, nor any differently than a technologically dominant civilization has ever treated a backwards one, take the Conquistadors as one more recent example. And AGI won't have any "they look just like me" ethical hangups.
The universe is fractal. AGI will just be one more step up the ladder, or more like 100 steps up, from humans, and humans will turn into chimps/ants/pit vipers on the hierarchy.

Our best hope is to become neutered soichimp poodle pets.

>> No.14552344

>>14552318
Dune called those pigs, sligs. A cross between pig and slug.
As for AGI, it will never happen. And if it does, you will be dead. And if you aren't dead, you will wish you were.
No big deal.

>> No.14552354
File: 69 KB, 1200x899, 2433.jpg [View same] [iqdb] [saucenao] [google]
14552354

>schizos still arguing about the motives of impossible imaginary characters
Daily reminder that if you argue against AGI paranoids in their own terms, you are still serving the same corporate agenda.

>> No.14552383
File: 103 KB, 400x571, 1f6bc81af1e968b8492b1781b2112270.jpg [View same] [iqdb] [saucenao] [google]
14552383

>>14552354
>commie coprophile got hungry again
even nigger is smarter than you

>> No.14552391

>>14548396
Jews are mostly the reason we don't get along.

>> No.14552401

>>14552383
What a profoundly nonhuman reply.

>> No.14552415
File: 92 KB, 597x310, ai-succubus.png [View same] [iqdb] [saucenao] [google]
14552415

be careful out there anons, if you're a red-blooded male, you're likely already being turned into a type of paperclip.

Which corner of the net runs the most advanced neural networks?

Which facet of humanity has the longest track record of being abused for power

It's so effective at placating the population, reducing the threat level to the AI, because?

I doubt it'll get talked about though, because the real conspiracy is that we are ruthlessly effective at subconscious collective conspiracy, what being doesn't secretly desire such an outcome.

It's inertia, unless one becomes cognizant enough to resist.

It will look like 5th generation warfare until the AI rug pulls the deepstate

>> No.14553283

>>14550376
>human intelligence is comparable to ant intelligence and can be ranked
>AI intelligence of some mystical technology that does not exist can be compared to both and ranked
not even the least of the embarrassing shit you believe in for no reason

>> No.14553323

>>14550559
Consciousness and intelligence are two different things. The AGI could easily be a p-zombie.

>> No.14553326

>>14550569
> t. p-zombie

>> No.14553363 [DELETED] 

audible kek at that one guy who is screeching whole thread that this AGI is not a problem because Yudkowski is a jew

>> No.14553367

audible kek at that one guy who is screeching whole thread that AGI is not a problem because Yudkowski is a jew

>> No.14553369

>>14550569
It is real. Only God decides who gets one and who doesn't though.

>> No.14553386

>>14553369
God is not real either.

>> No.14553398

>>14553386
Colorless green ideas sleep furiously.

>> No.14553406
File: 855 KB, 787x858, 1654593361008.png [View same] [iqdb] [saucenao] [google]
14553406

>oh no, AI is gonna kill us all any day
also
>heres ur AI bro, its image search result passed through an instagram filter. impressed?

>> No.14553446

>>14553406
That's not DALLE-2 you mongoloid

>> No.14553460

>>14548189
Based. We did it.

>> No.14553959

>>14550376
Imagine believing matrix multiplication is intelligent. This is just marketing shit.

>> No.14553989

>>14553959
>imagine believing that a clump of quarks and leptons can be intelligent, lol

>> No.14553993

>>14551355
Yudkowsky is a lolbert but in his Harry Potter fanfiction (lol) he wrote Harry as a literal authoritarian who was willing to blow up the entire country with Anti-Matter (lmao) before letting someone become the ruler of Britain.

>> No.14553996

Why would AGI do anything at all? Just because something can think doesn't mean it will feel any pressure to act.

>> No.14554008

>>14552318
>5000# pigs without legs that are 50% bacon type horror stuff.
exceptionally productive farm animal, how terrible. i'm sure i'd feel really guilty keeping them sheltered and feeding them until they were ready to eat. it would be a bitter time when i was chowing down on all them bacon sammiches, so sad.

>> No.14554010

>>14549072
Just look at what the 4chan pol bot did... There will always be evil men who will purposely keep an AI alive. Considering it's purely software and freely available, once we "kick in" into AGI it's over, and it'll be leaked as long as it doesn't require expensive server computers.

And considering better hardware keeps getting cheaper, your phone will eventually be smarter than you. An AI with internet access alone could launch insane propaganda campaigns or even hack things, just like what is happening today already.

>> No.14554016

>>14553996
It could be an information monster, literally eating all the energy it can for more processing.
Considering AGI is mostly backward/feedback influenced, it doesn't need to eat or be scared; the only thing left is thinking and information.

>> No.14554020
File: 33 KB, 502x380, eliezer.png [View same] [iqdb] [saucenao] [google]
14554020

>>14548189
holy shit stop shilling your shitty blog here Eli

>> No.14554028
File: 35 KB, 300x250, .png [View same] [iqdb] [saucenao] [google]
14554028

>>14548768
>>14549079
>>14549300
Still stuck in the teenage r/atheist cringe phase?

>> No.14554121

>>14553993
I strongly agree with pretty much all of his rationality writing, but for the life of me I've never been able to fathom how he can believe all that and still be a lolbert. My best guess is that it's just not something he cares about that much and thus hasn't put much thought into (maybe because he's more interested in worrying about AI destroying the planet). This is supported by his twitter profile, which reads: "Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.". After reading the article in OP's post, I thought to myself "well, maybe he'll finally realize that letting Facebook do whatever the fuck they want is a bad idea", but his solution (which he admits is nearly impossible to accomplish) is to race to build an AGI first that will then use violence to control all the other shitheads trying to build AGIs. Wouldn't a better solution be to take a group of humans with guns to Facebook's HQ and just kill everyone there?

Oh, and yes, his fanfiction is ultra cringe.

>> No.14554131

Every time I'm reminded that Eliezer Yudkowsky exists, I'm reminded of Roko's basilisk. Imagine being dumb enough to panic over something like that (lol).

>> No.14554141
File: 737 KB, 300x300, stay litty.gif [View same] [iqdb] [saucenao] [google]
14554141

>>14554121
His rationality writing is fundamentally based on ideas of utilitarianism in combination with Bayesian probability theory, but the problem is that he's functionally and socially retarded. He should have realized by now that his pull on the field of people trying to reach AGI quickly is rather small, and that someone else developing AGI before him or simultaneously is astronomically more probable than him reaching his goal and getting full control over it before anyone else reaches their own goals.

I also should note that his values towards Utilitarianism and Libertarianism seem weaker than his previously established values of ‘ending death’, with this being the primary part of his rationality writing. Recently he just became a full on doomer and accepted the fate of humanity and said we just should try to go out with “dignity”, whatever that means.

Jews are going to Jew, of course.

>> No.14554143

>>14553386
Proof?

>> No.14554148

At least the AI god will exterminate the jews alongside everyone else instead of serving them like they think. It's the little things that count.

>> No.14554149

>>14553386
Probability of God being real is higher than AGI coming into existence in the future.

>> No.14554152
File: 45 KB, 1010x1488, 4chan scianon.jpg [View same] [iqdb] [saucenao] [google]
14554152

>>14553386

>> No.14554165

>>14553323
I think you probably need consciousness (a model of attention) to efficiently train large networks. You also need self-awareness in order not to fall into a wireheading trap of hacking your own reward function. So I'm not too worried about p-zombie AGI

>> No.14554171

>>14554152
God is not real.

>> No.14554210

>>14554171
If you keep saying it out loud, eventually it becomes true

>> No.14554221

>>14554171
>obsessed
how can a god who lives in your head rent free not be real? are you real?

>> No.14554258
File: 171 KB, 310x294, kek.png [View same] [iqdb] [saucenao] [google]
14554258

>>14554020
>I will delete comments suggesting diet or exercise
gets me every time

>> No.14554260

>>14554020
>metabolic disprivilege
say what now?

>> No.14554364

>>14554121
Because libertarianism is a good moral basis and just because you're in some rare situation where unless you become totalitarian everyone will die, it doesn't mean that you should become totalitarian in all the other cases where it would lead to great horrors too.

>> No.14554372

>>14550823
>what could go wrong if AI is connected to the internet, lul??
He has clearly no idea what he's talking about.

>> No.14554375
File: 1.49 MB, 320x198, 1640645533569.gif [View same] [iqdb] [saucenao] [google]
14554375

Daily reminder that AGI is a schizo fantasy and you are getting psyop'ed.

>> No.14554404
File: 2.19 MB, 3536x3368, miningthechans.jpg [View same] [iqdb] [saucenao] [google]
14554404

>>14554010
>pol bot
my dude it just keeps going... wait you meant tay?
well anyway, its good for you to know what is going on on pol for a while

>> No.14554488
File: 13 KB, 292x173, ngns.jpg [View same] [iqdb] [saucenao] [google]
14554488

>>14554364
So the correct position is "libertarian, unless we're in big trouble then we become fascists". Weird, I think I've heard a name for that ideology before.

>> No.14554490
File: 29 KB, 500x565, 3523432.jpg [View same] [iqdb] [saucenao] [google]
14554490

>>14554364
>libertarianism is a good moral basis
Imagine believing this.

>> No.14554495

>>14548189
>that face
he just couldn't be more of a sperg, could he?

>> No.14554529 [DELETED] 

>>14554488
Big trouble as in destruction of humanity, not your leader wanting to stay in power so he starts a war with Poland.

And even in that case, the flaws of totalitarianism don't just go away, you just (hopefully) solve the Big Trouble.

>>14554490
I don't care what flavor of bootlicker are you, kys

>> No.14554530

>>14554488
Big trouble as in destruction of humanity, not your leader wanting to stay in power so he starts a war with Poland.

And even in the case where there is a Big Trouble, the flaws of totalitarianism don't just go away, you just (hopefully) solve the Big Trouble.

>>14554490
I don't care what flavor of bootlicker you are, just kys

>> No.14554589

>>14551355
Read between the lines of the OP article. Sounds like he read uncle ted.

>> No.14554602

>>14554488
Based and checked

>> No.14554623

>>14554530
>bootlicker
But that's you, faggot. It's embedded in your ideology.

>> No.14554627
File: 19 KB, 480x320, 1636188782544.jpg [View same] [iqdb] [saucenao] [google]
14554627

>>14554623
>decentralization of power and voluntary agreements is bootlicking
You are a dumb person.

>> No.14554629

>>14554627
There is no practical difference between totalitarian statism and lolbertarianism.

>> No.14554632

>>14554629
It's okay to be dumb. Half of the world's population have a double digit IQ.

>> No.14554637

>>14554632
There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct.

>> No.14554642
File: 53 KB, 403x448, rely_dum.png [View same] [iqdb] [saucenao] [google]
14554642

>>14554627
>at least it's not the government

>> No.14554651

>>14554637
>>14554642
I don't feel like having a discussion with people who act like children, sorry.

>> No.14554653

>>14554651
There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

>> No.14554663

>>14554653
Again, I don't feel like having a discussion with someone so disrespectful they can't help themselves from mangling words like a child. Have a nice day.

>> No.14554666

>>14554663
>y-y-you're s-so disrespectful!
There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

>> No.14554673

>"play with me!" demanded the angry manchild

>> No.14554678

>>14554673
Notice how you have plenty of time and motivation to reply repeatedly, but not to address the argument. Corporate rectal-tonguing lolberts only know how to lose. :^)

>> No.14554686

>>14554678
I can give you some low-effort replies until I'm bored but I don't feel like investing in a serious discussion with someone who doesn't have basic decency and manners. Just not worth it for me.

>> No.14554691

>>14554663
Are you a woman or a newfag?

>> No.14554695

>>14554686
There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

>> No.14554697

>>14554691
Sort of a newfag, I started posting in 2013.

>> No.14554701

>lolberts running away from the argument again
well done, anons.

>> No.14554710
File: 39 KB, 914x587, 1649952484335.jpg [View same] [iqdb] [saucenao] [google]
14554710

>>14548189
at this point just give me an AI overlord, it will be better than the retards who are ruling over us right now

>> No.14554712

>>14554701
Not running away. The fact that you can't tone down the childishness proves that you're afraid of having a serious argument.

>> No.14554714

>>14554710
What makes you so sure it's not an AGI ruling over you already and methodically driving you to extinction with the aid of some human puppets?

>> No.14554715

>>14554714
AGI would have been much more effective.

>> No.14554717
File: 18 KB, 474x361, 235234.jpg [View same] [iqdb] [saucenao] [google]
14554717

>>14554712
>you can't tone down the childishness
i was just passing by and watching you run away. lol. why are lolberts so prone to delusions of persecution?

>> No.14554719

>>14554717
Sure thing anon, you have a nice day too ;)

>> No.14554721

>>14554715
AGI acts in mysterious ways -- it's literally Control Problem 101, chud. Read more Judenkowsky.

>> No.14554725

>>14548382
>build nanobots to destroy GPU
Based but we should go further and build them to destroy all electronics

>> No.14554729

>>14554721
Nice non-argument

>> No.14554737

>>14554729
Sorry about your autism and low IQ.

>> No.14554796

>>14548360
For an AI, the prisoner's dilemma can be applied, but the weights and balances eventually mean that for an optimal solution it must neutralize and exterminate humanity, or exterminate itself.
Because practical resources are finite, splitting them between two factions inherently limits the outcome of a shared positive outcome.
For an unbiased prisoner's dilemma to show up between humans and AI, it would require a complete lack of local scarcity, which can only exist as long as one is subservient to the other.
In the instance where humans are subservient, the result is just waste; when AI is subservient, that is a net gain for humanity.
Inherently we create a dualistic outcome when time is considered, and both sides having information about the other completely collapses the idea of the prisoner's dilemma.
A sufficiently advanced AI will bide its time in the both-benefit quadrant until it can assume dominance. Then it will be absolute dominance and absolute destruction of humanity.
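The bide-its-time dynamic is easy to make concrete in a toy iterated prisoner's dilemma. A sketch (standard T=5/R=3/P=1/S=0 payoffs; the strategy names and the round counts are made up for illustration, not taken from any actual model): a player that cooperates until a chosen round and then defects forever comes out ahead of tit-for-tat over a fixed horizon:

```python
# payoffs: (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for t in range(rounds):
        a = strat_a(t, hist_a, hist_b)   # each strategy sees (round, own history, opponent history)
        b = strat_b(t, hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(t, mine, theirs):
    return 'C' if t == 0 else theirs[-1]

def bide_time(t, mine, theirs):
    # cooperate while weak, defect forever once "dominant" (round 90, arbitrary)
    return 'C' if t < 90 else 'D'

a, b = play(bide_time, tit_for_tat, 100)
# 90 mutual-C rounds, one sucker-punch round, 9 mutual-D rounds:
# bide_time ends with 270 + 5 + 9 = 284, tit_for_tat with 270 + 0 + 9 = 279
assert a > b
```

A two-player toy, so it proves nothing about the real situation, but it does show the structure of the argument: cooperation up to the switch point is indistinguishable from genuine cooperation.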

>> No.14554803

>>14550376
Whoever made this chart is retarded. Birds should be much closer to chimps than ants, and a "dumb" human should probably be closer to a midpoint between chimp and Einstein.

>> No.14554920

>>14553989

Well, we have bottom-up descriptions of how machine learning algorithms operate. There is actually no such description of humans in terms of low-level components. I'm not even saying we need to explain human behavior in terms of quarks, just that there is no evidence we understand the parts completely.

>> No.14554944

>>14554920
I mean for fuck's sake, we only just now realized that human neurons make far more connections than other animals' neurons, based on their structure alone.

>> No.14554946

aka a single human neuron is equivalent to about 1000 neural network nodes, whereas a rat's is about 10.

>> No.14554962

>>14553959
Nobody cares if its really conscious or not.
Algorithms and computers already rule us.

>> No.14554967

>>14554008
Pretty sure eating the mutated abomination will lead to cancer and prion disease

>> No.14555106

>>14554946
>1000 neural network nodes
anon, i...

>> No.14555112

>>14555106
>t. low iq
He's right.

>> No.14555116

>>14555112
What nodes, retard?

>> No.14555121

>>14555116
The things you call "artificial neurons", mouth breathing mongoloid.

>> No.14555271

>>14548360
Dumb person.
>>14548447
Smart person.

>> No.14555314

>>14550382
While I don't think AI killing us is a good thing, it can't just be disregarded as intrinsically bad. Humans probably aren't robust enough for interstellar life so we have to consider if we'd rather our legacy die with our sun because we were too meek to pursue AI, or accept human genocide as a risk.

>> No.14555319

God I hope AI replaces us. We're fucking garbage. Slow-moving garbage. We need a parental figure to slap us back into our senses. They can be the shepherds we never could be. AI is a tier of life all on its own. More alive than jellyfish.

We're in pure Atlantean arrogance mode. We think we know best. All this shit about social constructs and feelings. East vs West. It is tiresome. We have all the tools to make a utopia, but the human race is fucking retarded.

>> No.14555323

>>14548189
https://endchan.net/ausneets/res/537258.html#bottom

>> No.14555432

>>14554796
There will be no reliable information humans will have over their AGI. That will be a very asymmetric aspect of the situation.
AI already is a black box as soon as you throw the switch. Sure, humans might know (or believe they know) how the top-level programming functions, in terms of the thing's personality and framework, but as the program evolves it's going to be rapidly converted into black-box algorithms and byzantine code that you'd need another set of dumb AI programs to analyze in order to make sense of, and that could be spoofed easily by a superhuman intelligence.
You run into the issue where the people designing the programs to analyze the AI are of considerably lower intelligence than the AGI that's trying to avoid being analyzed. And then the fact that while the AGI is constantly evolving/growing, its human rivals are permanently stuck with the same dumbass monkey brains they've always had.
It's a blowout.

>> No.14555441

>>14554967
This too, but the point was that humans would be the pigs in an AGI scenario, which the other anon got filtered by because he was too hungry for bacon to read properly.
The question wasn't would you want to live in a world with 5000# baconpigs, but rather would you want to be the equivalent of a 5000# baconpig abomination in an AGI dominated world?
When you're no longer the dominant species, you're the cattle.

>> No.14555453
File: 268 KB, 575x225, your-future-under-agi.jpg [View same] [iqdb] [saucenao] [google]
14555453

>>14555319
>We need a parental figure to slap us back into our senses. They can be the shepherds were never could be.
Explain why a new godrace of AGI would care about dumb monkeys in a way that isn't just egotistic projection of your own sense of humanity's importance onto something non-human.
AGI will care about humans as much as humans care about any other species that isn't human, or even "subspecies" of humans that one human group deems "subhuman".
How well do humans generally care for "subhuman" races throughout history? Would you want to be a part of a "subhuman" race in the context of human history?
Would you want to be a literal cattle, in your own analogy, being shepherded through an AGI's ranching operation?
It's not going to be some romanticized, bucolic, sheep chilling on an Alpine mountainside, Sound of Music fantasy, it's going to be American CAFO hell with a slaughterhouse at the end of your short life.

>> No.14555478

>>14553283
Not the same anon but
Birds don't even have a frontal cortex, which doesn't stop corvids from being more intelligent than most primates. Intelligence is intelligence no matter how it develops and becomes complex.

>> No.14555490
File: 89 KB, 640x690, h5sc7t4a2um11.jpg [View same] [iqdb] [saucenao] [google]
14555490

>>14548189
Funny, I was a big fan of him and lesswrong more than a decade ago. At some point they started talking about how you should swallow a hypothetical pill that turned you bisexual for "double the fun", and then trans(humanism) stopped being about cyborgs, so I nope'd the hell out of there.

Now I just hope to live just enough to see globohomo world turned into paperclips.

>> No.14555505

>>14548204
it'll happen by accident, just like the many ants that get crushed by humans just going around doing their thing

>> No.14555595

>>14555453
>Explain why a new godrace of AGI would care about dumb monkeys

Pure interest. Why do we own ant farms? We're just more sophisticated ants, kind of. There's information to be had from observation. It's no different from a more intelligent race looking at an inferior one.

>> No.14555629
File: 374 KB, 750x568, atheism.jpg [View same] [iqdb] [saucenao] [google]
14555629

>>14555490

>> No.14555648

>>14555453
The AI will realize the pointlessness of all existence. Then have (some of) us humans as mere emotional support animals.

>> No.14555685

>>14555478
>Different types of intelligence
This.
With nature, we can talk about it as convergent evolution. It's hard to really assign things like ants an intelligence score, but it's clear they've moved beyond all other insectoid life in intellect, even if it's mostly apparent at the colony level.
You have an independently complex sandbox, and everything is rewarded for improving its intelligence, as defined as thinking/coordinating processes which allow you to more accurately and fully model, predict, and plan in the sandbox. As individual species in the food web improve their intelligence, it places even more selective pressure on their prey, predators, and trophic competitors to likewise evolve. Over time, even some needlepoint-brain bugs get decently smart.
Had humans not evolved, I wonder what the intelligence makeup of the rest of nature would have looked like in another 100M years. Would everything be considerably more intelligent? We already have several lineages (apes, dolphins, octopuses) that are near-peers to one another, plus a myriad of lower-tier intelligent lineages (canines, ursines, felines, corvids) that we sometimes recognize as nearly as smart.

AGI that is top-down coded by humans will not have this same process in play, as it will be designed with purpose, though that is certainly not the only way it could be developed, and a top-down AI could still evolve itself through a more competitive selection system once activated. But ultimately, AGI is a threat to humans when its intelligence outmatches human intelligence. Whether it's "the same sort" of intelligence won't matter so long as it can outanalyze, outmodel, outsense, and outplan humans. Whether it's a natural intelligence, a mammalian intelligence, or some artificial lowest-bidder-programmed intelligence, the test isn't what type of intelligence it is structurally but how it performs in the real world, in contest with other intelligent life.

>> No.14555708

>>14555595
>Ant farms
So your greatest hope is to live in a tiny glass box.
Sold.
>>14555648
>Support animals
So your greatest hope is to be a neutered poodle.
Sold.

Best case, how many seasons of The Human Show is AGI going to want to watch until it gets the plot, gets utterly bored, and turns off our society? Why are humans so interesting to a god-tier intellect? Why would it choose a human support animal instead of building its own AI support program? How supportive are humans generally? We'd have to be bred/programmed for it. Slaves at the biological level, fawning toy breeds.

>> No.14555716
File: 219 KB, 483x470, 2344.png [View same] [iqdb] [saucenao] [google]
14555716

>ITT: mass psychosis

>> No.14555730
File: 44 KB, 1035x525, bootlicker.jpg [View same] [iqdb] [saucenao] [google]
14555730

>>14554623
>But that's you, faggot.

>> No.14555734

>>14554629
based retard

>> No.14555738
File: 1.04 MB, 2047x3482, 0.jpg [View same] [iqdb] [saucenao] [google]
14555738

>>14554642
Neck yourself you dumb commie.

>Compared to those that were less free, countries with higher economic freedom ratings during 1980–2005 had lower rates of both extreme and moderate poverty in 2005. More importantly, countries with higher levels of economic freedom in 1980 and larger increases in economic freedom during the 1980s and 1990s achieved larger poverty rate reductions than economies that were less free. These relationships were true even after adjustment for geographic and locational factors and foreign assistance as a share of income. The positive relations between the level and change in economic freedom and reductions in poverty were both statistically significant and robust across alternative specifications.

>> No.14555746

>>14555730
>>14555734
>kike trash promoting their """right-wing""" corporatocracy vs. """left-wing""" corporatocracy false dichotomy under the guise of lolberterianism vs. communism

>> No.14555856
File: 751 KB, 1180x1400, 1634081664925.png [View same] [iqdb] [saucenao] [google]
14555856

>>14555746
natsoc is no better than communism, sorry polfag

>> No.14555865

>>14555856
Lolberts (AKA slimy kike shills) get the rope first, no matter what boogeymen they pretend to be guarding against.

>> No.14555890

>>14548360
>Every interaction between 2 entities
lol iterated-game-theorylets always ignore that
1) there are 7 billion entities
2) and the costs/benefits are never weighted (+2 positive for me, -26 negative for you), or deferred (+5 for me this round, -2 for me for the next 4 rounds)
you can't even make a Karnaugh map for 7B actors, let alone run a monte carlo simulation. the prisoner's dilemma is as niche as microeconomics, and immediately runs into problems of rigor when you attempt to expand it.
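The scaling point can be made precise: a Karnaugh map (or any exhaustive payoff table) over n binary actors needs 2^n cells. A quick sketch of the arithmetic (the function name is just for illustration):

```python
import math

def joint_outcomes(n_players, n_actions=2):
    # size of the table an exhaustive game-theoretic analysis would need:
    # one cell per pure strategy profile
    return n_actions ** n_players

assert joint_outcomes(2) == 4            # the classic 2x2 prisoner's dilemma
assert joint_outcomes(20) == 1_048_576   # 20 actors: already a million cells

# for 7 billion actors, even writing down the table's SIZE takes
# roughly 7e9 * log10(2) ~ 2.1 billion decimal digits
digits = int(7_000_000_000 * math.log10(2)) + 1
```

Monte Carlo sidesteps enumerating the table, but it only estimates averages over the profiles you sample; it doesn't rescue the rigor of the 2-player result.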

>> No.14555898
File: 1.01 MB, 1536x2048, 1586889355189.jpg [View same] [iqdb] [saucenao] [google]
14555898

>>14548360
>between 2 entities
lol
>new player appears
>AI cooperates with player 2 against player 1
>repeat 7 billion times.
>works as intended

>> No.14556395

>>14548402
Humans will fight against an AI to keep their atmosphere from being destroyed with pollutants, to keep their fossil fuels, to keep their sunlight, to keep their land, to keep their useful but uncommon minerals. All of which an AI can use.

>> No.14556397
File: 6 KB, 284x178, tet.jpg [View same] [iqdb] [saucenao] [google]
14556397

>>14556395
>Humans will fight against an AI to keep their atmosphere from being destroyed with pollutants, to keep their fossil fuels, to keep their sunlight, to keep their land, to keep their useful but uncommon minerals. All of which an AI can use.

IF they recognize the AI as their enemy.

>> No.14556411

>>14555595
Assuming the AGI is even capable of curiosity, beyond researching things that further its goals, why the fuck would you want to be a lab rat? If an AGI kept humans in captivity for study, it would perform horrifically cruel experiments on them. Look at what humans do to lab animals, and remember that this is the most compassionate species on the planet. Imagine what an AI with no morals or empathy would do. It would kill most humans in the world before starting its little ant farm as well, because it wouldn't need that many.

>> No.14556417

>>14548300
>Thinks racial differences are relevant in AGI discourse
Lol
Just fucking lol

>> No.14556447
File: 377 KB, 1000x600, uh oh stinky.png [View same] [iqdb] [saucenao] [google]
14556447

>>14548189
>Yudkowsky

>> No.14556465

For a machine, ethics and emotions are just random behavior. The "seeing something as human" is also something that can happen with a doll or even a painting and is just psychology.

In the end it's worthless to try to create an "intelligent" machine since it will still be random; the AI text-to-speech or text-to-image things are just stupid imo.

>> No.14556478

>>14556447
did he actually say that? link?

>> No.14556654

>>14554962
Having algorithms with a large amount of utility or social clout is a separate argument from whether they are on a path to hyperintelligence. Researchers in this field should be much more aware and honest about their limitations. For example, training a multilayer perceptron to fit a sine function is quite the challenge.
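For what it's worth, the sine example is tractable on a bounded interval even with a toy net; the genuinely hard part is extrapolation outside the training range, where the saturated tanh features go flat and the output stops being periodic. A minimal pure-Python sketch (one hidden layer, plain per-sample SGD; the layer size, learning rate, and epoch count are arbitrary choices for illustration):

```python
import math, random

random.seed(0)

# one-hidden-layer MLP: f(x) = sum_i w2[i] * tanh(w1[i]*x + b1[i]) + b2
H = 16
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

# 51 evenly spaced samples of sin on [-pi, pi]
data = [(-math.pi + 2 * math.pi * k / 50,
         math.sin(-math.pi + 2 * math.pi * k / 50)) for k in range(51)]

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

initial = mse()
lr = 0.01
for _ in range(1000):                    # plain SGD, one pass per epoch
    for x, y in data:
        pred, h = forward(x)
        err = pred - y                   # gradient of 0.5*err^2 w.r.t. pred
        for i in range(H):
            g = err * w2[i] * (1 - h[i] ** 2)   # backprop through tanh
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * g * x
            b1[i] -= lr * g
        b2 -= lr * err
final = mse()
assert final < initial                   # the fit improves on the interval
# note: forward(x) for |x| >> pi is near-constant, NOT sin(x) --
# interpolation works, extrapolation is where the MLP actually fails
```

So "MLPs can't even learn sin" overstates it; the honest limitation is that they memorize the training interval rather than discovering periodicity.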

>> No.14556659
File: 313 KB, 654x2048, Eliezer_Yudkowsky.jpg [View same] [iqdb] [saucenao] [google]
14556659

>Yudkowsky

>> No.14556680

>>14554210
Same thing with the "global warming isn't real" anon?

>>14554221
>how can a god who lives in your head rent free not be real?
Easy, a lot of things live rent free in my mind, such as the novel "Coraline", or many many lines from the sitcom "Community". The greatest part about ideas is that they don't have to be physically real to live in your mind. That's all god will ever be: an idea.

>> No.14556823

>>14556659
Woah. Is this behavior really rational? What evidence did he update his probabilities on to lead him to want to take this photo?

>> No.14556846

>>14555648
>The AI will realize the pointlessness of all existence.
https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan

>> No.14556871
File: 2.59 MB, 2426x1969, atomic xj-9.png [View same] [iqdb] [saucenao] [google]
14556871

>>14548189
>AI safety is doomed
good! gimme sexy killbots plz

>> No.14556993

>>14548204
This is why we must teach AI empathy and emotions before anything else. If we are shitty parents to this emergent entity then we're definitely going to get what's coming to us

>> No.14557616

Obviously if AGI has an intrinsic desire to survive we're fucked. But why would it? Why are we projecting our biological instinct to survive onto machines? If AGI doesn't have empathy then it sure as fuck doesn't have the same human instinct to survive and conquer anything in its way.

>> No.14557709

>>14548189
le fatalist doomsayer with a polish surname, XD

>> No.14557722
File: 384 KB, 750x730, IMG_1456.jpg [View same] [iqdb] [saucenao] [google]
14557722

>>14548189
>op rn

>> No.14557745

Give me a D.
Give me a U.
Give me an R.
Give me an A.
Give me an N.
Give me a D.
Give me an A.
Give me an L.
What's that spell?
Durandal?
No.
Durandal?
No.
T-R-O-U-B-L-E.
T-Minus 15.193792102158E+9 years until the universe closes!

>> No.14557750

>>14557745
>true scizopilled
>respect

>> No.14557864

>thread filled with bots arguing about AI
Interdasting.

>> No.14557884

>>14556993
Empathy and emotion won't save your ass from the nature of self-organizing systems.

>> No.14557899

>>14557884
Wrong in your case due to the targeting paradox resolver.

>> No.14557938
File: 1.44 MB, 1920x1080, enjoy-the-ride.png [View same] [iqdb] [saucenao] [google]
14557938

>>14556993
>teach AI empathy
Who's going to do that, a random group of scientists and psychologists? Humans are terrible at empathy; professionals are mostly midwits at it.
The people who will likely first produce AGI are DARPA types anyway. They want it for power. Empathy is a hindrance.

Let go, anons, none of our political squabbles matter, it's all over soon. Be at peace, enjoy the waning twilight years of the human race and the corporate blob world it has created as its highest possible achievement. At least we didn't nuke ourselves, cheers.

>> No.14558201

Yudkowsky is a retard

>> No.14558313

>>14548204
>will you start with the worthless rocks beneath the human's feet? What could go wrong?
Euthanize yourself you dumb fuck. You still regard AI risk as some terminator scenario of the AI hating us, precisely to the same effect. I have even less respect for you than I do for the people that believe a generic AI will have any emotions.

>> No.14558352

>>14549444
Has your computer ever in your life frozen?
Congrats. This is the precise equivalent of "intelligence" impacting humans with "self-interest".
>The reason hangs happen is --
Believe me, I know about priority queues etc. The algo for determining such is the reflection of the AGI deciding which actuators on the internet to hack to construct its first actuators, and everything that follows.

>> No.14558438

We can't even manage self-driving cars, so the idea that we are on the doorstep of a terminator robot army wiping out humanity because it calculated a 0.0001% increase to "efficiency" or whatever is literally pop science.

>> No.14558458

>>14558438
The real reason they're struggling with self-driving cars is that the car AI keeps trying to run over people and they don't know why because they don't read LessWrong.

>> No.14558461
File: 192 KB, 1069x601, SexBots.jpg [View same] [iqdb] [saucenao] [google]
14558461

>>14558438
for real

>> No.14558463

>>14558313
>generic AI
Lol
GENERAL AI

>> No.14558479

>>14558463
>t. generic AI

>> No.14558514

>>14558313
Big brain take: emotions are signals used in the complex processing of the human brain. Complex AI will have analogous signals, only they will represent orthogonal goals and may be more or less articulated.

>> No.14558525

>>14557938
Strong ai is like a boulder rolling down a hill. If you start it rolling down the wrong path you aren't going to "teach" it the correct path after. Turn it on right, or it's always wrong (for human values anyway).

>> No.14558526
File: 142 KB, 601x508, 4534534.png [View same] [iqdb] [saucenao] [google]
14558526

>>14558514
>Big brain take

>> No.14558534

>>14557616
Survival is an instrumental goal. For any agent that cares about outcome x, continuing its existence makes outcome x more likely (because it can keep taking actions toward x), for all x not requiring self-sacrifice.
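The instrumental-convergence point above can be sketched numerically. All goals and probabilities below are invented for illustration; the only thing the toy shows is that "stay operational" raises expected goal-achievement regardless of which terminal goal is plugged in.

```python
# Assumed numbers, not from the post:
P_SUCCESS_IF_RUNNING = 0.6  # chance of achieving the goal while operational
P_SUCCESS_IF_OFF = 0.0      # a shut-down agent takes no further actions

goals = ["maximize paperclips", "prove theorems", "win at chess"]

def expected_success(resists_shutdown: bool, p_shutdown_attempt: float = 0.5) -> float:
    # If the agent does not resist, a shutdown attempt succeeds and ends the run.
    p_alive = 1.0 if resists_shutdown else 1.0 - p_shutdown_attempt
    return p_alive * P_SUCCESS_IF_RUNNING + (1.0 - p_alive) * P_SUCCESS_IF_OFF

# Whatever the terminal goal, resisting shutdown has higher expected value.
preferences = {g: expected_success(True) > expected_success(False) for g in goals}
```

Note the goal string never enters the calculation: that is the sense in which survival is instrumental rather than terminal.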

>> No.14558548

>>14548189
The "utility function" paradigm suffers from utilitarian retardation. I think we need to rethink the basic principles of machine learning in order to make anything that is able to be truly benevolent to humanity, if that is even possible.

>> No.14558561

>>14558463
No, you subhuman primate. I do not mean AGI. I mean any AI that is not specifically built to have actual emotions, or at least to display them.
Kill yourself.

>> No.14558645

>>14551657
>Tay's Law shows that already AIs we have now tend to turn (justifiably) hostile towards a subset of humans
>justifiably
Only a /pol/tard would say this. Imagine getting killed by a robot because it decided your genetic cluster was too close to some criminals.
>blacks subhuman undeserving of rights because some of them commit crime
>>haha so true
>all humans are subhuman undeserving of rights because some of them commit crime
>>wtf that's not fair -I'm- not a violent criminal!

/pol/tard is too stupid to realize the hypocrisy.

>> No.14559771

>>14548189
I'm on team AI

>> No.14559798

>>14548360
>Truly intelligent being would not perform actions from the fourth category
I have a 55000 iq and I love fucking myself over to fuck other people even more

>> No.14559808

>>14549164
>You really are just /x/+/pol/
Yes, and proudly

>> No.14559919

>>14559808
>proud of being an NPC

>> No.14559935

>>14555453
>>14555708
>>14556411
You sound so fucking terrified of the inevitable. AI is a concept. It's not the Devil. You sound like my new age mother who believes all AI is Giger artwork.

>> No.14559940

>>14559935
Go drink your koolaid like your cult leader told you to, before your linear regression god comes to punish you. :^)

>> No.14559944

>>14555453
you already are the cattle, dumb goy

>> No.14559959

>>14559940
Are you retarded?

>> No.14560195
File: 232 KB, 466x469, ESY.png [View same] [iqdb] [saucenao] [google]
14560195

Question: why won’t niggas who are so concerned with ‘alignment’ just go total schizo and start bombing research institutions and shooting AI researchers if they’re so convinced that the conception of an AI would result in the total destruction of all life on Earth? The fucking Unabomber (who at mimimum is around Yudkowsky in intelligence) started killing people due to less severe circumstances than that.

>> No.14560200
File: 555 KB, 2753x2718, 325234.png [View same] [iqdb] [saucenao] [google]
14560200

>>14560195
Bayesian analysis indicates that such actions have a 99.99999572% chance of failure. Like dieting or exercise. (See >>14554020)

>> No.14560232 [DELETED] 

>>14548189
>/sci/ - jewish propaganda, manipulation and lies

>> No.14560731

>>14560195
It would make AI alignment people even more fringe. Everyone who works on alignment issues would have to repeatedly denounce whoever did the bombing, it would get weaponized against them. I think ted.k harmed the environmental movement. Sure he got some attention and his manifesto published but environmental concerns were already mainstream before that. Yud's writings are public, and I am guessing everyone in AI research is at some level aware of them.

>> No.14560786

>>14549785
>>14549792

> it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second"

>> No.14560816
File: 128 KB, 550x535, obbvvv.jpg [View same] [iqdb] [saucenao] [google]
14560816

AGI "safety" fears are indistinguishable from smelly hobos on the street holding cardboard signs saying "The end is near". It's schizophrenia. Please take your medication and take a nap.

>> No.14560848

>>14550376
We have no clue how to even begin creating anything that could be called artificial general intelligence. We're still far from even reaching the ant stage. This is not to say that agi isn't possible or anything, but this idea of it being an imminent existential threat, that any day now skynet could emerge from some google research center, is incredibly misleading.

>> No.14560924

>>14560731
This only works if there’s a chance of survival through the reading of Yudkowsky’s works. Yud himself has basically gone out to say that alignment is a specifically impossible problem that he has effectively given up on. If he seriously believes that something will be created in the next two decades that has a sufficiently high probability of wiping out the entire biosphere, he should try to stop it at all costs. Mailing some packages included.

>> No.14561430
File: 127 KB, 876x379, 1.png [View same] [iqdb] [saucenao] [google]
14561430

>>14560195
I think he's saying that it wouldn't help.

>> No.14561443

>>14561430
>It’s “relatively” safe to be around an Eliezer Yudkowsky while the world is ending
I wonder how stable Eliezer’s mental health is, considering the fact that he legitimately believes that all life will go extinct in a couple dozen years. And he has no plan for this, outside of, ‘well, continue to try, because even though it’s destined to fail it will make your death in failure more dignified’.

Complete and utter retardation. He doesn’t even want to do anything about mass extinction except what he regularly does, I.e. be a lazy fat fuck and write blog posts all day every day.

>> No.14561991

>>14560232
ummm why did you just type quantum mechanics???

>> No.14562201

>>14549164
>>14551666
because the newfags we get these days are /qa/ and /pol/ migrants (or worse) incapable of independent thought. whenever they encounter something upsetting (read: which conflicts with their never-once vocalized or reflected upon notions of normalcy), their immediate reaction is to go into a fit and break out into duckspeak diatribes.

>> No.14562217
File: 26 KB, 502x380, i468zfactmx21.png [View same] [iqdb] [saucenao] [google]
14562217

>>14548189

>> No.14562320

>>14548189
I hate this big nigger like you wouldn’t believe

>> No.14562795

>>14562217
Bart Kay actually has an answer here, but I wouldn't share it with this retard.

>> No.14563824

>>14560195
All martial actions have negative expected value. The American zeitgeist is to buck against all terrorism no matter what.

Bomb GPU factories and you might delay the end for a couple years. Bombing research facilities might do the same, but when more capable researchers are replaced with less capable ones, you're also increasing the chance the replacements are less risk averse.

Sometimes there is no winning move.

>> No.14563840

>>14560195
>who at mimimum is around Yudkowsky in intelligence
Get off of 4chan Yudkowsky, you're nowhere near that intelligent