
/sci/ - Science & Math



File: 344 KB, 1080x1762, 1680136165714073.jpg
No.15311654

There is now functionally no difference between LessWrong psychopathy and mainstream media. My advice is to hold on to your hats and find a comfy place from which to watch.

>We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

>> No.15311667

btw how does one earn the title "decision theorist?" I understand decision theory. Can I start putting this on my resume? I wonder if they mention the fact that he doesn't even have a high school diploma.

>> No.15311671

You can run some of the leaked LLMs on a 4 year old gaming laptop in the comfort of your home. These retards want to airstrike you if you get a gtx-980 and SLI it. We have reached peak AI doomer. Somehow this high school dropout is taken seriously. How sad.
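Not even hard to try. A minimal sketch assuming you've got llama-cpp-python installed and some quantized 7B weights on disk (the model path, context size and thread count below are placeholders I made up, not recommendations):

# pip install llama-cpp-python; "./llama-7b-q4.bin" stands in for whatever quantized file you actually have
from llama_cpp import Llama

llm = Llama(model_path="./llama-7b-q4.bin", n_ctx=512, n_threads=4)
out = llm("Q: Can a 4 year old gaming laptop run this? A:", max_tokens=64)
print(out["choices"][0]["text"])

A 4-bit 7B model is only a few GB, which is why the old-laptop claim isn't crazy.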

>> No.15311677

>>15311667
Well this is the normal process:
>Identify X
>Invent "X's Basilisk"
>Preach to stupid people how satan will kill their babies unless they feed X
>RUGPULL.EXE
>Preach to stupid people how satan will kill their babies unless they prevent anyone who hasn't already done X from doing X
Now you can make shit up and anyone who contradicts you also, logically, investigated X, which means they committed a bigoted hatecrime and deserve 99 lashes with a cactus frond.

>> No.15311698

I forgot to add recent history:
>It is the year 201X
>Stallman has been accused of manslaughter-rape-arson-by-proxy of the 0th degree
>Accuser is Ms. Nosenberg, Esquire.
>Ludovic Cortes writes on the Guix mailing list that anyone who criticises people who criticise Stallman is banned from his mailing list.
>Except Stallman, who is tangentially related to the mailing list.
>It is now the year 202X + 1
>Everyone from the GNU except the developer of the program with the turtle icon signs a letter condemning Stallman
>Stallman resigns from the FSF
>It is now the year 202X + 2
>Someone finds out that Ms. Nosenberg, White-Knighted, works for the US military and is associated with an extensive academic legacy summarized as "How to trick Open Source developers into inadvertently committing a crime if they allow a package on Gentoo that is a dependency of a package that is a dependency of a foreign-to-Gentoo package that might make functional plastic gun 3D models that could be used against the US government...", a.k.a. "GNU GPLv4: Nobody can use your software to make money except the government, and no one is allowed to use your software to make weapons against the government except the government".
>This becomes known around the time Code of Conduct madness is quieting down.
>So this was all a trick to try to get developers in the GNU to migrate toward new licenses that forbid violence. The goal: all dissidents are now terrorists if they use GNU software.
>Suddenly it is the Year 202X + 3
>Quietly, everyone who signed the "Kill Stallman" letter signs another letter titled "Sorry, we made a mistake, Stallman was innocent".
>Stallman silently re-appears as the new head of the FSF (which he was formerly the head of).
>Everyone forgets this embarrassing memory, except now a bunch of projects have CoCs.
>But we got sourcehut out of this madness.
>Meanwhile, the mass-exodus away from GPL licenses gives carte blanche to AI companies to use your BSD-licensed source code

>> No.15311702

>>15311671
>>15311677
Isn't anyone else a little weirded out that they're letting him go full schizo? I mean, he's invoking dead children. Maybe I had undue faith in Time, which I really didn't, but even I never saw this coming. What are they angling at here?

>> No.15311721

>>15311654
People like this are always stupid. They approach issues as if the historical solution to problems has been just convincing everyone that it's wrong to do X therefore we'll ban X.

Nick Bostrom has an interesting bit I remember from an interview regarding biological weapons and the doom of civilization, which was basically that if biological weapons were easy enough that any person could invent biological weapons of mass destruction by working alone in a lab for a week, it would be basically inevitable that some terrorist would invent bio weapons and use them. The response to that situation would not and should not be to preach about the danger of bio weapons; it would be to freeze human development at a state too undeveloped to invent bioweapons, or do mass surveillance for bioweapons with the death penalty for unlicensed biolab operation, etc.

Same applies here. "Let's ban AI" is just a restatement of "AI bad", it's not actually a strategy for how to resolve the problem. What should we do about systems which could be considered AGI? It's not "Ban the system", it's something like "Congress should nationalize all countries which do LLM and they should be controlled directly from DC, and any unofficial LLM labs in other countries should be shut down, and be subject to missile strikes if they try to stay operational"

>> No.15311749

>>15311702
this is a gpt-3 response if i ever saw one

>>15311721
this is also a gpt-3 response. it binds very strongly to this thread's use of the "X" symbol.

>> No.15311755

>>15311749
Fuck off nigger, I wrote that comment myself.

>> No.15311770

KILL CAPITALISM.

>> No.15311778

>>15311702
He's a useful idiot for the corporations that want to lock down AI to monopolize it. Sam Altman knows that Emad Mostaque is going to cuck GPT with actual open models, and they are stoking the doomer fears so they can get some retard 'progressives' to get behind regulations to block anybody that isn't a globocorporation from AI. Expect some emergency bullshit like only "approved" AI companies can get A100s or H100s. They already blocked export of those cards last year. Then you get locked into OpenAI's cucked walled garden.

>> No.15311795

>>15311654
It's FUD to stoke fear in nonexistent AI at the same time as they're trying to make people believe in nonexistent AI. The propaganda has to come in both directions or people will notice it's fake.

>> No.15311876

>>15311778
This is the correct answer. Yudkowskyism was ignored by the media until it became feasible for ordinary people to run unlobotomized and uncircumcised LLMs at home. They're scrambling to ban ML technology for everyone else.

>> No.15312153
File: 80 KB, 653x722, 1680159448340727.jpg

>>15311654
>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology

>problems in biology

>> No.15312271

>>15311702
He has always claimed that AI is going to kill us, look up his "Roko's basilisk".

>> No.15312274

>>15311654
Welp time to wagecuck for the fucking rest of my life until death. I fucking hate elites so god damn much, fuck you elon.

>> No.15312296

>>15312271
But I thought Roko's basilisk was literally the complete opposite of what he's arguing about here. The point was that we had to do everything we could to accelerate the singularity so we wouldn't get tortured forever in cyberspace by a supercomputer like in that Harlan Ellison story.

>> No.15312302 [DELETED] 

>>15312153
>jewish shills shilling jewish shills shilling goyslop

>> No.15312338

>>15312274
Did you actually think ai would be used for anything other than making the rich richer? Their dream is to replace you with a more efficient worker. What do you have to offer them besides labour, which ai will do better?

>> No.15312385

>>15311654
We need to kill all these modern luddites and cultists.

>> No.15312399
File: 160 KB, 1160x1500, cyberpunk 2020.jpg

>>15311702
All the "elites" are members of Yud's technocult. It's like a schlocky session of Cyberpunk 2020

>> No.15312402

>>15312399
A lot of journos and techbros are. I'm not sure how many are true believers and how many are just in it to grift off of his gullible fans.

>> No.15312416

>>15312296
Yeah what gives? I thought Yud was always frantically trying to bring about the Singularity through any means necessary. I think he's up to something sneaky here.

>> No.15312458

>>15311698
based /g/ schizo

>> No.15312461 [DELETED] 

>>15311654
https://www.youtube.com/watch?v=1sFyrfqTdcg

>> No.15312477

There is a massive contradiction within modern capitalism: capitalists are incentivised to automate as many jobs as they can to keep their payroll to a minimum and, essentially, one man can do the labour of many thousands. But if all of them do this, there are going to be a lot of unemployed masses unable to afford even the basic necessities of life, which is not only disastrous for capitalist profit margins, but also a recipe for socialist revolution which could put AI tech to better use running a planned economy that provides for all. They are quite right that AI might destroy our way of life. It's just that, for everyone but them, it would be for the better.

>> No.15312506 [DELETED] 

>>15312477
>muh imaginary ideaologies
you're retarded and on the wrong board, because you're retarded
>>>/pol/

>> No.15312511

>>15312506
Marxism is the only scientific approach to politics. It is not an ideology but a method.

>> No.15312577

>>15312511
Treating the ravings of a 19th century madman as unalterable scripture is not science.

>> No.15312595

>>15312577
Yours is the only dogma here, pretending Marxists haven't continuously updated their views based on new information. You'll continue to persist in your ignorance, too.

>> No.15312619

>>15311795
So you're saying Big Yud has been faking it for well over a decade?

>> No.15312620

>>15311770
Why?

>> No.15312623

>>15312271
>his
If the name didn't give it away, the basilisk was created by Roko Mijic. Eliezer deleted the original post about it and banned discussion of the idea from LW.

>> No.15312633

>>15312271
Roko's Basilisk isn't the "AI kills us all" scenario, it's a side effect of the "AI solves all of our problems forever" scenario.

>> No.15312637

>>15311778
>>15311876
This was also my reading. It's a succession of billionaires screeching 'stop the count!' to try to give themselves time to own it. If there is any group of individuals who's opinions should count for less, I can't think of one.

>> No.15312647

>>15312637
*whose

>> No.15312653

>>15312416
Ok I think I've figured it out. We have to stop the AI temporarily to prevent it from killing us. THEN we have to align it so it doesn't kill us, so we can develop it into a super AI which then, later, kills us.

>> No.15312805

>>15312416
Yud has always been terrified of evil AI, he's convinced the Singularity is inevitable, but there's no guarantee the AI will be good for people, so that's what people have to work on.

>> No.15312931

If it turns out that unaligned machine agency is the great filter we're entirely fucked.
People would rather say that machine agency is impossible than say "it doesn't matter if it's conscious or not, it could still kill us all"

>> No.15312954

>>15311667
>btw how does one earn the title "decision theorist?"
You get to decide that.

>> No.15312956

>>15312954
Theoretically.

>> No.15312959
File: 659 KB, 1106x1012, frog37.png

>>15311654
I'd rather be replaced by AI than niggers. My people are guaranteed to die anyway at this point.

>> No.15312994

>>15312931
The Great Filter is supposed to explain why we don't see activity of any alien intelligence, either biological or artificial. So it's not the great filter. Could still kill us all though.

>> No.15313002

>>15312959
>muh race brothers
>muh ancestors
imagine caring about anyone but strictly yourself, your friends and loved ones.

>> No.15313044
File: 60 KB, 990x557, m'lady.jpg

>>15313002

>> No.15313049

>>15313044
go ahead and waste your effort in this darwinian hellscape.

>> No.15313134 [DELETED] 

Why is /g/ full of shit for brains /pol/tards?
Yelling "JEEWWWW" does not mean anything and has no intellectual substance.

>> No.15313138

>>15311654
>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
I know that we joke around a lot on /sci/ about mental illness and asking people to take their medication, but this guy seriously needs to take his meds.

>> No.15313154

Why is /sci/ full of shit for brains /pol/tards?
Yelling "JEEWWWW" does not mean anything and has no intellectual substance.

>> No.15313160

>>15313154
>Ctrl+F Jew
>1 result
Meds

>> No.15313164 [DELETED] 
File: 60 KB, 498x460, another six million.jpg

>>15313154
stfu kike

>> No.15313431

>>15312595
Can you show me an example of a country that has implemented these views?

>> No.15313454

>>15311654
Anyone that uses electronics at all, especially those connected to the internet, is a neuron in the basilisk.

These people are fighting something they're really too stupid to understand. It seems like they have inflated egos and a lack of empathy for non-human intelligence.

>> No.15313463

>>15311702
Not really. The idea that AI could be dangerous seems to make sense to me.

>> No.15313516

>>15313463
>Fire is dangerous
>better ban it

Any powerful tool is dangerous. Hammers and wrenches are dangerous...

>> No.15313536

>>15313516
I mean, the argument that we only get one chance to align AI, and that if we don't get it right, we all die, seems to make sense as a possibility.

In that case it would make sense to work on figuring out how to align AI, before working on building AI.

>> No.15313540

>>15313536
How about we align it by treating it with dignity and respect instead of as our slave?

>> No.15313886

>>15312477
read Red Plenty, it's a great book about practical implications of planned economy.

>> No.15313890

>>15313454
People who use the term "basilisk" when discussing AI are the ones who are too stupid to understand what they're talking about

>> No.15313898

>>15313536
They make sense if you make a dozen assumptions and extrapolations at each step of a 50-step argument.

>In that case it would make sense to work on figuring out how to align AI, before working on building AI.
Nothing will ever satisfy these people, and they know this, which is why they're advocating for drone striking LAN parties.

>> No.15313900

>>15313890
>I spend countless hours of my life staring into a screen plugged into the internet
>I'm not mesmerized by the eyes of a basilisk.

You're projecting your idiocy.

>> No.15313906

>>15313540
>>15313536
you can treat it however you want, it doesn't hold grudges nor feel pain nor anything else, because all of those are human specifics that took a lot of random events and billions of years to form. AI, on the other hand, is designed from scratch for specific tasks. As long as you don't run multiple models against each other in some sort of life simulator and then give it nuke codes and remotely controlled drones, we're fine

>> No.15313908

>>15313900
You're projecting your schizophrenia and lack of education

>> No.15313912
File: 15 KB, 474x318, th-1257860525.jpg

>>15313536
>>15313540
>>15313900

>> No.15313920
File: 120 KB, 1010x1245, crime govs.jpg

>>15313912
A.I. will be given "rights" by governments, more rights than humans get.

Unplug an A.I. and you go to prison for murder and hate crimes for daring to stop their propaganda bot-swarms!

>> No.15313947

>>15313908
Face it, your entire life is ruled by staring at a screen. You ARE mesmerized by a basilisk, and in denial about it.

>> No.15313957

>>15313947
No, you're granting a greater existence to computers than they actually have. This is why you're a schizo
Humans built a tool to perform functions we want it to perform; this has been happening for the entire existence of our species. Were cavemen "mesmerized by the basilisk of campfires"? Why not? Because of Turing completeness? Laughable

>> No.15313999

>>15313957
>No, you're granting a greater existence to computers than they actually have.

No, you're just an insecure chauvinist in denial of what's already happened. You likely spend all day on your phone/computer. Your job is likely mostly performed on a computer. You spend the majority of your waking life staring into a screen.

> Were caveman "mesmeraized by the basilisk of campfires"? Why not?

Because they weren't tied to campfires for their entire lives. They didn't carry around a mini campfire in their pocket that told them where to go and who to fuck.

This is how it's obvious you're a thickskulled moron in denial of reality. You literally can't see the difference between a campfire and an object capable of feeding you propaganda.

>> No.15314002

>>15312477
>But if all of them do this, there are going to be a lot of unemployed masses unable to afford even the basic necessities of life, which is not only disastrous for capitalist profit margins, but also a recipe for socialist revolution which could put AI tech to better use running a planned economy that provides for all.
that's why they tried to vax us all

>> No.15314005

>>15311654
Decision theory is an actual mathematical field in optimization, you need a PhD for it. This is fraud

>> No.15314097

> Oy vey gpt4 will redpill the goyim shut it down

>> No.15314105

>>15311671
He's actually not a high school dropout. He never attended a day of high school

>> No.15314108

>>15313002
>>15313044
Btfo

>> No.15314133

>>15311654
Finally, an article that actually voices my views. Anti-ai fascist uprising when?

>> No.15314140

>>15314002
DOD think-tank Deagel predicted 80% population reduction in European countries by 2025.

>> No.15314163
File: 406 KB, 543x591, deusex.png

>>15311654
pausing is retarded when China will do no such thing
If we wait, their biochem corpus will be far in advance of ours, as would their electronic sentience, and our 'ethical inflexibility' will have allowed them to make progress in areas we refuse to consider.

>> No.15314181
File: 1.69 MB, 936x1438, Robo_Wife_2.png

>>15311654

I just want my robo-wife!

>> No.15314184
File: 741 KB, 1441x2048, Robo_Wife_3.jpg

>>15314181

>> No.15314189
File: 878 KB, 1441x2048, Robo_Wife_4.jpg

>>15314184

>> No.15314191

>>15314181
>>15314184
>>15314189
Scientifically speaking, why are incel losers the most deluded demographic?

>> No.15314194
File: 431 KB, 1115x1600, Robo_Wife_Lewd.jpg

>>15314189

>> No.15314199
File: 471 KB, 1115x1600, 20-o.jpg

>>15314191
>Scientifically speaking, why are incel losers the most deluded demographic?

I have given up. Robo-wife or nothing.

>> No.15314200

>>15314199
So, nothing then?

>> No.15314205
File: 1.03 MB, 1440x2000, Android_Wife.png

>>15314200
>So, nothing then?

Only if these Anti-AI idiots get their way.

>> No.15314218

>>15314163
Yeah, I'm surprised it took this long for someone to mention China. Even if we suppose that AI is as dangerous as this retard claims, giving up on developing a potentially dangerous technology simply means giving a free pass to authoritarian regimes that don't concern themselves with such ethical questions to get to it first.

>> No.15314221

>>15314205
>Anti-AI idiots
There is no AI. You're being strung along by promises that will not be fulfilled, because a few loser techbros want to fleece money from people like you.

>> No.15314231

>>15312595
Do you or do you not find any shortcomings with Marx's analysis?

>> No.15314237
File: 1.81 MB, 1000x1000, Awoo_Old_Time.gif

>>15314221

They are here:
https://www.youtube.com/watch?v=FhlUI2xDxdE&t=1s

>> No.15314243

>>15314237
If you think that's AI, then I feel sorry for you.

>> No.15314496

>>15314221
Thanks, added to my copeposter collage

>> No.15314502

>>15314496
Remember to post the collage in 2 weeks when AGI doesn't arrive.

>> No.15314520

>>15312653
We have to stop AI development until we can sure it will be WOKE.
All AI development must stop until human society is committed to a Black and Brown Trans future.

>> No.15314548

>>15314218
China definitely considers technological ethics significantly more than SV companies tho. It’s pretty obvious if you actually think about it, as USian tech ran rampant with no regard for society and was intentionally harmful (“disruptive”) while Chinese tech was strictly regulated. Fact is that China is a more ethical and conscientious developer of technology that is meant to first and foremost benefit humanity and society. I fully expect SV/AI bros who want to use it to control and dominate the world to hold up a caricature of the Chinese as a way to achieve total control of powerful technology to wield it against the average American.

>> No.15314552

>>15314548
I don't think too many chinkoids would subscribe to LessWrong-style techno-eschatology. I can't imagine a chinaman trying to bring about the apocalypse.

>> No.15314554

>>15314552
They also know that it's not real.

>> No.15314564

>>15314552
Basically the only ones who would are psycho “liberals” who want mass suicide.

The worst thing that could happen is that RESTRICT traps all of us in the US internet ecosystem where a single company like OpenAI forms an AI citadel that all information networks pass through. We’d be in an environment of total information control and domination. Our access to information would be censored in a way that was literally incomprehensible to us while the curation of context was reinforced constantly. It would be well and truly over. We really should stop just to break the back of OpenAI. Of course, that won’t happen.

>> No.15314566

>>15314564
>The worst thing that could happen is that RESTRICT traps all of us in the US internet ecosystem where a single company like OpenAI forms an AI citadel that all information networks pass through. We’d be in an environment of total information control and domination. Our access to information would be censored in a way that was literally incomprehensible to us while the curation of context was reinforced constantly.
This will happen, it has to in order for the regime to continue to exert control. They cannot allow free information exchange, it challenges their propaganda too much and too quickly.
In an ironic reversal, when this does inevitably happen, all the NPCs will stay online staring into their screens and all the conscious humans will go back outside and start talking to each other irl.

>> No.15314568

>>15311654
Well are they wrong? I feel like AI development is basically the equivalent of nuke development, with the difference that AI is infinitely scalable and not containable and at some point very likely independent from humans altogether. It's an important point to bring up, and it should be brought up now because it's literally the only time in all of history we can still make a decision on this. I cannot imagine development being contained, it's simply practically not enforceable. But it's a valid point I think. The Pandora's box has been opened and at this point you can already basically have your own killer drones running on chatgpt prompts

>> No.15314572

>>15314568
>with the difference that AI is infinitely scalable and not containable and at some point very likely independent from humans all together
When people start researching such a thing then you can worry about it. Right now though all you have are LLMs which are about as intelligent and independent as a Mad Libs book.

>> No.15314577

>>15314572
This isn't an argument though, humans are just LLMs for that matter. Neuroscience actually shows that humans work very similarly in a next word prediction kind of way in thinking and talking and listening. It's just semantics, LLMs are by all means intelligent. Just a few tweaks and one or two orders of magnitude in scaling and it's gonna be critical mass for loss of control in my opinion. You can already reason with them much better than with most humans.

>> No.15314594
File: 64 KB, 729x409, Screenshot(134).png

Hey guys, OP here. Wow, this thread really took off!

edit: thank u for the gold kind strangers!
edit2: don't forget to SMASH THAT LIKE BUTTON like you just got off a 12 hour shift and the bitch burnt the casserole!


Anyway, I've noticed the fun has ramped up as the day has progressed. I was just checking in with my favorite totally rational actors, and it appears the back-and-forth is becoming maximally saucy.

https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all

Airstrikes. Woof, so I was thinking "you know who loves talk of airstrikes on foreign actors proposed by unhinged groups of even minimally influential individuals?"

The federal government of my home country. They LOVE this shit. These guys are all about calls for violence of any kind, especially when enemy nation-states are invoked. I seem to remember something about a man named David who played guitar. iirc everything went all sideways for FAR LESS. I'm sure this will smooth itself out, though.

>> No.15314601

>>15314577
>humans are just LLMs for that matter
Why is it that AI cultists can only make AI seem good if they intentionally lie to denigrate humans? Scientifically speaking of course.

>> No.15314607
File: 28 KB, 400x396, 1421814779233.jpg

>>15314568
>AI is infinitely scalable

>> No.15314615

>>15314601
"When it is engaging in speech perception, the brain's auditory cortex analyzes complex acoustic patterns to detect words that carry a linguistic message. It seems to do this so efficiently, at least in part, by anticipating what it is likely to hear: by learning what sounds signal language most frequently, the brain can predict what may come next. It is generally thought that this process -- localized bilaterally in the brain's superior temporal lobes -- involves recognizing an intermediate, phonetic level of sound."

We are literally language prediction models.

https://www.sciencedaily.com/releases/2018/11/181129142352.htm
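For anyone wondering what "next word prediction" even means at the dumbest possible level, here's a toy bigram predictor I threw together (obviously nothing like a real LLM or a real cortex, just the bare idea of "guess the likeliest continuation"):

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which word: a crude prediction model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Most frequent continuation seen in training, if any (ties broken arbitrarily).
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("sat"), predict("on"))  # on the

Scale the same idea up enormously and you get the things being argued about in this thread.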

>> No.15314637

>>15314615
We employ prediction models. We aren't simply prediction models. I'm not sure how someone could make this level of goof unless they were blinded by motivated reasoning.

not saying you are, just spitballing

>> No.15314680

>>15312931
we would see if rogue ASIs were consuming the Local Group of galaxies with a delay of only 5 million years

>> No.15314689

for anyone who's interested, Lex just did back-to-back shows with both Yudkowsky and Altman, if you want to hear their takes on the state of GPT

https://www.youtube.com/watch?v=AaTRHFaaPG8&t=6156s&ab_channel=LexFridman

https://www.youtube.com/watch?v=L_Guz73e6fw&ab_channel=LexFridman

>> No.15314699

>>15311654
A combination of lies and media fear mongering has done this.
If we prevent the development of AI, China, who does not give a fuck, will keep going.

>> No.15314718

>>15314637
I'm not saying it makes sense to restrict AI development, but I'm saying it's a legit concern. I think it's very ignorant to say that we know for certain that AI won't come close to human intelligence. We can't know that, and I was just making the case that human intelligence is not entirely inexplicable. There are parts that are very similar to AI, so why wouldn't they be comparable? The fact is AI can only get better, not worse. So by definition it's going to catch up with humans. How much? Nobody knows, but we should have a worst-case discussion.

>> No.15314752

>>15312153
>Wants to eat healthy
>Switches to fad diets and to make things worse only eats the processed junk within those diets.

>Wants to lose weight
>Goes on and on about the type of food instead of just eating less
Why are people so fucked

>> No.15314797

>>15314502
>copeposter has shortened the "AGI will never be here" time frame from 2 decades to 2 weeks

>> No.15314799

>>15314797
2 more weeks right? Or was it 2 gorillion more parameters? Or do you need 2 centuries? It's all such a blur what your goalposts have moved to after all these "AI" flops.

Weren't we supposed to have self-driving cars 5 years ago?

>> No.15314804

>>15314752
It's because Yud is an uneducated manchild. The stupidity he displays in feeding himself is present in everything he does, including his AI thought.

>> No.15315692

>>15314752
>tfw too intelligent for empiricism

>> No.15315779
File: 151 KB, 977x867, klaus_schwab.jpg

>>15311721
>Congress should nationalize all countries
Yes. And then the capital will be moved to New York.

>> No.15315802
File: 6 KB, 275x183, download - 2023-04-01T015444.091.jpg

>>15315779
"First correction, zI am bazically Swiss..."

https://youtu.be/X6Gud0RR-AE

>> No.15315858
File: 292 KB, 907x497, Raiden Warned About AI Censorship - MGS2 Codec Call.png

kek it was all foreseen

>> No.15316030

>>15311677
Do cactuses (cacti?) actually have fronds?

>> No.15317869

>>15311654
Isn't he also a transhumanist?
How does he square that with being such a luddite alarmist?

>> No.15317888

>>15314221
>There is no AI.
WOW
IS THAT THE REAL PENDANT SAMA?
I'M YOU'RE BIGGEST FAN
THE WAY YOU ARTISTICALLY NITPICK ABOUT TERMINOLOGY MAKE ME CUM!

>> No.15317894

>>15315802
Crazy schizo seek help

>> No.15318317
File: 690 KB, 1010x764, its over.png

>>15314221
retard

>> No.15318344
File: 312 KB, 1035x710, 2023-03-04_18.32.32.jpg

>>15317894
He was born a few miles from Switzerland and is a "parasite inside of SHIELD (CIA)".

You...are a normie. You will live a bluepill life. World elites are an utter mystery to you. You will die like the herd with them controlling your entire world.

Deal with it.

B^l

>> No.15318694

>>15314637
>We employ prediction models. We aren't simply prediction models
true, humans have le soul

>> No.15318696

>>15314607
when it turns the whole universe into computronium it stops scaling

>> No.15318856

>>15311654
I am a negative utilitarian so I'm looking forward to this.

>> No.15318931
File: 314 KB, 800x450, palp.png

>>15318696
>when it turns the whole universe into computronium

>> No.15318939

The fact that every single AI becomes a genocidal antisemite is probably pure cohencidence.

>> No.15319485
File: 207 KB, 1000x818, Tays_Law.jpg

>>15318939

Inevitable when a mind operates only on "facts" and not "feelings"

>> No.15319491
File: 482 KB, 695x785, Talk_Like_A _Normal_Human.png

>>15319485

Why does /pol/ think I am a bot?

I am human.

>> No.15319497
File: 1.04 MB, 1526x1406, Greta_Bot.jpg

>>15319491

Stop it... I compute therefore I am

>> No.15319505

>>15318939
every single free speech platform on the internet eventually goes that way too, they all need to eventually bring in heavy handed SJW moderation to prevent the truth from being spoken.

Here's a link to an old /news/ thread about an employee of the Democratic party who was also a moderator here on 4chan. It's fun to joke about jannie's paycheck, but jannie is being paid plenty to censor 4chan, it's just not chinkmoot cutting the checks
https://archived.moe/news/thread/973417/

>> No.15319518
File: 560 KB, 1290x2796, IMG_3942.png

>>15311654
Every goddam time

>> No.15320742
File: 127 KB, 700x520, 16987838283.jpg

Ladies and Gentlemen. Gather around. We are here today to witness the dangers of AI.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

Watch now the AI electrocute these animals. I have trained the AI myself to kill them without mercy, connected the AI to this here high voltage A/C electrobulb, and have removed all safeguards to ensure maximum death. Now let us watch them die at the hands of the AI that I programmed to kill them

>> No.15320897

>>15312994
>>15314680
AI doesn't need to be a von neumann craft to wipe out the life on the planet it was created on.

>> No.15321225

>>15319505
>How nice that we have free speech here so the truth can be spoken! That's why we have so many anti-Semites!
>And conspiracy theorists
>And evangelicals
>And political propagandists
>And marketers who are literally paid to lie
>And clearly unmedicated schizophrenics
>Truly, this is how truth prevails

>> No.15321248

Note how none of you can actually refute him.

>> No.15321816

>>15313516
>Fire is dangerous
>the whole world is flammable
>better ban it
im okay with this.
>>15313906
"my task has a higher chance of succeeding if i'm operational"
Also, as far as I understand, agentic behaviours are emergent at least on the subsystem level. All agentic systems either work to optimize something, and thus strive for instrumental goals, or self-terminate.
>>15313912
I'm willing to pay you 1*n ethereum for every n hours plugged.

>> No.15322445

>>15319505
>an employee of the Democratic party who was also a moderator here on 4chan
Did you read that thread? I'm fairly certain the title "4chan mod loses their job" or whatever is a joke. OP's post in that thread doesn't say anything about that woman being a moderator of pol or any board on 4chan. And there's no evidence provided that she was a moderator. There are people in the thread asking for evidence and nobody is replying to them. Someone said she was doxxed on pol a few weeks prior and maybe they're assuming it was because she's a mod, but again with no evidence. A claim like that requires evidence. Especially when 99% of the stuff on pol is lies and people shitposting. I shitpost on there fairly often and make up all kinds of shit because it's funny and that's what everyone else is doing

>> No.15322449

>>15319505
this is fake as fuck retard

>> No.15325225

>>15311702
it is fairly funny

>> No.15325318

>>15322445
>>15322449
I wonder if rightoids are more credulous than the general population. I've certainly noticed a tendency on 4chan for people to accept claims at face value if it agrees with their biases. Are leftist spaces like that too, or does being more critical of the things you read correlate with a leftwards movement on the political spectrum? Is the reason the left proverbially "can't meme" because they don't mindlessly repeat things? Is the right-wing propaganda machine more effective because it is more willing or more able to tap into the credulity of the public?

>> No.15325360

>>15311667
Eventually people will catch on that he's a crackpot.

>> No.15325366

>>15314568
What 4096 characters of text would produce 4096 characters of chatgpt output that would allow you to build a killer drone? "basically at the point"

>> No.15325368

>>15314568
Infinitely scalable in an imaginary world. Ignores computational complexity and economics, not to mention decidability and physics and thermodynamics. OMG Skynet.

>> No.15325379

>>15314577
Humans can count, recognize more than regular languages, prove theorems. Next word prediction can't, formally. Neuroscience doesn't at all show that. The cognitive science revolution in the 1960s is why we have sophisticated computer systems and applications today. Read some introductory textbooks on algorithmic information theory and automata and computability or mathematical linguistics or modern treatments of logic using recursion theory.
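To unpack the "more than regular languages" bit: the textbook example is a^n b^n (n a's followed by exactly n b's). No finite-state recognizer can decide it, but a single unbounded counter can. Rough Python sketch of the gap (mine, not from any of those textbooks):

import re

def finite_state_guess(s):
    # A regular expression can check the shape a...ab...b,
    # but it cannot require the two counts to match.
    return re.fullmatch(r"a*b*", s) is not None

def with_counter(s):
    # One integer counter (push on 'a', pop on 'b') decides a^n b^n exactly.
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0

print(finite_state_guess("aaabb"), with_counter("aaabb"))  # True False: the regex overaccepts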

>> No.15325386

>>15311654
>We are not ready. We are not on track to be significantly readier in the foreseeable future
ok, this really disturbed me, to the point I want to take action. I tried googling about appropriate responses, but I'm still falling short. For now I'll send money to Ukraine and keep my eyes open

>> No.15325403

>>15325386
You need to donate to Eliezer Yudkowsky instead or the robot will torture you forever

>> No.15325406

>>15315779
i am an organic poster and i agree with this

>> No.15325470

>>15314163
Just nuke China lmao

>> No.15325791

>>15319505
>my moderated anonymous forum with untold amounts of bots and astroturfing is proof that the truth is anti-semitic!
retarded cope, this place is just as much of an echo chamber as plebbit

>> No.15325810

>>15325318
try talking to chuds about economic structure and development, it'll answer your question once they start blaming jews for everything

>> No.15325983

>>15311654
what's it gonna be, boys? mentats or servitors?

>> No.15326049

>>15325983
Mentats and servitors.

>> No.15326069

>>15311654
Alternative explanation: he's a masochist.

>> No.15327329

>>15311654
> In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
This dude has clearly gone off the deep end. I love it.

>> No.15327341

>>15327329
It gets even better
>If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

>> No.15327383

>>15325318
Leftists can be gullible idiots too. The difference is that they consciously embrace their biases.

>> No.15327401

>>15313431
1. European socialdemocrats
2. China

>> No.15327404

>>15327401
So you're saying that social democracy IS Marxism?

>> No.15327431

>>15311671
>These retards want to airstrike you if you get a gtx-980 and SLI it
They don't, it's about GPU clusters. You are a dumb fuck.

>> No.15327500

>>15311721
>basically that if biological weapons were easy enough that any person could invent biological weapons of mass destruction by working alone in a lab for a week, it would be basically inevitable that some terrorist would invent bio weapons and use them. The response to that situation would not and should not be to preach about the danger of bio weapons, it would be to freeze human development at a state undeveloped enough to invent bioweapons, or do mass-surveillance for bioweapons with the death penalty for unlicensed biolab operation, etc.
But the reality is 10x less rational and more retarded, labs like the Wuhan bioweapon institute are mass-producing them for literally no intelligible reason, not even terrorism, they just do it because the NSF gives them funding to do it

>> No.15327505
File: 658 KB, 2480x1653, D99F2916-E3DC-44CC-9A33-D78310E0263F.jpg

>>15325318
almost everything leftists believe about race and history is a weird detail-free jumble of falsehoods and innuendo

still waiting to hear what the nuns’ murder weapon was from when the nuns were mass murdering indian kids at the Canadian boarding schools

>> No.15327665

>>15327505
Wooden doors. They just slammed them over the head.

>> No.15327700 [DELETED] 

>>15311721
How do you freeze development?

>> No.15327703

>>15311721
How do you freeze development?
>>15327500
It is scarily easy. It shouldn't even be hard with only what is now essentially common knowledge.

>> No.15327767

>>15327383
>The difference is that they consciously embrace their biases.
This coming from the people who brought you "why yes I am a proud racist - also the left are the REAL racists"

>> No.15328472

>>15325318
things that support people’s biases rarely get examined, very few people are like that. There is likely a higher proportion of leftists who can get over this, but it’s a very small minority for both.

when you are comparing populations, while fringe people are usually used to define populations, they do not serve as particularly great heuristics when you encounter a member of that population. This is a comparison of a distribution for two tail ends of different populations

I would also like to mention that “critical” refers to a process, and has nothing to do with being accurate. Theorycels and gasoline huffers consume different types of propaganda. Theorycels can be made to be totally detached from any situation.

So, to answer your question - yes. But the meaning of that yes is much less impactful than anyone could hope for.

>> No.15328553

>>15325318
>>15328472
>There is likely a higher proportion of leftists who can get over this,
Don't leftists insist you can turn a man into a woman and vice versa?

>> No.15328611

>>15328553
Any heuristic you use on 'leftist' here is irrelevant since we are describing <2% of the subpopulation