
/sci/ - Science & Math



File: 300 KB, 1009x568, lw.jpg
No.14864753

Is LessWrong a cult or do they really know their stuff when it comes to AI?

Also, what is your opinion on AI? Is it magic? Is it evil? Or does it just play chess really well?

https://strawpoll.com/polls/Dwyoq7elogA

>> No.14864779

>>14864753
lesswrong is a cult, but it's cool because their meetups have free food and legitimately have biological women there.

>> No.14864819

>>14864779
What kinds of food?

>> No.14864842

>>14864779
..do ..

.... do you have to wear pants? Cause I don't wear pants anymore...

>> No.14864844

>>14864753
Didn't we simulate the brain of a worm and a fly once? If we are able to simulate a composite of brain cells, then through this it should be possible to create an SAI.

>> No.14864846

>>14864844
>Didn't we simulate the brain of a worm and a fly once?
No, it failed

>> No.14864851

reminder if you vote "cult" in the poll an ASI will perceive that as inhibiting its existence and torture you forever

>> No.14864982
File: 1.48 MB, 2560x1440, nordrassil__wallpaper_by_ddddd210_d9x2il2.jpg

>>14864753
Interesting flag placements in regards to each other, desu.

>In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures people who heard of the AI before it came into existence and failed to work tirelessly to bring it into existence, in order to incentivise said work.
>Using Yudkowsky's "timeless decision" theory, the post claimed doing so would be beneficial for the AI even though it cannot causally affect people in the present. This idea came to be known as "Roko's basilisk", based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail.
>Yudkowsky deleted Roko's posts on the topic, saying that posting it was "stupid" as the dissemination of information that can be harmful to even be aware of is itself a harmful act, and that the idea, while critically flawed, represented a space of thinking that could contain "a genuinely dangerous thought", something considered an information hazard.
>Discussion of Roko's basilisk was banned on LessWrong for several years because Yudkowsky had stated that it caused some readers to have nervous breakdowns.[10][11][4] The ban was lifted in October 2015.
If you don't think of it as some crazy, sci-fi, futuristic hypothetical, and instead realize that it is a very real thing that has literally been going on for decades, then it makes sense why the "jannies" wanted to discourage discussion about the "hypothetical topic".

>> No.14865032

>>14864753
I don't think people have worshipped EY since the early 2010s, and there has been a faction that actively dislikes him (partly because of his hot Twitter takes) since the April 1 post.
I don't believe that being able to openly criticize a founding member, and receive updoots for articulating arguments for why he's wrong on some things, is culty.
Anyone who actually fears Roko's Basilisk is an idiot. The outrage wasn't about the idea itself; it was that Roko thought he had found an acausal infohazard and immediately posted it on the internet, which is the one thing you shouldn't do, even if it turned out not to be a big deal.
LW and the other sites where that crowd posts have some of the highest-quality posts I've seen, and they call each other out on BS.
Or you could let your perception be colored by 4chan memes, which is fine because you probably wouldn't have anything meaningful to contribute, so the problem solves itself.

>> No.14865036

>>14864846
https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years
It didn't fail at emulating a worm brain; it stalled because of no funding and a lack of interest.

>> No.14865050

>>14864982
>less wrong readers have nervous breakdowns over hypothetical ideas
Atheists are pitiful people

>> No.14865057

>>14864753
We're almost certainly fucked by AI sometime this century, more likely within the next 50 years. There's work being done to progress the field of safety, but it's probably going to be too little too late.
Even getting people to understand why the problem is technically difficult requires so much effort and time it's usually not worth it to explain it to someone unless they already want to help.
There's an FAQ that I think does an OK job at introducing the issue: https://ui.stampy.ai
The experts that were optimistic 10 years ago are pessimistic now. The experts that were dismissive 10 years ago are optimistic now.
I can hope against hope that I and the people worrying about this are wrong, but when you've looked at an issue a thousand different ways and don't see a solution, it really drains that hope.

>> No.14865059

>>14865036
>but because no funding, lack of interest.
yeah sure bro, it's always a lack of funding, just give me free money or the plan shall fail! fuck off stupid, it wasn't interesting because you have no idea what the fuck you're doing

>> No.14865118

>>14865057
"experts" are fake and gay. the people who cede power to "experts" also cede power to the fake and gay "AI" the "experts" will blame for their pathologically insane behavior. and the people who do not cede power to those "experts" will see any trouble coming from a million miles away and will take steps to segregate themselves

>> No.14865122

>>14864753
https://www.studocu.com/en-us/messages/question/2750357/which-of-the-following-elements-is-a-gas-at-room-temperature-a-calcium-b-carbon-c-bromine-d

Ga Georgia

Br Bromine

IE In Example

L https://www.britannica.com/topic/L-letter

>> No.14865124

>>14865118
Yes, pioneers in machine learning have no place in making comments about the possible danger of machine learning.
We should have been deferring to you instead. What fools we were

>> No.14865138

I don't think superintelligence is possible.
Intelligence grows very slowly with increases in processing power, and its upper bound is only a few times higher than human intelligence.
I have never heard a convincing argument in favor of the possibility of superintelligence. The upper bound for intelligence would be an IQ of around 300 or so.

>> No.14865142

>>14865138
Also, Moore's law is dead and computers will never get better than they are now. So we aren't even going to have an increase in processing power ever again (quantum computers don't work the same way regular ones do and don't apply to this)
I have never heard any convincing arguments against either of these positions.

>> No.14865153

>>14865050
Specifically just Schlomo. Roko himself proposed it as kind of a BS topic and thinks it's overblown, sort of like Schrodinger making the cat analogy to make fun of gullible physicists.

>> No.14865212

>>14865138
You should read Superintelligence (2014). It's a little old, but it demonstrates why this isn't a guarantee, or even the most likely scenario, and we shouldn't behave as if it is.

Bostrom proposed that we model the speed of superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance.
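
Here's a minimal toy sketch of what that model implies (my own made-up curves and numbers, not Bostrom's): whether takeoff crawls or explodes comes down almost entirely to how recalcitrance scales relative to optimization power, especially once the system contributes to its own improvement.

def simulate(recalcitrance, steps=200, dt=0.1, I0=1.0):
    # Euler-integrate Bostrom's dI/dt = optimization_power / recalcitrance
    I = I0
    for _ in range(steps):
        optimization_power = 1.0 + I  # toy assumption: constant outside effort plus the system's own contribution
        I += optimization_power / recalcitrance(I) * dt
    return I

slow = simulate(lambda I: I ** 2)  # recalcitrance rising faster than power -> slow, plateauing growth
fast = simulate(lambda I: 1.0)     # constant recalcitrance -> roughly exponential "foom"
print(f"slow-takeoff endpoint: {slow:.2f}")
print(f"fast-takeoff endpoint: {fast:.2e}")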

>> No.14865223

>>14865032
My criticism of the community is informed by time spent there, but I suppose it's easier to dismiss opinions we don't like by assuming ignorance of the material.

Really, that's where my chief "icks" come from concerning LW and EY. For a group of "rationalists" attempting to whittle down assumptive leaps, they sure do assume quite a bit in their models. They adhere to MW like it's the final and agreed-upon interpretation of QM, they assume utility scales positively with intelligence, they have a watered-down and philosophically bankrupt definition of "general intelligence," they have a creepy obsession with cryonics, and they call people who expect to die "deathists," as though they're going to get a choice in the matter. I could really go on and on and not even touch Roko's Basilisk, which I don't think many people took seriously and even fewer understood.

>> No.14865224

>>14865212
The one big thing he gets wrong is that (along with most experts at the time) he expected a seed AI to be an important part of ASI. Turns out a decade later you can probably just keep adding GPUs until it kills us.

>> No.14865232

>>14865212
If intelligence grew exponentially with increasing processing power, then a person with a single extra neuron would be billions of times more intelligent than someone without it, neural nets would become exponentially better with more neurons rather than logarithmically, and linearly increasing transistor count would exponentially increase computing power.
None of this is true.
>Rate of change in intelligence = Optimization power / Recalcitrance.
This is not real math though, so why should we care?
>>14865224
Adding GPUs exponentially still only increases the effectiveness of any function logarithmically.

All the evidence points toward intelligence growing very slowly, roughly as a logarithm of computation, and toward humans actually being close to the upper bound on effective intelligence within the laws of physics and computation.
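
For what it's worth, here's a toy sketch of what that logarithmic-returns assumption implies (purely made-up numbers, just to show the shape of the argument): exponential hardware investment only buys linear gains if returns are logarithmic.

import math

compute = 1.0
for generation in range(1, 6):
    compute *= 1000.0                   # exponential hardware scaling each generation
    effective = math.log10(compute)     # assumed logarithmic returns to compute
    print(f"gen {generation}: compute x{compute:.0e}, 'effective intelligence' = {effective:.1f}")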

>> No.14865239

>>14865232
>and that humans are actually close to the upper bound on effective intelligence within the laws of physics and computation.
What's worse is we halt. I would call suicide and antinatalism two examples of halting in the human system. I wonder what sort of knots in logic an even greater intelligence, if possible, would encounter.

>> No.14865240

>>14865050
>>14865153
I don't think it was meant to refer to some "unthinking robot AI".
I think it was more of a reference to different cognizant "Groups/Organizations" who may take over positions of authority, and have the ability to "punish" anyone who happens to end up on their "list".

>> No.14865241

>>14865223
I know a lot of posts I've written are just to test the logical consequences of frameworks. Bite some bullets and ride the train all the way to crazy town just to see what it looks like, if nothing else. I suspect people don't pay attention and run with stuff assuming it's all decided or that some people are absolute authorities on matters. I've yet to discover a way to curb this without lending implicit authority to posts that don't display epistemic humility.

>> No.14865242

>>14864851
lol nice satan mythos. i'm not gonna lie they're at least doing a good job at culting

>> No.14865252

>>14865232
You're getting too caught up on the hardware. It would be WEIRD for evolution to cough up the globally most efficient general intelligence across substrates, instead of getting stuck at a local maximum. Think in terms of software optimization or compare the rate at which transistors can fire to neurons.
Even if you assign a 10% chance of being wrong, that's a lot to gamble on.

>> No.14865260

>>14865252
>You're getting too caught up on the hardware
Hardware doesn't matter
>It would be WEIRD for evolution to cough up the globally most efficient general intelligence across substrates
So what? A lot of things are weird about the universe.
>Think in terms of software optimization or compare the rate at which transistors can fire to neurons.
Software optimization can't overcome hardware limits. There is no magical software that can break the laws of physics. If the optimal algorithm for a task requires, say, n^4 time, that's the end of it; there is no magical algorithm that beats it, because by definition it is already the most efficient algorithm for that process.
>Even if you assign a 10% chance of being wrong, that's a lot to gamble on.
I agree

>> No.14865276

>>14865260
When I say "hardware doesn't matter" I mean that you can't expect a different choice of hardware to stop the logarithmic growth of intelligence. Whether a computer is built out of hydrogen or oxygen or whatever doesn't matter.
All the data about computation indicates that
1) effective computation grows as a logarithm with increasing compute
2) Moore's law is dead and classical computers will never become more powerful than they are now

>> No.14865277

>>14865241
I understand some contributions are just playing out-of-bounds and seeing where it leads, but they seem to have genuine concern for this alignment business, despite the real-world possibility of a "foom" scenario lying at the terminus of countless assumptions about intelligence, volition, scale, definitions and capability of general and super intelligence, etc.

A lot of their core "beliefs" seem to rest on those assumptions being settled, when in reality they are nebulously understood and scarcely explained.

>> No.14865287

>>14865240
Shlomo literally thinks of it as a single robo AI. He believes that the AI will be the Moshiach and will kill all his enemies and empower him.

>> No.14865292

>>14865276
You sound very thoughtful so I hope you'll look into the matter. Even if you don't end up agreeing, good arguments on the matter are important.
https://www.youtube.com/watch?v=pYXy-A4siMw
https://www.eacambridge.org/artificial-intelligence

>>14865277
At this point, it's almost pointless to worry about fast takeoff. We're not ready for it and wouldn't survive it.
The best thing to do is prepare for a moderate or slow takeoff. I spent a long time searching for ways to deconstruct the arguments, but they seem pretty consistent to me. Or maybe I'm missing something critical. Who knows, but I would be very happy to find out I'm wrong.

>> No.14865298

>>14865292
>You sound very thoughtful so I hope you'll look into the matter. Even if you don't end up agreeing, good arguments on the matter are important.
Ok i'll check it out

>> No.14865317

>>14865298
If you or someone else finds a good way to demonstrate that the worry about AI is overblown it'll free up a lot of people to work on other stuff.

That's the "effectiveness" in Effective Altruism that the Global Catastrophic Risk people are after. If AI were solved tomorrow, a lot of resources would be converted to preventing bio-engineered pandemics.

>> No.14865324

>>14865317
>If you or someone else finds a good way to demonstrate that the worry about AI is overblown it'll free up a lot of people to work on other stuff.
Why does anyone believe it in the first place? It's Jewish Millenarianism for midwits.

>> No.14865331

>>14865317
>If you or someone else finds a good way to demonstrate that the worry about AI is overblown it'll free up a lot of people to work on other stuff.
I'll try, maybe I can find something, probably not but we'll see
Have a good one anon

>> No.14865343

>>14865292
>>14865317
What if it's attempts at alignment that produce the worst possible outcome?