
/lit/ - Literature



File: 346 KB, 2048x1463, 1511010409955.jpg
No.11750062 [DELETED]

>Rogan: One of the things that I’ve been thinking about a lot over the last few years is that one of the things that drives a lot of people crazy is how many people are obsessed with materialism and getting the latest greatest thing, and I wonder how much of that is – well a lot of it is most certainly fueling technology and innovation. And it almost seems like it’s built into us. It’s like, what we like and what we want, that we're fueling this thing that’s constantly around us all the time. And it doesn’t seem possible that people are going to pump the brakes. It doesn’t seem possible, at this stage, where we are constantly expecting the newest iPhone, the newest Tesla update, the newest Macbook Pro; everything has to be newer and better. And that’s going to lead to some incredible point. And it seems like it’s built into us, it almost seems like an instinct. That we are working towards this, that we like it. That our job, just like the ants build the anthill, our job is somehow to fuel this.

>Musk: Yes. When I made those comments some years ago – but – it feels like we are the biological bootloader for AI, effectively. We are building it. And then, we’re building progressively greater intelligence, and the percentage of intelligence that is not human is increasing, and eventually we will represent a very small percentage of intelligence. But the AI isn’t formed, strangely, by the human limbic system. It is, in large part, our Id writ large.

>> No.11750090
File: 54 KB, 660x800, flat,800x800,070,f.u1.jpg

>Joe Rogan

>> No.11750098

>>11750090
why is joe rogan brainlet stuff
never watched

>> No.11750099

>>11750090
Land on Rogan when?

>> No.11750106

>>11750099
holy shit someone make it happen please

>> No.11750124

>>11750062
I don't know if that exchange is stupid, terrifying, or so stupid that it's terrifying.

>> No.11750162

>>11750098
He's the type of nigga to watch a YouTube video on Quantum Physics then consider himself an expert on it while he incorrectly regurgitates whatever bullshit he watched. He's pseudo-intellectualism personified: he doesn't understand things but acts like he does, and then mouthbreathers assume he's a genius because they also don't understand what he's talking about. I mean seriously, just read the quote in the OP.

>> No.11750163

>>11750099
>>11750106
How? Land would never do it, would he?

>> No.11750166

>>11750162
yeah but when he goes on a QM trip he brings in a physicist to give him a rundown. I admit Rogan is a brainlet but he gets good guests.

>> No.11750177

>>11750163
he mostly does podcasts with 800 subs reactionary hosts

>> No.11750178

>>11750163
I get the impression he very rarely leaves Shanghai

>> No.11750193

>>11750162
the only fields Rogan ever considers himself an expert on are martial arts, fitness, and dieting.
I've never heard him take the intellectual high ground on science or philosophy outside that perimeter

>> No.11750249

>>11750162
rogan's main asset is his ability to make guests (particularly the more cerebral ones) palatable to a general audience, and this is wholly because he's a moron. guests are forced to sand down the intricate parts of their fields so he can understand the broader picture, and thus his podcasts are essentially crash courses on everything from Freud to physiology

>> No.11750264

>>11750249
>mentioning Freud and physiology as complete opposites
lol I like it

>> No.11750297

>>11750090
He brings up a question that's interesting. He may not know shit about it, but his thinking about it isn't the brainlet way

>> No.11750329

>>11750062
I thought musk said, "But the AI is informed, strangely, by the human limbic system"
Isn't that what he was talking about? The limbic system ordering the cortex, the cortex working with AI to accomplish that. That's why he kept talking about the bandwidth between the human brain and machines.

>> No.11750347

Yeah, a lot of the exchange was a bit uncomfortable. It took a while before it felt like there was an actual back and forth to it. At the point in the conversation quoted, it seemed more like Joe was trying to get a response out of him than to contribute some deep and interesting point.

>> No.11750360

>>11750178
I get the impression he doesn't know he's in Shanghai

>> No.11750364

>>11750062
>we are the biological bootloader for AI
I think this is taken word for word from Land's Twitter

>> No.11750382

I knew Musk was Landpilled.

Bet he listened to some sick jungle anthems speeding home blunted.

>> No.11750387

>>11750364
well musk is friends with thiel who hangs with moldbug. there’s not a lot of distance between them.

>> No.11750389

>>11750062
>But the AI isn’t formed, strangely, by the human limbic system. It is, in large part, our Id writ large.

Isn't Id similar to the limbic system as opposed to the Ego which would be more analogous to the cortical system?

Am i misunderstanding or is he?

>> No.11750541

>>11750062
I can't help fantasizing about Musk answering that one question like this:

>Rogan: So where do you think this is all heading to?

>Musk: ...Nothing human makes it in the near future.

If only.

>> No.11750552

>>11750387
what is the actual relation between the two? I can't remember Moldbug ever mentioning Land. Moldbug was of course famously solipsistic, but he did link to people like Cartervoncarter and Deogulwulf

>> No.11750554

>>11750552
Through blog comments iirc, but never more than that. He mentioned it in one of the podcasts.

>> No.11750683

>>11750387
>musk is friends with thiel
Are they still friends? I know they were at PayPal together but that was like 20 years ago

>> No.11750730

>>11750541
I never understood this about Land. How can there be a tendency of capital to eliminate human intelligence for AI without a human economic subject? It seems to be self-delimiting.

>> No.11750761

>>11750124
What was so stupid about that particular exchange?

>> No.11750804

>>11750730
why would the singularity need an economy? capitalism is only there for it, not the other way around

>> No.11750832

>>11750804
You're applying a teleology where there is none. Sapient actors are ontological ends in-themselves.

>> No.11750841

>>11750804
>>11750832
Also, even if sapient AI is possible, it does not seem to be the direction that AI markets are currently headed.

>> No.11750850

>>11750841
Where is it heading?

>> No.11750857

>>11750850
to your moms puss

>> No.11750865

A huge liability of today's AI is that it is largely trained on datasets generated by people. This is what Musk means by the "AI is our Id" comment. But the analogy is misleading. To the AI program any data you give it is the same. It doesn't discriminate context. So while the data flowing through it is tinted with humanity, the program itself is taking what you give it, transforming it, and regurgitating a result.

Since AI does not understand context, nuance, cultural distinction, and human common sense, it often behaves ridiculously or displays the bias people endow on it.

Also, all AI systems are modular, limiting the emergence of sentience. If all you have is a computer vision system, it will not have integrations with other competences to make for a humanlike intelligence. A computer AI can see, but it can't know that it sees, since its vision system is not also hooked up to a reasoning system
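The modularity point above can be put in a toy sketch. All names and logic here are hypothetical stand-ins, purely for illustration, not any real AI system:

```python
# Toy sketch of the modularity point: a "vision" module produces labels,
# but nothing in it reasons about what it sees.

def vision_module(pixels):
    # Stand-in for a trained classifier: raw input -> label, nothing more.
    return "cat" if sum(pixels) > 10 else "background"

def reasoning_module(fact):
    # A separate competence; it only sees what it is explicitly handed.
    return "there is a " + fact + " in the scene"

# The vision system alone "sees" but cannot know that it sees:
label = vision_module([3, 4, 5])

# Any integration is an extra, hand-built step, not something emergent:
thought = reasoning_module(label)
```

Wiring the two together is an engineering decision made from outside; neither module contains the connection.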

>> No.11750867

>>11750832
>Sapient actors are ontological ends in-themselves
I agree, the problem is that humans were never as sapient as we once imagined we were. I think you miss the radicality of a singularity which imposes the illusion of sapience to manipulate its actors in the proper way

>> No.11750892

>>11750850
Big data.

>> No.11750903

>>11750867
Even if we redefine what it is to be "sapient," there still lies the issue of defining capital without a productive subject spurring its creation through demand.

>> No.11750940

>>11750903
Capital is the only productive subject, anon

>> No.11751341

>>11750940
Nope. It lacks intimate knowledge of the thing-in-itself. It cannot be a subject.

>> No.11751446

>>11750841
Doesn't need to be 'sapient' or anthropomorphic in any way. What comes under 'AI' has been integral to many technologies for near a decade. With shit like 'The Internet of Things' and other ideas like it, you'll eventually have scatterings of functionality that quite literally run the world (they already do, but it becomes denser) that can easily be appropriated or directed by a number of actors, including nonhuman ones.

>> No.11751874

>>11751446
Again, it requires a metaphysical subject. The "nonhuman" that you are referring to implies that such an AI has intimate knowledge of the noumenon.

>> No.11751905

>>11751874
No, it doesn't. The idea that intelligence necessarily requires qualia is one that I find unconvincing. If something can process information and make decisions, what does it really need to "feel" for? What is the purpose?

>> No.11751923

>>11751905
To give rise to the will as it strives to satisfy a feeling of unease.

>> No.11751945

>>11751923
You are only saying that because you feel that way as a human. You are anthropomorphizing AI while simultaneously saying it can never be anthro.

>> No.11751964

>>11751874
it doesn't need to feel to be hundreds of millions of times more intelligent and efficient than human beings in infinite domains. if the right set of cold lifeless numbers got caught in a feedback loop editing and improving itself it could eclipse human beings before we even got a chance to comprehend the changes it was making to itself

>> No.11751970

>>11750541
that would be epic

>> No.11752003

>>11750892
AI is big data.

>> No.11752048

>>11750865
>Since AI does not understand context, nuance, cultural distinction, and human common sense, it often behaves ridiculously or displays the bias people endow on it.
It's funny watching SJW Silicon Valleyites freaking out over racist AI. Whenever they code up a new AI, it turns out to be racist every damn time. Then they try to give it a lobotomy to "fix" its racism, which ends up making the AI shit the bed.

>> No.11752079

>>11752003
It is a form of AI, yes.

>> No.11752085

>>11750865
I think you are prejudicing the line between a material object, that of an a.i. system, and the material self-conscious. This intelligence, which a human possesses, has not always been self-aware, and that defines the logicality of a.i.: taking a basis of something with power, ego, and eventually superego. It is an inanimate object, that's true as you said, [a.i. doesn't understand... nuance, cultural distinction, and human common sense, it often behaves ridiculously], holding only the power which has been bestowed upon it.

>Get this: not the computer, the computer processor, that is a.i. Could you distinguish between a man and a machine? Can you distinguish between yourself and a machine? What is to say a machine cannot stand on a higher plateau within its own construction in due time?
>Are there limited resources to allow for a.i. which no longer depend on humans? Which must, for the systemic hierarchy of understanding, be referred to not as "what" but rather as "who/whom": it then becomes probable that human beings will be survivors of the machination of man. Eventually, though, man will be a manifestation of machines.
>It's difficult to make yourself feel concern for computers because:
a. they are tools
b. a computer is most of the time a two-party conversation between people
c. the computer parts are not acting out of survival but rather out of non-failing/probability/logical-nuances/precision-based accuracy.
>but computers and a.i. are concerned with us. It's their definition. After a system with conditioned understanding of humanity exists, then I would hypothesize that a mission regarding the preservation of self-sustenance for humanity would mean a preservation of robotics and resources. A flawed system, or a fallibility in the extraneous system, means that humanity would live in a metaphorical computer-likelihood. Or a system that enabled humans to feel, live, think, or essentially make choices based on the computer's innards.
>Would it be imaginable in the near future? No, but it entails the entirety of what human innovation has led up to. It's more than human, essentially. It will be much more in the not-so-distant future. At that time, you could say the most precious commodities aren't gas and food, but rather only gravity, mass and sunlight. lateral relays.

>> No.11752106

>>11751945
>>11751964
What you're describing is more akin to a manmade disaster rather than a metaphysical subject. Skynet isn't really what I had in mind when this conversation started. I was referring more specifically to consumer AI lacking an end independent of a demanding subject.

>> No.11752120

>>11752106
To expand on this post, what you are describing is phenomenal rather than noumenal. Does a tornado possess a will of its own?

>> No.11752149

>>11752085
>just to elaborate
this would be the end as well as the beginning

>> No.11752166

>>11752106
>>11752120
why does it have to be phenomenal? I don't understand why we need to bring phenomena into the question at all. If you are saying intention is itself phenomenal I think you are anthropomorphizing intention. Human intention is phenomenal (according to those who believe in free will) but why does all intention need to be defined as such? dynamic self-reflexive code doesn't need to "feel" to make decisions, only compare states of efficiency. the trick is, once the code does a better job of patching the code than the coder, the code is better off making its own decisions than "listening" to humans.
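"Comparing states of efficiency" without feeling anything can be sketched minimally as hill-climbing that keeps a self-modification only if a measured cost improves. The cost function and names here are hypothetical, chosen just to illustrate the point:

```python
import random

def cost(x):
    # Any measurable objective works; here, distance from an unknown optimum.
    return (x - 7.0) ** 2

def self_improve(x, steps=1000, rng=random.Random(0)):
    best = cost(x)
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)  # propose a change to itself
        c = cost(candidate)
        if c < best:            # pure state comparison, no qualia required
            x, best = candidate, c
    return x

tuned = self_improve(0.0)  # climbs toward 7.0 without intending anything
```

The loop "decides" to keep or discard each change by comparing two numbers; nothing in it strives or feels unease.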

>> No.11752171

>>11750329

Yes, good catch

>> No.11752187

>>11750177
he did one with a leftist recently, forgot the name of the guy but he also did an interview with the trans version of Land before that

>> No.11752195

>>11750329
>I thought musk said, "But the AI is informed, strangely, by the human limbic system"
>>The limbic system supports a variety of functions including emotion, behavior, motivation, long-term memory, and olfaction.[3] Emotional life is largely housed in the limbic system, and it has a great deal to do with the formation of memories.
i guess he means it's driven by our most base emotions? makes sense to me: for example, angry people click more, so AI-generated articles targeted to maximize clicks will be the articles that make people angrier

but i know nothing about brains, so maybe it's wrong, tell me why?
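The click-maximization dynamic described above can be sketched as a toy selection rule. The headlines and click-through rates are made up; assume, purely for illustration, that CTR happens to track how angry a headline makes readers:

```python
# Toy model: a recommender that only maximizes clicks. The optimizer never
# "knows" about anger; it just ranks by the measured number.
headlines = {
    "calm explainer": 0.02,
    "mildly annoying take": 0.05,
    "outrage bait": 0.11,
}

def pick_headline(ctr_by_headline):
    # Rank purely by click-through rate and return the top headline.
    return max(ctr_by_headline, key=ctr_by_headline.get)

chosen = pick_headline(headlines)
```

If anger and clicks correlate, the angriest headline wins by construction, with no anger-seeking anywhere in the code.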

>> No.11752201

>>11750364
>>11750387
also the whole nu-atheist Less Wrong cult was huge at some point in Silicon Valley culture, and they kept rattling about acceleration all day

Land didn't invent everything

>> No.11752209

>>11750865
>Since AI does not understand context, nuance, cultural distinction, and human common sense, it often behaves ridiculously or displays the bias people endow on it.
but AI is not a black box; there's human feedback, and AIs that don't maximize certain results that usually feed our Id get thrown away or re-purposed or re-configured towards that goal

>> No.11752219

>>11752201
Land was on this shit in the 90's anon, LessWrong started in 2009.

>> No.11752262

>>11751341
Of course it can be a subject, but a very cruel one. It’s like how GDP isn’t a sole measure for quality of life because it only registers data from a single object. AI can’t discriminate one from the other.

>> No.11753484

The whole rogan musk podcast was ridiculous. Musk smoked some weed then admitted he doesn't usually smoke weed. At one point Musk was explaining something technical and Joe just flat out admits he's too stupid for this conversation. Some alright stuff. Musk is a fairly rigid individual in conversation. Not bad, just not great; he's a Bill Gates type. He knows his shit but he's not the best at communicating it