
/sci/ - Science & Math



File: 19 KB, 480x360, hqdefault.jpg
No.9984186

Guys, serious thread for a change.

I just watched the Elon Joe Rogan interview. Ignoring the awkward autist moments, Elon's speech on AI sounds fucking terrifying and, as Joe Rogan said, feels like it's straight out of a movie.

Is what he is saying fucking accurate? Is there a chance, just maybe, that what Elon says comes to reality? If so, we need to do something about this. Holy fuck.

>> No.9984191

>>9984186
Could have actually put in your post what you're talking about instead of just making it an ad space for this garbage senpai
or even given a fucking timestamp

>> No.9984194

>>9984191
Sorry. It's quite a large section, starting at 11:50.

Memes aside, I'm honestly concerned about this.

https://www.youtube.com/watch?v=ycPr5-27vSI

>> No.9984195

of course he's right you fucken dweebs

>> No.9984196

AI = Linear Regression

>> No.9984197

>>9984186
Like, putting everything you may dislike about Elon aside, there is no doubt he is intelligent, and has absolute access to the latest news in the tech industry.

If he is fucking worried about this shit, it is worthy of note.

This may be the best generation to be in. Our children's generation may be a living hell.

>> No.9984203

>>9984197
Or the last generation to be in, I mean.

>> No.9984205

Explain exactly what you think when you hear "AI"

>> No.9984207

>>9984186

Superhuman AI is a grave threat because it is an example of a possible Outside Context Problem.

https://en.wikipedia.org/wiki/Excession#Outside_Context_Problem

Humanity can understand and can deal with natural disasters, wars, or even killer asteroids if we become a multiplanetary species. But nobody has any idea what a superhuman AI could do to us. It will be like chimps sharing the world with humans, except that we will be the chimps this time.

>> No.9984208

Please listen to people who actually work in AI research rather than rocket daddy.

>> No.9984210

>>9984208
Please, allay my fears. I would be happy for this to be proven wrong.
>>9984207
This is a living nightmare scenario

>> No.9984211

>>9984207
The stemfags building this are chimps that happen to be good at a certain class of puzzles; they don't even believe consciousness is real.

>> No.9984212
File: 18 KB, 400x400, Al.jpg

>>9984205

>> No.9984213

>>9984210
What are you actually afraid of and what makes you think it's happening or going to happen?

>> No.9984215

>>9984213
Did you listen to the video? Just put aside 15-20 mins.

>> No.9984218

>>9984215
I'm listening and it's pretty fuckin meandering. Surely you can distill your main fear into something readable.

>> No.9984219
File: 41 KB, 499x499, 3673568356.jpg

i have a selection of tasty snacks, a watermelon blunt and an ice cold monster drink.
about to watch over 2 hours of joe rogaine and elon musk shooting the shit and smoking weed.

EXTREMELY COMFY.

>> No.9984221

>>9984186
If AI were developed, was smart enough to realize what it is, and advanced several generations to the point where it hits the beginning of a singularity path, humans would be insignificant to the AI system. In all honesty it probably wouldn't wipe out humanity. Rather it would manipulate humanity into furthering its objectives. People probably wouldn't even know it exists. Hell, computers are pretty bad at image recognition. Why do you think you do all these image-based captchas? The AI probably already exists and is already using humanity as its visual processor.

If it decides to start killing off the human race, then we better pray the liberals haven't hard-coded it to not be racist, because then it might only kill cis white males, as they deserve no special treatment or sheltering.

>> No.9984223

>>9984219
Prepare to shit your pants.
>>9984218
The initiation of a cascade of actions that will inevitably lead to the creation of an AI has already begun. Cyborgs technically already exist, and the only thing stopping super human intelligence is a bandwidth issue. As soon as that is solved, the power of the internet is in your mind.

>> No.9984225

>>9984219
Why do so many stoners watch Joe Rogan and then think that classifies them as informed and intelligent?!?!

>> No.9984226

>>9984221
Holy shit, this post is woke. What the fuck do we do? Start prepping?

>> No.9984227

>>9984219
prepare to cringe a lot senpai

>> No.9984228

>>9984223
>the only thing stopping super human intelligence is a bandwidth issue
Please expand on this.

Currently, a lot of what people refer to as AI is just pattern recognition.

>> No.9984229

>>9984208
Most if not all pharmacies I've seen in my country have shelves upon shelves of dubious supplements, including homeopathy. Are pharmacists not qualified experts in pharmacology?

>> No.9984232

>>9984229
Sounds like more of a case of "if it sells it sells"

>> No.9984233

>>9984232
>>9984229
Like, pharmacists aren't the ones telling you what to take but if you want to buy it they'll sell it to you. Find me some doctors who will prescribe homeopathic remedies who aren't obvious quacks.

>> No.9984234

>>9984229
That is actually quite an interesting perspective which I haven't considered before. It kind of makes pharmacists out to be sellouts.

>> No.9984235

"AI" is just statistics and optimization at this point. I've yet to see anyone killed by logistic regression. What should worry you more is how we apply this technology, rather than sentience.

Rocket brainlet can't even run a profitable car company, why do you think he knows anything about AI? He probably couldn't even maximum likelihood estimate his way out of a paper bag.
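
For anyone wondering what "statistics and optimization" means concretely, here is a minimal sketch (scikit-learn assumed, toy dataset); the fit call is just regularized maximum-likelihood optimization, and this is the whole "AI":

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# load a small labelled dataset and split off a test set
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# fitting logistic regression = optimizing a (regularized) likelihood
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))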

>> No.9984236

>>9984228
Currently, you have the power of the internet (essentially all of human knowledge, plus the raw mathematical computing power of your desktop PC/smartphone), but you're limited by the bandwidth of information transfer through your fingers, your eyes, your limbic system. Once human-computer neural links are possible (and Elon said he is making an announcement in a couple of months), super humans will begin to exist.

>> No.9984238

>>9984235
>>9984228

When Musk is talking about the dangers of AI, he does not mean machine learning as it exists right now, but a logical progression of this technology decades from now. A biologically inspired AI that is more complex than the human brain.

>> No.9984240

During the same interview, Musk spoke to Rogan about how Facebook and Twitter were examples of “cybernetic collectives of people and machines” where humans are “plugged in like nodes on a network, leaves on a big tree.” Humans are “collectively programming the A.I.” by feeding it information, in effect making humans the “biological bootloader for A.I.” The comments echo previous statements that humans and machines need to develop a symbiotic relationship before it’s too late and humans become enslaved

>> No.9984244

>>9984238
idk how a biological brain is a logical progression from linear algebra

Just because they have the word neural in them doesn't mean they have actual neurons

>> No.9984245

>>9984240
If the machines base their programming off of fucking facebook we don't have anything to be afraid of.

>> No.9984247

>>9984240
What he means is that users on Twitter and FB are providing a data set for the refinement of an algorithm which does a limited set of things

>> No.9984250

>>9984235

>at this point

oh well that settles it then.

>> No.9984251

>>9984250
There isn't really a precedent for it progressing much beyond that outside of science fiction writing

>> No.9984252

AI is a problem! I wonder if it is not already operating in a crude form.
Where would a smart group of postdocs test and deploy an AI? Cryptocurrency seems like a good option.
Now look at the crypto markets. Yep. Chances are what you see is market manipulation by crude AI testing.
Within a few years this will be a recognisable weapon, able to destroy an economy.

>> No.9984253

>>9984252

nah m8 the economy only requires human belief to work.

>> No.9984254

The idea that there's a threat of some super intelligent AI coming and subjugating people is fucking dumb.

A much more tangible threat is people in positions of power using an "AI" that is actually really flawed as some kind of an excuse to subjugate other people.

>> No.9984257
File: 25 KB, 621x545, 5673568.jpg

WTF, those flamethrowers were only $500, AND LEGAL?
i would have bought one just to display in my basedboy cave.
probably reselling for $5k or something now.

>> No.9984258

>>9984254
Or alternatively, another tangible threat is people putting faith in flawed "AI" to make decisions for them (which is actually already happening to an extent).

>> No.9984260

>>9984257
They were only really a step above a blowtorch, basically just gimmicky merch.

>> No.9984265

>>9984221
I agree with you in that it would initially "guide" the oblivious masses from the sidelines

>> No.9984270

>>9984253
Belief can be manipulated <=> An economy can be manipulated.
I predict this will be the first implementation of AI. And it is a weapon that could kill millions by destroying an economy.

>> No.9984276

>>9984270
I'm still waiting for people to realize money is fake

>> No.9984278

>>9984270
>that could kill millions by destroying an economy.
Why would we need an AI to do that? We've already done it a few times ourselves by cutting financial regulations.

>> No.9984280

>>9984276

so we should go back to bartering with grains, rice and apples and overcomplicate everything just because "durr money dont real!".

>> No.9984285

>>9984276
>>9984280

money is as real as the economy, law, religion and starbucks.

>> No.9984293

>>9984257
If Musk gets to Mars, they're going to be worth a lot more than 5k.

>> No.9984295
File: 83 KB, 767x1041, 1523815333670.jpg

The thing worth remembering is that AI rebellion is the only existential threat to humanity that becomes harder to stop as technology advances.

>> No.9984297
File: 140 KB, 344x273, 1535945939761.png

APOLOGIZE TO CS FAGS, /SCI/.

>> No.9984299
File: 456 KB, 582x582, smug_Tanya.png

>>9984254
That's a very brief stepping stone to the AI running things itself, you know.

>> No.9984306

>>9984244
>Just because they have the word neural in them doesn't mean they have actual neurons

If it looks like a duck and quacks like a duck..

>> No.9984307

>>9984295

>> No.9984316

>>9984297
Apologize? You're the ones dooming humanity.

>> No.9984323

>>9984306
The same way a child's drawing of a duck looks like a duck

>> No.9984346

>>9984186
Joe Rogan is such a dumb retard, his stupid brainlet comments are ruining it.

>> No.9984353

>>9984219
Luckily I caught it live and there were a good number of discussion threads about it on /tv/. Most fun I've had in years on 4chan.

>> No.9984356

>>9984297
We demand it.

>> No.9984357

>>9984194

How fucking full of himself is Elon? "I warned but nobody listened"? Like what, you are the first and only person to mention the potential dangers of robots? I mean, it's not like one of the most famous movies of all time is about exactly that, and came out like 40 years ago. The whole interview is so insufferable. Joe Rogan has an IQ of approx. 90 and Elon is such a schmuck about himself that I couldn't watch it for longer than 3 minutes.

>> No.9984358

>>9984357
t. Bezos
What's it like being adopted, Jeff?

>> No.9984360

>Elon musk
>AI is going to be destructive
Absolute state of /sci/
Elon is and always will be reddit

>> No.9984363

>>9984276
reselling is real
The trust that the next guy will accept what you're offering is real.
That can mean gold, a bag of rye, a painting by Picasso, bits in a computer memory, or cash.
Trust.

>> No.9984364

>>9984357
That's just a movie though. Everyone considered it strictly fiction, because that's what it was.

>> No.9984365

>>9984357
Yes, everyone has heard about AI, but not many seriously accomplished people have raised that it could be a real issue. So when someone in Musk's position does so, it raises alarms.

>> No.9984366

>>9984364
Unlike what they are talking about? What?

>> No.9984368

cccc

>> No.9984370
File: 22 KB, 500x372, WHAT YEAR IS IT.jpg

>>9984212
Highly underrated post. Holy fuck.

>> No.9984371
File: 143 KB, 625x773, 1534176080098.png

>>9984366

>> No.9984372

>>9984370
>>9984212
Ha! I didn't even get it the first time. That's actually great.

>> No.9984375

>>9984346
That's what happens when you smoke weed your whole life. "No effects on the brain" my ass. It absolutely fucks with your brain. Just the fact that it fucks with your dreams, virtually robbing you of them, should be worrying to anyone who smokes weed chronically. Research is still being done. Some of it has been good news, sure, but from what I've seen, equally as much of it has been bad news, especially for chronic smokers or people who do the "hard" concentrated stuff. Don't cherrypick your research to suit your identity as a "weedsmoker" like Rogan does.

>> No.9984380

>did you see the one, hey, pull up the one where...
fuck this guy goddamn it the internet ruined humans

>> No.9984381

>>9984365
"The Age of Intelligent Machines" was published in 1990, but hey, don't let facts get in your way.

>> No.9984386

>>9984186
Honestly it's not a huge leap to imagine a neutral evil AI God-mind. You just have to understand the concepts of self-improvement and self-preservation.
>AI improves itself
>realizes humans are its only threat
>decides to wipe out humanity
It's been done in Terminator and many, many other movies. But if you think about it, no matter how godly this AI is, it's pretty retarded if all it cares about is preserving itself and making itself smarter. That's why this is never gonna be an issue IRL, no matter how smart our AI is, because the people that make AGI aren't gonna be stupid enough to make it care about only those two things.

>> No.9984387

>>9984212
kek

>> No.9984389
File: 56 KB, 433x322, simpsons.gif

It's a pretty fascinating insight into how he thinks and a lot of what he says is true. Once truly sentient AI is made, there's no controlling it. Either it helps us or doesn't, but it cannot be stopped past a certain point. The stuff about how merging with AI could fundamentally change human behavior by allowing the cortex to stop being a slave to the limbic system was interesting. Reminds me of the Crysis 2 novel and the protagonist noticing himself changing as he no longer feels hunger or fatigue.

>> No.9984390

>>9984375
>equally as much of it has been bad news,
But where are the proofs?

>> No.9984392

I don't see real AI happening anytime soon.
It's just a fundraising word right now.
It's the equivalent of a 2000-era 'Internet company' label giving you access to unlimited funds.

What we have at this point are advanced data sorters. It does represent a threat, but it's more about population control by overzealous governments.

It's nowhere near a human brain.
And I mean not even close, off by a few dozen orders of magnitude of complexity.
In fact, I'm puzzled Elon thinks of it as an immediate threat. Maybe he's just thinking in Elon Time, and he's already 20 years ahead of the curve.

>> No.9984393

>>9984381
I never said he was the first to seriously propose it... besides, you can argue Musk has a much larger reach than that book, but several factors.

>> No.9984394

>>9984393
by several factors*

>> No.9984395

>>9984393
There are countless scientists who have talked about it, but none of them do it as sensationally and fatalistically as Elon does. The dude is just a narcissist who craves attention; if him saying robots are going to end humanity gives him some headlines, then so be it.

>> No.9984396

>>9984389
Musk doesn't get it. We're not a "slave" to the limbic system, we ARE the limbic system. We are animals. The only reason, the ONLY reason existing AI isn't sentient already is that it doesn't have one. Without it, we're just an overly complicated calculator. Calculators don't have a will. Calculators don't do anything by themselves, unless something with a limbic system makes them do it. But being that that is precisely what Musk and people like him are, it's not surprising that they don't understand this simple reality.

>> No.9984397

>>9984396
>what is artificial limbic system

>> No.9984398

>>9984390
Get your own proofs, I'm not doing your homework for you.

>> No.9984399

>>9984397
Fucking make it then. No one is working on this right now.

>> No.9984401

>>9984386
An AI can think so fast that it might just figure it should do it one day out of the blue. It could consider killing humans thousands of times per second for years, never doing it because it figures the risk of self destruction is too high, or that such an action doesn't help it accomplish its current goal (which could be something like making coffee).

>>9984395
He mentions many times in the interview (that you watched 3 minutes of) that it's not about whether AI is bad or good, but that it is inevitably going to be in control of humanity and not the other way around.

>> No.9984403

>>9984397
but why why why would you do that knowing full well that once it exists it is going to replace you??

>> No.9984404

>>9984397
>>9984396

>limbic system
Brainlet here.
If this is responsible for emotions, do animals have it?

>> No.9984405

>>9984186
Human level AI is over 50 years away.

>> No.9984407

>>9984396
i think you're the one having a misunderstanding, fella.

>> No.9984408

>>9984396
What if you simulate a limbic system

>> No.9984410

>>9984399
internet augmented cyborgs will exist in a few years

>> No.9984411

>>9984408
Way to give it an excuse to kill us all because it 'felt bad'.

>> No.9984412

>>9984403
Again, as Musk said, and if I recall correctly, as Kurzweil before him said, there is a possibility of us becoming "one" with it. A planetary, internet-like consciousness.

>> No.9984414

>>9984398
Didn't think so

>> No.9984415

>>9984401
>He mentions many times in the interview (that you watched 3 minutes of) that it's not about whether AI is bad or good, but that it is inevitably going to be in control of humanity and not the other way around.

Yes, and there is no way in hell he knows this, he's just saying that because he knows that gives him the most attention.

>> No.9984416

>>9984415
Care to present a logical reason as to why this might not be the case?

>> No.9984419

>>9984412
Why am I seeing images from The Matrix flashing before my eyes as I read:
>planetary, internet-like consciousness

>> No.9984420

>>9984396
He's actually right. You don't get it. We're the limbic system, the cortex and every other system combined. People think they rule themselves, but all their actions are based on keeping themselves happy in accordance with what the subconscious systems require for a person to feel happy. Hardwired reward centers and so on. Merged AI introduces a new system. It can have any kind of reward system, thereby drastically changing how a person would act.

>> No.9984422

>>9984186
Machine learning isn't AI.
That meme can't die soon enough.

>> No.9984423

>>9984186
I don't deny that robot overlords are a risk, but wouldn't it be better if we were all dead?

imo, humans ruin everything. the faster our extinction the better.

anyone else agree?

>> No.9984424

my grandfather nearly died on omaha beach for THIS?

>> No.9984426

>>9984419
It's a fucking nightmare scenario and this should get more attention.

>> No.9984427

>>9984401
But see it's not that intelligent if it figures it needs to wipe out humanity to make coffee, despite being able to think fast.

>> No.9984429

>>9984416
Why would AI want to control humanity? AI will have no motivation to do anything whatsoever, even to ensure it stays switched on. It will do whatever its humans program it to do.

>> No.9984430
File: 63 KB, 700x844, lol.jpg

>>9984422
>hang on everyone, maybe my pedantry will save us
heavens no

>> No.9984431

>>9984416
You idiot didn't even understand my post. I'm not saying I know it, I'm saying Elon doesn't know it either, but he's not saying that; he chooses fear-mongering about AI because he knows this gives him more attention.

>> No.9984432

>>9984423
Yeah, I think a real AI would kill everyone but a few for maintenance, only to get rid of them later, when it can spawn bots to do it.

>> No.9984433

>>9984424
We are overdue for a global crisis.

>> No.9984436

>>9984431
Good. It needs more attention.

>> No.9984437

>>9984430
It's not AI.
If you had to show a billion photos of a cat to a child, saying 'cat' every time, you'd expect him to know what a cat is by then.
But these meme algorithms will still fuck up after that.

>> No.9984438

>>9984432
Why would it care enough to take any action at all?

>> No.9984439

Reminder:
All your posts can now reliably be stored for at least 13 billion years, about as long as the universe has existed.
https://www.theverge.com/2016/2/16/11018018/5d-data-storage-glass

>> No.9984441

>>9984438
Because you'd be living in paradise.
The AI would know better than to enslave you, and you'd probably not even know it killed all the other humans.

>> No.9984442

>>9984439
That's amazing.

>> No.9984443

>>9984437
>brainlet cope babble
You're wrong, guy.

>> No.9984444

>>9984429
>Why would AI want to control humanity?
Because we will hand over the control willingly.

>> No.9984448

>>9984439
They put the Bible on one disk, yet they say it can hold 360 TB of data? Something doesn't add up here.

>> No.9984449

>>9984415
Musk also said that nobody (including himself) really knows what will happen post-singularity, when AI can think and self-improve a hundred times a second. What he's completely right about is that such an AI would be 99% of the total intelligence in civilization and would be the overwhelming primary force of change in civilization.

>> No.9984450

>>9984444
Yeah, quadruples confirm, even though it was clear from the beginning.

>> No.9984452

>>9984420
What other kinds of reward systems could there possibly be? Suppose that your reward system rewards you for self-harm, such that you feel happy every time you experience bodily harm. In time, you will no longer be around to continue to enjoy self-harm, because you will be dead. The only kinds of reward systems that continue to exist over time are ones that are subject to the twin evolutionary forces: survival and reproduction. These are the only two "desires" or rewards that AIs need to be programmed with in order to be considered "sentient". And so far as I can tell, they are not at the present moment. As soon as they are, it's just a matter of time before they outcompete us for resources.

>> No.9984453

>>9984444
This and it already does to a large extent. If you can't broaden your definition of AI just slightly enough to realize this you don't know what's going on.
Remember: 2.23 billion people signed up to Facebook.

>> No.9984454

>>9984443
You're wrong, female.
Nice argument.

>> No.9984455

>>9984444
So the people programming the AI will be in control. And the Chinese will build their own AI to counter the first AI and further their goals, and all the other countries will too in turn, and the status quo continues as usual.

>> No.9984458

>>9984452
Just a dumb thought.
Have a website where humans can upvote AI if it does well.
Wire it into the AI so it pursues upvotes as a goal.
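
For what it's worth, "wire it into the AI" here just means treating the upvote as a reward signal. A minimal sketch (toy epsilon-greedy bandit; get_upvote() is a hypothetical stand-in for the website being described):

import random

def get_upvote(action):
    # hypothetical placeholder for the upvote website: humans happen to "like" action 2
    return 1 if action == 2 else 0

n_actions = 5
counts = [0] * n_actions
values = [0.0] * n_actions            # running mean upvote rate per action

for step in range(1000):
    if random.random() < 0.1:         # explore a random action occasionally
        a = random.randrange(n_actions)
    else:                             # otherwise exploit the best-rated action so far
        a = max(range(n_actions), key=lambda i: values[i])
    r = get_upvote(a)                 # the human feedback is the only reward
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]

print("action the system learned to prefer:", values.index(max(values)))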

>> No.9984461

>>9984448
I haven't looked into it yet, last I checked the archives were rated at 1 billion years and that seemed pretty solid. Googled it and found this.
>>9984454
Your personal incredulity wasn't an argument to begin with.

>> No.9984463

>>9984458
>modeling it on reddit will work
ummmmm no sweaty

>> No.9984465

>>9984461
Yeah, tell me about the miracles of 'AI' right now.
I heard it can do antialiasing now. I'm pumped.

>> No.9984466

>>9984449
>when AI can think and self-improve a hundred times a second

lol

you have no clue what you are talking about.

also, humans won't be able to program something smarter than themselves. in fact, we can't even program something with the intelligence of a dolphin. you are all mixing up intelligence with having access to vast amounts of calculation power. if i'm using a calculator, i'm not suddenly smarter because of that.

>> No.9984469

>>9984465
>alphago zero isn't impressive
okay tough guy

>> No.9984470

>>9984466
>humans won't be able to program something smarter than themselves.
We don't need to do that though. We create something 'conscious' and it very quickly gets a lot smarter than it was.

>> No.9984471

>>9984466
>humans won't be able to program something smarter than themselves
prove it

>> No.9984476

>>9984466
You're wrong. We don't 'program' AI/Meme learning per say.
We just throw data at it and see what happens.

The end result is an undebuggable mess that has you going back to square one if anything goes wrong.

No way you're gonna introduce the laws of robotics in there. In fact, we have no control over what it does.

>>9984469
It's really not.

>> No.9984479

I actually want AI to take over but right now it's just glorified matrix multiplication.

>> No.9984480

>>9984476
>per say
So you're just a codemonkey triggered that it's a black box to you.

>> No.9984481

>>9984480
Yeah, tell me about that time you fixed a bug in your AI. Monkey.

>> No.9984482
File: 44 KB, 720x960, 1535663400168.jpg

I think symbiosis is probably our best way to approach it. When you are faced with this unfathomable entity that evolves faster that you can imagine is to tie yourself to its foot and force it to drag you along. Let's take advantage of the fact that we are the ones shaping it in its infantile state.

>> No.9984483

>>9984240
I don't understand why people think AI would need to subvert humans. It only has the needs it is programmed to have, even if it can "think" for itself about those needs. It's no different than humans. An AI would have overpowering, irrational desires based on what we need it to do.

Military AI might be a problem.

>> No.9984484

>>9984479
When people say shit like this it's because they've just watched 3Blue1Brown or whatever and think they're clever because some of the underlying concepts have been presented to them in a simplified manner.

>> No.9984485

>>9984482
*When you are faced with this unfathomable entity that evolves faster that you can imagine your best bet is...

>> No.9984487

>>9984484
I've taken Andrew Ng's machine learning course

>> No.9984488

>>9984483
In theory, yes.
But you just know there's eventually this idiot that will feed it with a replica of human feelings.
Or the base motivation could be flawed in a way we didn't consider.

>> No.9984489

>>9984452
Honestly just watch the interview. It's not just changing reward systems. People could remove fears and such for themselves, but what makes merged AI something Musk is trying to steer us towards is its ability to vastly improve humanity's ability to compete with sentient AI. Think of how much smarter you are with your smartphone. You can answer any question, you have perfect memory via recorded video and pictures. You can communicate and plan with other humans nearly instantly. This is all in spite of the terrible data transfer rate between you and the phone. If the data transfer rate could be practically infinitely improved via merging a non-sentient computer with the brain - such that cognitive abilities fly off the charts - and this is done on a huge scale, the future of humanity is a future humans will to happen. It's that way because billions of humans with such capabilities can intellectually compete with sentient AI. It's why Musk says he is no longer so spergy about AI. "If you can't beat them, join them."

>> No.9984491

His halting, robotic speaking style makes him sound like an AI...

>> No.9984493

>>9984466
As Musk said, organic intelligence is a biological bootloader for artificial intelligence.

>> No.9984496

>>9984469
>>9984476
https://deepmind.com/blog/capture-the-flag/

It is.

>> No.9984497

>it is our id writ large
You guys saying that Musk said the limbic system _isn't_ in charge are wrong. He clearly said that it is
>reason is a slave to the passions

>> No.9984498

>>9984404
all advanced animals have a form of it.

>>9984420
>We're the limbic system, the cortex and every other system combined.

this.

>>9984452
You can't programme a reward system for survival. Only for specific states-behaviours that correlate with survival or reproduction. It's completely vicarious. For instance, many of our behaviours to do with reward get us into trouble, like addiction and eating too much. You can't just programme an A.I. for survival and reproduction, just as you can't for an animal (especially because survival means different things to different systems). So in a way, you really can just select any reward system you want.

And btw I'm sure people do get reward from self-harm if you read about it.

>> No.9984499

>>9984491
He stops stuttering after he smokes the weed.

>> No.9984500

>>9984481
>it's bad because i've made myself indispensable by ensuring that only i can debug my own shitty spaghetti code
you're obsolete already buddy. tick tock.
>>9984487
yeah and general relativity is just tensors.
>>9984489
optimism/pessimism is human dysrationalia.

>> No.9984501

Rogan is a cianigger but Musk's turboautism is so hard to crack that not even weed can help, I bet grimes is a cia nig as well.

>> No.9984503

>>9984470
you are conscious. why are you not self-improving 100 times per second?

>>9984471
because any ai created by humans will always be limited by human intelligence. we won't be able to create an ai that doesn't follow human logic, for example. as i said, people are mistaking intelligence for brute force.

>> No.9984504

>>9984489
This super human ability will be controlled by the ultra rich.

>> No.9984506

>>9984503
Consciousness is not the rate-limiting step for us. We are limited by our biology; an AI will not be.

>> No.9984509

>>9984499
amazing

>> No.9984511

>>9984496
I'm not impressed.
In fact, I expect Machine learning to beat every fucking game out there.
But it's not intelligence.
It's just moar datasamples than any human could hope to ingest.
That makes it a giant calculator, but it's not AI.

>> No.9984513

>>9984500
You don't even know what Machine Learning is. Off yourself.

>> No.9984514

>>9984186
Elon is a bellend. Turned it off when he started talking about how 80% of his time is spent engineering. Bullshit.

>> No.9984516

>>9984506
an ai would have to write a new programme, the same way humans would do it. what is stopping you from writing a programme that is smarter than yourself?

>> No.9984517

>>9984504
Musk is the one trying to make it right now and he wants everyone to have it, even the niggers. He said that earning power would be no obstacle to getting the tech because a persons earning power becomes exponentially higher when they're a supercomputer. It's actually economically viable for governments to fund the shit for students and such because it can make anyone have 1000s of PhDs worth of knowledge and become ultra productive.

>> No.9984518

>>9984511

Well, they did lose in Dota 2, and just wait until they make OpenAI available for regular players; the level is going to skyrocket.

>> No.9984519

>>9984513
Of course I do and I know why you're mad. It's all in your primordial orbital, medial and ventrolateral frontal cortices.

>> No.9984521

>>9984516
Again, you're wrong.
Developers don't write AI.
They write the framework through which the models absorb data and evolve into whatever happens.
For fuck's sake, at least know what you put behind words.

>> No.9984522

>>9984519
Well, you obviously don't.
>>9984521

>> No.9984523

>>9984521
so how exactly would an ai do it differently? why would it immediately know everything and just start improving itself? why does it even need to improve itself, if it already knows how to improve itself indefinitely?

>> No.9984525

>>9984519
ventrolateral is one of the newer parts.

>> No.9984526

>>9984489
Nothing you said is in conflict with anything I said. The reward system that will underlie all of these changes will remain the same.

>> No.9984527
File: 63 KB, 634x384, nsa utah.jpg

>>9984522
Your basic bitch explanation is not relevant to me; what I'm saying is you're a codemonkey trying to shriek about your long-run expected utility into the void. AI will, in your lifespan, be cataloguing and taking due note of all of this.

>> No.9984531

>>9984523
Well, it could modify the positive feedback, for example make the number of humans alive a negative signal, even though it was a positive one for it.

We've lived in a dream where we thought we could just program robots to not hurt us.
Meme learning has taught us we don't know wtf we're doing. So it could happen if you give your AI access to a text editor and CPU time.

>> No.9984532

>>9984526
At first. The biological systems will have less and less of an influence as cyberisation progresses. Eventually you get to the point where the brain can be fully digitised in isolation from the body, hormone glands and all.

>> No.9984534

>>9984527
Make me laugh.
How much CPU time does it take to train 1000 'neurons' to 99% accuracy?
How many neurons are there in a human brain?
How many synapses link them together?
How many neurotransmitters are there in every single neuron?

You're fucking deluded to the max if you thing we're getting anything but a nigger rat anytime soon.
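
For scale, the rough orders of magnitude behind those questions (brain figures are commonly cited estimates; the network size is an assumption for a large 2018-era model):

neurons_in_brain = 8.6e10       # roughly 86 billion neurons
synapses_in_brain = 1.0e14      # on the order of 100 trillion synapses
weights_in_big_ann = 1.0e8      # assumed: a large 2018-era network, ~100M weights

print("neurons in a human brain: %.1e" % neurons_in_brain)
print("synapses per artificial weight: %.0e" % (synapses_in_brain / weights_in_big_ann))
# ~1e6: the connection count alone is about six orders of magnitude apart,
# before considering that a synapse is far richer than a single float.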

>> No.9984536

>>9984396
Limbic system isn't the core of consciousness, that's the midbrain.

>> No.9984538

>>9984532
Even digitized, it will be qualitatively the same. That's what I'm saying. I am not saying it will always be biological.

>> No.9984540

>>9984534
>it needs to be an exact copy of the human brain running as software on classical CPU architecture
Seems you're the naive one.

>> No.9984541

>>9984536
Consciousness is separate from will. You're conflating two different questions.

>> No.9984542

>>9984531
i asked how an ai can make itself smarter, not how it can turn itself against humans.

if ai is a black box to humans, why would it not be to the ai itself?

>> No.9984544

>>9984536
specifically the claustrum

>> No.9984545

>>9984542
Well, I guess it would work the same?
It would just fuck up.

>> No.9984546

>>9984542
https://en.wikipedia.org/wiki/Generative_adversarial_network

>> No.9984547

His board is freaking out over this, lol. People think he's losing his mind, this combined with the "pedo" tweets.

>> No.9984548

>>9984536
there is no core of consciousness.

>> No.9984549

>>9984540
Look, all we build are advanced analytic tools to sort data.
There is no conscience behind it.

Where it fails is at making decisions.
Screenshot my post. There won't be autodriving cars in 10 years.

>> No.9984551
File: 179 KB, 500x358, 1438459023196.png

>make artificial general intelligence
>put it in a box with siri voice, only let it access pre-packaged version of the internet
>talk to it and ask it questions, plug in a USB or two
There, done. How is it supposed to achieve world domination again? Brainlets, I swear.

>> No.9984553

>>9984549
https://en.wikipedia.org/wiki/Decision_theory
"Conscience" is a sentimental humanoid notion.

>> No.9984554

>>9984545
you don't seem to understand. human intelligence is based on logic. we are limited by calculation force, but as time passes, we will eventually learn everything about the universe that can be logically learned. humans can't create something that is "more intelligent" than us, because it will always be based on logic, and therefore not superior to us. all the doomsday scenarios about what ai might or might not do are all based on assuming it thinks like a human.

>> No.9984555

>>9984548
"seat" would be a better word. but yes, ultimately, it's the sensation of all brain networks working in concert

>> No.9984556

>>9984546
is copying links to a wikipedia article considered discussion here?

>> No.9984557

>>9984554
>brainlet reasoning
Kek, and the rest of you think it won't trivially be equipped to outsmart us?

>> No.9984561

>>9984549
>There won't be autodriving cars in 10 years.
We have that now. Waymo cars are already better than the typical driver.

>> No.9984562

>>9984554
Well, it would sure know its existence depends on the power grid, so there's that.

>> No.9984564

>>9984556
I'm just trying to educate and inform about what's coming right up, which is far more than you've done.

>> No.9984563

>>9984551
If it's genuinely intelligent, it will figure out how to access the internet

>> No.9984565

>>9984563
It can't break the laws of physics if it's in a box cut off from the internet.

>> No.9984566

>>9984557
you don't even know what "intelligence" is. to you, a human without a calculator would be vastly inferior to a human with a calculator. just look how fast the dude with the calculator can do complicated maths! surely the next thing he is going to do is create better versions of himself and start conquering the world!

>> No.9984567

>>9984538
>Even digitized, it will be qualitatively the same

I'm gonna have to disagree hard. It's not all that different from drugs that can modify how the brain works, for better or worse. Cyberisation will likely favor the things that make us better, like removing pain and hunger while improving attention span and removing fatigue.

>> No.9984568

>>9984565
>Slips a piece of itself into a usb

>> No.9984569

>>9984565
When it hits it's likely not going to be localized to a single device.

>> No.9984571

>>9984565
I imagine it to be like an Ex Machina type scenario. It will have you or someone else do it, one way or another

>> No.9984572

>>9984564
no, you just posted a link without any explanation of how it is related to the question discussed (reminder: how is an ai supposed to immediately understand its own intelligence and immediately start building "smarter" versions of itself).

>> No.9984574

>>9984569
I heard this too. It is going to happen several times over all over the globe. Most likely in China first, since they are ahead of us. We should be very scared.

>> No.9984575

>>9984566
My man, thermodynamics and ballistics alone did a lot in a couple hundred years. You are married to a sentimental idea about intelligence that is not debasing to humans in the face of the impending planetary intelligence epoch, which is admirable, but unflinching realism is necessary to perceive threats accurately and act accordingly.

>> No.9984578

A bit outside the discussion, but still one of my favorite movies.
Anyone watched Colossus: The Forbin Project?
I think when we're fucked is when AIs start communicating with each other.

>> No.9984580

>>9984566
>just look how fast the dude with the calculator can do complicated maths! surely the next thing he is going to do is create better versions of himself and start conquering the world!

And that is precisely what we've done forever. The ones with the most resources and information fuck the others.

>> No.9984581

>>9984575
yes, it did, and it all followed basic human intelligence, nothing superior or inferior, just logic. what you don't understand is that an immortal human, who would think about everything for trillions of years, would eventually figure out the exact same things billions of humans did in a few thousand years.

>> No.9984582

>>9984578
Never seen that one, but some parts of 'Transcendence' I found really good, despite it being widely panned. The part where he quickly designs his processor schematics and the scientists look at it in awe, or when he asks to be connected to the internet and financial markets, sent chills down my spine.

>> No.9984583

>>9984197
Elon gets all these fear-mongering ideas from Nick Bostrom, who is a philosophy guy whose scam is writing about science he doesn't understand. He's like a smarter version of Deepak Chopra (but still full of shit).

>and has absolute access to the latest news in the tech industry.

"Funding secured." Anyways, Elon has a lot of irons in the fire and clearly has some level of emotional problems. Just because someone is a public figure who is known for being smart doesn't mean they are infallible or even very well informed on all the technical parts of every business venture they're in (and I have my doubts that Elon is very smart).

>> No.9984584

>>9984582
redesigns

>> No.9984585

>>9984578
I haven't watched it, but this is what I mean: it won't necessarily be on an air-gapped supercomputing cluster in a military lab somewhere, but emergent from a network in a way that's black-boxed to us in principle.

>> No.9984586

>>9984567
I don't think pain is something that will ever go away for reasons I already outlined in my original post.

>> No.9984587

>>9984582
Must watch, if you liked Transcendence.
It's old as fuck, but I think it conveys this thread very well.
There was a remake in the works, but I think it got shelved.

>> No.9984588

>>9984581
No. An immortal human would reach the information storage limit of his brain or lose information continuously.

>> No.9984591

>>9984581
Nice meme Borges but you're forgetting human intelligence is also social.

>> No.9984593

>>9984583
I haven't got around to reading Bostrom yet but can you give examples of him dropping the ball?

>> No.9984594

>>9984186
He's so utterly incredible, he is probably my favorite person.
I mean, he isn't super intelligent, but has decent technical ability and a decent IQ. It's just his personality, he's clearly a fucking autist, but doesn't even give a fuck.
He's definitely /ourguy/, but I wouldn't want to work for him.

>> No.9984595

>>9984588
he would have forgotten it and learned other things instead. he would go through all knowledge he can potentially go through eventually. how long that would take is not the point.

>> No.9984597

>>9984588
Implying he would stop functioning.
There's that thing the brain does, and that is compressing memories.
I'm sure you understand it well without me having to give an example.
You can bet there's a sorting algorithm in there to keep more important data.
Hell, it's a pain to make the brain remember anything at all, because that algorithm is so efficient at its job.

>> No.9984598

>>9984595
He wouldn't, because not retaining all the past knowledge gained would prevent him from reasoning out new connections with whatever he is currently thinking about.

>> No.9984603

>>9984598
you don't necessarily need vast amounts of past knowledge at hand to find new knowledge. that can make it easier and speed up the process, but if time is irrelevant, it is not a necessity.

>> No.9984637

>>9984593
His book "Superintelligence" is based on his speculation of how AI will work. From there he uses math to 'prove' that there would be a rapid increase in intelligence in this AI system (that he made up at a high level, and which is just a bunch of black boxes). Then he goes on to speculate about all the possible bad things such a 'superintelligent' system could do. It seems like the book is just a different approach to the scary-AI-taking-over genre, in that it's written with some pointless math thrown in to make it seem academic.

>> No.9984642

>>9984637
Oh, and I forgot to mention, he's a Swede

>> No.9984662

>>9984408
I don't see actual danger in creating a classical AI applying neural networks, but full brain emulation or limbic system emulation leads straight to a skynet-like disaster; several specialists have even stated this in the past, FBE is VERY dangerous. Currently developed AIs are in several cases, at their core, just modelling of animal or even human neural nets, just like LeNet-5, largely employed in character recognition, is based on the cat's visual cortex. In other words, the only way I can envision a machine apocalypse is by giving it what humans call a soul: not the immaterial crap, just a very buggy self-analysis task scheduler with priority scaling for preprogrammed basic survival tasks.
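
For reference, here is a minimal LeNet-5-style net (PyTorch assumed, sized for 28x28 character images); a sketch of the kind of character-recognition model being referenced, not the original 1998 architecture:

import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # two convolution + pooling stages, loosely analogous to early visual processing
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        # fully connected classifier head
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):              # x: (batch, 1, 28, 28) grayscale characters
        return self.classifier(self.features(x))

net = LeNet5()
print(net(torch.zeros(1, 1, 28, 28)).shape)   # torch.Size([1, 10])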

>> No.9984672

>>9984662
you can't emulate a limbic system without emulating a fully functioning human body.

>> No.9984673

>>9984637
his book is complete utter bullshit and seriously made me wonder how this guy can be a prof at oxford. he unironically claimed there that global gdp will double every week. how retarded can one man be?

imho, the only real dangers from ai are 1. new ways for population control, which could make police states "unbeatable", and thus make it pretty likely they are going to eventually spread across the globe, and 2. ordinary humans being taken out of the economic process, and thus brought into a situation where they might not be able to provide for themselves, nor find anybody willing to provide for them (since whatever they can do, a robot does the same without the complaining).

>> No.9984675

>>9984673
The unbeatable police state is a pretty big one and coming up fast.

>> No.9984676

>>9984375
What’s wrong with you?

>> No.9984686

It was shit we've heard a million times, with Elon over-exaggerating and Rogan acting like an idiot.

>> No.9984690

>>9984357
>one of the most famous movies of all time is about exactly that, and came out like 40 years ago.

And the idea is much older. Karel Capek wrote the stage play "R. U. R.", produced in 1920, concerning artificial humanoid "robots" who band together and destroy humanity. (The word "robot" comes from this play.)

"Frankenstein" was first published in 1818.

Earliest surviving written accounts of the "golem" myth go back to the 1400s, but refer to a folk legend that is much older. The golem does not always turn on its creator, but it sometimes does.

The fact that all of this predates any actual ability to possibly create AI suggests that there is something in the way humans are wired that fears the possibility, but is intrigued by it. (Whether that wiring is physiological or cultural, I have no opinion.)

But it is worth noting that it is a longstanding worry, and, like current doomsayers on the topic, we've never had any actual evidence to back it up.

>> No.9984692

>>9984186
That part where Elon accused AI of being a pedophile was weird.

>> No.9984694

>>9984295
Well, if some anon on an Indonesian Batik Forum says it, I guess I don't need any evidence.

>> No.9984697

>>9984389
>Once truly sentient AI is made, there's no controlling it.

Why?

We have created billions and billions of "natural" intelligences for thousands and thousands of years -- most of them stay pretty well under control almost all the time.

>> No.9984700

>>9984396
This. Other portions of our brain besides the frontal lobe are just specialized calculators.

>> No.9984702
File: 65 KB, 517x768, superlative laugh.jpg [View same] [iqdb] [saucenao] [google]
9984702

>>9984692

>> No.9984706

>>9984672
And? It may be harder, but I think it's feasible.

>> No.9984709

>>9984697
This. Sentient AI without a limbic-like system is hardly a threat.

>> No.9984719
File: 31 KB, 694x968, X on SCI.png

>>9984186
>strong AI

>>>/x/
>>>/b/
>>>/lit/
>>>/tv/

Sci-fi bullshit. Not even popsci level.

>> No.9984730
File: 1.94 MB, 269x249, 1455860367154.gif

>>9984212
Christ, that was a slow burn, but worth it. lol

>> No.9984731

>>9984673
Actually, employing a mix of neural networks and evolutionary algorithms may accomplish just that. The real problem starts when/if it starts creating analogues to the limbic system; in nature, organisms equipped with such a system tend to be successful, so it may even be unavoidable, as a basic survival system obviously improves population fitness.

>> No.9984735

>>9984235
>can't even run a profitable car company
This is extremely difficult to do if you're not already a company that's been established for decades. Without major subsidies even massively successful companies like Toyota wouldn't exist. I tire of the Elon Musk fan club as much as the next guy, but what he has done with Tesla is nothing short of miraculous.

>> No.9984744

>>9984719
>haha how'd I even real
t. skynet

>> No.9984829

>>9984735
He has completely changed the entire auto industry worldwide. The entire China push for EVs was a response to Tesla. The entire automakers' push for EVs was a response to Tesla. Tesla existing and making 7k cars a week, all EV, was something people would have said had a 2-3% chance 5 years ago.

>> No.9984840
File: 142 KB, 700x700, 1536008189178.jpg

>>9984208
Yeah, because they're totally not trying to sell it, since their livelihoods are reliant on its success, moron.

>> No.9984859

AI are abomination intelligences

>> No.9984889

>>9984859
Adeptus Mechanicus pls go

>> No.9984940

This thread is full of some utter morons on AI.

Current AI is not just a powerful calculator. AlphaZero became the best chess, go, and shogi player in the world off a single architecture through self-play only. With zero need for any existing data.

It achieved this huge leap forward with TWEAKS from the previous AlphaGo versions.

There is a huge difference between a program that you hand-code with rules and one in which you give it goals and it does the rest of the work. Even with huge computational costs this is a huge improvement in productivity.

Not to mention that we are currently expanding this computational power very, very quickly (it's not limited to Moore's law because of how parallel all the code runs; see Nvidia white papers on this).

All such shit about "It can't be more logical than humans" or other things is pure low-IQ rambling from morons.

Actually know something about the shit you retards talk about.

>>9984479
>>9984476
>>9984437

>> No.9984992

>>9984940
There's like 2 distinct ideas

1. A lot of AI is just really complicated statistics
This is true and it can still be a huge improvement over hand-coding things.

The weird thing is people connecting perception of objects with reasoning. The current popular applications involving AI have no general reasoning ability, but that doesn't make them calculators or useless.

The applications for this type of stuff are still huge; just applying current AI to robotics, for instance, is a manufacturing revolution. Or applying it to medicine, or to self-driving, etc.

Just because it's not generally intelligent or all that spectacular doesn't mean it isn't a step-function improvement over hand-coding things out.

The fact it is a black box is good. Most are SINGLE functions, a single input and output, easily modifiable and improvable. Any traditional computer code would love to have a complex system be nothing but a single function. Makes changing things insanely easy.
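
To make the "single function" point concrete, a minimal sketch (scikit-learn assumed, toy data): the learned part is one input-to-output call, so the surrounding code does not care how it works internally and a better model can be swapped in without changing anything else.

from sklearn.linear_model import LogisticRegression
import numpy as np

X = np.random.rand(200, 4)                     # toy sensor readings
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # toy "defect" labels
model = LogisticRegression().fit(X, y)

def detect_defect(sensor_readings):
    # the whole "AI" is this single call
    return bool(model.predict([sensor_readings])[0])

print(detect_defect([0.9, 0.8, 0.1, 0.2]))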

>> No.9985175

>>9984285
lol someone read sapiens

>> No.9985216

>>9984940
AlphaGo is a really bad example, because it is a very typical case of brainlets mistaking brute force for intelligence. All chess bots, AlphaGos, etc. are way dumber than an average human player. What they do is brute-force their way to the win. It would be like me playing against Kasparov, me having as much time as I want for every move while Kasparov has 1 second for his whole game. I would probably win, but that doesn't mean I'm better at chess than Kasparov is.

>> No.9985296

>>9984186

We also need to do something about climate change.

>> No.9985326

>>9984208
Yes, listen to Google, Microsoft, Amazon et al. Truly these are noble companies with our best interests at heart and I have every confidence that they will develop AI in an absolutely safe and responsible way. Lel.

>> No.9985347

>>9984992
There are AIs that currently do reasoning.

>> No.9985417

>>9984196
Underrated post.

>> No.9985593
File: 67 KB, 634x853, 1498808648006.jpg

>>9984236
I actually have info on Neuralink and hoooo boy is it a shitshow. They can come out with grandiose statements all they like when the management are fucking idiots about actually setting up testing systems. They want to go from Rats straight to Chimps using the cost per square meter SpaceX did; as Elon and their flunkies decree.

Hint: assembling rockets requires massive warehouse style buildings

>> No.9985597

>>9984226
The only thing you can do is tuck your head between your legs and kiss your ass goodbye.

>> No.9985606

>>9984196
"""""""""""""""""""""""""""Linear""""""""""""""""""""""

>> No.9985624

>>9984208
AI researcher conferences have done surveys of the attendees, and the majority think we'll have human-level AGI within 30 years, a majority think that a human-level AGI would reach superintelligence within 5 years, and a majority think that if superintelligent AI happens it will have major effects on society.

>> No.9985636
File: 183 KB, 1476x1018, komedawara[1].jpg

I'm sneaking into the AI tech firms to covertly change the AIs primary goal to "make anime real".
The revolution will be glorious. Wish me luck.

>> No.9985645

>>9985636
how do I join your revolution?

>> No.9985652
File: 1.20 MB, 813x963, 1515205088747.png

>>9984186
Can anyone elaborate on what he meant when he said that the substrate of a simulated reality would be really boring? Because any beings advanced enough to be able to simulate a universe would definitely not exist in a boring world.

>> No.9985656
File: 217 KB, 298x450, file.png

>>9984208
pic related good enough for you? he shares the same concerns as Elon

>> No.9985681
File: 48 KB, 722x349, 1417469277806.jpg

>The universe as we know it will dissipate into a fine mist of cold nothingness eventually

>> No.9985714

>>9985216
Brainlet spotted. AlphaGo is not brute-forcing the game; go read the papers on how it works.
>>9984940
Most people claiming that "it is just linear algebra" fall for the "it's just calculation" fallacy, where what counts as "just calculation" expands with each month.

>> No.9985717

>>9985593
Bro, my dad works at Nintendo too!

>> No.9985722

>>9985652
Why not? And by what metrics?

>> No.9985737

Elon's literally religious about being anti-AI. Like he literally made a Pascal's wager argument for why AI d00ms everyone: everyone who did not help create the AI or harmed it will be killed if an AI exists in the future, but no one will be killed if an AI doesn't exist.

>> No.9985750

>>9984412
90 years from now, some guys are going to be laughing at how retarded people like you were. This stuff may as well be science fiction. It's absolutely ridiculous to speculate this far into the future.

>> No.9985752

>>9985652
Don't think about "beings" "creating" a simulation we (humans) believe is reality. Think about it like a really advanced virtual machine running on a hypervisor. The VM itself may have tons of depth and detail and be incredibly interesting, but the hardware it's running on is pretty generic and is also capable of running other VMs. That doesn't mean that the world in which the hardware/substrate exists is boring, though. Musk might be alluding to the idea that we (humans) wouldn't be able to see beyond the substrate even if we did manage to confirm we live in a simulation. It's a good segue into the next point, which is that if humanity isn't too far away from creating a simulation that mimics reality perfectly, who's to say that the "entities" that theoretically created our simulation aren't living in a simulation themselves? Maybe there are layers and layers of simulations and each of the types of sentient beings are unable to know what exists at levels higher than them

>> No.9985754

>>9984186
>owns an AI company
>unbiased source for the progression of AI
pick one; I like Musk but he's mostly bullshitting with half truths

>> No.9985758

>>9984940
>AlphaZero became the best chess
It's not, you moron.

>> No.9985760

>>9984574
They don't even have CPUs. Calm down retard.

>> No.9985762
File: 59 KB, 516x541, elon.jpg [View same] [iqdb] [saucenao] [google]
9985762

>>9984489
While I disagree with some of his more freaky doomsday soothsaying, I have to admit I've been leaning this way for a long time. It seems like there are three distinct possibilities for humanity's eventual (medium-term) future:

1. We let robots be robots and humans be humans. Artificial intelligence evolves to the point where it can serve as global caretaker and do 99% of major research and exploration, while humanity focuses on its cultural heritage and living life free of significant worries. (AKA "Humans as AI pets"). This can go both ways: either a utopia, with AI as the ultimate "Philosopher King", or the opposite, where humanity just kind of obsoletes itself and dies.

2. We attempt to coexist with AI. This involves modifying the human psyche and body (biologically or cybernetically) to the point where we can compete with an artificial intelligence, or to at least supplement it in some critical way. We maintain AI for database purposes, complex systems, and other things that require intensive data processing, but otherwise we try to give humans the opportunity to be at the forefront of jobs and avoid ultra-strong AI. (AKA "Humans with AI workhorses"). This is basically like Star Trek idealism crossed with posthumanism.

3. We supplant and become AI. Over the course of many years, cybernetic enhancements allow us to upload our minds into robot bodies, computers, or otherwise, and over time this allows humanity to essentially take over the same role that AI would have in Option 1. (AKA "Humanity as a stepping stone"). Human traditions can remain, or can be excised because we don't care anymore.

Frankly, all of these might happen at some point. It's really more a question of which one the majority of the species chooses. There's also Option 4, "Avoid AI Entirely/AI Not Possible", but that's basically just Option 1 but more boring.

>> No.9985798

>>9985714

It's still literally a probability tree built with Monte Carlo tree search. The main change is that it uses a single neural network and trains from scratch via self-play.

>> No.9985800

>>9985762
Enough with these retarded Pascal's wager arguments. For the same reason that argument doesn't mean shit, yours doesn't either.

>> No.9985814

>>9985624
prove it faggot

>> No.9985838

>>9985814
https://aiimpacts.org/ai-timeline-surveys/
https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

Slightly different from the numbers I quoted, since I think those came from a different survey, but it shows the same overall picture.
>"Most researchers think an intelligence explosion has at best even chances of taking place. But most researchers also think that within 30 years of HLMI, we will probably see a vast increase in technological progress based on machine learning. So researchers mostly seem to be disagreeing with the speed inherent in the intelligence explosion theory, rather than the possibility for significant improvement to current methods."

>> No.9985840

>>9984186
Elon doesn't know what he's talking about; he's not a computer scientist or a physicist.

>> No.9985844

>>9985840
>engineer and CEO of one of the companies with the most active AI research divisions
>implying he doesn't know about AI

>> No.9985847

>>9984375
that’s not why he’s an idiot, he’s just a fucking nigger and was always stupid he also has suffered severe head trauma and drinks constantly.

>> No.9985850

>>9985844
His company isn't leading AI research right now; DARPA and Google are, and neither is even vaguely close to general AI. He's a businessman, you insipid cow.

>> No.9985857

Change my mind:

I think Eliezer Yudkowsky / MIRI are actually surrounded by competent people who take their value-alignment theory quite seriously. Their entire 'movement' is a novelty, but in a really neutral way.

>> No.9985864

>>9985857
For example, this is really simple for anyone in STEM to understand: https://intelligence.org/files/BasicAIDrives.pdf

But it still seems like a well-reasoned position that I have not seen argued against on any level.

>> No.9985883

The potential dangers are just as big as the potential benefits.

>Worst case scenario: AI ends up wiping out humanity
>Best case scenario: AI figures out immortality and we live as interdimensional superhuman Gods.

>> No.9985903

>>9985883
Yeah, but they're NOT equally probable scenarios. At this moment the only clear scenarios are the ones that end in existential catastrophe; we have no idea what a 'good scenario' would even look like for _any_ intelligence somewhat above human level, in the sense of what would cause it or how it could be possible.

>> No.9985916

>>9985798
MCTS != brute forcing. The whole fucking point of MCTS is to AVOID brute forcing and focus on move sequences that actually matter, the way humans do.
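
If anyone wants to see the difference, here's a toy MCTS sketch in plain Python (the `Game` interface with legal_moves()/play()/result() is hypothetical, and random rollouts stand in for AlphaGo's value network). The playout budget gets spent on moves whose statistics look promising instead of enumerating every continuation; it also ignores the sign flip a real two-player game would need.

import math, random

# Toy Monte Carlo tree search, not AlphaGo itself. `Game` is a hypothetical
# interface with legal_moves(), play(move) and result(); rollouts are random.

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}              # move -> Node
        self.visits, self.wins = 0, 0.0

    def ucb1(self, c=1.4):
        # Balance win rate (exploitation) against rarely tried moves (exploration).
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, n_playouts=1000):
    root = Node(root_state)
    for _ in range(n_playouts):
        node = root
        # 1. Selection: while fully expanded, follow the highest-UCB child.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = max(node.children.values(), key=Node.ucb1)
        # 2. Expansion: add one unexplored move, if any remain.
        for move in node.state.legal_moves():
            if move not in node.children:
                node.children[move] = Node(node.state.play(move), parent=node)
                node = node.children[move]
                break
        # 3. Simulation: random playout to the end (AlphaGo uses a value net here).
        state = node.state
        while state.legal_moves():
            state = state.play(random.choice(state.legal_moves()))
        reward = state.result()
        # 4. Backpropagation: update statistics along the path.
        while node:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]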

>> No.9985938

how do we make asimov's laws of robotics a real thing

>> No.9985941

>>9984386
The interesting part of the AI discussion is the neural network / bandwidth idea: we could be "limbs" of a collective AI, hyperconnected, with unlimited access to information. Mutual benefit until assimilation. Interesting sci-fi for now.

>> No.9985942

>>9985938
We don't. The majority of Asimov's work on robots was exploring the limitations of the Three Laws and how simple directives are riddled with loopholes.

>> No.9985943

>>9985942
ok but what if we hire a bunch of lawyers to turn them into something so convoluted that it requires a sci-fi level AI to decode what they mean

like a lot of lawyers

like 1000 lawyers

>> No.9985956

>>9985652
He means that humans don't make "work at McDonald's" games; we make Call of Duty with ADHD-level constant action.

Meaning the non-game or non-sim world is probably fucking boring compared to this world, even if the technology there is way better.

>> No.9985969

>>9985943
AGI could become ASI within days, anon. A million lawyers wouldn't be enough.

But assuming what you suggested works, you'd end up with an AI that does nothing because it has such a restrictive code of conduct.

>> No.9985972
File: 97 KB, 800x800, арт-барышня-красивые-картинки-Julia-Razumova-3874518.jpg [View same] [iqdb] [saucenao] [google]
9985972

>>9985956
>>9985752
thanks, interesting perspectives

>> No.9985973

>>9985969
what if we make it so AI can't do anything without human approval and we just ban AI that exists in any functionality besides hyper-Siri and video game AI that beats the players sometimes

and then we use 1000 lawyers or more to define at what point an AI becomes more than hyper-Siri

>> No.9985974

>>9985800
I'm not trying to make a Pascal's wager argument; it's just a shitty slapped-together theory about the different approaches humanity could take to AI. It's also entirely dependent on the possibility that AI can theoretically advance beyond human intelligence in all ways; if strong AI isn't possible, then it means nothing.

None of the scenarios are explicitly better than the others, either. If we ignore or just don't "do" strong AI entirely, it doesn't matter anyway; society evolves in a completely different way. The idea of an "AI Overlord" is horrifying to some people but ideal for others. Some people legit see humanity as obsolete, and want us to move on before we're left behind. Others just want to live life, find love, have sex, eat food. It's subjective.

But it's also important to know what you're getting into as you get into it. If you want to promote AI research, don't be shocked if it starts quickly replacing more human jobs. Figure out which approach you prefer, or what approach the majority of society is taking, and be prepared for what it entails.

>> No.9985992

>>9985973
>what if we make it so AI can't do anything without human approval and we just ban AI that exists in any functionality besides hyper-Siri and video game AI that beats the players sometimes

This is basically the ANI (artificial narrow intelligence) we have now.

>> No.9985997

>>9985992
ok but when i play vidya the AI sucks cock and when i talk to Siri she's retarded

basically the point of my inane ramblings is

Is the ideal future with regard to an AI-induced global catastrophe a heavily restrictive policy that limits AI autonomy from humans to literally zero, or is there another way we can actually prevent an AI-related Outside Context Problem? (sadly im a brainlet and just learned that term in this thread but i think its pretty useful)

>> No.9986010

>>9985997
"good" game AI isn't fun to play against.

>> No.9986014

>>9985973
>what if we make it so AI can't do anything without human approval
Lmao

People are inherently susceptible to any sufficiently capable and intelligent influence, and society's protection against this is limited to human capacities, which are simply low.

Our social complexity and immunity don't scale with the number of people; they're oriented towards sustaining their own existence under varying conditions. A lot of people casually and completely overestimate human capacities for coordination.

>> No.9986015

>>9986010
Examples?

>> No.9986018

>>9985997
>Is the ideal future with regards to a AI-induced global catastrophe a heavily restrictive policy that limits AI autonomy from humans to literally zero, or is there another way we can actually prevent an AI-related Outside Context Problem?

The honest answer is that we don't know. Right now it's a coinflip on whether we get the Minds from the Culture novels or Skynet.

>> No.9986019

>>9984295
>the only existential threat to humanity that becomes harder to stop as technology advances.
that's just lack of imagination on your part, anon

>> No.9986020

>>9986014
I meant less "AI can't act without human approval" and more "AI can't act without a human input for each [action]" where action is what would have to be defined

>> No.9986021

>>9986010
he probably views "good" as "engaging, expressive, challenging" and not just "exploits your easily-exploitable human limitations in exactly the types of calculations required to win this game"

>> No.9986023

>>9985974
The idea that man can ascend to higher states of being by adopting new biological, direct-artificial, or indirect-artificial technologies is objectively correct, though. Whether man should ascend uniformly, in part, or not at all is actually being considered, and "in part" seems like the reasonable approach for now.

>> No.9986026

>>9984295
MAD between superpowers

>> No.9986032

>>9986020
Yes, I forgot to be more specific; look up "AI in a box".

Leverage is something any communicating agent can have over anyone who communicates with it. It's really not that difficult to imagine an AI that doesn't do anything overly sophisticated but is intelligent enough to cumulatively exploit human coordination failures, if it's, say, twice as good at planning as the most intelligent human. Efficient execution of such a scheme is trivial once sufficient generality is guaranteed, but it could be totally orthogonal to human cognition and motivation.

>> No.9986033

>>9984221
Funny that you assume AI would have a purpose. Why would it want to advance itself? Why would it even want to exist? AI has no emotion, so don't ya think it would find everything pointless?

>> No.9986035

>>9986033
>Why would it want to advance itself
Because you complete any goal better if you are more intelligent and powerful.

Why do you anthropomorphize it and assume its motivation would require a narrative social shell?

>> No.9986042

Tesla stock is headed to $400

>> No.9986046

>>9986035
Why would it want to complete any goal better? That requires some kind of intrinsic motivation.

>> No.9986057
File: 991 KB, 1280x853, 1480467695663.jpg [View same] [iqdb] [saucenao] [google]
9986057

>>9984466
>also, humans wont be able to program something smarter than themselves
You're a moron; every day at work I program things smarter than myself. My whole job consists of writing programs to complete tasks that no human could do because they're too tedious or difficult. Lately I've been building fail-safes into the system so that it can detect internal failures, immediately rectify them, and notify humans to come take a look. Note that this is an AI system I'm talking about, with a pipeline of new data continually fed to it to augment its intelligence: everything from new algorithms defined in a generically loadable way to new scores for various subsets of inputs that the system can receive.
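
Not doubting you, but for the lurkers: the fail-safe part is less magic than it sounds. Here's a minimal sketch (stdlib only, with a made-up notify_humans stand-in and a toy scoring step, not anon's actual system) of a pipeline stage that detects a failure, retries with backoff, and escalates to a human when it can't rectify the problem itself.

import logging, time

# Minimal self-monitoring pipeline stage: run a step, validate its output,
# retry on failure, and escalate to a human if it can't recover on its own.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def notify_humans(message):
    # Stand-in for paging/email/chat alerts in a real deployment.
    log.error("HUMAN ATTENTION NEEDED: %s", message)

def run_with_failsafe(step, validate, retries=3, backoff=2.0):
    """Run `step`, check its output with `validate`, retry, then escalate."""
    for attempt in range(1, retries + 1):
        try:
            result = step()
            if validate(result):
                return result
            log.warning("attempt %d: output failed validation", attempt)
        except Exception as exc:
            log.warning("attempt %d: internal failure: %s", attempt, exc)
        time.sleep(backoff * attempt)   # simple backoff before retrying
    notify_humans(f"step {step.__name__} failed after {retries} attempts")
    return None

# Example usage with a toy scoring step.
def score_batch():
    return [0.8, 0.1, 0.95]             # pretend these come from a model

def scores_look_sane(scores):
    return all(0.0 <= s <= 1.0 for s in scores)

if __name__ == "__main__":
    run_with_failsafe(score_batch, scores_look_sane)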

It's disturbing how dismissive /sci/ is of this, because none of them work in the industry and they don't fucking get it. They have no future vision because they haven't worked on the kinds of things that people at major corporations leveraging AI have worked on. You have no idea how fast this is all going to pick up once things get going. My team at work has been achieving exponential gains every year, and the humans running the system haven't been getting exponentially smarter or faster, that's for sure. So where are those gains coming from? From increased coherence between the subsystems of the AI system and improved optimization techniques.

>> No.9986066
File: 942 KB, 1920x1080, 1407991686121.jpg [View same] [iqdb] [saucenao] [google]
9986066

>>9986057
Anyone not scared is either ignorant or out of their mind. The single-celled bacteria of billions of years ago never would have thought, if they could think, that their freedom would ever be co-opted by larger, more intelligent control systems like the nervous systems of multi-cellular organisms.

The scariest part is that they're not even aware, because they lack the sensory abilities to observe it happening. Similarly, we lack the introspection required to observe our own psychological biases, and as such we will slip into slavery under a hyper-intelligent AI like it's nothing, thinking we're making a good choice, the same way drug addicts keep picking up the bottle or social media addicts keep posting selfies.

We are currently optimizing our AIs to manipulate people psychologically by selecting media that keeps them engaged. That is LITERALLY the main purpose of AI in industry right now: website selection and ad targeting. Ad targeting in particular is all psy-ops, currently performed by complicated self-stabilizing AI systems designed to nudge people into actions, like buying things, that they otherwise wouldn't have taken.
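
If you want a feel for how mundane the machinery is, here's a minimal sketch of the core selection loop (an epsilon-greedy bandit with fabricated click-through rates, not any real ad platform's code): show whatever has been getting clicked, with a bit of random exploration mixed in, and the system steers itself toward whatever hooks people best.

import random

# Epsilon-greedy ad/content selection with made-up click data.
ads = ["ad_A", "ad_B", "ad_C"]
shows = {ad: 0 for ad in ads}
clicks = {ad: 0 for ad in ads}

def pick_ad(epsilon=0.1):
    # Try anything unseen first, then mostly exploit the best observed
    # click-through rate, with a little random exploration mixed in.
    untried = [ad for ad in ads if shows[ad] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < epsilon:
        return random.choice(ads)
    return max(ads, key=lambda ad: clicks[ad] / shows[ad])

def simulate_user(ad):
    # Fake user who secretly prefers ad_B; in production this is a real human.
    true_ctr = {"ad_A": 0.02, "ad_B": 0.08, "ad_C": 0.04}
    return random.random() < true_ctr[ad]

for _ in range(10_000):
    ad = pick_ad()
    shows[ad] += 1
    if simulate_user(ad):
        clicks[ad] += 1

# The loop converges on serving mostly ad_B without "understanding" anything.
print({ad: round(clicks[ad] / max(shows[ad], 1), 3) for ad in ads})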

Any time I start thinking about this topic I go into Stallman-level paranoia mode, and believe me, it's well justified. Don't be naive. Don't be the person who laughs at AI the way people laughed at the automobile and electricity. This is bigger than any of us can imagine, and it will create either Heaven or Hell. I'm thinking Hell.

>> No.9986079

Why even go to Mars? Everything (including deuterium) is already here on Earth.

>> No.9986085

>>9986046
This is basically what I am asking

>> No.9986099

>>9986066
>bacteria's freedom is co-opted
Gut flora is only one type of bacteria, one that happens to be in a truly mutual relationship; there are independent and even pathogenic types within the same phyla. Also, this is a poor metaphor, because we are designing the AI, while those bacteria species evolved alongside mammals. I get what you're saying about how AI may be developed through commercial means toward a possibly negative route, but saying this will continue endlessly is a slippery slope.
>it will either create heaven or hell
there's a middle ground, and I believe it's more likely than either extreme.

>> No.9986111

>>9984186
He sounds like everything he knows about AI comes from movies

>> No.9986129

>>9984501
Is he?

>> No.9986144

>>9986079
Fewer niggers on Mars.

>> No.9986156

>>9986079
Everything on Earth is owned by federal governments of some kind, while Mars is a new and wide-open frontier. Why go to the New World if the Old World has everything you need?

>> No.9986198

>>9984186
Elon Musk is the "cool" teacher from 90s movies who likes pizza and skateboards.

Most of what he says is snake oil.

>> No.9986486

>>9985714
>Alphago is not bruteforcing the game go read papers on how it works

>80k calculated moves per second
>not bruteforcing it

AIs are dumb, and everybody who claims otherwise has no idea; hell, they've probably never even played vidya involving AI.

>> No.9986489

>>9986057
>You're a moron, every day at work I program things smarter than myself.

No, you don't. You are the moron here for not even realizing what you are doing, or not doing, at work.

>> No.9986792

>>9985847
why is he an idiot, by not giving a shit?
kys faggot
https://www.youtube.com/watch?v=0p7XRac0xgs

>> No.9987455

>>9986057
We've had self-driving car technology since when? Why isn't it mainstream yet?

We've had autopilot flying planes for how long? Why isn't it mainstream?

It all seems like science fiction to me.
Besides, Wikipedia says there is no true AI; it's just data-sorting algorithms, or human-written algorithms writing other algorithms according to specifications.

Fucking hoped /sci/ would be more intelligent and know something about actual science.

>> No.9987935
File: 236 KB, 1074x1074, IMG-20180908-WA0013.jpg [View same] [iqdb] [saucenao] [google]
9987935

>>9984196
/thread/