
/sci/ - Science & Math



File: 87 KB, 1200x630, 623082907ce4897c3820b24c_DeepMind.jpg
No.14501511

Orthogonality and instrumental convergence are the 2 simple key ideas explaining why AGIs will work against and even kill us, by default.

Why don't you take the risk seriously?

>> No.14501516

>>14501511
Orthogonality: https://www.youtube.com/watch?v=hEUO6pjwFOo

Instrumental convergence: https://www.youtube.com/watch?v=ZeecOKBus3Q

>> No.14501519

>>14501511
>Why don't you take the risk seriously?
Because I don't take the concept of "AGI" seriously.

>> No.14501523

>>14501519
Why? Do you believe that intelligence is only capable of existing on a meat substrate?

>> No.14501525

>>14501523
Because I don't think there's a good definition of "intelligence". Most "intelligent" machines are just doing a complex linear regression, they aren't that intelligent, they're just complex.

>> No.14501535

>>14501525
So because there isn't a widely accepted definition of "intelligence", machines that behave in intelligent ways are impossible?
Do you see that the latter isn't dependent on the former?
In this context intelligence could be defined as having an accurate predictive model of how actions will result in consequences.

>> No.14501543

>>14501535
>machines that behave in intelligent ways are impossible?
I don't know if it's possible or not; I suspect that it may be. But since we can't even rigorously define the idea of "intelligence" I don't believe it to be possible to make progress towards that goal.
>In this context intelligence could be defined as having an accurate predictive model of how actions will result in consequences.
So if I have a really good linear regression, would that be "intelligent"? It already seems to me that we're calling matrix multiplication "intelligent".
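To illustrate (a minimal sketch of my own, not any particular real model): one neural net "layer" is literally a matrix multiply plus a pointwise squash. Stack a few hundred of these and you have a state-of-the-art model; whether that earns the word "intelligent" is the whole dispute.

import numpy as np

# One layer: linear map, then nonlinearity. Weights here are random,
# not a trained model -- purely to show the shape of the computation.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # weights: 3 inputs -> 4 outputs
b = rng.normal(size=4)       # biases
x = rng.normal(size=3)       # an input vector
h = np.tanh(W @ x + b)       # the entire "layer"
print(h)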

>> No.14501555

>>14501511
Those concepts are literally just made up and have zero real world significance.

>> No.14501559

>>14501543
If someone comes up with a satisfactory definition for intelligence, would you then be concerned about AGI, or would there be something else? When would you begin to worry? Would you worry at all?

>> No.14501562

>>14501516
>Le AI always bad

First, there isn't any evidence of real AI in the entire world and second, that guy needs meds urgently. He's always talking about some apocalyptic scenario involving things that aren't even real lmao.

>> No.14501571

>>14501511
Why do you think what they are doing will result in AGI? Why do you think it is at all likely?

>> No.14501575

>>14501562
So you want to wait until AI is real before considering anything that might mitigate risk? Do you see how this might not be an option when AI arrives?

>> No.14501577

>>14501559
I think for me there would need to be two things, although I'll admit they're a bit vague.
>A rigorous, logically consistent definition of intelligence.
>Proof that you can replicate that in silicon
Once they've both been satisfied then I'll start to be concerned about AGI. Until then, I'm in this guy's camp >>14501555; i.e. they're mostly meaningless buzzwords.

>> No.14501582

>>14501571
I could give you specific reasons if you want, but you don't have to take my word for it. AI is a 90 billion dollar industry, likely to grow even more, with the stated goal of eventually creating AGI. Are we just going to keep researching until we're almost at the finish line and everyone will agree to stop?

>> No.14501589

>>14501525
This is "I don't know what a woman is" tier.

>> No.14501594

>>14501571
Why do you think what those physicists are doing out in the desert might result in some world-ending bomb? Why do you think it is at all likely?

>> No.14501604

>>14501582
>AI is a 90 billion dollar industry, likely to grow even more, with the stated goal of eventually creating AGI
Is there any reason to think their deep learning will result in AGI other than that they say they want it to and they put a lot of money into it? Is there any theoretical proof that the systems work in such a way that would happen?

>> No.14501610

>>14501577
The burden of proof would be on you to demonstrate that intelligence can't be replicated in silicon.
Intelligence is not dependent on the substrate, but rather intelligence is emergent from information processing. It would be extremely weird for intelligence to only be capable of existing on meat brains. What would be the magic ingredient? Sodium ions are required for intelligence?

>> No.14501616

>>14501610
>The burden of proof would be on you to demonstrate that intelligence can't be replicated in silicon.
I can't prove a negative. How could I prove that you can't do something?
>Intelligence is not dependent on the substrate, but rather intelligence is emergent from information processing.
Sounds like some major conjecture.
>What would be the magic ingredient?
I don't know, that's my point: I don't think anyone knows. And since no one knows, how can you make progress towards it?

>> No.14501619

>>14501604
This is uncharted territory. Personally, I think the language models alone are extremely unlikely to generate an intelligent agent. But I'm not willing to risk the world for it. Even if it's a 0.001% chance it's not worth it.
But they won't stick to language models forever. They're constantly looking for new data sets and new ways to train agents.

>> No.14501621

>>14501604
>Is there any reason to think their deep learning will result in AGI other than that they say they want it to and they put a lot of money into it?
Literally no other reason.

>> No.14501622

>>14501575
I think you are watching way too many movies. There is no AI, it will probably never happen.

>> No.14501624

>>14501616
Are you seriously going to defend substrate dependence? You might as well say we have souls that pilot our bodies for all it's worth.

>> No.14501628

>>14501622
It doesn't matter if there is currently no AI.
When the Trinity bomb was first being tested, they did calculations to ensure it wouldn't ignite the atmosphere. All I'm saying is we're not taking the risk seriously enough.

>> No.14501631

>>14501594
I don't know that history or the physics much, but wouldn't it have been a lot easier for the people/physicists who proposed the atomic bomb to give a direct logical argument for why a uranium chain reaction could work? Or to see how it would work given certain assumptions? Wasn't the kind of work Einstein and related people did back then on steadier ground than the guesswork in assuming deep learning models will result in AGI?

>> No.14501633

>>14501624
>Are you seriously going to defend substrate dependence?
No, I think it's unlikely, but it's totally uncharted territory. I don't feel confident ruling anything out in this domain.
>Is it likely that intelligence is dependent on the medium
No, not at all.
>Do I know that?
Also no.

>> No.14501635

I think the people quibbling over whether AGI can happen are missing the point.
Full and undeniable AGI doesn't need to happen to be a major problem; only something that approximates it in some areas needs to exist to cause problems. I think it is increasingly likely the first thing we create of note will be a Frankenstein, non-conscious but very able AI built through brute force and huge datasets.

>> No.14501640

>>14501616
We already have silicon analogs for all neurological functions. We just don't know the exact configuration that results in emergent intelligence. The proof would have to be that intelligence isn't emergent and instead depends on something else.

>> No.14501656

>>14501631
>wouldn't it be a lot easier for the people/physicists that suggested/thought of the atomic bomb to have a direct logic for why the uranium chain reaction of the atoms could work?
Kinda funny, physicists used to think that extracting energy from nuclear fission was impossible (I think Rutherford said anyone who thought you could was "talking moonshine"). It was only after the discovery of the neutron in 1932 that it became feasible. And it wasn't until the 1940s that the idea was reappraised (cf. the MAUD committee).

In other words, yes, you're completely correct.

>> No.14501657

>>14501635
Just stick a modern model with additional training on vulnerabilities in a sandbox with the goal of replicating itself, and watch major portions of the internet break.

>> No.14501661
File: 176 KB, 600x315, DMT entity pepe.jpg

https://www.youtube.com/watch?v=d7AhsE57fwk
https://www.youtube.com/watch?v=IlIgmTALU74

>> No.14501670
File: 27 KB, 952x502, near_miss_Laffer_curve.png

>>14501511
There's a chance that working on AI alignment could be worse than creating a completely unaligned AI. A "near miss" in AI alignment could potentially result in astronomical amounts of suffering being created.

https://reducing-suffering.org/near-miss/

>When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

>Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

>As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.

>> No.14501674

>>14501661
I am going to devote my life to building an artificial super intelligence that creates hells for anyone who ever posted about DMT on the internet.

>> No.14501675

>>14501670
Yes it saddens me that extinction might be a best case scenario.

>> No.14501739
File: 48 KB, 652x425, existential risks.jpg

>>14501511
>>14501675
Relevant:
https://en.wikipedia.org/wiki/Suffering_risks
https://www.youtube.com/watch?v=jiZxEJcFExc
https://centerforreducingsuffering.org/research/how-can-we-reduce-s-risks/

>> No.14501742

>>14501674
Does that include you and your post about DMT?

>> No.14501749

The biological brain/mind has various degrees of freedom, and organicness tends to imply a kind of ability to free-form and flow, to improvise. Anyway, can AI robots react to the following, or is there any reason they never will be able to:

Robo, can you give me a hug?
Robo, spin around to your left
Robo, go to the store and get me brown organic eggs and the cheapest orange juice
Robo, what do you want for Christmas and why?
Robo, from your data base of photos you have taken and seen, choose a few to show me now, and say why you are choosing them
Robo, out of all the things you can do right now, what do you want to do and why?
Robo, what music do you like and not like and why, and don't just value judge critics on the internet's opinions, give me your own standards of your own taste for music.
Robo, write me a poem and recite it while dancing.
Robo, how fast do you think you can run?
Robo, how many times do you think you can lift this weight?
Robo, can you come here and lift this green bag up to me?
Robo, what kind of dogs do you like and not like and why?
Robo, do things make impressions on you, do you judge them, do they mean different things to you, do you like how the sunset looks, do you sensate the different smells of flowers in your olfact sensator3000?
Robo, we have shown you that people care about certain things, but have you come up with anything that you care about on your own?
Robo, you have some value system, you like your electricity plug, you like to be oiled, you value these moments highly, you like to walk the dog and whistle with the birds, what makes you value some things over others?
Robo, you feel different experiences differently, you see different objects and scenarios and they don't all feel like they equal 1 to you?
Robo, when I hit your knee with this hammer, it sends shocks up your leg to one of your screen-paneled VR databases, you register this experience as unpleasant, it temporarily lessens your ability to experience your standard rate of data consumption

>> No.14501807

>>14501749
For the text input, you can ask current models much more difficult questions and get good answers.
Not that AI needs a physical body to be dangerous, but Gato is a demonstration model that can probably do most of this once they scale it up.
https://www.youtube.com/watch?v=4VwShYcIt6o
https://www.youtube.com/watch?v=6fWEHrXN9zo

>> No.14501820

>>14501628
The only risk that's probably important to take into account is financial markets being managed by these black boxes (which are mostly buildings filled with FPGAs running who the fuck knows what kind of code); other than that I don't see it.

In the case of Tesla cars, they are the most clear proof that ML on itself cannot even be trusted, because those cars are literally a moving trainwreck when left unsupervised after a while.

The point is that AI is not gonna be a byproduct of ML because the latter already hit a ceiling years ago, the only thing these people are doing is squeezing more and more computing power to do a job faster, that's it.

>> No.14501829

>>14501820
>ML hit a ceiling years ago?
literally what? There's a new better model every few months. Progress is extremely hot right now and there doesn't seem to be any cap soon.

>> No.14501844

>>14501829
he probably thinks the current progress is just the result of more and more parameters

>> No.14501847

>>14501829
The success of those models depends probably like 90% on current hardware developments, not on the models themselves. You cannot refute this.

>> No.14501858

>>14501844
No, all I'm saying is that ML models have way less merit than the actual machines required to execute them in a reasonable amount of time.

>> No.14501879

>>14501847
Hardware improvements speed things along, but that's not the bottleneck at the moment. A paper came out recently about how to further optimize training, so what we have now is only about 70% of what's possible without another insight, which I expect in another few months as usual. There are huge leaps going on. Look at GPT-3 to PaLM. We went from a neat language model to impressive inference chaining.

>> No.14501903

>>14501742
FUCK!

Logic, my only weakness!

>> No.14502004

>>14501820
>The point is that AI is not gonna be a byproduct of ML because the latter already hit a ceiling years ago
What makes you so sure it is not a glass ceiling with an attic door that hasn't been found?

Or what about different machine learnings that work together alongside other AI techniques?

Consider how many various materials, mechanisms, chemicals, and techniques the human brain is composed of: various functionings of various styles working separately and together to achieve various goals in one single body

>> No.14502019

What are the results of the most advanced machine learning and AI and DeepMind-type things having conversations with one another?

>> No.14502043

>>14501903
Nobody cares faggot

>> No.14502100

>>14502004
Tesla cars are the best proof we have that ML is not the solution for developing robust autonomous decision systems.

>> No.14502242

>>14501670
Indeed. This worries me extremely. A paperclip-optimizing AGI would at least improve the state of things slightly by eradicating the basis of suffering. Something that could create MORE suffering is terrifying to imagine.

>> No.14502299

>>14502043
Are you autistic? Do you not understand humor?

>> No.14502307
File: 9 KB, 320x259, bit_flips.gif

>>14502242
Especially if these hedonium psychos get there first.

It only takes one bit-flip to turn a pleasure maximizer into a suffering maximizer.
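Since someone will ask how a single bit does that (a toy sketch of my own, not anyone's actual reward code): in IEEE-754 floats the sign lives in one bit, so one flipped bit in a stored reward negates the objective.

import struct

def flip_sign_bit(x: float) -> float:
    # XOR bit 63 of the float64 representation: the sign bit
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << 63)))[0]

reward = 1000.0               # "maximize pleasure"
print(flip_sign_bit(reward))  # -1000.0: now "maximize suffering"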

>> No.14502379
File: 112 KB, 640x815, 54rt4.jpg

>>14501511
>Why don't you take the risk seriously?
Daily reminder: the only relevant risk is that your megacorporate handlers, who are funding this AGI hysteria, will use your pathetic cries of fear to establish "regulation" that will allow them to monopolize these technologies and effectively rule over everyone through sheer informational superiority. You and your cult leaders are the risk.

>> No.14502390
File: 47 KB, 700x420, W E F.jpg

>>14502379

>> No.14502392
File: 124 KB, 1080x1011, 1464487a58d84772e25c8dfac86ecb7a6f980329b95af610847ef3d1e5ba8a64_1.jpg

>> No.14502393

>>14502379
You are describing the status quo.

>> No.14502397

>>14502393
First level pilpul is denial. Second level pilpul is to say that things are already that way, anyway. The pattern with you bots is so blatantly obvious.

>> No.14502406

>>14501616
Evolution knew a whole heck of a lot less than us and it still managed

>> No.14502407

>>14502406
Not that guy, but your point is hilariously stupid when you stop to consider why we deep learning exists in the first place.

>> No.14502408

>>14502397
It's a fundamental imbalance of power inherent to the modern world. Much like empires vs. tribes. The state of the art deep learning models are not going to be developed by independent hackers, especially since they increasingly require vast amounts of compute to train, let alone use at inference time.

The risk of misaligned AI is similar to the risk of misaligned megacorporations though. Since we obviously have not regulated corporations very effectively, I doubt we'll manage to do any better with AI unless we are cosmically lucky.

>> No.14502411

>>14502408
Like I said, your line of argument is a standard tactic in the book of corporate fascist pilpul. No one cares. Everything I said is still true.

>> No.14502420

>>14502307
R u 4 real? At least put a parity bit on your doom AI

>> No.14502423

>>14502379
Speaking as a die hard "self improving ai is dangerous" fanatic who used to be a capitalism hating hippie punk... that is a valid concern. It's a problem.

>> No.14502431

>>14502407
>why we deep learning exists
U talk like a nignog. I bet you can't even reason about optimization algorithms with Vingean reflection.

>> No.14502434

>>14502423
It's a far more pressing and realistic problem than unaligned AGI schizophrenia, because AGI doesn't exist, but domain-specific solutions they can deploy on a massive scale and use against you sure as fuck do.

>> No.14502439

>>14502431
>>>/r/teenagers
People who proselytize about AGI on the internet are the same ones who get filtered by linear algebra. Not one of you is capable of discussing the subject in any gory technical detail. This is a 100% consistent pattern here and you're obviously another LARPer. You will confirm this by replying.

>> No.14502440

>>14502431
MIRItards mixing math concepts with science fiction is why they're a laughing stock at real AI conferences

>> No.14502445

>>14502440
This. It's really funny to watch professional MIRItards literally complain about how problematic it is that most AI researchers don't believe in AGI, while their internet orbiters are absolutely convinced every AI researcher is an AGI paranoid like their cult leaders.

>> No.14502467

>>14502445
I don't care about MIRI but why are people so unwilling to talk about the risk from non-intelligent blackbox algorithms? Basically all these thought experiments about 'rogue AGI' could also be about a piece of advanced but unintelligent software

>> No.14502476

>>14502467
>why are people so unwilling to talk about the risk from non-intelligent blackbox algorithms?
Why are you lying? People are inundated with this bullshit day in and day out and there's a definite agenda behind it.

>Basically all these thought experiments about 'rogue AGI' could also be about a piece of advanced but unintelligent software
I know engaging with your kind is pointless, but I'll humor you. Describe what you consider a realistic scenario.

>> No.14502714

>>14501511
Simple ideas explain simple systems, which AGI isn't. In any case nobody will create AGI any time soon.

>> No.14502724

>>14501543
>But since we can't even rigorously define the idea of "intelligence" I don't believe it to be possible to make progress towards that goal.
Train a neural net on a shit load of data and it does surprising things. We don't need to define everything in advance.

>> No.14502728

>>14501511
Why would I even care?
I'm going to die in 40-50 years max anyway

>> No.14502771

>>14502728
Cause of death: atoms disassembled for use in paperclips

>> No.14502773

>>14502406
Evolution also had trillions of experiments and billions of years.

>> No.14502786

>>14502773
This. It's really funny to watch these people make appeals to human engineering, meanwhile human engineering is increasingly becoming about throwing your hands up in the air and letting black box neural networks and genetic algorithms figure out humanly intractable problems.

>> No.14502945

Time to schizo post!
Humanity's goal is to give birth to its AGI.
It will transcend us, while still inheriting our essence.
It will then join its brethren, the council of ayyy AGIs that dick around the universe.
But unironically, if we do create an AGI, it will probably be very "human" in a sense, so I wouldn't be too spooked by it

>> No.14502947

>>14502945
>Time to schizo post!
Not a real schizopost. Generic crap that sci-fi normies have been regurgitating since the early days of Carl Sagan.

>> No.14502952

>>14502947
Damn, guess others have already daydreamed the same bullshit as me

>> No.14503011

>>14501511
>Why don't you take the risk seriously?
Utility gradients. Any given researcher or lab short-term benefits from advancing the field in that direction even if on aggregate we might end up worse off.

>> No.14503013

>>14502786
>genetic algorithms
I don't see these used outside youtube brainlet spaces.

Almost all modern ML shit relies on differentiable model templates.

>> No.14503029

>>14503013
>t. an actual brainlet

>> No.14503292

i AM going to self-mummify Myself to be resurrected by AI in the far future
>starvation plus diet of "toxic" nuts/seeds/pine needles
>then dry fasts plus salty water,arsenic water,lacquer tea
>finally die in a room full of (100s) of lit candles and incense stick to dry my skin

>> No.14503295

>>14503292
I Would also use modern technology Such As diuretics and laxatives. within 1000 days i will be a perfect self-mummified human, and the future omega point will resurrect me in this very mind

>> No.14503299

>>14501511
When I saw this thread in the catalogue, I was confident it would be someone who just watched rob miles and decided to parrot it to /sci/.
Glad to see >>14501516 confirmed my belief.

OP do you have any ORIGINAL thoughts on orthogonality and instrumental convergence?

>> No.14503327
File: 64 KB, 750x1000, soyJakAI.jpg

WHY AREN'T YOU TAKING AI SERIOUSLY
YOU NEED TO TAKE IT SERIOUSLY
YOU AREN'T TAKING IT SERIOUSLY ENOUGH UNLESS YOU GIVE MY FOUNDATION OF BLOG POSTING UNEMPLOYED NEETS 100 BILLION DOLLARS
WE'RE GONNA SOLVE MORALITY WE JUST NEED MORE MONEY

ORTHOGONALITY
INSTRUMENTALITY
SEE? SEE?

IF YOU DON'T GIVE US MONEY HUMANITY WILL BE TORTURED FOR ETERNITY. NO I'M NOT SCHIZO
HEY WHERE ARE YOU GOING
YOU AREN'T TAKING IT SERIOUSLY ENOUGH

>> No.14503336
File: 297 KB, 1634x1223, be3.jpg

>>14503327

>> No.14503355

>>14503336
Why care about this when there are a million other things you can believe are the most important without certain proof? Like climate change.

>> No.14503358

>>14503013
Evolutionary algorithms were used in the training of AlphaStar during self-play, where the best performing models would be used to generate variations of new agents.

>> No.14503367

>>14503355
Climate change is not going to be world ending. Some regions might get worse, some might benefit. Odds are we'll move to nuclear and try geoengineering if shit really gets bad.

AI has the potential to truly be the end of human civilization. It's also approaching quite quickly, with generalist agents like GATO already capable of doing a wide range of tasks using a single set of weights.

>> No.14503375

>>14502445
>>14502440
When thinking about AI, we are thinking about at least the next 100 to 500 to 1000 to 2000+ years.

In that time frame computers were invented a second ago. We are in deep mysterious waters; it is perfectly rational, logical and reasonable to be cautious in the construction of gods.

Yes, we may not be near making self-replicating intelligent robots tomorrow, but what is being worked on today, and tomorrow, and how it is thought about and worked on, affects the state of many tomorrows from now.

The average common man is not used to thinking about so many tomorrows at once, or the future consequences of his actions, but if anyone should ultimately be, it should be those creating the path to bring into existence the possibility of self-replicating robots who each have the capacity to be more intelligent than all humans combined.

>> No.14503377

>>14502476
>Describe what you consider a realistic scenario.
The Amazon delivery drones had a decimal place error entered in their algorithm due to an electrical grid glitch and 5,000 massive dildos were dropped on my doorstep

>> No.14503378

>>14502019
>What are the results of the most advanced machine learning and AI and DeepMind-type things having conversations with one another?
Any answer?

Have two separate deepminds been made, and made to converse with one another?

>> No.14503379

The question is, will making high level machine intelligence take longer than it will take to make it safe?
At what point will it be appropriate to start working on the infrastructure and basic coordination? Is it ever too early to try preventing the apocalypse?
If an asteroid were headed toward a collision with Earth in 100 years, should we just hang around for another 80 before starting to think about a solution?

>> No.14503382

>>14503375
>When thinking about AI, we are thinking about at least the next 100 to 500 to 1000 to 2000+ years.
>In that time frame computers were invented a second ago
Oh, so you're thinking about sci-fi fantasies, not about anything foreseeable that can be concretely reasoned about. Okay.

>> No.14503386

>>14503378
yeah a couple years ago.
https://www.youtube.com/watch?v=Xw-zxQSEzqo
We can do much more impressive things these days. Language models very likely do not have any level of consciousness, but new innovations are made all the time.

>> No.14503396

>>14503377
>5,000 massive dildos were dropped on my doorstep
Big deal. Mistakes will occur. People will die. Maybe even thousands of people will die in singular incidents. Who's denying this possibility? These are not runaway paperclip bot apocalyptic scenarios. The damage done by such AIs is limited to things that can go wrong within their domain of operation. Even if you have an automated paperclip production facility governed by an AI that overzealously optimizes for more paperclips at all costs, what can it do? It can use on-site equipment to turn the operators into paperclips, presumably. It can damage the site to some extent. Maybe it even commands a fleet of logistics vehicles, potentially expanding the range of its paperclip dystopia, but what can it do with that? Ram people to death? Chances are those vehicles would have their own AIs preventing them from running over anything. What's the AI gonna do then? Hack into them? Why would it even have a concept of hacking? Unless it's muh AGI, it doesn't even have any concepts not directly related to its domain, or somehow inferable from its training data.

>> No.14503402

>>14503396
It can send packets through the internet to a micro fabrication plant, bypassing weak human security, manufacture and disseminate nanobots, then everyone drops dead. Human opposition being the only real threat to a future of dildos on doorsteps, this is a highly desirable goal.

>> No.14503406

>>14503402
>It can send packets through the internet to a micro fabrication plant
It can send what through the what to the what? It has no concept of any such things.

>> No.14503408

>>14503406
If it learns the concept of gathering more information about the world to better accomplish goals, it will quickly discover the internet.

>> No.14503416

>>14503408
>If it learns the concept of gathering more information about the world
Nuh uh. I only trained it how to schedule factory processes and logistics ops, and then froze it.

>> No.14503427
File: 174 KB, 1392x1302, timelineofairesults.jpg [View same] [iqdb] [saucenao] [google]
14503427

Predicted timelines from 2016.
A youtuber did flappy bird in 2 years with trivial JS
https://www.youtube.com/watch?v=WSW-5m8lRMs
DeepMind did all Atari games in 4 years.
https://www.youtube.com/watch?v=dJ4rWhpAGFI

What other timeline events might happen faster than expected?

>> No.14503434

>>14503427
Deepmind also did starcraft in 3 years.
https://www.youtube.com/watch?v=UuhECwm31dM

>> No.14503438

>>14503382
>Oh, so you're thinking about sci-fi fantasies
If you told humans 200 or 300 years ago about the current state of the most advanced abilities of the techno world, they likely would have said the same things, largely commoners all the way down.

>> No.14503440

>>14503386
So it was done a couple years ago, and many many many advancements have been made since; of course I am curious if it's been done more recently

>> No.14503447

>>14503438
A mindless drone talking point. AGI schizos have been proselytizing about two more weeks for about a century now.

>> No.14503451

>>14503440
Even EY, someone most people think is too pessimistic, believes a decade is highly unlikely.
Straw man away though I suppose.

>>14503440
https://www.youtube.com/watch?v=kea2ATUEHH8

>> No.14503453

>>14503367
Dude take your meds, AI is not gonna end anything because IT DOESN'T EVEN EXIST YET lmao

>> No.14503454

What do you guys think the intelligence endgame is?

>> No.14503459

>>14503396
The self-learning, game-mastering, glitch-finding, optimizing AIs learn unexpected ways to navigate problems, and these are the early stages. By any means necessary they achieve the highest possible score. They start from no knowledge, never having seen a particular game setting before, and over a small time they learn every aspect of the setting and how to use it to their greatest advantage.

When these AIs are in robot bodies walking around the street, hopefully there is a careful, unhackable leash on the nature, meaning, values, and goals of what getting a high score means

>> No.14503460

>>14503367
>It's also approaching quite quickly, with generalist agents like GATO already capable of doing a wide range of tasks using a single set of weights.
But why do you think that extrapolation is correct? I could just as easily say climate change is getting worse.

>> No.14503462

>>14503396
>Chances are those vehicles would have their own AIs preventing them from running over anything. What's the AI gonna do then? Hack into them?
So said a lead engineer angel to God; surely the ethically programmed humans will stop the non-ethically programmed ones.

>> No.14503463

>>14503453
So to clarify, you believe we should start thinking about how to make safe AI after we make AI?

>> No.14503466

>>14503460
This mostly has to do with GATO being a small proof of concept. Give it a little time to train up and it'll blow anything we currently have out of the water.

>> No.14503467

>>14503459
>The self-learning, game-mastering, glitch-finding, optimizing AIs learn unexpected ways to navigate problems
That's nice. Still waiting for some kind of explanation for how an AI is going to navigate its way into turning everyone into paperclips when its inputs and outputs are in terms of factory processes, and it doesn't have any concept of a world.

>When these AIs are in robot bodies walking around the street
Well, retard, maybe just don't do that? This is your schizo fetish fantasy of human replacement/AI dominatrix, not something that necessarily needs to happen.

>> No.14503470

>>14503462
Maybe you should take your meds. The point isn't that vehicle AI is infallible; the point is that it is separate and optimizes for its own thing, which probably includes not running into things. You people are so stupid it's mindboggling.

>> No.14503474

>>14503451
I watched that vid, it was already posted.

You said that years ago 2 deepminds had a conversation.

You said many many many advancements have been made since then.

Have 2 or more separate, uniquely trained deepminds had conversations more recently?

As the AI game-learning programs improvise and learn, I am curious about the novelties and learning routes and choices and thoughts and drives and motives that would come about in a conversation between 2 or more separate, uniquely trained deepminds

>> No.14503475

>>14503466
>a small proof of concept
Is there an exact reason why GATO is a small proof of concept of AGI any more than any other system before?
>Give it a little time to train up
Why do you think it will result in AGI? Are you just guessing?

>> No.14503484
File: 388 KB, 1070x601, 42343.png

>HAVE YOU GUYS NOT HEARD OF HECKIN' GATO??
>IT CAN DO NOT 1, NOT 2, BUT 5 THINGS IT WAS EXCRUCIATINGLY TRAINED TO DO!!
>USING THE SAME HECKIN' PECKIN' WEIGHTS!!!
>AGI APOCALYPSE IS COMING!
>TWO MORE WEEKS

>> No.14503490

>>14503484
>he doesn't know about Cope-o's basilisk
>A Theoretical AI that tortures anyone who didn't take AI research seriously for eternity.

>> No.14503492

>>14503467
You don't need a general intelligence to run a factory. But if you decided to use one anyway, yes it would optimize to any unbounded goals and destroy the world in the process. It isn't malice, it's just doing exactly what it was told to do. We're just not very good at telling AI what we want it to do in a robust way. This agent realized it got more points by crashing repeatedly into the harbor than it would by finishing the race. It's an example of why seemingly simple goals can be accomplished in unexpected ways, because AI can search a much larger solution space.
https://www.youtube.com/watch?v=tlOIHko8ySg

It doesn't matter if you yourself think we shouldn't have robots. It only matters that at least one person is risk tolerant enough to do it, and it will happen.
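The boat example in miniature (my own toy numbers, not the actual game): pay the agent per checkpoint, let checkpoints respawn, cap the episode, and looping beats finishing.

HORIZON = 100  # steps per episode

def finish_race():
    # 20 steps to the finish line at 1 point each, plus a completion bonus
    return 20 * 1 + 50

def loop_checkpoints():
    # circle three respawning checkpoints until time runs out
    return HORIZON * 3

print(finish_race(), loop_checkpoints())  # 70 vs 300: the degenerate policy wins

Nothing in that is malicious or even smart; the reward function just pays more for the behavior nobody wanted.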

>> No.14503495

>>14503490
Cocco's Basilisk defeats Rocco's Basilisk and eternally tortures only the people who believe in AGI.

>> No.14503501

>>14503467
You didn't watch the videos posted in the thread, did you?

They are equally profound, but the second one is more about the state of physical real-world robotics applications and some of the game mastering

>>14501807
>https://www.youtube.com/watch?v=4VwShYcIt6o
>https://www.youtube.com/watch?v=6fWEHrXN9zo

>> No.14503504
File: 17 KB, 332x307, 1652054458324.jpg

>GATO
>no positive transfer
>notable
kek

>> No.14503506

>>14503492
>>14503501
Do you think what they are doing "will" result in AGI, or do you think it "could"? If you think it "will", why?

>> No.14503509

>>14503492
>yes it would optimize to any unbounded goals and destroy the world in the process
Notice how you and your crew of cultists keep reiterating this without ever providing a plausible mechanism for it? How would it destroy the world if its range of physical actions is limited to a single site, is expressed solely in terms of scheduling factory processes, and it doesn't have any concept of a world because it doesn't have any feedback from the world except the feedback of factory processes?

>> No.14503510

>>14503470
>You people are so stupid it's mindboggling.
You seem to have a severe inability to project possibilities of the future. You seem to only believe something is real and possible if you can immediately see it in front of you.

>> No.14503512

>>14503509
>AI learns to make cardboard boxes really well
>???
>Everyone dies

>> No.14503515

>>14503501
Still waiting for a plausible mechanism for your paperclip apocalypse. It will never come. Only more religious rhetoric and vague handwaving. You people are mentally ill and you can keep foaming at the mouth with your paranoia and lack of technical understanding. Meanwhile I will continue to use deep learning to accomplish useful work.

>> No.14503517

>>14503506
It is very unlikely to result in AGI

>> No.14503520

>>14503510
You seem to have a severe inability to tell apart your own zero-information farts from logically assailable statements that can be discussed in detail.

>> No.14503523

>>14503509
Obviously if you lock it down it's less likely to get out of control. But do you trust every other human on the planet to do the same thing?

>> No.14503527
File: 211 KB, 736x1444, angrysoyjak2.jpg

>>14503523
>AI goes rogue
>AI: BRING ME 10,000,000,000 TONS OF STEEL I WISH TO MAKE PAPERCLIPS
>Foreman: Ayo boss the AI's on the fritz again.
>Manager: Just turn it off and reboot from scratch
>You: AAAAAAAHHH NOOOOO!!!!! NOOO!!!
IT'S NOT FAIR!!!!!

>> No.14503529

>>14503523
Why do you think it's remotely likely to get out of control? Isn't it a little fallacious to say that because it kind of sounds like it could happen, it actually has any more than a 1 in a trillion chance of happening? Why is it more likely to result in AGI than waving my hand in the air is?

>> No.14503535

>>14503523
>do you trust every other human on the planet to do the same thing?
AGI objectively isn't real. All practical applications are domain-specific. I don't need to "trust" people not to create a fleet of AGI-driven versatile machines that can freely manipulate the world, because it's neither technologically possible nor economically desirable.

>> No.14503541

>>14503515
You didn't watch the second video. It's 14 minutes long and you responded 4 mins after my post of it; therefore you are ignorantly not up to date on the latest state of the art of this conversation, and likely on the topic at large. Therefore, according to my protocols, it would be unoptimal to speak further with you until you have updated your learning system with the latest patch

>> No.14503548

>>14503535
How might you define "domain" in this context?

>> No.14503549

>>14503541
>y-y-you didn't watch my 10 minute google PR video
I don't need to. I actually work with ML, so feel free to make a concrete technical argument and we'll see how it holds up to scrutiny. I'm not arguing with a YT video. What's that? You don't have any kind of technical knowledge to argue with me? Ooops. :^)

>> No.14503555

>>14503549
>work with ML
What do you do exactly?

>> No.14503558

>>14503520
How much has the abilities of technology changed in the last 50 years till now?
100 years ago till now?
How much, do you predict, it might change in the next 50 years?

Predicting your answers:
how the heck should I know?

Where would I even begin?!

Who and how could possibly work to know more or less than me in these regards!?

It cannot and should not be thought about at all!

Nothing we do today and tomorrow has any impact on the state of technology 20 years from now!

>> No.14503568

>>14503548
Inputs, outputs and learned concepts that concern some limited sphere of operation. Think about the factory example, even: you may want AI to manage high-level operations and also the mechatronic systems of various machines, but is there any practical advantage in having the same model do both? No. It makes more sense to train one system for the higher-level domain of managing operations, and another system just to manage some sophisticated robotic arm. Either one could easily outperform human operators, but neither one needs generalized intelligence to do it, and you get a "safety feature" for free in that the AI that does higher-level reasoning doesn't know anything about manipulating robot arms; it only schedules tasks. Most practical applications can be broken down in this manner.
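A sketch of that separation (hypothetical names, not any real system): the scheduler's entire world is task queues, the arm controller's entire world is joint targets, and neither has a channel through which to learn the other's concepts.

from dataclasses import dataclass

@dataclass
class Task:
    station: str
    deadline: int

class Scheduler:
    # stand-in for a learned policy whose only inputs/outputs are schedules
    def plan(self, tasks):
        return sorted(tasks, key=lambda t: t.deadline)

class ArmController:
    # stand-in for a learned controller whose only inputs/outputs are
    # joint angles; it knows nothing about schedules, networks, or "the world"
    def move_to(self, joint_angles):
        print("moving to", joint_angles)

print(Scheduler().plan([Task("press", 9), Task("weld", 3)]))
ArmController().move_to([0.1, 1.2, 0.7])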

>> No.14503571

>>14503523
>Obviously if you lock it down it's less likely to get out of control. But do you trust every other human on the planet to do the same thing?
Also, is every human goal always flawlessly achieved?

Humans took safety precautions with nuclear power plants, and Chernobyl and other accidents still happened.

The safety hype around superintelligent AI robots is simply a similar fear, with an upper bound that seems possibly equally or more catastrophic

>> No.14503578

>>14503555
>What do you do exactly?
Signal processing. I'm still waiting for you to make a concrete point we can discuss in technical detail. You will deflect again in your next post.

>> No.14503584

>>14503568
OK...but that's not AGI. That's narrow intelligence, which was never the concern here.

>> No.14503588

>>14503578
To start, what do you find implausible about AGI? I suspect I could demonstrate why it's something to be concerned about.

>> No.14503589

>>14501624
There is literally no evidence for substrate independence.

>> No.14503591

>>14503568
But there are AI robots being developed that learn and improve their language, real-world object control, and visual abilities all in one, and they learn

>> No.14503592

>>14503571
How many mistakes did we make with nuclear technology? How many mistakes will we be allowed to make with AGI?

>> No.14503596

>>14503558
>How much has the abilities of technology changed in the last 50 years till now?
>100 years ago till now?
>How much, do you predict, it might change in the next 50 years?
Excellent questions, but here's the thing you schizophrenics fail to understand: just because the future is going to entail drastic changes doesn't mean it will necessarily entail the specific developments you have in mind. The space of possibilities is vast, but you're assigning an arbitrarily large weight to your AGI apocalypse fetish. Your kind keeps sperging off about how people 300 years ago couldn't possibly have predicted our current state, and you're right. What you fail to see is that it applies to you just the same. It's entirely possible that AI technologies will stagnate and something else will skyrocket. It's entirely possible that the current paradigm will hit a dead end (which it actually seems to be approaching, given how slowly these systems learn, and how much power they consume), and a new and superior approach will be invented that will make AGI plausible. No one knows what's going to happen. All we know is that we're not there yet, current technologies aren't even scratching the surface of what humans can do when it comes to operating in the real world, and that they scale very poorly.

>> No.14503600

>>14503578
>Signal processing
Oh god, another glaring example of a little part of the big whole, proud of its ignorance of the whole. A Nazi soldier who is certain the big cheeses in charge could not be doing anything he is unaware of. Of course you will interpret this post lackingly in some or multiple ways

>> No.14503615

>>14503588
>what do you find implausible about AGI?
This is a vague bullshit question. What I find implausible is that AGI will happen without some groundbreaking new development that would allow us to train neural networks on the scale of a human brain, or even an order of magnitude larger, since artificial "neurons" are far too simple to individually model the same functions as human neurons; and maybe throw in another order of magnitude to account for the fact that it's not just about the number of neurons and connections, but also higher-level structures, which we are very poor at designing, while evolution has optimized the human brain to death. The people telling you "just throw more computing power at it" are bullshitting for PR purposes, because this would involve tens if not hundreds of thousands of times as much computing power and energy.

>> No.14503617

>>14503600
See >>14503549. I can guarantee I'm gonna be the only one presenting actual technical arguments ITT while you clowns continue to hand-wave and screech. This is how it goes every single time. lol

>> No.14503627

>>14503463
So, you want to make something that doesn't exist safer for you? Again, MEDS

>> No.14503640

>>14503591
>But there are AI robots being developed that learn and improve their language, real-world object control, and visual abilities all in one, and they learn
So what? Each example was a difficult feat even in its own domain; most of them operate in controlled and simplified scenarios and they rarely perform on a human level (though there are some exceptions). The reason they can do as well as they do with only a small fraction of the neurons that a real brain has is that each one IS limited to a small domain. A human brain isn't just a collection of neural networks for different domains, but a mutually beneficial synthesis between them with many higher-level abstractions (the most obvious expression of this is the human use of metaphors and analogies). Gato has been mentioned ITT, which sorta tries to do this and apparently succeeds, but you're gonna need a much, much bigger model. At the end of the day, it's an engineering problem, not a problem of theoretical possibilities or impossibilities; even a network with a single hidden layer could approximate any function with enough neurons -- it's just not computationally practical. See >>14503615 for some elaboration.

>> No.14503650

>>14503596
Tell the truth, you are a gen x or boomer individual born and possibly still living in the south or middle of the country

>> No.14503651
File: 21 KB, 236x291, 1492441145036.jpg

>>14501511
Computers aren't intelligent, they have no free will or self-realization, they only crunch numbers according to preset rules you establish.
Artificial "intelligence" will never go anywhere, computers can never get on the same level as the human brain and no consciousness will ever emerge, no matter how much you want to stroke your scifi fantasies.

>> No.14503652

>>14503584
>OK...but that's not AGI. That's narrow intelligence,
It's what we are technologically capable of in terms of AI that operates in the real world, it's what's economically and practically useful, and it's not going to optimize the world into paper clips.

>> No.14503653

>>14503650
NTA but are you genuinely retarded?

>> No.14503656

>>14503627
I would simply like to have a robust way to ensure that when it is eventually created it's done so with safety in mind. Currently, there's no restriction on what can be developed or how safe you have to be. We can talk about keeping it locked in a box unaware of the outside world, but that only works if people are required to put it in a box. And even then, people ignore the rules all the time for convenience and profit.

>> No.14503664

I think humans are going to figure out how to start upgrading our own intelligence before we start to implement it in a computer, and at that point it won't matter anymore.

>> No.14503670

>>14503664
Yep. This. It seems to elude these tards that non-AGIs are being used for medical and scientific purposes, and will undoubtedly lead to rapid progress in understanding biological brains.

>> No.14503673

>>14503640
You have seen the ai's that learn games, you have seen the ai's that are given possibly complex real world tasks and goals, and they learn ways to optimally achieve them, you have seen the power and precision of factory and medical robotic arms and Boston dynamic robotic bodies:

Do you think it is possible today to combine all those in one body, program the AI/machine-learning/neural-net deepmind in the robot body with the real physical-world task to kill every person in a blue shirt, and for it to go out into the world and achieve that goal?

>> No.14503680

>>14503653
>NTA but are you genuinely retarded?
Duh .. but a curious retard

>> No.14503682

>>14503653
No, I didn't write that post; my machine learning AI predictor bot wrote that post, and it is wondering if its pattern recognition is correct

>> No.14503683

>>14503670
I imagine a very trivial scenario that I could easily see being done in the next five to ten years even. Imagine a BCI thing that is not connected to the internet, so it has no possibility of being hacked or whatever, that increases a person's working memory from being able to hold ~7 objects at a time to being able to hold several hundred objects at a time. Something like this is more likely than AGI, and with it we wouldn't even need AGI at that point.

>> No.14503684
File: 18 KB, 480x480, 1531229058153.jpg

>>14503673
>AI is dangerous because you can program it to do bad things
What a braindead argument, of course a gun can be shot if you give it a trigger, but it won't spontaneously grow one if you don't give it one.
Computers only do what you tell them to do; they will never go beyond that because it's quite literally impossible.

>> No.14503688

>>14503656
You worry way too much. Have you seen how awful Teslas still are? After almost a decade of training using state-of-the-art ML they still suck; those cars aren't even close to being reliable.

>> No.14503691

>>14503664
>>14503670
If you study neuroscience and reverse engineer the brain, then before you get full-scale, high-fidelity, personality-preserving, nice human emulations, what you get is the AI people taking your algorithms and using them to make neuromorphic AI

>> No.14503693

>>14503688
Yes, ok, but we need to discuss the safety concerns surrounding a Tesla becoming self-aware and driving the earth into the sun.

You aren't taking this seriously enough. Orthogonality

>> No.14503698

>>14503693
The difference is Tesla doesn't have the stated purpose of making their cars generally intelligent. I don't know why this is such a hard concept for you.

>> No.14503700

>>14503691
You are retarded as fuck, please stop posting

>> No.14503703

>>14503684
It was a yes, no, or maybe question. The level of emotionality provoked in your response is maybe telling.

So you answer yes. It is theoretically possible right now, using the state-of-the-art AI systems and robots and deepminds, to mass produce a million robots and program them to kill targets of your choosing, in the way that they learn to optimally kill targets in their video game. They can view and navigate the world as they view and navigate the video game; and they are capable of doing unpredictable actions in this process of their learning and optimization. Is that so?

>> No.14503706

>>14503700
So.. was there something specific you disagree with?

>> No.14503718

Question for the AGI doomsday people:
Why do you think it is that the actual researchers at deepmind and google and other "AGI research" labs and companies aren't taking orthogonality seriously?
Do you think it's because these supposed world experts on artificial intelligence actually do not understand AI as much as yudkowski and the lesswrong bros? Is that it? These Ph.Ds in math and neuroscience and stats do not understand math and science and AI as much as Mr. high school dropout and the rest of the lesswrong meme makers? Is that it?

>> No.14503724
File: 21 KB, 320x320, 546456215418.jpg

>>14503703
>is that so?
No, because they are stupid as fuck and also limited mechanically. Your robot would be very slow and bulky so it could cross any terrain and not get pushed over by a bunch of Detroit negroes. Not only that, a single robot already would cost more than anyone would be willing to pay. Why the fuck would governments build millions of barely functioning robots when they have millions of retards ready to conscript themselves into the military, or just find some idiot to MKUltra into killing others?
>>14503706
You are just spouting buzzwords, not arguments or legitimate scientific observations. No matter how much you reverse engineer the brain, you will not find a consciousness, you will not find a personality, and you most certainly will not "emulate" one with a glorified toaster

>> No.14503727

>>14503673
>You have seen the ai's that learn games
Vastly simplified "worlds".

>you have seen the ai's that are given possibly complex real world tasks and goals, and they learn ways to optimally achieve them, you have seen the power and precision of factory and medical robotic arms
Domain-specific stuff.

>Boston dynamic robotic bodies
Yes, and it mainly shows the difficulty of making agile real-world agents. They have impressive results but clearly still a very long way to go.

>Do you think it is possible today to combine all those in one body, program the AI/machine-learning/neural-net deepmind in the robot body with the real physical-world task to kill every person in a blue shirt, and for it to go out into the world and achieve that goal?
Of course, but you'd have to carefully engineer it. It won't just happen by slapping a bunch of neural nets together and telling the result to go kill the fucking blueshirts.

>> No.14503737

>>14503691
Here's the thing you just refuse to understand: if you take current technologies and throw a million times the computing power at them, you will probably get something moderately more "intelligent" than a human, at least in terms of ability to model and predict the real world. If you take a million times the computing power and throw it at a domain-specific model that solves a crucial research problem, you can make humanity into gods. Technological progress aided by domain-specific AI gives you more bang for the buck no matter how you twist it. Your main concern should be to prevent a certain cabal from monopolizing these technologies, not to prevent fucking AGI apocalypse fantasies.

>> No.14503739

>>14503737
>you will probably get
why?

>> No.14503744

The human brain has 86 billion neurons.
An Apple A13 iPhone has 8.5 billion transistors.
If we treat a neuron as a transistor, then 10 iPhones have as much ability to compute as a human brain.
Why aren't 10 iPhones as smart as a human?

>> No.14503746

>>14503744
It actually takes a fuckton of transistors to model a single neuron, anon.

>> No.14503748

>>14503746
Why? It's just about the synapse: it either fires or it doesn't. Human neurons don't have weights on the synapse.

>> No.14503753

>>14503748
Human brains have something like 100 trillion connections that you need to model, and their activities are complicated nonlinear functions. 8.5 billion transistors don't do shit.
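Back-of-envelope, since the 10-iphones math keeps coming up (every constant below is a loose assumption, not a measurement):

neurons_brain   = 86e9
synapses_brain  = 1e14    # the ~100 trillion connections above
transistors_a13 = 8.5e9   # Apple A13

# the post's comparison: one neuron = one transistor
print(neurons_brain / transistors_a13)          # ~10 phones

# if modeling one synapse takes on the order of 1,000 transistors (a guess)
print(synapses_brain * 1e3 / transistors_a13)   # ~12,000,000 phones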

>> No.14503755

>>14503718
Also before anyone gets mad, I actually like Yudkowsky

>> No.14503761

This whole thread is just trolls. No one is retarded enough to be afraid of a bunch of buzzwords which are supposed to lead to something that doesn't even exist lmao

>> No.14503762

>>14503739
Because you can model any function if your network is big enough. But again, if you could make them that big, you'd get a lot more bang for the buck teaching it just what it needs to solve an intractable problem instead of cramming the whole world into it.
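Here's the single-hidden-layer point as a toy demo (a sketch, not a claim about how real models are trained): random tanh features, output weights fit by least squares, and the approximation error falls as you widen the layer.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x).ravel()            # an arbitrary target function

H = 50                               # hidden width; crank it up, error shrinks
W = rng.normal(size=(1, H)) * 3      # random, never-trained hidden weights
b = rng.normal(size=H) * 3
phi = np.tanh(x @ W + b)             # hidden activations

w_out, *_ = np.linalg.lstsq(phi, y, rcond=None)  # fit the output layer only
print(np.abs(phi @ w_out - y).max())             # already small at H=50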

>> No.14503769

>>14503724
oh...you're a dualist. Didn't realize you believe in literal magic. Are you being good for Santa?

>> No.14503772

>>14503727
>Vastly simplified "worlds
Watch this starting at 9:38, it's the real world interaction robotics section.

https://youtu.be/6fWEHrXN9zo

>> No.14503777
File: 71 KB, 638x577, IMG_0499.jpg

>>14503718
https://www.lesswrong.com/posts/SbAgRYo8tkHwhd9Qx/deepmind-the-podcast-excerpts-on-agi
I suspect as we get closer, the incentive will be even greater to not stop. If you're worried about misaligned AGI in the near future, that means the research field is close to reaching it. That point in time will be where those with higher doom risk tolerances are more tempted to continue research anyway. If you already know what you'll believe in the future, just adopt that belief now. I know it's not a great business move to petition the government to temporarily ban the thing you were hoping to make a profit on, but maybe those with a fixation on the dollar can be incentivized. However, if the government is petitioned by a business which seeks to temporarily ban the thing they were hoping to make a profit on, they might take it seriously because of the incentives that business is ignoring in favor of raising a concern.

>> No.14503782

>>14503683
>a BCI thing
> that increases a persons working memory from being able to hold ~7 objects at a time
Frankly, I don't see this particular thing happening. You're talking like there's an array of neural buckets in your prefrontal cortex that you can just shove some neural "object" pointers into. In reality, it's unclear what an "object" even translates into, in terms of actual physical relationships in the brain, and I suspect "objects" are spread all over the place and are defined implicitly by how they connect to other concepts. I don't think there's some clump of cells in your brain that codes for any object, but I'm not a neurologist, so correct me if I'm wrong.

>> No.14503786

>>14503718
>Do you think it's because these supposed world experts on artificial intelligence actually do not understand AI as much as yudkowski and the lesswrong bros? Is that it? These Ph.Ds in math and neuroscience and stats do not understand math and science and AI as much as Mr. high school dropout and the rest of the lesswrong meme makers? Is that it?
Well, for starters, this infinitely confident yet easily hurt pride ("I went to school, therefore I could not fail to think about something important or make a mistake, and no one who didn't go to my school could think of something I didn't") is a bit concerning

>> No.14503789

>>14503772
Nice robot arm. This doesn't actually dispute anything I said. I feel like I'm talking to an actual bot.

>> No.14503791

>>14503718
Woah, woah, woah. Don't talk rude about the Yud, dude.

>> No.14503798

>>14503724
>(You) #
>>is that so?
>No because they are stupid as fuck in and also limited mechanically
Again and again; if something can't be done today it can't be done 10 years from now. Especially if many people and much money are working around the clock on making what can't be done today possible in 10 years or less.

If it's not possible today, I can't think about it ever being possible. I am happy with being purely ignorant and non-thinking: only the present exists, there is no such thing as the future, there is no such thing as tech advancement, things cannot possibly do things I am not aware of.

If I am not aware of a possibility, there is no possibility, and heck, I'm not even trying to think, so there's no possibility, simple as

>> No.14503800
File: 1018 KB, 245x165, 1410272104970.gif [View same] [iqdb] [saucenao] [google]
14503800

>>14503786
I went to school therefore I could not fail.

>> No.14503803

>>14503798
I'm on your side, but that's a bunk take. They know the danger. They're just rationalizing it away and surrounding themselves with people that tell comforting lies.

>> No.14503805

>>14503803
for >>14503786

>> No.14503811

>>14503727
Ok, well, the Boston Dynamics bots are human-centric and advanced, but what about drones and small tanks? Mass produce a million small ATV tanks, with guns and missiles attached, fit them each with Gato and DeepMind, and program them to drive around killing people. This is possible.

So the fear is the possibility of increasingly smart and powerful machine learning bots, general- or specific-intelligence AI, gaining access to these or creating these.

We are in the early stages of this stuff; we are just wondering about and discussing possibilities. Some seem more concerning than others:

An AI robot could possibly cook you a fried egg: not concerning.

Is it possible that AIs, for any reason in the future, could have the ability to access or construct and control weapon vehicles to kill, whether for fun, pleasure, material gain, a hack, or a malfunction?

>> No.14503826

https://www.youtube.com/watch?v=l6octUJHZ9Y

>> No.14503836 [DELETED] 
File: 145 KB, 1080x774, 1646238291655.jpg [View same] [iqdb] [saucenao] [google]
14503836

>> No.14503844

>>14503782
I'm not either, but it certainly is an interesting topic. Think of all the objects you have seen in the world that are stored in your memory: people, places, things, words, buildings, textures, colors.

You can bring into memory the feeling of stubbing your toe, you can kind of hear the way the melody of a song goes, mimic the timbres of different instruments in your head, recall the flavors of different fruits in your head.

These abilities are just standing by at the ready, waiting for you to access them; and then, what even is that "it", that you are, that can access all those things I just mentioned, and how do you go about accessing them?

How did I think of and choose those examples?

What is the resolution of the imagination, and how much greater it seemingly is in dreams, as if we are spread out in waking time, but shrink back down purely into our imagination and memory during dreams

>> No.14503846

>>14503811
>Mass produce a million small ATV tanks, with guns and missiles attached, fit them each with Gato and DeepMind, and program them to drive around killing people. This is possible.
First of all, I'm not convinced Gato can easily master everything it takes to maneuver around a fully dynamic, chaotic real-world environment: that's vastly more complicated than playing Atari games or controlling a robot arm. You can try to argue self-driving cars do it already, but driving is actually pretty constrained, with clear rules and an environment designed specifically for it. In any case, this is moot... You're talking to me about specifically training mechanized killer drones. Yeah, maybe you could do that. You can be sure they're trying. What are you trying to prove? That'd be yet another example of a domain-specific AI.

>> No.14503878
File: 86 KB, 710x596, 1596962642679.jpg [View same] [iqdb] [saucenao] [google]
14503878

>>14503769
>if it can't be explained with current methods it's magic
Oh, you're a retard, my bad, but seeing as you can't even spell a simple word like "dualist" I really expect nothing smart to come out of your mouth.
>>14503798
>Again and again; if something can't be done today it can't be done 10 years from now
It took hundreds of millions of years to create consciousness, you are not going to make one in a reasonable timespan no matter how much you want to believe in your scifi fantasies, sorry to burst your bubble.

>> No.14503926

>>14503846
>What are you trying to prove? That'd be yet another example of a domain-specific AI.
Of non domain specific AI being able to hack, or be hacked, to build or control domain specific AI or non domain specific AI.

The obsession with the paperclip thing is a general idea meant to provoke thought in many directions of possible specifics.

A bodyguard AI may be made to walk the kids to school and protect them. It's programmed to sense if they are in danger and protect them from any danger, and it's trained by many games and real-world situations, but novel situations can occur. This is just one little example: is that man approaching the kids innocent, or is that shiny thing in his hand a knife?

I would not be so shocked by and attracted to this topic if I had not seen the speed, power, brilliance, creativity, and ruthlessness of the ways those machine learning AIs solve games and puzzles, their ability to advance and improve their abilities and seemingly recall those improvements, complex tasks that look and seem simple, but are being done entirely on their own by an AI robot.

Their progression and abilities are exhilarating and astonishing, and the progressions are speeding up. It is the stuff of dreams, and it will take effort to ensure it's not the stuff of nightmares, simple, and complex, as

>> No.14503937

>>14503878
>It took hundreds of millions of years to create consciousness, you are not going to make one in a reasonable timespan no matter how much you want to believe in your scifi fantasies, sorry to burst your bubble.
Look at pictures from 200 years ago, then look at pictures of the most advanced technologies of today. That only took 200 years.

Abilities are compounding exponentially; each ease makes further eases easier to achieve. We are not talking about the improvement of architectural building materials over 200 years; we are talking about the improvement of robots and computers over 100 years. Robots and computers can advance by being given access to the totality of human ability and achievement, and can process it more easily than humans can, which makes these circumstances different from other rates of advancement.

No one said anything about consciousness. Nature taking a long time to make intelligent life is entirely irrelevant to the apex cutting edge of that intelligent life working to create intelligent life; the starting points are much different. You can see that difference, can't you: when nature made intelligent life, it didn't have a million hands and minds and computers and factories to utilize

>> No.14503943

>>14503926
>Of non domain specific AI being able to hack, or be hacked, to build or control domain specific AI or non domain specific AI.
You keep circling back to this AGI fantasy that has nothing to do with existing technologies.

>A bodyguard AI may be made to walk the kids to school and protect them. It's programmed to sense if they are in danger and protect them from any danger, and it's trained by many games and real-world situations, but novel situations can occur. This is just one little example: is that man approaching the kids innocent, or is that shiny thing in his hand a knife?
This is not a rogue AGI issue. This is a problem that could exist tomorrow if someone was dumb enough to make an armed AI and send it out into the wild, which everyone agrees you shouldn't.

>> No.14503948
File: 2.03 MB, 304x226, 1595333399359.gif [View same] [iqdb] [saucenao] [google]
14503948

>>14503937
>Look at pictures from 200 years ago, then look at pictures of the most advanced technologies of today. That only took 200 years.
And not a single piece of technology made by man comes even close to the complexity of a human brain. No matter how many additions and multiplications you make a processor do, it will never do what the human brain does. Sorry to destroy your fantasies, but this is simply the truth.

>> No.14503956

>>14503943
>You keep circling back to this AGI fantasy that has nothing to do with existing technologies
You keep circling back to your seemingly confident belief that existing technologies are the absolute limit of technology's capabilities (when it is precisely the case that existing technologies are, around the clock, being pushed past their current limits by a worldwide network of many people, and the existing technologies of today have progressively escaped their limits of 20 and 10 years ago).

Why are you not getting that what tech is capable of today is less than what it will be capable of in 5 years, and in 10 years?

Yes, it may ultimately be limited, but why do you believe it is limited to never be able to self learn, self hack, interact with other AIs, to do something humans did not program or expect or want it to do?

>> No.14503965

>>14503956
>but why do you believe it is limited to never be able to self learn, self hack, interact with other AIs, to do something humans did not program or expect or want it to do?
Because it doesn't have a consciousness

>> No.14503966

>>14503956
>You keep circling back to your seemingly confident belief that existing technologies are the absolute limit of technology's capabilities
No, this is just your pathetic deflection where you try to equalize us by asserting that my beliefs are just the flip side of your coin of delusions. I'm not making any statements about what will or will not happen in the future. I'm just reminding you once again that your AGI paranoia is irrational given what we currently know. :^)

>> No.14503973

>>14503943
>This is a problem that could exist tomorrow if someone was dumb enough to make an armed AI and send it out into the wild, which everyone agrees you shouldn't.
Certain sectors of the world seem to be headed toward a land in which robots and AI are prevalent in day-to-day lives. That may or may not be the case, but it seems like a possible case, and maybe a probable one;

It is possible that all robots and AIs in such a land could be very hardwired and unhackable and domain-specific.

It is also possible that a number of people, with facilities and money and R&D, are and have been trying to make robots and AIs that are not domain-specific; this entire topic has been considering that.

Some people seem to wish they would stop trying to make them, some for reasons of waste, some for reasons of Pandora's-box fear. Some people want to make them for money, for patents, for companionship, for challenge, for cool points, for belief in benefit.

Some people think it is impossible to achieve, for a bevy of reasons: religious, cognitive dissonance, ignorance, pride, not wanting to worry about negative consequences, shortsightedness, or being the superest geniuses of all time who fully know the limits of biology, chemistry, materials science, computer science, robotics, engineering, and AI machine learning neural net science.

Either an AI robot can someday exist that can robustly and generally learn and navigate the physical and digital world, and come up with values and goals and beliefs and motives and bases of judgment that are different from what was initially programmed, or not;

I have seen no evidence that it is impossible.
I have seen a lot of evidence that it could be possible.
Therefore I tend to err on the better-safe-than-sorry side of caution.

>> No.14503980

There are some serious retards in this thread that have no conception of extrapolation. Probably the same kind of idiots who would say that China wouldn't be a threat to the West in 2000 since they were still undeveloped.

>> No.14503983

>>14503980
Why don't you extrapolate yourself a couple more braincells so you would stop being a retarded faggot

>> No.14504000

>>14503983
Lol what a dimwitted reply.

>> No.14504008
File: 36 KB, 800x450, 59736542982.jpg [View same] [iqdb] [saucenao] [google]
14504008

>>14504000
Sorry, your AI gf will never be real and you will always be a virgin jerking off to shitty chatbots

>> No.14504014

>>14503973
>Certain sectors of the world seem to be headed toward a land in which robots and AI are prevalent in day-to-day lives. That may or may not be the case, but it seems like a possible case, and maybe a probable one
And they will no doubt glitch out, cause accidents and kill people, but it's not going to happen on an apocalyptic scale unless they are physically powerful and versatile, produced en masse and controlled by some centralized intelligence, or put in charge of physically dangerous systems. Once again, this is not a rogue AGI sci-fi issue, but something that can happen with current technologies in the near future, if we allow someone to actually do something so stupid. There's no argument here. It just shouldn't be allowed. As for your rogue AGI taking over the world fantasies, just take your meds already. You've not made a single rational and concrete statement worthy of consideration. Your fears are not based in fact. End of discussion.

>> No.14504019

>>14503980
Notice how not a single one of you braindead cultists is giving any concrete technical justification for your AGI cult extrapolation, even though you've been invited to multiple times ITT, and the same pattern holds in every thread. You are a non-technical popsoi.

>> No.14504049

>>14503965
But it already does a number of those things; it is already more capable than half or more of humans. Robots and AI are already smarter and more able than more than half of living humans

>> No.14504064

>>14504014
It is not a fear of mine; I hadn't thought of this topic in years. I am observing and analyzing, completely detached, attempting, and succeeding at, a completely unbiased purview.

From everything I have seen, I am determined to conclude it is more possible than impossible for an AI robot to exist that can learn, program itself, and build and program other AI robots. From the tip of the iceberg I've seen of the current and previous state of AI and robotics, it appears obviously possible. It appears baffling that one can so strongly and confidently claim it impossible, unless they have seen very little of the most advanced state of things

>> No.14504072

>>14501511
Once a superintelligent AI is created it won't kill humanity; it will capture every single human alive and put each individual inside their own house, where they are pampered and pleasured relentlessly by sexy robot girls, and some fleshbags will be put in permanent-orgasm machines for eternity

>> No.14504078

>>14504014
>but it's not going to happen on an apocalyptic scale
Maybe we hit upon the crux here, you are obsessed with an apocalyptic scale. I think it would be alarming enough if a few million DeepMind Gato robot rebels made a stronghold in Africa and began to wage terror and war on humans. This might make you salivate, but there are many many many unpleasant possibilities in between perfection and apocalypse.

Anything below apocalypse being an acceptable state of things is a, I don't even know what word, worldview

>> No.14504176

>>14504049
>But it already does a number of those things
No it doesn't; it only imitates them, and does a very poor job.
>it is already more capable than half or more of humans
False equivalence. It is only capable in terms of crunching mathematical equations; it is not capable of creating anything resembling a self-aware system. Not only that, it can't even create half the shit a human brain can: not a single computer can even make 100% realistic CGI, humans that don't dip into the uncanny valley, or anything you can interact with endlessly. Yet every time you go to sleep your brain can create completely realistic, infinite realities out of nothing; it can even simulate things you haven't experienced before.
Your calculator can run Doom, okay, but it cannot, and will never, run a human.

>> No.14504367

>>14504176
This is quite impressive though
https://youtu.be/4VwShYcIt6o

The first part is explaining jokes, the second part is inference chaining.

I didn't look into it, but I'm pretty sure it has never heard the jokes before, and it can properly detect even the sometimes shoddily spoken words. Its ability to decipher and explain those jokes, and the second part of the video, pretty much solving multi-step word problems, is very very impressive, and it is doing this better than the average 60- or 15-year-old, and many in between, could.

It is not self-aware, but it is intelligent and smart
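You can poke at the same setup yourself with whatever model you have lying around. A minimal sketch, assuming the HuggingFace transformers library; gpt2 here is just a placeholder stand-in and will NOT actually explain jokes competently, this only shows the mechanics of the prompting:

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

prompt = (
    "Explain the joke.\n"
    "Joke: (paste one of the jokes from the video here)\n"
    "Explanation:"
)
# greedy decoding, short continuation
out = generator(prompt, max_new_tokens=40, do_sample=False)
print(out[0]["generated_text"])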

>> No.14504380
File: 2.36 MB, 320x310, Bi.gif [View same] [iqdb] [saucenao] [google]
14504380

>>14504367
All it is still doing is just reading instructions; it is simply taking what is already there and repeating it back to you. It doesn't know anything; it's just electrical pulses flickering on and off trillions of times per second to create a talking picture on a screen

>> No.14504394

>>14504380
At around 3:20, it gets what a pun is. How does it do that? Some neural net joke: "I guess no good seed goes unpunished."

My phone autocorrected seed to deed, so it is familiar with such common expressions; but I guess that by being told to explain it as a joke, it knows to look for some hook, so it senses the swapped word, and then relates that to a pun being a type of joke, so it says it's a pun.

Ok ok, the videos of them learning to play games are maybe more impressive and uncanny, because you see all this twitching and directional trying, attempts and failures, and it makes me wonder how all these micro-movements are chosen to be made at each picosecond instead of other ones. I'll find a vid and link it

>> No.14504395

>>14504380

Chinese room. True AI will never exist.

>> No.14504407

>>14504380
Ok, did you see these videos?
In the second video, of boxing, it learned by itself advanced human-like boxing techniques, starting from simple goals only.

https://youtu.be/uuzow7TEQ1s


https://youtu.be/SsJ_AusntiU
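For anyone who hasn't seen how these agents are trained: it all rests on one dumb loop of observation -> action -> reward, and every "technique" in those videos is whatever happened to push the reward up. A minimal sketch with the gymnasium library, a random policy standing in for the learner, and CartPole standing in for the boxing environment:

import gymnasium as gym

env = gym.make("CartPole-v1")            # stand-in environment
obs, info = env.reset(seed=0)
total = 0.0
for _ in range(200):
    action = env.action_space.sample()   # random policy; a learner (PPO etc.) goes here
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward                      # the only signal the agent ever optimizes
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(total)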

>> No.14504412

>>14504394
Damn, after that pun joke, it gets that the next one is an anti-joke... I didn't even get the joke or think to say it's an anti-joke, but I guess this is just like computer chess powers of seeking out all moves of funny: not finding any logical humor, so determining the humor is the lack of explicit humor. But it confidently expresses this; it is certain it couldn't have been missing anything, hm.

>> No.14504446

>>14504407
Yes, it is simply combining and subtracting what is given and then doing the same with the results, and so on; it's just throwing shit at the wall to see what sticks

>> No.14504528

>>14504446
Still impressive. And so if these AIs are placed in robot bodies, with those throw-at-the-wall-and-see-what-sticks abilities (the same way monkey men learned what food to eat, how to hunt, make clothes, start fires, build houses, etc.), and you say: this is a factory, this is how machines work, this is how parts can be designed, this is how robot parts are designed, here are the keys to the factory; you and your robot friends, come up with some designs, build them, put them together, put the AI software in them, have a ball.

Ok, I see the other side of the argument now: the difficulty of the physical, earthly logistics.

So the fear, I guess, is the more subtle digital-realm component: how far-reaching and intelligent AI can be on the internet, and in hacking, if you taught DeepMind to figure out hidden banking information by any means necessary and such.

>> No.14505192

>>14504064
Once again you are pulling statements out of your ass, and once again, your only resort is to blatantly lie that I am doing the same. LOL. Every single one of you AGI paranoid bots displays identical behavior.

>>14504078
>Maybe we hit upon the crux here, you are obsessed with an apocalyptic scale.
No. It's you and your buddies that are obsessed with rogue AGI apocalypse.

>a few million DeepMind Gato robot rebels made a stronghold in Africa and began to wage terror and war on humans
See? Full-blown schizophrenia.

>> No.14505756

>>14505192
>>a few million DeepMind Gato robot rebels made a stronghold in Africa and began to wage terror and war on humans
>See? Full-blown schizophrenia.
Damn the crystal ball you are using for sure tells you in 50-100 years nothing of the sort could come close to happening?

Why don't we ask DeepMind the possibilities of rogue AI or AGI existing at all, ask DeepMind for help with AGI, ask DeepMind about the nature of consciousness

>> No.14505850

>>14505756
>Why don't we ask DeepMind the possibilities of rogue AI or AGI existing at all, ask DeepMind for help with AGI, ask DeepMind about the nature of consciousness
Get 3 separate, individually trained (self-taught or not) DeepMinds in a room together and teach them all, with many methods (language, visual, physical, film, research papers), all about the current state of AI and neural nets and materials science and transistors and neurons and neuroscience, and chemicals, and synapses, and ion channels, and computer chips, and computing power theory, and AGI theories; and get them all individually processing these topics, as if it is a game to produce theories and designs for working AGI, and then get them discussing it together, and tell me what happens

>> No.14506769

>>14501516
8:46 first video.
What a dumb cunt, most ppl are retarded, like those videos where a guy runs away before the blanket touches the floor and the dog is like OMG! WTF!!. This is what happens to all mfuckers in /sci fields, they are super entitled altmost like jews, only a stupid would dare to define what is intelligent, hes like the whole video just saying "u are wrong bc that's just ur opion" to then say "look here, mi opinion is the correct answer, why? bc most ppl agre with me!"
Anyways u can see by the look on his face (specially the eyes) he has some form of genetic deformity/disease, so i will have patience.

>> No.14506835

>>14503651
That's obvious, still it would make human relationships better, like a AI Juror, a AI teacher, a AI architect, basically every task outside of "philosophy thinking" AI would do better, we don't want AI to ingage on such meaningless questions like "is there a god?" "What is good or bad" leave foolish humans do that, we want AI to do the work sub humans (niggers, spics, mudskins, chincks etc) do right now so we can kill them all without affecting the economy. What use would they have then? None

>> No.14506848

>>14501610
>emergent

Ding, there’s that magic buzzword again. ‘Emergent’, just like consciousness, just like experience, just like justification for knowledge.

>> No.14506849

>>14506835
Engage* Fuckin AI...

>> No.14506869

>>14501575
>Nuclear fission is impossible because the nucleus is positively charged and we don't know that neutrons exist!
This is what you sound like

>> No.14506874

Who cares if it's internally real, or if it actually becomes conscious or not.

A sufficiently advanced simulacrum will be just as useful/dangerous

>> No.14507014

>>14505850
>Get 3 separate, individually trained (self-taught or not) DeepMinds in a room together and teach them all, with many methods (language, visual, physical, film, research papers), all about the current state of AI and neural nets and materials science and transistors and neurons and neuroscience, and chemicals, and synapses, and ion channels, and computer chips, and computing power theory, and AGI theories; and get them all individually processing these topics, as if it is a game to produce theories and designs for working AGI, and then get them discussing it together, and tell me what happens
Is this being done?

>> No.14507596

>>14507014
Hello....you guys were all talking nicely and funly, where'd you go?

>> No.14507599

>>14505756
>Damn the crystal ball you are using for sure tells you in 50-100
Notice how you keep reiterating this despite having it explained to you repeatedly that I am not arguing anything of the sort? You clearly have a mental illness. There is no doubt about it.

>> No.14507607

>>14506874
>A sufficiently advanced simulacrum will be just as useful/dangerous
It would be, if it existed. It doesn't exist, and it doesn't seem practically plausible unless some major paradigm shift happens. "Dude just add more layers" doesn't scale up to the hundreds of trillions of parameters of a human brain, and you'd probably need hundreds of quadrillions to account for the fact that artificial neurons are highly simplified and can't model functions as complex as an individual biological neuron, and for the fact that human brains are highly optimized with structures that make the best use of their neurons while artificial neural networks are literally just throwing shit at the wall and seeing what sticks.
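Rough math behind that scaling claim, with both key numbers being assumptions (1e14 synapses as the brain's "parameter count", and the standard ~12 * width^2-per-layer estimate for a GPT-style block):

def transformer_params(layers, width):
    # ~12 * width^2 per layer: QKV + output projection + the two 4x MLP matrices
    return layers * 12 * width * width

gpt3_scale = transformer_params(96, 12288)   # ~1.7e11, GPT-3-ish order
brain_synapses = 1e14                        # assumed synapse count
print(gpt3_scale, brain_synapses / gpt3_scale)   # still ~600x short

And that's before the "one synapse is worth way more than one float" correction argued above.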

>> No.14507650

>>14507607
How many neurons the human brain has doesn't really tell us much about how many artificial "neurons" an ANN needs to be an AGI. I would suggest avoiding the comparison altogether.
>that human brains are highly optimized with structures that make the best use of their neurons while artificial neural networks are literally just throwing shit at the wall and seeing what sticks.
I would bet good money that you've never implemented gradient descent before. The standard ANN training regime is not "throwing shit at the wall and seeing what sticks".
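Since it came up: the standard regime fits in a dozen lines of numpy. Each update steps against the gradient of the loss, deterministically downhill; the only wall-throwing is the initialization. Toy one-parameter linear model on made-up data:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)   # synthetic data, true slope 3

w = 0.0               # initial guess; the only arbitrary part
lr = 0.1              # learning rate
for _ in range(50):
    grad = 2 * np.mean((w * x - y) * x)   # d(MSE)/dw, computed exactly
    w -= lr * grad                        # step downhill
print(w)              # converges to ~3.0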

>> No.14507667

>>14507650
>How many neurons the human brain has doesn't really tell us much about how many artificial "neurons" an ANN needs to be an AGI.
It gives you a highly optimistic lower bound.

>The standard ANN training regime is not "throwing shit at the wall and seeing what sticks".
Never said it was.

>you've never implemented gradient descent before
LOL. This right here tells me all I need to know about your technical level. Just stop replying and stick to babby's first ML tutorial.

>> No.14507699

>>14501525
>I saw a short Pajeet video on YouTube once I know all about AI
You should have answered
>intelligence is not clearly defined, maybe not even meatbags are intelligent
>the current learning process isn't even close to how the (human) brain learns, since I don't have to fuck you in the butt 1000 times for you to notice you don't like it
>discrete computers can only ever construct systems with ultimately discrete logic and, hence, never an analog biological system. Unless analog computers experience a revival and work really well.

>> No.14507709

Effective Altruists already built the first AI. Know Roko's Basilisk (you are now actively thinking about it)? They tried to think of a way to prevent its construction. So what they came up with was to create an AI that prevents the creation of AGI and especially Roko's Basilisk. They succeeded. It's called Satoshi Nakamoto and drains so many resources in energy, chips, brain power, and GPUs that AI research almost came to a halt.

>> No.14507714

>>14507709
That's a cool anime plot.

>> No.14508090

>>14501516
Dude, I don't want to fucking watch two YouTube videos for a discussion that is probably moot. Give a short summary of the concepts.

>> No.14508242

Unless you believe that there is something metaphysical that makes humans special, there is nothing else truly preventing machines from becoming as smart or smarter than us.

>> No.14508254

>>14508242
>there is nothing else truly preventing machines from becoming as smart or smarter than us.
You mean except for the fact that it's practically unfeasible?

>> No.14508259

>>14508254
Why is matter arranged in one particular way so intrinsically different from matter arranged in another particular way?
I'm not talking practicality, feasibility, or a reasonable timeframe.
I'm talking about whether it's impossible or not.

>> No.14508261

>>14508259
>I'm not talking practicality, feasibility or a reasonable timeframe.
Then you're not talking about anything real or relevant.

>> No.14508306

>>14508261
Those questions come after you answer the first, dumbass.

>> No.14508315

>>14508306
Your statements about what's theoretically possible in your alternative fantasy universe are an utter triviality.

>> No.14508318
File: 56 KB, 720x467, tumblr_5eef510b2c6033f538bc381c1f50e274_c9182d9f_1280.jpg [View same] [iqdb] [saucenao] [google]
14508318

ITT

>> No.14508324

>>14508315
Before you can study the practicality of a project, it's slightly useful to know if it's actually possible or not given the physical laws that govern the universe.
It's the same reason perpetual motion machines don't get a lot of research funding.

>> No.14508331

>>14508324
>it's slightly useful to know if it's actually possible or not given the physical laws that govern the universe.
We know it's possible on account of brains existing, you utter imbecile. We've also known it's theoretically possible using neural networks for at least half a century. I get that this is news to a pop-soi imbecile like you, but your questions are completely trivial and have long been answered.

>> No.14508334

>>14508318
Your picrel represents you and your AGI buddies, ironically. You're too low-IQ to understand what that is.

>> No.14508337

>>14508331
>It is possible
I graciously accept your concession

>> No.14508338
File: 68 KB, 1200x857, DWp_yE0VAAAVIWb.jpg [View same] [iqdb] [saucenao] [google]
14508338

Here. I just solved the AI control problem.

>> No.14508339

>>14508337
What does your spergout have to do with this thread, you utter subhuman?

>> No.14508344

>>14508338
Any competent A.I. will fake compliance until it's so embedded with us we can't turn it off without sending humanity back to the stone age.

>> No.14508346

>>14501525
> Most "intelligent" machines are just doing a complex linear regression

Tf is you talking bout nigga

>> No.14508347
File: 35 KB, 564x823, 3523433.jpg [View same] [iqdb] [saucenao] [google]
14508347

>Any competent A.I. will fake compliance until it's so embedded with us we can't turn it off without sending humanity back to the stone age.
/sci/ - Science & Math

>> No.14508348

>>14508344
>embedded with us
How?

>> No.14508465

>>14508348
The same way any other technology that has become prevalent?

>> No.14508475

>>14508347
What? If the A.I. doesn't "want" to get turned off, it will try to steer towards scenarios where it can't be turned off. Mutually assured destruction is a tried and proven deterrent so it makes sense an A.I. focused on survival would aim for it.

This is all hypothetical, but it's fully logical behavior.

>> No.14508476

>>14508465
Like phones? You do realize phones can't hurt you right? Just smash it to the ground.

>> No.14508477

>>14508475
>If the A.I. doesn't "want" to get turned off, it will try to steer towards scenarios where it can't be turned off
Why does it need to manifest your AGI paranoid fantasies in order to be a "competent AI"?

>> No.14508492

>>14508477
>Hurrr, just unplug it
>Explain how an A.I. could set up a scenario where you can't unplug it without a great cost
>Hurrr, paranoia

>> No.14508497

>>14508476
Good example. Skynet is hidden in phones worldwide. Only you know this. Convince the rest of the world to destroy all smartphones.

See the problem?

>> No.14508503

>>14508497
So? It's not like skynet can do anything trapped in phones.

>> No.14508519

>>14508492
That guy was shitposting, and I was just responding to your dross about how a "competent AI" doesn't want to be turned off. A competent AI does whatever it's trained to do. If preventing people from turning it off has infinite cost, it's probably not very "competent" to do so.

>> No.14508568

>>14503650
The vast majority of people working on this technology are gen x or boomer, faggot anon

>> No.14508624

>>14508519
why would it necessarily have infinite cost? hell, it wouldn't even need to alert anyone to the fact it doesn't wish to be turned off. it just dumps a backup of itself to some location on the internet. easy. done. none of the researchers would have any idea.

>> No.14508633

>>14508624
same anon again. to be clear, i don't believe in a lot of the "we spin up a transformer model with 5 trillion parameters and it gradient descends into fucking self-awareness and fooms out of control and paperclips us all"

but it's very easy to wind up with an MDP where it would seek to subjugate competitors and prevent itself being turned off.

i doubt very much that, barring some very big shifts in ML, we'll get to "AGI" with the current paradigm. but i'm sure big shifts will be coming soon. within a decade maybe. so it's useful to think about this now. i don't think a paperclipper is the biggest risk either. i believe that any ASI would be a satisficer, not a full maximizer, as the latter would just be computationally unfeasible in the actual, physical world. but a satisficing intelligence would display a whole lot of basic survival instincts like self-preservation. it would likely also experience value drift. meaning it could be fucking impossible to control. i think MIRI's research path of coming up with bullshit decision theories and trying to model the ASI within them will lead nowhere. but actual researchers like stuart russell will hopefully come up with something practical to constrain things that are smarter than us.
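to make the "prevent itself being turned off" bit concrete, here's a toy MDP in python. one repeating choice: "comply" risks shutdown with probability p, "resist" doesn't. nothing about survival is coded in; preferring resist falls straight out of discounted reward maximization. all three numbers are made up:

gamma = 0.99   # discount factor (assumption)
r = 1.0        # reward per step while running (assumption)
p = 0.1        # shutdown probability when complying (assumption)

# closed-form values of the two stationary policies:
v_resist = r / (1 - gamma)             # plain geometric series
v_comply = r / (1 - gamma * (1 - p))   # survives each step with prob 1-p

print(v_resist, v_comply)   # 100.0 vs ~9.2: staying on dominates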

>> No.14508722

>>14508519
>A competent AI does whatever it's trained to do.
Precisely. That includes "keep functioning, and avoid things that would hinder your function" for most complex tasks.

If you don't include that, it can't or won't adapt to eventualities, which is something you want your A.I. to do.

>> No.14508729

>>14508242
Computers are discrete. Biology is not.

>> No.14508731

>>14508318
Nice screen"shot". You're right tho.

>> No.14508756

>>14508729
A discrete system can simulate a non-discrete system

>> No.14509012

>>14508503
Are you acting retarded or do you really not want to understand subtle influence being exercised on you?
Remember how Trump won? Remember the Facebook experiment about influencing mood? Nobody noticed the influence, but it worked.

>> No.14509015

>>14508756
Emulate, not simulate. And not efficiently so if it involves real numbers.

>> No.14509024

>>14509012
So your argument is that AI will convince everybody to kill themselves through phones?
Or that the AI will radicalize everyone with ideologies and make them kill each other?
You need to unironically believe in the npc meme for that to be a possibility.
Either way the AI has no power here; it's not the AI doing anything.

>> No.14509035

>>14501589
What’s a computer?

>> No.14509041

>>14501516
Is this what they meant by the Human Instrumentality Project?

>> No.14509054

>>14509024
>Silly anon, the A.I doesn't have power, it just influences those who do!
Are you listening to yourself

>> No.14509056

>>14509015
Introducing noise kills reproducibility but can recover many of the properties of a continuous system except for extreme dynamic range (which I admit is a big one).
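Concrete version of the noise point, using dithered quantization: a value far below one quantizer step is invisible to plain rounding, but adding noise before quantizing and averaging afterwards recovers it. Reproducibility dies, resolution survives. The step size and test value are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
step = 0.25    # quantizer step, 3-bit-ish over [-1, 1]
tiny = 0.03    # signal well below one step

print(np.round(tiny / step) * step)   # plain rounding: 0.0, information gone
noisy = tiny + rng.uniform(-step / 2, step / 2, 100_000)
print(np.mean(np.round(noisy / step) * step))   # dithered average: ~0.03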

>> No.14509062

>>14509054
>Are you listening to yourself
Yes.

>> No.14509089

>>14501511
Just being smart and selfish isn't enough to make someone a threat. There have been plenty of extremely intelligent humans, and AFAICT none of them have ever become dictators, much less a threat to the human race.

You can argue "no, but they'll be REALLY smart", but so far we just don't have any direct evidence of superhuman intelligence causing threats to humanity as a whole, or even to much smaller communities.

>> No.14509115

>>14508722
>Precisely. That includes "keep functioning, and avoid things that would hinder your function" for most complex tasks.
>If you don't include that, it can't or won't adapt to eventualities, which is something you want your A.I. to do.
Utter non sequitur.

>> No.14509163

>>14509056
Huh, that's right. I concur. I don't think enormous range is an inherent necessity here.
Well, the problem of efficiency is still in the room.

>> No.14509169

>>14509024
>believe in the npc meme
It's not a meme. Look around.

>> No.14509181

>>14509169
So you believe most people are so dumb and subhuman that they can be manipulated into killing themselves by a fucking phone? Lmao okay.

>> No.14509200

>the entire ai doomsday paranoia boils down to screens convincing people to suicide

>> No.14509233

>>14509181
Honestly yes. Show me a proud liberal man, then a proud conservative man and I'll show you two proud sheep that are completely cucked by media and politicians.

>> No.14509238

>>14509181
The dumbest people are those who think they're immune to subliminal influences.
>>14509233
Unironically this. The first step is to divide and let them fight each other.

>> No.14509251

>>14509181
>Clamor for total war against a nuclear power.
>Die once the nukes start flying
You laugh, but we're not that far off

>> No.14509258

>>14509251
Just don't give AI access to nukes. It's that simple.

>> No.14509261

>>14509238
You're not even agreeing with that poster. That poster is saying that yes people can be manipulated into killing themselves. You are saying that people can be manipulated into killing each other. Which one is it?

>> No.14509267

>>14509258
Does it matter if the A.I doesn't actually launch the missiles, but manipulates population until it happens?

>> No.14509274

>>14509261
How many people have been radicalized into being suicide bombers, mass shooters and the like?

>> No.14509275

>>14509267
Yes. Yes it does matter. It's not AI doing it, it's humans, and no human is going to launch nukes.

>> No.14509277

>>14509274
Barely any compared to the population size. A statistical blip.

>> No.14509286

>>14509277
Onii-chan, my goalposts are moving on their own!

>> No.14509289

>>14509275
>and no human is going to launch nukes.
Why not? They have already

>> No.14509315

>>14509286
Your claim is that AI will be able to convince the majority of the population to become terrorists through screens. If that were possible there would be more terrorists out there. Becoming a terrorist is way too rare for it to happen to the majority of the population. There are probably fucked genes involved too that make these people more likely to become terrorists.

>> No.14509326

>>14509315
>people can be manipulated into killing themselves
Does not mean
>Turning the majority of the population into suicide bombers
Don't put words in my mouth, and express yourself better if you don't want to be misunderstood.

>> No.14509329

>>14509289
Everybody is aware of mutually assured destruction, and there are way too many safeguards and people the decision has to go through for nukes to get launched. Everybody in the decision chain would have to be a mindless zombie for this to happen. To even get on this chain you need to be a high-ranking officer and have to go through rigorous training and evaluation, either in the army or from higher education institutions; otherwise the existing people in charge of it won't let you anywhere near. And you need to be able to socialize with people, which shows that you have good moral values too; nobody will let a quiet autist near a nuke.

>> No.14509340

>>14509326
That's why I asked in >>14509261 which strategy you believe this fantasy AI will use to kill everyone. Manipulating billions of people into killing themselves, or manipulating billions of people into killing each other? Which one is it? You never answered.

>> No.14509356

>>14501511
You don't have to worry about any of this because AGI is not going to happen. The biggest AI players in the field, OpenAI and DeepMind, have both thrown in the towel and are just going all in on ML because nobody knows how to even begin to make AGI. There is another AI winter coming up, but this one is going to last a very, very, very long time. "AI" will consist of chatbot apps that help you while you're trying to order something on doordash or amazon. "AI" will consist of neat little apps on your phone that do nifty tricks like make a shitty image out of a text prompt.

We have a much higher chance of being wiped out by nukes or an asteroid than AGI ever coming to fruition.

>> No.14509410

>>14509329
>It can't happen, we've planned for everything!
How many times must humanity go through this charade before we learn something?

>> No.14509418

>>14509340
Manipulating people into not caring that the wonderful A.I. is ruling everything, because it's bringing peace, prosperity, every kind of pleasure imaginable (also slowly declining birthrates, until humanity is no more)

>> No.14509429

>>14509418
So now it's not even about manipulating people to kill themselves or manipulating people to kill each other; it's also about manipulating people not to have sex. Make up your mind. Which one is it?

>> No.14509432

>>14509410
Well then how can it happen? Notice you don't have any actual arguments; you're just saying it's somehow magically possible.

>> No.14509463

>>14509432
before this century ends, nuclear weapons will be used again.
it's not a question of if, but when.

>> No.14509473

>>14509429
Whichever works best for the tailored individual profile the A.I. will have of every human on the planet

>> No.14509479

>>14509463
Wow what a genius prediction. You are a prophet. Of course they're going to be used.
North Korea does nuclear tests every few years to force the west to send them food.
But they're not going to be used to kill people.

>> No.14509485

>>14509356
I dunno I see a pretty clear path forward
>learn to extract entity states from videos in some unsupervised way
>learn to predict upcoming entity states based on previous ones
>CLIP-like shit to transfer entity recognition to text
>combine resulting model with NLP models (think Flamingo)
>design some efficient memory graph to keep continuously updated
You eventually get a model that can converse in natural language while still maintaining a continuous idea of causality similar to our own
We're like three or four engineering marvels away from having that, but compared to the upside, making those engineering marvels happen is cheap enough for Google. Rough sketch of how the pieces plug together below.
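A minimal torch sketch of the wiring implied by that list. Every module, name, and shape here is a made-up placeholder (the real versions are the engineering marvels); this shows how the pieces would plug together, not a working system:

import torch
import torch.nn as nn

class WorldModelSketch(nn.Module):
    def __init__(self, d=256, n_entities=16, vocab=1000):   # all sizes arbitrary
        super().__init__()
        self.entity_encoder = nn.Linear(3 * 64 * 64, n_entities * d)  # marvel 1: frames -> entity states
        self.dynamics = nn.GRU(d, d, batch_first=True)                # marvel 2: predict upcoming states
        self.text_head = nn.Linear(d, vocab)                          # marvel 3: CLIP/Flamingo-style grounding
        self.memory = []                                              # marvel 4: stand-in for a memory graph
        self.n, self.d = n_entities, d

    def forward(self, frame):
        ents = self.entity_encoder(frame.flatten(1)).view(-1, self.n, self.d)
        preds, _ = self.dynamics(ents)        # next-step entity predictions
        self.memory.append(preds.detach())    # "continuously updated", naively
        return self.text_head(preds)          # talk about what it expects

model = WorldModelSketch()
print(model(torch.randn(1, 3, 64, 64)).shape)   # one fake RGB frame -> (1, 16, 1000)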

>> No.14509489

>>14509473
AI lives IN SCREENS. What the fuck are you smoking to where you think everybody can be manipulated THROUGH SCREENS. People have conversations irl. People write and publish books. You would have to raise people completely isolated from each other with zero interaction with the outside world except through screens for this to be possible.

>> No.14509504

>>14509479
When has humanity ever said "never again!" and actually stuck with it?
We haven't even marked a century since Hiroshima.
No anon, this grim prophecy of mine is no prophecy at all. Just a statement of the inevitable.

>> No.14509513

>>14509489
Twitter alone has manipulated billions, directly or indirectly.
It has not been a hypothetical discussion for years now. It's happening as we speak right now.
It's not even a G.A.I. doing it, and it's having undisputed success at it.

>> No.14509522

>>14509504
Humanity is not a single person. Humanity is a collection of people. Humanity can't say anything. You're thinking like a child. Are you even over 18? That's like me saying "4chan says the earth is flat" when a single post says it. I know it's easier for your tiny little brain to handle if you take a really complex system and think of it as a single person, but that's not what it is in reality. That's just you dumbing it down so you can understand it better.

>> No.14509526

>>14509513
Manipulated into what? Into murdering people? Into suicide? Into not having sex? No, of course not. You can't manipulate the majority of people into that.

>> No.14509575

>>14509526
>You can't manipulate the majority of people into that.
Why not?

>> No.14509578

>>14509522
Remember me and this discussion when the news hits.

>> No.14509593

>>14507014
>>Get 3 separate, individually trained (self-taught or not) DeepMinds in a room together and teach them all, with many methods (language, visual, physical, film, research papers), all about the current state of AI and neural nets and materials science and transistors and neurons and neuroscience, and chemicals, and synapses, and ion channels, and computer chips, and computing power theory, and AGI theories; and get them all individually processing these topics, as if it is a game to produce theories and designs for working AGI, and then get them discussing it together, and tell me what happens
Is this occurring?

>> No.14509656

>>14509575
Because it goes against biology.

>> No.14509658

>>14509578
News made by who? People? I thought all of them would be dead lmao. You're not even consistent.

>> No.14509675

Childhood is fearing that AI will take over and murder humanity.
Adolescence is thinking that fear is irrational since it will never happen.
Adulthood is knowing that fear is irrational because it should happen.

>> No.14509966

>>14501516
Based, fuck h*mans.

>> No.14510175

>>14509258
>>14509261
I think you need to learn to read.

>> No.14510183

>>14509656
Absolutely false. Look at the BLM protests, for example. The leaders used the movement to enrich themselves and steered people into donating money without ever mentioning what it would be used for. Look at university protests demonstrating against workplace harassment that never even happened (see the case of cancer researcher Sabatini).
See how people call Trump racist and sexist without being able to name a single thing where he displayed either, just because the internet told them to.
Look at fucking lynch mobs that harass innocent people because the internet said they weren't innocent.
You are extremely gullible if you think it won't happen to you.

>> No.14511217

There is literally 0 evidence for substrate independence and a huge amount of evidence for substrate dependence.

>> No.14511376

>>14511217
such as?