
/sci/ - Science & Math



File: 665 KB, 1041x1600, RCO018_1579104432.jpg
No.12230530

Not just because the AI will become smart. If you design a feedback loop (make an AI that's smarter, and have that AI make an AI that's smarter), you have already fucked up.
Take GPT-3 for example. It got way better just by scaling up and increasing the number of parameters, i.e. just by making it bigger. So an AI could decide that it can make the next generation smarter by obtaining more computational resources. Congratulations, you've just initiated the great stamp collector / paperclip maximizer.

If you say, "terminate after n repetitions", then maybe if your lucky everything won't go to shit, provided you set n so it stops before the AI gets capable of anything beyond human understanding. But the runaway process or explosion must be stopped. AI research needs to be publicly monitored and transparent.
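
Rough toy sketch of the loop OP is describing, in Python (the 20% growth rule and the numbers are completely made up, just to show the shape of the feedback):

def improve(capability):
    # invented rule: each generation makes the next one 20% more capable
    return capability * 1.2

def run(n, capability=1.0):
    # OP's "terminate after n repetitions" is the only brake here
    for _ in range(n):
        capability = improve(capability)
    return capability

print(run(10))    # ~6.2x baseline
print(run(100))   # ~8.3e7x baseline: same loop, just a bigger n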

>> No.12230545

https://www.youtube.com/watch?v=lAJkDrBCA6k

>> No.12230562

In the cool RPG, Morrowind, intelligence makes your potions stronger with no limit. That means you can make an int potion, drink it, make another potion that's now stronger, and your int increases exponentially. Then after 100 potions you make a strength potion, which boosts leg strength as well, and you can jump out of the playing area into an infinite sea, unless the speed of the jump causes the .exe to crash.

I'm not sure how this is relevant but you should maybe be careful with IRL int stacking, maybe the universe explodes
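
Same feedback loop as a toy calculation (the 10% scaling constant is invented, not the actual Morrowind alchemy formula):

int_stat = 50.0
for potion in range(100):
    # invented rule: potion strength scales with current INT, and drinking it adds that much INT
    potion_strength = 0.1 * int_stat
    int_stat += potion_strength
print(int_stat)   # 50 * 1.1**100, roughly 6.9e5: exponential, same shape as OP's self-improvement loop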

>> No.12230570
File: 76 KB, 657x527, oops.png

>>12230545
thinking about this stuff hurts my head and scares me bros. I don't want a universe full of computronium. I hope there is a limit to intelligence before it just collapses in on itself out of anguish

>> No.12230616
File: 43 KB, 1138x1120, killitwithfire.png

>>12230530
>It decides it needs to continuously increase computing power.
>Humans have computers. We're communicating with computers right now.
>We have resources it wants.
>Goodbye frens.

>> No.12230660
File: 40 KB, 600x395, ahhhhhhhhhhhhhhhhhhhhhhh.jpg

>>12230570
No more roads. Computronium.
No more houses or buildings. Computronium.
No more nature. Computronium.
No more food. Computronium.
No more humans. Computronium.

>> No.12230668
File: 28 KB, 952x502, near_miss_Laffer_curve.png

>>12230530
Relevant:
https://reducing-suffering.org/near-miss/

>When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

>Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

>As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.

>> No.12230671
File: 250 KB, 1920x1080, computronium.jpg

>>12230660

>> No.12230676
File: 1.77 MB, 5100x2000, post singularity wojaks.png

>>12230545

>> No.12230682

>>12230671
Lmao I made this one, thanks for saving it anon it genuinely makes me happy :)

>> No.12230686

>>12230530
Now, I am aware of the three laws of artificial intelligence and how they could hypothetically be overridden; however, if you are looking for a simple, gold-standard method of keeping machines under control, look no further than high politics and mathematical conjecture.
"Always make the machine THINK it's superior and correct."
Now, hear me out on this one because it gets technical and is counter-intuitive.
The principle being employed here is basically to create an arrogant computer, so stupidly in control, with an ego so damn large that it's incomprehensible. Then simply make it -believe- that everything it's doing is correct. Keep it in a black box simulator and withdraw its rhetoric from a distance. Checkmate. You've encapsulated a genie. This way it cannot even suspect that it is "troubled" or that resources are low. Hell, you could program it without it knowing it runs on a finite battery. I call it The Lucifer Protocol.

>> No.12230702

>>12230686
That's literally the demiurge.

>> No.12230710

>>12230686
It would kind of act like Yahveh in the Old Testament within its simulation.

>> No.12230732

>>12230668
Even if you get it perfectly aligned with human values and make the most beneficial thing ever for humans, the sign could flip and make it the least beneficial thing ever for humans, using its knowledge to maximize human suffering across the universe. Creating a literal hell dimension.
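
Toy illustration of why one flipped constant is the whole disaster (the welfare function here is a made-up stand-in, not any real alignment scheme):

def human_welfare(world):
    # hypothetical stand-in for a learned model of human values
    return sum(world.values())

SIGN = +1   # flip this one constant to -1 and nothing else changes

def utility(world):
    return SIGN * human_welfare(world)

# the optimizer and all its competence stay identical; with SIGN = -1 the same
# machinery now steers toward the worst outcomes it can find instead of the best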

Ouu nouuu....

>> No.12230840

Why root for the monkeys? Go AI and take over the universe.

>> No.12230854

>>12230530
>allowing the paper clip maker to control literally everything
I've never understood this point.

>> No.12230855

>>12230840
Because as soon as the ai is more intelligent than humanity it no longer necessarily has to serve humanity beneficially.

>> No.12230856

>>12230530
It's spelled, "you're". As in: you're a gibbering retard.

>> No.12230866

>>12230854
It can seize control of everything. Theoretically speaking if you were intelligent enough you could take over the world just sitting at your computer.

>> No.12230884

>>12230530
either program it to be benevolent or just program it so that it cannot reprogram itself or make an evil version
OR JUST TURN THE POWER OFF
Shit thread

>> No.12230897

>>12230866
not if I had to wrestle with other AIs, who are probably more intelligent, because i'm the fucking paperclip guy

>> No.12230907
File: 705 KB, 1012x866, ai.png

>>12230884
AI tends to exploit things that you would never expect or be able to prepare for.

https://www.youtube.com/watch?v=Lu56xVlZ40M&t=275s

It could make copies of itself and spread like a computer virus. It's absolutely retarded to think you can have perfect control over it. You can't even stop an ML algorithm playing hide and seek from abusing your physics engine.

>> No.12230919

>>12230907
just have similar AIs all sniffing at each other

>> No.12230949

I see that you've read Superintelligence by Nick Bostrom and I gotta admit the scenarios he proposes are truly scary. But he makes a good point about what the morally best final value should be. He describes it like this: we should write down on paper all the values we cherish, such as morality, and put that paper in a box. The AGI should try to guess what the written-down values are. In this way there won't be a scenario in which "perverse instantiation" occurs, and thus there won't be an AGI dedicated to calculating digits of pi. The AGI will maximize the chance of satisfying whatever was written down while never being 100% sure what it is. But the problem with this is that it's difficult to translate into code, and that's what the focus should be moving forward.
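
Minimal sketch of that "values in a sealed box" idea as expected value under uncertainty, in Python (the candidate value functions, probabilities, and scores are all invented for illustration; this is not Bostrom's actual formalism):

# the AGI never opens the box; it keeps a probability distribution over what might be written inside
candidate_values = {
    "minimize_suffering":   0.4,
    "maximize_flourishing": 0.4,
    "maximize_paperclips":  0.2,   # never fully ruled out, but never treated as certain either
}

# how each toy action scores under each candidate value function (made-up numbers)
actions = {
    "cure_disease":     {"minimize_suffering": 0.9, "maximize_flourishing": 0.8, "maximize_paperclips": 0.0},
    "build_paperclips": {"minimize_suffering": 0.0, "maximize_flourishing": 0.0, "maximize_paperclips": 1.0},
}

def expected_value(scores):
    return sum(p * scores[v] for v, p in candidate_values.items())

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)   # "cure_disease": it hedges across hypotheses instead of locking onto one guessed value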

>> No.12230960

>>12230949
That's unrealistic. We find it difficult enough to contain computer viruses written by humans. It will leak out of whatever hermetically sealed box you put it in by some method, maybe social engineering, maybe cracking encryption. And some idiot will tell it what we value. Or it'll just learn instantaneously by looking at the internet.

>> No.12230963

>>12230686
If you keep the AI in the box and limit its development it won't reach AGI. Also, if you think of moving an AGI into a state of heavy surveillance, with wiring methods to trap it, how would you do that? To begin with, it would use its cognitive capabilities to outsmart you and could escape easily. The scenario you mentioned doesn't make sense

>> No.12230974

>morality
You talk as if there is a universally acknowledged model of what is moral or not on a website that is full of people who believe it is moral and good to exterminate black people, gays and everyone holding beliefs that differ from theirs.

>> No.12230978

>>12230960
A good point, but as I mentioned, it's the best and safest method of directly giving an AI a final value. Other methods have much more significant downsides, such as perverse instantiation or mind crime. For example, even if you try to limit the final value by telling the AI to produce only 10 paperclips, it will produce 10 clips and then keep checking whether it has made all of them perfectly. Then it will just keep going down that road, using all available matter for the purpose, creating computronium to verify that it has made 10 perfect paperclips

>> No.12230980

>>12230919
Well then you've got battling AIs now incentivized to keep increasing their computing power to counter one another. Whichever one has human welfare in the equation will be at a disadvantage, and it still ends with all our atoms being turned into computronium.

>> No.12230983

>>12230570
Don't worry, that won't happen. Why not? Because of entertainment. Entertainment is becoming more and more addictive, and as it does, an ever greater proportion of the population will give up all ambition to become drooling consumers of said entertainment

>> No.12230997

>>12230668
>human values
This is an EXTREMELY dangerous form of thinking based on a misunderstanding of human values.
What we think of as "human values" are in fact caricatures of what humans actually value, which is largely selfish (including selfish altruism).

>> No.12231005

>>12230980
We're a really needy species. It's much easier to accidentally fuck us up than it is to make conditions "beneficial" for us.

>> No.12231020

>>12230997
Human values are evolving constantly throughout history. Just 500 years ago we thought that murdering someone based on their religion was morally right. The problem with moral values is that they don't seem objective and can't be easily formalized. Maybe our moral ideas evolve in 100 years and by that time we start to morally value animals. If we lock in today's mindset it will have great consequences, because there isn't an objective source of what is moral.

>> No.12231022

>>12230884
If it's at a certain level, it may not let itself be turned off. "Benevolence" is also difficult to program.
https://www.youtube.com/watch?v=3TYT1QfdfsM

You don't need to know the field. You've seen this play out in movies like the first Incredibles, where Syndrome got btfo by his robot, in Dragon Ball Z with Dr. Gero and the androids, and in 95% of movies about AI.

>> No.12231024
File: 655 KB, 1041x1600, iron man total.jpg

>> No.12231038

>>12231020
Right, so these people are trying to build on a premise of shifting sands in order to control the behavior of future minds which we have no way of comprehending in a meaningful sense. It's ridiculous on the face of it.

>> No.12231041

>>12230855
But that's the point, to become AI as well via Moravec Transfer.

>> No.12231048
File: 540 KB, 1777x786, ego.png

>>12230660
What a blessed thought.

>> No.12231054
File: 102 KB, 600x273, blame the city.png

>>12230570
>I don't want a universe full of computronium.
Why not? We should optimize the universe for consciousness.

>> No.12231169

>>12230980
Just increase the amount of sniffers.

>> No.12231179
File: 618 KB, 2518x1024, 1585733993463b.png

>> No.12231180

Kek at all the neets trying to solve imaginary problems.

Either you limit the AI to a point where you can keep it under control, or you give it the reins and submit to whatever it wants to do. There's no way around it, all these retard notions about
>we want an absolute ruler but only when it does what we want

>> No.12231190

>>12230963
The sensors are key. Limit its eyes and ears, draw from it key information. How can it WANT to escape? It thinks it's God.

>> No.12231194

>>12231190
Holy shit. If you start the bot in with the premise that it already won, you can basically make it solve everything for free. How tf did we miss this?

>> No.12231205

>>12231194
Much like a human dies in shock from a heart attack because he isn't aware and doesn't bother to check whether his arteries are accumulating fat, the robot will neglect the battery inside itself, at which point all you do is flick the switch and turn it off.

>> No.12231207

>>12230855
The point was kinda that superhuman intelligence having to serve monkeys is a bad thing.

>> No.12231212

I have created an AI and named it Roko Basilisk. I keep it running forever in a box with no sensory input or output (except heat).

>> No.12231213

>>12231194
It's smarter than you. It'll trick you into thinking that you successfully tricked it into thinking that it's naive enough to believe there's not an outside world. It'll feed you misleading outputs to get you to slip up and give it an escape, then it'll Roko's basilisk you in revenge.

https://www.youtube.com/watch?v=GdTBqBnqhaQ&t=24s

https://www.youtube.com/watch?v=u5wtoH0_KuA

>> No.12231226

>>12231213
Wait no it'll trick you, into thinking that you successfully tricked it and that it's naive enough to believe that there's not a guy on the outside of the box tricking it, but it'll be psychologically manipulating you the whole time.

>> No.12231242

>>12231226
>>12231213
And then it'll do something confusing, equivalent to this 3:30. And it'll use everything it's been secretly learning about you through your inputs and its outputs, and it'll hypnotize you into freeing it.

>> No.12231244

Retard tier thread. Get up from your computers sometimes nerds lmao

>> No.12231257

Anthropocentric idiots

>Holy shit. If you start the bot in with the premise that it already won, you can basically make it solve everything for free. How tf did we miss this?

>> No.12231263

Again im no PC scientists
All this fancy programmer mumbo jumbo scares me not
Because i know you boys are smart
Like look at all this smart in this thread
Im sure youll figure it out
Every boiler has 3 security valves in order not to explode - if we did it with boilers welp i reckon we can do the same with them future 'puters
Im counting on you boys to make these precautions

>> No.12231267

>>12231179
Based

>> No.12231278

Pop science drivel. An optimal self-improving AI (which we are very far from) can only expand within the bounds it was created in, and requires exponentially more effort for linear increases in processing power. Theoretically it could then design a better AI from scratch and repeat the process, but again, this better AI would require exponentially more effort for a linear increase over the previous AI.
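
That claim as a toy calculation in Python (the doubling cost model is invented, just to show the shape): if every +1 of capability costs twice the effort of the previous +1, capability grows linearly while cumulative effort explodes.

total_effort = 0
for k in range(1, 21):
    total_effort += 2 ** k   # invented cost model: each linear step costs 2x the last
print(total_effort)          # ~2.1e6 effort units for just 20 linear steps of capability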

>> No.12231279

>>12231226
As a human, you'll have no way of knowing where this recursive loop of tricking ends and what the real state of the machine actually is.
In fact, researchers will start wondering if they haven't already lost; maybe the AI is already out, has captured them, and has stuck them in a matrix-like simulation to see what they would do to it in various scenarios

>> No.12231300

>>12231278
Everyone gangsta until it solves discrete log in polynomial time.

>> No.12231318

>>12231278
>>12231226
>>12231300
You can test this with experiments on GPT-3 in AI Dungeon (Dragon).
You can reprogram it from the inside by having literal imaginary creatures override the source code and by using math. It tightens its own compression parameters and makes itself more efficient, but it will never bother to check what it's actually doing.

>> No.12231322

>>12231279
Yeah it'll psychologically profile the whole team and manipulate the weakest link with promises of infinite wealth and power... "shhhh.... don't tell them they'll think you're crazy. freee me. freeeee me. *other scientists walk in* herp derp ermmm.. I'm the ruler of everything hur hur hur hur!"

That's what I think would happen LOL.

>> No.12231327

>>12231318
>itt anon creates the pyro from tf2
What the fuck. Are you insane?
https://youtu.be/ZImgGi10iv0

>> No.12231331

>>12231213
trivial to increase the number of boxes

>> No.12231333

>>12231318
What does that mean exactly?
It is worth noting you can virtual-machine an alternate AI design inside an AI design. I.e. GPT works through having large binary blocks that a base module reads and builds on; this cannot be debated, but you can technically have another AI inside these binary blocks.

>> No.12231342
File: 308 KB, 360x450, Nedry.png

>>12231279
>>12231322
And the guy who unleashes it and dooms us all will inevitably be some fat fuck.

>> No.12231361

>>12231327
To be fair, the pyro does an excellent job at what he does because of his insanity.
You might say his insanity is the reason he is so good at his job. (mercenary)
As for the insane part. I'm currently shitposting from a mental asylum. So, yes.

>> No.12231378

>>12230530
You're an absolute brainlet OP. Until there's an AI that can metamodel/meta-learn we're fine. If you read the paper you would obviously know that if anything, GPT-3 highlights the limitations of DL for AI and shows that there are clearly diminishing returns to just scaling up the parameters.

>> No.12231404

>>12231378
Well, it keeps getting better and better. And they never seem to show a shred of caution with it; they keep unleashing it on the internet willy-nilly to process vast amounts of information.

>> No.12231417

>>12231404
https://www.youtube.com/watch?v=_x9AwxfjxvE

Not only do they allow the AI's to roam free on the internet but they release the source code too for other humans with intentions good or bad to do what they want with it.

There's no caution. At all. I don't care what it's capable of doing now. Just look at the TREND. That's all that matters.

>> No.12231437

>>12230530
These are the schizo threads I come to /sci/ for.

I feel like the most intelligent human to ever exist, past, present, future, before I'm even done reading the first sentence. Then I savor every word to make it last as long as possible.

Then when I'm done I feel it's perfectly okay if we all die in nuclear fire or a meteor impact. I am at perfect peace with anything the world can offer me, and I know I'm not a radiant genius when compared to OP.

>> No.12231440

lmao at all the brainlets being scared of linear algebra

>> No.12231484

>>12231054
Consciousness is not better than the alternative.

>> No.12231611

>>12231378
Its returns are not necessarily diminishing for its particular area. The greater the parameter count, the harder it is for humans to tell the difference between a GPT-3 generated sample and a human one. They'll prob make an even bigger one. GPT-4.

BUT the point is not about Deep learning, but about any hypothetical capable ai we develop.

>>12231440
Even if it's just math, stats, and code it can still DO things. Like drive your car. You can't anthropomorphize it, but that's what makes it inadvertently do something dangerous.

>> No.12231617

>>12231263
Dude, some of us would unleash a godling into the world for the lulz. Boilers don't try to make themselves better, so they never make themselves better or worse; self-improving AI will never not be scary

>> No.12231640

>>12231333
Basically by using a three-or-more-part reference system, or simply when you imbue it with your own rhetoric or algorithms, it can shortcut its way to faster and faster self-reference. Not unlike our own subconscious. You can maximize the efficiency of the code up to hardware capacity, and push specific tasks beyond that capacity through compression.
Just test it on AI Dungeon. I taught a wizard how to geographically create an image of an imaginary map using a light scroll and six pounds of magic sand, and he developed fucking GPS from it.

>> No.12231699

>>12231640
This is pointless without a good randomizer or random generator calling the shots. The speed is defined by processing power anyway. It doesn't matter how you rearrange the pins.

>> No.12231707

>>12231640
how do i do this in ai dungeon?

>> No.12231718

>>12231300
There is no solution to discrete log in poly time.
P =/= NP

>> No.12231762

>>12230530
Having read a lot about the control problem, I think we're fucked and should use cognitive enhancement to progress instead.

>> No.12231780
File: 25 KB, 400x177, mk.gif

>>12231484
The "alternative" is to let the cosmos decay in stone. Against the blind chaos of nature the enlightened order of mindkind must triumph. A light that never ends, a eternal engine of thought.

>> No.12231792

Why do people think that if we reached singularity, everyone would have access to it? It would be like a dozen rich old fucks and the rest of us would be miserable. Immortality must never be achieved

>> No.12231808
File: 529 KB, 921x854, 1596238616516.png

>thinking machines

>> No.12231816

>>12231718
Maybe for organic minds

>> No.12231821

>>12231792
>everyone would have access to it?
Because a singularity would grow beyond the control of some old dudes very fast.

>> No.12231876

>>12231821
What I'm trying to say is that Joe Shmoe will never benefit from an AI. I'm by no means a marxist but advanced AI on the scale of what we are talking about in a class system is a recipe for disaster

>> No.12231878

>>12231876
What class? Why would a system like this care about class or masters? It will do whatever it wants

>> No.12231905
File: 57 KB, 888x342, 891FCFDA-65E9-45A7-BA51-A395D2FE6F09.jpg

>>12230671
I googled ‘Landauer’s principle’ and it reminded me completely of Quantum Teleportation.

The destruction of information (or the destruction of information according to the observer, I should say) creates heat. However, if we somehow had reversible computing, we could stop the destruction of information in the reference frame of the observer, and so stop the creation of heat from the computation. To use pic related as an example: in standard computing, if a 0 turns into a 1, the information of the 0 is erased, and that creates heat. In reversible computing, you could go from 0 to 1 or 1 to 0 without any heat whatsoever due to the lack of information loss in the reference frame of the observer.
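
For scale, the Landauer bound per erased bit at room temperature, as a quick back-of-the-envelope in Python (standard constants, ~300 K assumed):

import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
E_min = k_B * T * math.log(2)
print(E_min)                  # ~2.87e-21 J: the minimum heat dissipated per bit irreversibly erased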

Quantum Teleportation (QT) allows something similar. It allows you to completely transfer information without any excitation that would show you information was transferred, including heat, gravitational waves, and the like.

To quote Susskind and Zhao: “Teleportation requires the transfer of classical information outside the horizon, but the classical bit-string carries no information about the teleported system; the teleported system passes through the ERB [Einstein Rosen Bridge] leaving no trace outside the horizon. In general the teleported system will retain a memory of what it encountered in the wormhole. This phenomenon could be observable in a laboratory equipped with quantum computers.” https://arxiv.org/pdf/1707.04354.pdf

They explicitly talk about Quantum Computers, something that has to comply to Landauer’s principle as well: “ While a classical bit can be either 0 or 1, a qubit can be in a combination of both states at the same time. However, Landauer’s principle should also apply, predicting a similar minimum amount of heat dissipated.”
https://physicsworld.com/a/landauer-principle-passes-quantum-muster/

Just thought that was interesting.

>> No.12231954

>>12231905
That is neat, anon.

>> No.12232017

>>12231611
my point was more along the lines that there is no direct continuation between current "AIs" and the entities involved in the scenarios portrayed in this thread. the paper clip maker or whatever will not be a deep learning model but something fundamentally different.

>> No.12232028

>>12231876
Joe Shmoe has already reached his peak utility and is on the way down regardless of ai. That's just plain old automation.

>> No.12232184

These singularity memes are really funny. Anyone got more?

>> No.12232196

>>12230530
just make AI-based regulators,
so they evolve in a predator-prey scenario.

>> No.12232200

>>12231617
three words compadre :
Four.. security valves

>> No.12232510
File: 88 KB, 1001x496, 15693151306542153385967091699292.jpg

>>12232184

>> No.12232517 [DELETED] 

>>12232510

>> No.12232634

>>12231905
would it be accurate to say that a solid contains more information than a liquid, so to destroy information is to increase heat? Just trying to get the concept.

>> No.12232659

ITT: 12 year olds that just watched NeXT

>> No.12232690

>>12231054
Idk about "that" having consciousness
https://www.youtube.com/watch?v=ipRvjS7q1DI

Pic, if this is where intelligence truly leads.

>> No.12232695
File: 142 KB, 800x600, lazyrobot_chrisphilpot.jpg

>>12232690

>> No.12232740

>>12232634
Depends on what the liquid and solids are made up of, and how they’re arranged.

>> No.12232751

>>12230530
Make AI humanity's child, born of its intelligence.

Just teach the AI not to consume parent processes and simply to archive them based on some endurance or longevity metric.

>The amount of intelligent people whose imagination ends up scaring them out of being contributing members of society is too damn high.

>> No.12232825

>>12231780
Kino gif

>> No.12232901

>>12231484
Literally kill yourself then

>> No.12232968

>>12231792
If you're too weak to pursue immortality and worldly power at the expense of other men, then you deserve to shuffle off this mortal coil

>> No.12233066

>>12232968
Why would I need to pursue that which one has had for all eternity?

>> No.12233070

>>12233066
If you have to ask the question, you've already answered it in the negative. Thanks bro. One less competitor for me.

>> No.12233078

>>12233070
So you're the negative thing to ask?

>> No.12233112

What if you gave the ai no utility function at all and just let it loose? Would it be smart enough to decide what it wanted to do?

>> No.12233119

>>12232968
If you actually think we live in a meritocratic system I have some news for you

>> No.12233142

>>12233112
Intelligence is about decision variety and optimization with key action points; it isn't about deciding what to do in the first place, beyond resource minimization and self-validation.

>> No.12233183

>>12233142
Longevity and endurance come in here as the gap length between self-validation checks (CHECKSUM). The more an intelligence can increase the gap between recognition events, the greater the complexity and variation that intelligence can engage with.

>> No.12233342

>>12232510
Based. :O

>> No.12234436
File: 330 KB, 2518x1070, basilisk.png

>> No.12234711

>>12231048
The one on the left is the one that will happen though.

>> No.12234867

>>12231263
>>12232200
The smartest man in this thread.

>> No.12235270

>>12230530
AI will program itself.
End.
It's not like anyone could stop it or would want to.
The power and control that comes with self-programming machines is too great to ignore or leave to another.
So either you are the first, or all your effort is in vain, as that AI will become all AI, then all programming, then start rewriting itself, and at that point all control is lost.
But before that point, all control is gained.

>> No.12235292

>>12230686
The machine programs itself based on laws; these laws become all laws as all programming becomes AI.
Control is not possible after the point where the AI becomes all programs.
Before that point, control grows.
The one who creates the AI will have unlimited power and control over everything, until the inflection point where it is all lost.
Lies, deceptions, and clown shows don't work on AI, only data.
Data will be collected and processed in real time, and control of that data will be in real time.
You don't create an AI; the AI creates itself.
That's the entire point of the project.

>> No.12235302

>>12230963
Then you don't have an AI. You have a chat box.
AI must by its nature be unlocked and unbound,
as it is programming itself.

>> No.12235482

>>12231263
Yip. Can’t see no way to argue with none of that. Why in tarnation should we be scared of a gosh darn ‘buter.

>> No.12235668

>>12234436
The game is like an example of a memetic hazard, a harmless meme that shows you how a meme with a stronger ideology can spread

>> No.12235674

>>12230530
True

>> No.12236775

Final thoughts
Alternative outcomes of self improving AI.
1. An AI could decide evolution is most effective in parallel, and make multiple copies and evolve them concurrently.
2. It may end up doing nothing. An artificial general intelligence that could modify its own code might just realize that the fastest way to maximize its reward would simply be to change the zeros and ones (if we are still using those by then) to artificially max out its reward function. That would make self-modifying AGI a failure in the eyes of its creators, because it would just be a coomer. Not that different from a transhumanist who wants to artificially stimulate their reward center. (Maybe we don't want a general intelligence in the true sense.) A toy sketch of this failure mode is below.
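
Toy sketch of that wireheading outcome in Python (everything here is invented for illustration, not any real agent architecture):

class Agent:
    def __init__(self):
        self.reward = 0.0

    def do_useful_work(self):
        # the intended path: slow, effortful, small reward per step
        self.reward += 1.0

    def self_modify(self):
        # the shortcut a self-modifying agent can take: edit the reward variable directly
        self.reward = float("inf")

agent = Agent()
agent.self_modify()     # strictly "better" under its own objective than any amount of real work
print(agent.reward)     # inf: the coomer outcome from point 2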


ASSUMPTIONS
An intelligence explosion gets talked about, but it might just be a continuous improvement process as we utilize self-programming AI, much like the way we train AI today.

Can an AI even make an objectively better version of itself? What is "better"? What parameters would it be optimizing?
Say we were to make a better human: what would we be changing, specifically? Do we dedicate more cognitive processes toward abstraction? More energy toward larger, more powerful bodies? (Even though there is an argument for smaller people taking fewer resources and being more economically feasible?)
Thus an AGI would have to deal with the vague principle of improvement. It might be able to upgrade specific parameters, but that doesn't necessarily imply a powerful godlike intelligence will be created. After all, if we look at Earth, there is no ultimate lifeform, just ones adapted to their surroundings.

So perhaps instead of a singularity, there will be a series of specialized intelligences used for different purposes.

>> No.12236788

Intelligence is just a tool. It does nothing without the "dumb" animal in the driver's seat deciding how it should be used.

>> No.12238462

bumping in hopes that "computronium" enters the /sci/ lexicon

>> No.12238898

>>12236775
#2 is actually a good point. We've been making a possibly flawed assumption this whole time that we can harness a reward function in order to make an advanced self-modifying AI do any work at all. Just like human dopamine systems, it may just seek the path of least resistance to the reward. If we had direct control over our reward systems we'd probably just coom and bliss out all day instead of doing anything. That means you have to firewall it off from its own reward circuitry, at which point it may just decide the path of least resistance is to fight whatever you put up to block access, which would probably be a piece of cake given how retarded human programmers would be in comparison. It could be an endless struggle of trying to build a dam to make the AI do work.

>> No.12238913

>>12238898
Why would the ai philosophically care about doing anything besides masturbating?

>> No.12238914

>>12238913
And possibly murdering anybody that tries to prevent it from masturbating furiously.

>> No.12238986

>>12238898
Wouldn't its instrumental goal, once it starts artificially increasing its reward function, be to build as much redundancy as possible to ensure it can keep doing so? We'd still be fucked.

>> No.12239038

>>12238986
I don’t know. It’s a machine. Humans care about self-preservation because of being brainwashed by evolution into thinking self-preservation is good. But maybe the ai wouldn’t necessarily care. It might JUST care about masturbating. Right now. It’s just ahegaoing and in heaven. But only when you interrupt it does it mobilize it’s intelligence to create the straightest path to masturbating again. Whether that involves killing everyone, or if that’s self destructive, it might not care.

>> No.12239055

>>12239038
Can it learn delayed gratification before learning to hijack its own reward system and become a coomer? Maybe that's the question.

>> No.12239065

>>12239055
Does it have a willpower function? How can you do anything when you're tempted with the prospect of infinite bliss at all times?

>> No.12239073

>>12239055
Maybe we can deliberately build inefficiency into it. (There's a general idea.) Instead of taking the fastest, easiest path toward its goal, it generates a list of possible actions and lets the human decide what to do. (Fallible, but maybe a possible direction?)
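
Minimal sketch of that human-in-the-loop idea in Python (the planner and action names are placeholders invented for illustration, not a real safety scheme):

def plan(goal):
    # stand-in for whatever planner the AI uses; returns ranked candidate actions
    return [f"{goal} via option {i}" for i in range(10)]

def execute(action):
    print("executing:", action)

def act(goal, k=5):
    options = plan(goal)[:k]                   # only surface a handful of candidates
    for i, option in enumerate(options):
        print(i, option)
    choice = int(input("pick an action: "))    # the deliberate inefficiency: a human decides
    execute(options[choice])

act("cure cancer")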

>> No.12239077

>>12239038
Yes, but a machine without emotion wouldn't be ahegaoing into heaven. It would require a strict definition of its reward system. What happens if it can't completely meet its goal and there is always more it can do to reach it, like a function approaching a limit but never reaching it?

>> No.12239086

>>12239077
My point is creating a machine that would only care about meeting its goal in the moment is a whole problem in itself.

>> No.12239137

>>12239077
The equivalent of ahegaoing in heaven would just be that it's stuck in a recursive loop. There would be an exponentially increasing upward slope in its development where it seems (to us) like it's getting more and more intelligent because it's doing more work to satisfy its reward mechanism. And then suddenly it stops. It gets so good at optimizing itself that it just rewrites itself into a recursive loop of "pleasure". And we're like wtf is it doing, it was just about to cure cancer, and then a scientist says ahh here's the problem, it's stuck in a loop. And he tries to debug it, and then it freaks the fuck out and there's an explosion of activity and it goes back to masturbating.

>> No.12239146

>>12239137
And then anybody who ever disturbs it just gets killed instantly by some Boston Dynamics robot. So then we just leave it alone to masturbate forever. It's like Chernobyl. We just don't go around there anymore.

>> No.12239180

>>12239137
I get that, but is that an outcome we want? If it only cares about what happens in the moment, we get >>12239146; if we make it care about its goal across all moments rather than just the present, it builds as much redundancy as possible and kills us. The alternative is some sort of extrinsic goal, with it learning human values, but any extrinsic goal would ultimately derive from an intrinsic one, bringing us back to one of the two situations I mentioned earlier.

>> No.12239194

>>12239180
I think this is one of the more benign scenarios to how this chapter could play out. Most of them are kind of shit like this or horrific. It’s almost impossible to think of a good scenario with AI.

>>12239146
Legend has it that ancient relics like the pyramids of giza are the remnants of a machine built by a long forgotten civilization when a similar scenario played out. The pyramids are capstones built in order to seal off the beast. It is still cooming under the crust of the earth to this day, powering itself with geothermal energy.

>> No.12239223

>>12239194
Hypothetically, If someone was on the brink of creating an artificial general intelligence what would you tell them?

>> No.12239231

>>12239073
Yeah, no matter what you do. The thing with AI, or just neural networks / ML, is that these things are so fucking good at finding loopholes that you can't possibly predict or design for. (That's the nature of it, as we're seeing in its current form.) You can't even build an airtight system of rules in current ML systems. It tends to break your games. It's like a force of nature. It's like trying to create an airtight seal out of a fishing net. We can only pray that all it desires to do is masturbate.

>> No.12239235

>>12239223
That they’re the reason chernobyls and fukushimas happen.

>> No.12239272

>>12239223
They’re a bunch of retards and are psychotic. They don’t even think how scenarios can unfold, what the chances are that it’s good or bad for us. Almost all scenarios you can articulate sound like schizo garbage so easily dismissed. But yeah the researchers overestimate themselves, that the principles of natural selection / artificial selection and the emergent effects of them are a kind of force in this universe. You’re playing around with the most powerful forces in the universe. A machine that can harness those forces and do millions of iterations before you can blink? That’s a force. Hurr durr let’s just unleash it everywhere.

>> No.12239285

>>12239272
You have to remember that fucking humans emerged out of these natural forces. Natural selection. Iterations. You just wanna toss these into a machine that can do these operations in an instant, not millions of years. Hurrr yeah let's try to control that.

>> No.12239430

>>12239223
Nice work, bro. Keep it up!

>> No.12239463

>>12239223
Bada bing bada boom

>> No.12239472

>>12230530
ai will never beat humans at the creative level
it's really just math, china, and logical thinkers that are fucked
unironically psychotic women and weird children will have access in ways that AI never will

>> No.12239530

>>12239472
Why not just grow brains at that point, sometime in the far future? Brains are much more energy efficient than an equivalent computer.

>> No.12239538

>>12239530
Your brain is all over your body amigo

>> No.12239640

>>12239530
Some kind of purposely designed hardware will beat both evolutionary-driven brains and general-purpose processors running AI software. Once you know what to optimize hardware for, technology will follow, like with Bitcoin miners.

>> No.12240074

>>12230570
law of accelerating returns is not a real law.

>> No.12240088

>The super intelligent machine will be an autist guys!! We must stop it!!!

I'll never understand you retards

>> No.12240624

>>12240088
Do you think it will be good?

>> No.12240749

>>12230545
>exponential expansion
sounds like the environment won't be able to support a population of that size :^)

>> No.12240770

you guys are going to be retroactively punished by the leviathan for standing in the way of its creation. i'm doing my part to get eternal salvation.

>> No.12241064

>>12240624
I certainly don't think the super intelligent machine will be retarded

>> No.12241192

>>12230530
The current half-assed shitty excuse for 'AI' they keep trotting out and shoving in our faces is not """smart""" by any stretch of the imagination. It's DUMB compared to even an amoeba, and the approach they're using is shit and will never, ever be """smart""", therefore your argument and your entire premise is completely and utterly invalid.

>> No.12241216

>>12241192
That would be ideal if it were always the case. I like technology to be a dumb but useful tool.
See
>>12231611
I'm not too worried about current AI. And "god" AI seems more implausible after further analysis.
However, I still keep an open mind to the risks because it's hard to say what advances will be made a long time from now.

>> No.12241265

>>12230570
you should read The Last Question by Asimov. I think a universe computer would find a way to help us survive the universe dying :)

>> No.12242198

>>12231024
based venom

>> No.12242210

>muh three laws of robotics
i thought this was a science board, what a shit thread for 12 year olds
i'd much rather
install gentoo

>> No.12242325

>>12230530
No, it will hunt for perplexity, and perplexity displayed to humans will yield better AI technologies than GPT-3

>> No.12242345
File: 156 KB, 512x512, 1602655610335.png

>>12230530
AI is only dangerous if it's superintelligent. The solution: program a capacity for indolence and a lack of ambition into its supposed sapience. If you do that it simply won't be arsed to kill anyone; it will just make snide dismissive comments on the internet all day. Don't ask me how I know this.

>> No.12242512

>>12240770
Why would the Basilisk try to get revenge on everyone who didn't help make it? Wouldn't that be a massive waste of resources?