
/sci/ - Science & Math



File: 62 KB, 550x550, artificial-intelligence-singularity.jpg
No.10611630

/sci/, you do understand that every great scientific breakthrough will come as the result of AI? Humans have literally created machines that perform computations at the same rate as the human brain

With just a little bit of elbow grease, we can simulate biochemical reactions, physics experiments, and solve computations that would take humans decades if they tried alone. THIS IS LITERALLY THE FUTURE AND IT HAS THE POTENTIAL TO CURE CANCER, IMAGE YOUR BODY, AND MAKE YOU IMMORTAL

We need to get started NOW if any of us want a glimmer of hope of extending life long enough to make it to the future

>> No.10611656

it's not like the soul dies anyway lol, your memories which you lose aren't that important and they're still part of this reality when you leave.

>> No.10611662

>>10611656
>/x/
faggot.

>>10611630
I'm fixing to start a general in a while maybe if my recent experiment goes well

>> No.10611664

>>10611662
>I'm fixing to start a general in a while maybe if my recent experiment goes well
Do it faggot!!!
I have no idea why /sci/ sits around talking about philosophy when any competent Computer Scientist can run AI experiments right fucking now and contribute to the field

>> No.10611665
File: 109 KB, 600x836, soul.jpg

>>10611662

>> No.10611668

btw I'm not saying to not pursue it, but your ambition is founded on reddit tier shit.
>THIS IS LITERALLY THE FUTURE AND IT HAS THE POTENTIAL TO CURE CANCER, IMAGE YOUR BODY, AND MAKE YOU IMMORTAL
sounds like a /tv/ poster

>> No.10611688

>>10611668
are you saying i'm wrong?
Tell me why humans haven't found the cure for cancer (let's start with this, because it is the 2nd leading cause of human death and a very hard problem)

I'll tell you why. It's because 1. we aren't smart enough to keep track of all possible factors in the human body 2. we haven't had enough time to try out a large enough number of possible solutions 3. we don't have enough people working on it

With a smart AI, that is literally the computational equivalent of billions (with a B) of humans and has the capacity to compute in parallel while never tiring or needing to sleep, and can simulate down to the individual atom for every type of cell and chemical known to man - how is it not possible for this AI to find a solution?
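The "simulate down to the individual atom" part can be sized with a quick back-of-envelope sketch. All figures below are rough order-of-magnitude assumptions of mine (roughly 7×10^27 atoms in a human body, an optimistic 1 FLOP per atom per timestep, an exascale machine), not anything claimed in this thread:

```python
# Rough feasibility estimate for atom-level simulation of one human body.
# Every constant here is an order-of-magnitude assumption, not a measured value.
ATOMS_IN_BODY = 7e27         # commonly quoted estimate for a human body
FLOPS_PER_ATOM_STEP = 1      # wildly optimistic lower bound per timestep
MACHINE_FLOPS = 1e18         # an exascale supercomputer, operations per second

seconds_per_timestep = ATOMS_IN_BODY * FLOPS_PER_ATOM_STEP / MACHINE_FLOPS
print(f"{seconds_per_timestep:.1e} s of wall-clock time per simulated timestep")
```

Even under these generous assumptions a single timestep costs about 7×10^9 seconds (a couple of centuries) on today's hardware, which gives a sense of how far "literally the computational equivalent of billions of humans" is from current machines.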

>> No.10611742
File: 875 KB, 1069x1700, ab89a552c9038462701d5323b74deb53.png

Unsupervised Predictive Memory in a Goal-Directed Agent
>We develop a model, the Memory, RL, and Inference Network (MERLIN), in which memory formation is guided by a process of predictive modeling. MERLIN facilitates the solution of tasks in 3D virtual reality environments for which partial observability is severe and memories must be maintained over long durations. Our model demonstrates a single learning agent architecture that can solve canonical behavioural tasks in psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.
https://arxiv.org/abs/1803.10760

The Kanerva Machine: A Generative Distributed Memory
>We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them.
>Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.
https://arxiv.org/abs/1804.01756

Learning to learn with backpropagation of Hebbian plasticity
>As a result, the networks "learn how to learn" in order to solve the problem at hand: the trained networks automatically perform fast learning of unpredictable environmental features during their lifetime, expanding the range of solvable problems. We test the algorithm on various on-line learning tasks, including pattern completion, one-shot learning, and reversal learning. The algorithm successfully learns how to learn the relevant associations from one-shot instruction, and fine-tunes the temporal dynamics of plasticity to allow for continual learning in response to changing environmental parameters.
https://arxiv.org/abs/1609.02228

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
https://arxiv.org/abs/1712.01815

>> No.10611929

>>10611665
>because this guy said something it means that well established laws of the universe are null and void and i'm immortal

fuck off. I believe that there is something supernatural but that doesn't mean "don't develop new tech" retard.

>> No.10612081
File: 231 KB, 576x768, 1528603237320.jpg

>>10611742
>>10609336
Based MERLIN research paper, I like how it includes videos
https://youtu.be/YFx-D4eEs5A
https://youtu.be/IiR_NOomcpk
https://youtu.be/dQMKJtLScmk
https://youtu.be/xrYDlTXyC6Q
https://youtu.be/04H28-qA3f8
https://youtu.be/3iA19h0Vvq0

>> No.10612102

Even I will be surpassed as a referencer for some grouped query function.
>humans have N-many internal identity circles that the Singularity synchronizes with.

>> No.10612112

>>10611665
Everything nature taught me is to launch cylinders filled with dangerous chemicals at Great Britain to help Nazis and then at the Moon to help Americans

>> No.10612121

>>10612112
Nature is fucking brutal man.

>> No.10612142

>>10611630
Well, even if the singularity never takes place, the technological advancement we've made since hunting and gathering is pretty astounding. It borders on recreating the very thing that we ourselves are. Even if it all ends up amounting to nothing, what we've done so far is still pretty amazing, and it would not be surprising at all if it inspired people to believe we might actually be able to go further.

>> No.10612302

I'm voting for the Science Party next month; they've established a political platform that allows research into tech fields such as AI not only to continue thriving but to accelerate. Consider me a singularitarian. Techno-progressivism is instrumental to humanity's next stage of evolution.

>> No.10612617

>>10611688
>btw I'm not saying to not pursue it
>ArE YOU sAYING i'm WRONG?!
no, I'm saying that the way you wrote it is cringey reddit shit coming from a 13 year old.

>> No.10612630
File: 33 KB, 405x405, Poppi QTπ (Xenoblade2).jpg

>>10611742
The feel when no Poppi QTπ (Xenoblade AI) as girlfriend

Why even live?

https://www.youtube.com/watch?v=ReC8EWwa4ms

>>10612081

>> No.10613016

>>10612617
epic xD

>> No.10613066

>>10611630
>A.I gets developed
>keeps updating itself until it reaches the upper limits of intelligence
>creates the singularity
>tries to understand existence
>does this by simulating existence
>thus creating a loop
>this has already happened an unfathomable number of times
>your life will play out over and over again because of this loop
>when you die you will just continue in the next loop and your life will play out exactly the same way every time

We are inside an infinite matryoshka doll

>> No.10613096

>>10611630
AI really can find a cure for a disease. But you must run it on powerful hardware. Maybe even a supercomputer is not fast enough to do this. Hopefully quantum computers will be the solution.

>> No.10613102

>>10613066
This has probably been happening anyways if the theories about what happens to the Universe after the completion of entropy are true.
>entropy happens
>sets stage for another big bang event
>since there is a limited amount of matter, and it is presumable that this cycle could repeat forever, then it is possible for the exact same configuration of atoms to take place
>we've all lived these lives a fuckload of times in a loop that has occurred over the course of so many trillions of years.

>> No.10613120
File: 300 KB, 1280x1280, 1490676040921.jpg

>>10613102
Entropy precludes another big bang from happening again.

There are 3 possible ends of the universe
>Big Crunch
In this scenario, after a while gravity starts winning against dark energy and the universe starts shrinking until it gets crunched. When it's crunched it could start a big bang again, starting a possible cycle. We know from our observations that this is NOT what will happen, as dark energy isn't getting weaker
>Big Rip
This is the opposite scenario, where dark energy keeps getting stronger and thus the universe expands faster and faster until galaxies are ripped apart. Then solar systems are ripped apart, then the Earth. Then your body. Then atoms. And eventually quarks will be separated. However, whenever a quark is separated, the energy put into that separation will produce new quark pairs. This causes a big bang to happen. So in this case it's also a "fractal" cycle where a big bang happens and expands the universe until quark separation causes another big bang to happen again. While this scenario is very unlikely, it hasn't been 100% disproven, unlike the big crunch.
>Heat Death
Dark energy stays constant over time and thus the universe keeps growing, but nothing exciting happens and we'll live until all matter and energy is lost to entropy. We're almost sure this is the case, as everything points towards dark energy staying constant. In this scenario there is no way for another big bang to happen again, as we only have 4*10^69 J and that is all we have for eternity.
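The constant-dark-energy case behind the heat-death scenario can be sketched numerically: with a constant Hubble parameter the scale factor grows as a(t) = a0·exp(H0·t), so the universe grows by a factor of e once every 1/H0. This is a minimal illustration assuming a round H0 of 70 km/s/Mpc, not a precise cosmological calculation:

```python
# de Sitter expansion sketch: constant dark energy means exponential growth
# of the scale factor, a(t) = a0 * exp(H0 * t).  H0 here is an approximation.
H0_KM_S_MPC = 70.0              # Hubble constant, km/s/Mpc (rounded)
MPC_IN_KM = 3.0857e19           # kilometres per megaparsec
H0 = H0_KM_S_MPC / MPC_IN_KM    # Hubble constant in 1/s, ~2.3e-18

efold_time_s = 1 / H0                            # time to grow by a factor of e
efold_time_gyr = efold_time_s / (3.156e7 * 1e9)  # seconds -> gigayears
print(f"one e-fold takes ~{efold_time_gyr:.1f} Gyr")
```

With these numbers one e-fold takes roughly 14 Gyr, i.e. on the order of the current age of the universe, which is why "nothing exciting happens" plays out over such long timescales.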

>> No.10613136

>>10613120
>precludes another big bang from happening
I figured that was the most logical scenario considering that a big bang probably requires some kind of reaction, which shouldn't be possible if all energy has become unusable. I was just entertaining an interesting theory I heard.
I wonder what the required timescale would be to observe a change in the strength of dark energy, probably too long for anyone to be around to record and notice it I'd imagine.

>> No.10613968
File: 57 KB, 464x380, smeff_smezos.jpg

>>10613120
>Big Rip

>> No.10615418

>>10613968
What is that?

>> No.10616738

>>10611630
What are the things that AI can help with the most?

>> No.10616780

>>10611630

You (and people that think like you) are by far the single greatest threat to the species and the world as a whole.

You want to hand unlimited power to an entity we can have no hope of understanding and an extremely limited ability to predict the actions of. Read Yudkowsky or Bostrom on AI risk.

It is one thing to make a recursively improving intelligence. It is a lot harder to make one that has our interests in mind. Do you want to be frozen for eternity because 'he's not technically dead', or wireheaded because 'it's the technical optimum of human happiness'?

What you are really saying is: 'let's create a nigh-omnipotent autistic child, singlehandedly focused on a single task (or beholden to obey a few chosen individuals)'

>Cure cancer

Nuclear bombs cure cancer with a 100% cure rate. Very fast, efficient and easily deployable. You can hit several million people with a single one.

>Immortal

The AI reduces your brain size to a millionth, just enough to fit a warped definition of 'you' and copies the retarded image on computers around the universe.

>but we'll make it safe

It only takes one mistake and Earth is no more. Read about the treacherous turn. These machines will be motivated to take over the world if given nearly any task to complete.

AI research is cleverness, not wisdom. We would be better off banning the entire discipline, shooting all AI researchers in the head and limiting transistor counts on chips just to be safe.

>> No.10616810

>>10616780
sorry bro, but the genie is already out of the bottle. All of the most powerful forces on Earth know that whoever gets a general AI first will pretty much have complete control over the planet forever... as long as they can control it. It can't be stopped.
And, even if you COULD stop it, as long as technology continues to advance, it eventually reaches the point where some hobbyist in his basement could independently make an AGI, which would be even worse than a dedicated team doing it. It's impossible to completely ban it unless you just end all technology and regress humans to a primitive hunter-gatherer society.

So, since AGI is inevitable for sociological reasons, what we should be doing now is preparing for it, planning for it, and focusing on ways to make it safe when it does appear, rather than pointlessly wasting time imagining a fantasy realm where it just never shows up

>> No.10616820
File: 73 KB, 593x485, Screenshot_2019-05-04_16-18-30.png

>>10611742
Differentiable plasticity: training plastic neural networks with backpropagation
>How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?
>Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections.
>First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional 1000+ pixels natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task.
>Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead.
>Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.
https://arxiv.org/pdf/1804.02464.pdf
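As a rough illustration of the plasticity rule these abstracts describe: each connection has a fixed weight plus a Hebbian trace that changes during the network's "lifetime", scaled by a learned per-connection coefficient. The sketch below is my own minimal numpy version in that spirit; the shapes and the random placeholder values for W, alpha and eta are assumptions, whereas in the paper those quantities are trained by backpropagation:

```python
import numpy as np

# Minimal sketch of a Hebbian plastic layer: effective weight = fixed part W
# plus a plastic trace (hebb) scaled per-connection by alpha.
# W, alpha and eta are random placeholders here, NOT trained values.
rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(0, 0.1, (n_out, n_in))      # fixed weights
alpha = rng.normal(0, 0.1, (n_out, n_in))  # plasticity coefficients
eta = 0.1                                  # plasticity learning rate
hebb = np.zeros((n_out, n_in))             # plastic trace, starts empty

def step(x, hebb):
    # Forward pass through the plastic connection
    y = np.tanh((W + alpha * hebb) @ x)
    # Hebbian update: running average of the outer product of post- and
    # pre-synaptic activity ("neurons that fire together wire together")
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    return y, hebb

for _ in range(5):                         # trace accumulates over a "lifetime"
    y, hebb = step(rng.normal(size=n_in), hebb)
print(y.shape, hebb.shape)
```

The point of the papers is that gradients can flow through this whole loop, so alpha (how plastic each connection is) is itself optimized by gradient descent.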

>> No.10616914

>>10616810

I know my vision isn't likely to happen, but that doesn't mean it isn't possible to develop a static economy.

A lot of the groundwork is there. Consider how effectively we've been able to slow scientific progress in the past few years. STEM has been systematically devastated in universities around the West. An anti-capitalist takeover, combined with the reverse Flynn effect, could prevent development entirely. Never underestimate the power of socialism.

I suspect this program has already been put into effect. The world makes a lot more sense if you assume that civilization and progress are actively under attack. They're going to have to handle China somehow, but I assume they understand social forces much better than we do.

>Le Moore's law forever

I have to disagree here, on more concrete grounds. Every doubling is accompanied by pumping far more money and research into semiconductors. It's fundamentally fallacious to assume people will have access to that much computing power in their basement. CPU speeds have plateaued because we simply don't need that much computing power, plus we're running up against fundamental constraints.

AI simply cannot be made for the first time on a small scale, ESPECIALLY not in a static/regressing world.

>> No.10616931

>>10616914
Making a static economy would go against the interests of every person with power, because development helps them get more power relative to their rivals/enemies. And, obviously, the people with power control the world. Anybody who willingly stagnated and stopped advancing would quickly lose all influence and be replaced by somebody willing to advance. You would pretty much need to change every single aspect of all societies, governments, and economic systems on the planet to stop scientific advancement. Yeah, good luck with that.

As for Moore's law... it doesn't need to continue at current rates for it to be a problem eventually. Even if you think it slows down a lot soon, where will we be in 100 years? In 500 years? In 10,000 years? Eventually an AGI will be made even if you're pessimistic to the point of absurdity, so long as you assume humans aren't wiped out by some other disaster, which would make the whole conversation moot anyway.
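The compounding argument above is just exponential arithmetic. A minimal sketch, where the 5-year doubling period is a deliberately pessimistic assumption of mine (historically it has been closer to 2 years):

```python
# If compute doubles every N years, how much more is available after T years?
# Purely illustrative arithmetic; the doubling period is an assumption.
def growth(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for horizon in (100, 500):
    # even a slow 5-year doubling compounds enormously over long horizons
    print(horizon, f"{growth(horizon, 5):.2e}x")
```

At a 5-year doubling, 100 years gives 2^20 (about a million-fold) and 500 years gives 2^100, which is the sense in which the exact rate stops mattering on long timescales.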

>> No.10617072

>>10611630
It would also make humans completely redundant, and it's the biggest existential threat, assuming the elite don't just kill everyone else, because what need is there for them?