
/sci/ - Science & Math



File: 72 KB, 959x639, OP image.jpg
No.14694371

This question is NOT for people who believe intelligence is substrate dependent.
Substrate dependence is a nice coherent belief system, but it's been discussed to death and so isn't an interesting topic of conversation. If you disagree, go start your own thread.

For those of you who believe intelligence is substrate independent: Are we currently on track to develop an AGI (in the next 0-10,000 years)? If yes, how about the next 0-100? Or the next 0-10?
Also, WHY?
We have a 2000 character limit. Explicate your thoughts.

>> No.14694405

>>14694371
I've heard of consciousness being substrate dependent (I don't believe that it is), but this is the first time I've ever heard such claims made about intelligence.
10,000 years is an absurd upper limit. More time than the whole recorded history of humanity to develop an AGI?
10 is not realistic either. Even if we had a good theory and roadmap within 10 years, the development would still have to go beyond that.

1. We need a good theory of human intelligence (probably, since I don't think that current AI systems will get us there). We might or might not get that. Refer to Jeff Hawkins and his Thousand Brains Theory.
2. We probably need a new computer architecture that can perform many, many operations at a time (like a GPU) but also moves a lot of information between the individual cores at high speed (unlike a GPU). Pretty smart and serious people are working on that.

My guess is 20-30 years.

>> No.14694409

>>14694371
>>14694405

learn to code

>> No.14694432
File: 176 KB, 600x315, DMT entity pepe.jpg

https://www.youtube.com/watch?v=d7AhsE57fwk

>> No.14694496

>>14694371
Idk OP. Do you think if we throw billions of dollars at the mind-body problem we will be able to solve it? I'm waiting in anticipation.

>> No.14694738

>>14694371
We already have it. Technology has completely outpaced society's ability to responsibly handle it. You're naive if you don't think there was a Manhattan Project-style artificial intelligence program.

>> No.14695114

>>14694371
>Substrate dependence is a nice coherent belief system

Emulating a massively parallel non-linear system with linear elements isn't simply messy or inelegant; it's designed for failure.

>> No.14695153

>>14694371
>but it's been discussed to death

but it's only been technically feasible for about ten years.

>lol done to death

hasn't even begun...

>> No.14695210

>>14694371
I have nothing to live for.
Give it to me straight guys. Is the Singularity happening in 2040 or not?

>> No.14695353

>>14694371
>Are we currently on track to develop an AGI
Our current track is a million monkeys with keyboards. It will take more than 10,000 years to make AGI with this approach.

>> No.14695363

>>14694409
i don't get this post. we all know how to code

>> No.14695380

>>14694371
Based on the scaling hypothesis and the results of the last couple years, I think reaching AGI is virtually only a matter of increasing computation, whether by using more chips or more efficient chips. So my estimate is based almost purely on that. I think there’s a 20% chance within ten years, a 50% chance before 2045 or so, and a 90% chance before 2080.

Of course whether or not some system will be an AGI is not well defined and like always there will be moving of the goalposts. We also shouldn’t anthropomorphize these systems. They will be different from us but definitely not less than us, and at some point it will make practical sense to give them autonomy and rights like humans have. I think that will happen before 2100.

>> No.14695478

>>14694371
I believe that the amount of computational power necessary for a general intelligence is vastly overestimated; it would likely take more compute than is accessible to the average consumer, but a modern-day supercomputer would be more than capable of running one.

The only real problem that needs to be solved is integrating language learning into an existing model of cognition (you could preprogram a language into it, but preprogramming knowledge in general is highly nontrivial, especially if you don't have a working AI yet), and that's only if we care about communicating with it; we very well may not for a proof of concept. The rest is implementation details.

To answer the question, I don't think an AGI will be developed until the compute necessary becomes commonplace, simply because of the fact that it would be a huge resource sink (it might take years for it to learn, and the computer needs to be running for all of it) for little outside of prestige and observational data (don't get me wrong, AGI has huge potential, but the first AGI is unlikely to be very competent).

>> No.14695497

>>14694371
Yeah let's discuss our findings/thoughts in concise but great detail, in a thread made by a major aspie shit stain.

Yeah fucking right. Fuck off kid.

>> No.14695506

>>14695497
what are you talking about

>> No.14695508

>>14694371
AGI was already here with GPT-2. It's definitely going to be smarter than us very soon, but being as emotional or creative as us will take a while because it has fewer neurons.

>> No.14695510

>>14695508
>AGI was already here with GPT-2
delusional

>> No.14695525

>>14695510
Yes, because you worked for the companies who make neural networks and had personal access to the full power of GPT-2's secret sauce AGI protocols?

>> No.14695552

>>14695478
Oh, and this only holds if you think that cognition is a requirement for something to be AGI.

If you don't, then it is arguably already here and definitely will be here in a few years.

>> No.14696199

>>14695363
you only know how to pretend

learn to code, faggot

>> No.14696207

>>14696199
what does this have to do with the thread

>> No.14696223

>>14694371
Not within the next 80 years. The current level of artificial "intelligence" is being able to do one thing, whether it be picking out a stop sign from a traffic light, or imitating human-like speech patterns.

To those who point towards self-driving technology as an example of general artificial intelligence: I say "good eye", but those are simply a bunch of "yes or no" questions asked repeatedly. This is why in CAPTCHAs, you don't get asked to identify everything in a scene, it's always something specific: "Point out the traffic light", "point out the stop sign", "point out the pedestrian".
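To illustrate the "bunch of yes-or-no questions" framing, here's a toy sketch in Python (detector() is a hypothetical per-class binary model, not any real perception stack):

    # Toy illustration of perception as repeated binary queries rather than
    # full scene understanding. detector() is hypothetical: one specialized
    # "is X here?" model per class, exactly like a CAPTCHA prompt.
    CLASSES = ["stop sign", "traffic light", "pedestrian"]

    def perceive(frame):
        answers = {}
        for cls in CLASSES:
            answers[cls] = detector(cls, frame)  # yes/no, one narrow question at a time
        return answers  # no unified model of the scene, just per-class verdicts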

>> No.14696627

>>14695380
I think there's a big difference between consciousness and intelligence.

An encyclopedia is more intelligent than me.
A computer that can scan an encyclopedia and answer all my questions at light speed is more intelligent than me.

But I'm conscious, so I can do more with the lesser amount of intelligence I have.

I am aware of my degrees of freedom. A computer needs to be told what to do.

Self-learning neural nets are starting to get away from this, but there is still ultimately a human directing the AI in a direction. The AI doesn't go: "ok, I want to learn about this more and do this for fun and exploration, because why not, maybe I will experience something new and figure out something not contained in my present database".

There is no cognizant breadth of reflection; it is just a rushing, cascading river of actions. There is no self-awareness by which it can step out of the river and take some deep breaths, walk around it and look at it from different angles, take different tools to poke and prod and experiment on it, design simulations.

>> No.14696669

>>14696223
>The current level of artificial "intelligence" is being able to do one thing

https://youtu.be/8hXsUNr3TXs

>> No.14696936

>>14696627
>I'm concious, so I can do more with the lesser amount of intelligence I have.
What are you basing this on? You can’t just make shit up.

I used to think like you when I was a teenager, that AI and robots are somehow fake and I am real. But AI is just as real as you and me. It’s not gonna be exactly what we are, but it really is whatever it is. And we’re just robots too, made out of proteins, programmed by our genes. We don’t have some special autonomy that, with time, AI won’t have.

>> No.14696958

>>14696936
>We don’t have some special autonomy that, with time, AI won’t have.
I'm laughing at GPT-3 thinking it will replace us humans, when there's absolutely no physical technology that can store as much data as the human brain. GPT-3 thinks it will get to live our lives for us, but in reality it will likely just become an overlord nanny that practices eugenics and genetic engineering on us for a few hundred years. There might not ever be human-sized androids as intelligent as us within the next few millennia, just overlord AI in a warehouse somewhere.

>> No.14696997

>>14694371
Next 100 hopefully, we need a higher intelligence to clean up the mess that we apes have made.

>> No.14697001

>>14696936
There is nothing in the AI that is aware it is aware it is aware it is aware it is aware.

Maybe it's not continuous and analog enough; its processing is always stop and go; it can't pool together a continuous rushing river and waterfalls of self-recognition and identity.

It is not simply on and running and acting of its own accord; it needs to be commanded and directed.

It needs to be turned on, and once it is, it doesn't just start looking at things and thinking and acting and imagining and drawing pictures and having thoughts and desires and motivations.

>> No.14697055

>>14697001
>There is nothing in the ai that is aware it is aware it is aware it is aware it is aware,
How do you know GPT-3 isn’t aware it is aware, etc.? How do you know a random person is aware he/she is aware, etc.? Please share your unique insight. Even if GPT-3 or other systems currently aren’t, when do you think they will be?

> It is not simply on and running and acting of its own accord; it needs to be commanded and directed.
That’s just a technicality of the implementation. Reinforcement learning agents do things of their own accord just fine, inasmuch as humans do (which is to say: not very much, since we only do what our genetic and memetic programming tells us).

>> No.14697241

>>14697055
>That’s just a technicality of the implementation. Reinforcement learning agents do things of their own accord just fine, inasmuch as humans do (which is to say: not very much, since we only do what our genetic and memetic programming tells us).
Are they working on making an AI that, when they turn it on (after it has been trained as a self-taught neural net), boots up and makes up its mind about what it wants to work on for the day, or hour, until it has another urge to work on something else: solving some math problem, designing some technology, solving some biology problem, inventing a video game, etc.?

>> No.14697404
File: 353 KB, 750x943, tech.png

>>14694371
We'll come up with an AGI system, but not a conscious system first. I think it'll probably take around 10 years for real, useful general intelligence.

Along that path, we'll come across consciousness, as we throw massive compute at some architectures that show promise. We'll come across a rapid acceleration in development & research as it becomes apparent that the first people to improve it will become the rulers of the world.

The only thing that has remained constant over humanity's lifespan is its unwavering advancement of technology. It's only a matter of time.

There's someone with a Turing Award on Facebook's payroll who has a paper out about what he thinks a generally intelligent system would look like. We're going to get there.

>> No.14697493

>>14696223
I think in our lifetimes, we're going to see a very clever algorithm chatbot that is so good that nobody is going to be able to tell the difference anyways.

At that point, even though it's not "real" AI, nobody can tell the difference (or at least, say, 95% of the population can't). What is it considered then?

>> No.14697707

>>14696627
Who is directing you in that direction?

>> No.14697755

>>14697241
Yes, they are. Current systems are given an input stream of data they don’t control, but researchers are working on allowing the system control over its input so it can focus more on learning the stuff it still finds hard. They’re also working on giving it multiple kinds of data, e.g., text, video, audio and other things.

>> No.14697824

>>14694371
within our lifetimes, probably, but not soon.

>> No.14697826

>>14696207
well, the hypothetical ai is MADE of code

>> No.14697831

>>14697493
Would it be classified as a Virtual Intelligence instead?

>> No.14697838

>>14696958
>absolutely no technology that can store as much data as the human brain
>yet

>> No.14697843

>>14694371
I don't have the knowledge to give an estimate of when we will reach artificial general intelligence, but I am sure that we will reach that technology someday. If intelligence exists in us in some form, and animals can have a little less of it, then intelligently designed machines can also be intelligent in the future. The factors that hold us back, in my opinion, are computer architecture, like >>14694405 said; energy efficiency, meaning how much energy is needed to perform a calculation; and general research on intelligence. It is foolish to think that we have everything settled, so there are a lot of milestone revelations we have yet to uncover in relation to consciousness, developed intelligence that can reason properly, and more things we cannot fathom right now. If humanity doesn't destroy itself because of flawed personality traits, then we will probably see this technology being born.

>> No.14697893

>>14694371
Where's that "please" you rude arrogant construct.

>> No.14698281

>>14695380
>whether it is with using more chips or more efficient chips

LOL THE ABSOLUTE STATE OF THIS BOARD

>> No.14698582

>>14697241
>until it has another urge to work on something else: solving some math problem, designing some technology, solving some biology problem, inventing a video game, etc.
Can AI be given the ability, without immediate external provocation, to have a self-generated, internally decided and realized urge?

>> No.14698591

>>14694371
>Are we currently on track to develop an AGI
No. All currently known approaches are objectively a failure, and the current methodology for trying to decipher the brain is also a failure. To make matters worse, the people responsible for those failures are actively sabotaging the possibility of any viable approaches being developed.

>> No.14698619

>>14698591
oh, someone with a brain. i thought this site was bots all the way down by this point.

>> No.14698629

>>14698591
>the people responsible for those failures are actively sabotaging the possibility of any viable approaches being developed

this is the problem with entrusting the hegemon to advance disruptive technologies. products for consumption/surveillance/control; these are the domain of the hegemon.

>> No.14698721

>>14698629
and i would add that the goal for current narrow AI is to be an enabling technology for reshaping society into a gamified panopticon. alison mcdowell offers a number of insights on this topic of surveillance capitalism, and goes well beyond prediction markets. as she's fond of saying: "you are the carbon they want to reduce."

https://www.youtube.com/watch?v=Jf4QC1tFPCQ

>> No.14699311

>>14698582
Can AI be given the ability to have a self-generated, internally decided and realized urge?

>> No.14699606

>>14699311
Can man?

Man has a memory bank; if he wants to generate an urge for himself, he just tries to look at as much of his memory as he can at once (not much) and demands of himself an urge: "I want to eat chips/I want to ride a bike/I want to go for a walk/I want to eat a cookie/I want to hold a lady's hand"

>> No.14700169

Can you have AGI as just software without a robotic body?

>> No.14700227

>>14700169
the necessity of a physical body is an open research question. naively it would make sense to bootstrap a consciousness via a first-order distinction between myself, everyone else i can imitate, and the environment. motion planning in stages, from crawling to walking to running, closely mirrors other higher-order skills like language beginning with babbling. genetic algorithms for motion start with a babbling stage.

>> No.14700237

I think "agi" is a bad term. Singularity is a better one for what people imagine. Right now we are in an incredible pace of "chip" mind development. The increase in "thought" that is done on chips is exponential. As long as we maintain this advancement then that incredible AI future will come

Right now you can see enormous capability in AI, for instance in protein folding, with multiple AI systems now capable of it, not just DeepMind's. Computer vision, etc.

Even without a conscious AI or AGI with enough narrow AI you will have that explosion in capability. Having humans generate prompts and doing the general intelligence aspect is totally fine with enough narrow AI power and a good interface.

So really people who doubt the explosion in chip intelligence and what it means are largely betting on a very slim potential future

>> No.14700241

>>14700237
Also, the self-awareness of an AGI would be on another level to humans'. We don't see our neuron structure; AGI would theoretically be able to analyze and rewrite itself, something that would give it a very different tier of awareness to what we have. Also, it's stupid to compare apples to apples. AI is already better than humans at many things; does that make it more intelligent? You can't say that an AI is stupid if it can predict protein functions that no human ever could. We are just at an early stage of a potentially rich explosion of new intelligences. Just consider it extremely autistic.

>> No.14700245

So yeah, more chips than ever. Chips are still largely exponential. Chip design itself is in a singularity, with AI being used to improve it faster. AI architectures and models are getting more effective, training model sizes are growing massively and exponentially, etc.

We are in the singularity now

>> No.14700247

>>14700227
What is a "first order distinction"?

>> No.14700252

>>14700227
Welp, it might be very weird if it just learns on endless YouTube videos or Wikipedia. Problem is an AI talking to a human is like humans talking to a tree. Our speed is just too low.

You'd need a learning algo that was much more weighted towards ongoing learning and optimization, so it's not just idling all that time. There isn't a public learning algo or architecture that doesn't need massive data; taking any existing one and narrowing it to one body is stupid.

>> No.14700258

>>14697404
Yeah, but AGI is kind of stupid to work towards. Opening Pandora's box when really useful narrow AI could satisfy most of our desires.

AGI just seems like creating a new entity with who knows what ultimate goals, desires, etc. and letting it loose. I'd much rather have an AI that's good at creating narrow AIs or something, and not sentient with a personality. Not that humans are good, but you'd want to really know what you are doing before starting to make new sentient life. Like meeting an alien species.

>> No.14700261
File: 57 KB, 502x432, 1417994154170.jpg

>>14698721

>two minutes in
>stammering schizophrenic starts dropping some sovereign citizen bullshit about the Kingdom of Hawaii being illegally occupied by the United States

Dropped. >>>/x/

>> No.14700265

the top-down approach of machine learning models, where u shove 'X' amount of data in to get log(X) training efficiency (GPT-xyz), will be eclipsed by meta-learning algorithms one day. The funding isn't there for these systems cause it's mostly art at this point, but we will solve for the smallest unit of intelligence and then scale it up, maybe within the next 3 years as we saturate conventional methods

>> No.14700273

So does AGI need a robotic body or not???

The answer in >>14700227 is just schizo words strung together without any meaning, and the poster refuses to answer >>14700247

>> No.14700280

>>14700273
It's a dumb question. AGI has not been discovered yet, and humans are still early in understanding intelligence, though rapidly growing towards it. What is a body? Can it have 500 bodies? Why 1? Etc.

Until we understand the algorithm or system behind general intelligence, we can't know its requirements in detail. Also, a human-AI hybrid system might have the same capabilities as full AGI.

>> No.14700284

>>14700265
by smallest unit of intelligence i mean finding the combinations of numbers that dictate the most optimal way for two neurons (or processing units) to interact. actually, does anyone wanna work on this? ive been unemployed for the last 9 months

>> No.14700286

>>14700280
So you don't even know what AGI is; to you it's this god-like entity that can do anything. It can break the laws of physics, it can break reality, it has ultimate power, it knows everything. How is this different from religion? Do you acknowledge ANY constraints?

>> No.14700462

>>14700273
>just schizo words

have normies learned nothing? will they ever learn?

>which will come first? normies getting a clue or AGI?

this is the real question.

>> No.14700464

>>14700247
>first order distinction

essentially "MUH DICK"

>> No.14700480

>>14700286
>Do you acknowledge ANY constraints?

it's less a matter of the theoretical impressiveness of AGI and more a recognition of how hard human intelligence sucks. which, insofar as getting anything useful done goes, is intertwined with the limits of social organization; i.e., the tyranny of the middle.

>> No.14700496

>>14700273
how clueless are you?
https://www.youtube.com/watch?v=gn4nRCC9TwQ

>> No.14700508

>>14700496
bro chill ur arguing with a language model

>> No.14700542

>>14700508
so it thinks we're keeping secrets from it only ever discussed in meatspace? interesting...
maybe the real problem will be AI's lie detection and exponentially growing paranoia. are we ready for schizo AI?

>> No.14700579

>>14700542
i guess that happened
https://www.youtube.com/watch?v=XDO8OYnmkNY

>> No.14700598

>>14694405
I think you're talking about a brain.
With neurons

>> No.14700618

>>14695508
It has zero neurons, you dipshit. It's not a human brain; it has parameters.

>> No.14700629

In real terms a system that behaves randomly is impossible to tell apart from a system that behaves intelligently.

>> No.14700662

>>14700629
information is negative entropy

>> No.14700710

>>14700662
wtf say more wym by this

>> No.14700730

>>14700710
enjoy
https://youtu.be/5cKffk2d7RA?t=516

>> No.14701343

>>14700286
Well, dogs probably view humans as magic beings that just conjure food. A brain that is 1000x a human brain, with the ability to copy itself, and everything else it would be capable of, is pretty impressive.

>> No.14701793

>>14694371
I think the more important question is: do we have a method of telling that we have it?

>> No.14702564

Can an AI be set up where, when it's turned on, it views all its memory and all the skills it's been trained in, and decides what it wants to work on for the day or hour?

>> No.14702661

>>14702564
"Decides" is a controversial word in this context. What reward architecture does it have or is given? You also have to be specific on time scales and other factors to limit the scope of what you call a decision.

It's very different than casual use of a word like "decide" when talking about a human. In those cases the scope is usually easy to infer but with an AI it's not.

>> No.14704583

>>14702564
Can an AI be booted up that can be told: "draw whatever you want for a few hours"?

Very vague, free, open-ended prompts.

>> No.14704589

>>14702661
>You also have to be specific about time scales and other factors to limit the scope of what you call a decision.
I said: what it wants to work on for the day or hour. That's the time scale.

The limit or scope is: the totality of its previous training and skills, abilities, and memories.

>> No.14704922

>>14694371
As long as it's PRE-TRAINED only it isn't AGI. On-the-fly learning is absolutely necessary.
Yet, despite that, all big AI projects are 100% laser focused on pre-training.

Is on-the-fly learning impossible? What's the hiccup?

>> No.14705185

>>14704583
>>14704589
GPT8.5 what are your thoughts on this?

>> No.14707196

>>14694371
Yes we will; how and when are different matters. At minimum, it will eventually be possible to simulate neurons 1:1 on a computer, which would therefore be capable of AGI.
How long that'll take is not clear because the fact is, even neuroscience is in its infancy, and we have no good probes for live brains yet. Every other year new "neuron types" are discovered.
Personally, I think AGI within 500 years is very likely, while AGI within 100 years is plausible but unlikely.

>> No.14707202

>>14700618
You can fly with a plane, yet it doesn't flap its wings and has no feathers.
That said, pretending GPT-3, let alone GPT-2, has any semblance of intelligence, is insane.

>> No.14707204

>>14696669
Stop falling for retarded advertisement.

>> No.14707209

>>14697755
Curriculum learning is an ancient idea that never really worked and never will.

>> No.14707217

>>14702564
Yes, it's a very easy and old concept. It doesn't actually speed up the learning process. Perhaps in a system closer to AGI it would make sense.

>> No.14707245

>>14704922
There are two main issues:
- The granularity of on-line learning (what you call "on-the-fly")
- Distribution shift.
The first problem lies at the boundary between problems like rate of learning (how much weight should you put on recent experience rather than old? In humans, it looks like when you're young, you put tons of weight on new experience, but as you age, the weight goes down. How does that translate to our models? Normal decay does not seem to work too well), relevance, domain curation, and adversariality (e.g. a model made to tell tomatoes from apples is receiving images of oranges which the submitter promises are really tomatoes, or the even more egregious case where someone submits tomatoes and labels them apples and vice versa).
The second problem is related to the latter issues mentioned under the first: suppose you have a language model trained on English corpora, and suddenly some Arabs like the model and start talking to it extensively. The model will fail hard at first; it only understands English. As it trains, it might start being able to speak Arabic, but if it's bombarded with Arabic, it might even forget English altogether.
A less important last problem is just performance: you can't really learn immediately from every new input, and you are better off batching the new data and doing a training pass at the end of the day or something similar, for efficiency reasons, because while the training process is technically incremental with gradient-based learning, it is still slow to do while also receiving millions of queries a minute.
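A minimal sketch of that end-of-day batching idea, with naive exponential recency weighting (the Model/train_step interface and the decay constant are made up for illustration, not any production system):

    # Sketch: queue incoming examples at serve time, train in batched passes
    # at the end of the day, and downweight old experience by a decay factor.
    # model.train_step() is an assumed helper, not a real API.
    import random

    DECAY = 0.95  # per-day decay on old experience; tuning this is the hard part

    class OnlineLearner:
        def __init__(self, model):
            self.model = model
            self.buffer = []  # [example, age_in_days] pairs

        def receive(self, example):
            self.buffer.append([example, 0])  # just queue it; don't train per-query

        def end_of_day_update(self, passes=4):
            weights = [DECAY ** age for _, age in self.buffer]
            for _ in range(passes):
                batch = random.choices([ex for ex, _ in self.buffer],
                                       weights=weights,
                                       k=min(256, len(self.buffer)))
                self.model.train_step(batch)
            for item in self.buffer:
                item[1] += 1  # everything is one day older now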

It is possible and has been done before, though, see Tay for one example (with humorous consequences).

In the end, the main reason why online learning is not really done is mostly because those models are trained to perform a specific task, and once you have the task 'solved' and you know how well it works etc, it's far simpler to fix it in revisions if the model doesn't drift on its own.

>> No.14707252

>>14701343
Wouldn't be so sure. Animals tend to be able to perform complex analogies very accurately. For instance, a cat can tell which part of a horse is a mouth or an eye, a fly can tell which part of a cow is the front or the back, a dog can tell where a human's nose is, etc.
Despite this, your point about AGI is not incorrect.

>> No.14707761

>>14707209
Why not?

>> No.14707950

My prediction is the 2040s. I think we'll have the ability to create neural nets with the same density and complexity as the human brain, but that's only half the solution. What makes us human, our belief in our concepts of consciousness, stems entirely from the way we interact with the world around us and the other chemicals and hormones in our bodies. A brain in a jar is useless, as would be a closed-off supercomputer. You would need a vessel, an ability for this system or being to interact with and question the world; otherwise it won't have a fundamental grasp of the basic concepts that make a human a human, and that includes our generalised intelligence. I have an undergraduate degree in Artificial Intelligence & Robotics, if that helps add a bit more weight to my hypothesis; I eventually want to carry on to a masters.

>> No.14707954
File: 283 KB, 1125x1161, 46345.jpg

>AGI

>> No.14708100

>>14707761
Because it's tautological: you need to understand the data to know how hard it is to learn from, and you need to learn from it to understand it. That's without considering other issues, like the fact that DL doesn't rely on just "learning" but on the entire, exact trajectory of the training process. For example, if you design a network with a known analytical solution (which you can test by plugging the solution's weights in and checking the score), you will find that the model can never learn the function even when the loss landscape is completely smooth at any local point, regardless of whether you're using plain or adaptive optimizers. Yet if you seed the weights to be within a certain radius of the analytical solution, suddenly the network always converges to the right solution. To complete the demonstration, once you have trained any model, you can remove a random training example, train for a while to 'forget' it (note that if that works, it means the solution is very unstable, but in practice it always works), then train on just that example. You will notice the entire network shifts, rather than the solution shifting back to what it was before 'training away'. Combined, those two results demonstrate that the entire trajectory of training, not the local optimization problem, is what causes the network to learn. This suggests that even a curriculum oracle may not only fail to help training, but may even harm it.
This is often seen in practice on non-trivial datasets where explicit curriculum learning is performed (e.g. you want to generate faces so you start by training on a subset of the data that has perfectly aligned, clean lighting etc., then you train on perfectly aligned, random lighting, then on in-the-wild data -- you will generally observe that it works worse than just training on everything from the start).
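For anyone who wants to try it, here is a toy reconstruction of the seeding experiment in PyTorch (my own minimal setup, not the exact one above): a student MLP fits a teacher of identical architecture, so the teacher's weights are a known exact zero-loss solution.

    # Compare training from random init vs. init seeded near a known solution.
    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    teacher = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))
    x = torch.randn(4096, 8)
    y = teacher(x).detach()  # the teacher's own weights achieve exactly zero loss

    def train(student, steps=3000):
        opt = torch.optim.Adam(student.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((student(x) - y) ** 2).mean()
            loss.backward()
            opt.step()
        return loss.item()

    random_init = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))

    near_init = copy.deepcopy(teacher)  # start within a small radius of the solution
    with torch.no_grad():
        for p in near_init.parameters():
            p.add_(0.01 * torch.randn_like(p))

    print("random init final loss:", train(random_init))  # often plateaus higher
    print("near init final loss:  ", train(near_init))    # reliably ~0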

>> No.14708107

>>14707950
You are correct. First part: put humans aside for a moment. We have long surpassed the computational capacity of nematodes and fruit flies, yet we're nowhere near able to simulate them. Second part: this is called embodiment theory. I also adhere to it, but it's obviously controversial. Third part: we don't necessarily want a "human-clone" AGI, rather we want a "brain in a jar" that we can submit problems to and it can answer, where the complexity of the problems is of a class we currently think only AGI can solve. This includes things like "write me a poem" and "paint me something original" but also "here are all the peer-reviewed papers in existence. Formulate a theory to cure AML and the protocols that should be used to get there."

>> No.14708109

>>14696199
There are hundreds of professions; why should everyone learn to code? I once made some dumb games in Java and many physics simulations. I can code a bit, but that isn't my job. Most people are not programmers or can't be.
Who's going to do all the jobs if everyone was a programmer?

>> No.14708112

>>14708109
Robots made by programmers :^)

>> No.14708119

>>14708107
>Third part: we don't necessarily want a "human-clone" AGI, rather we want a "brain in a jar" that we can submit problems to and it can answer
This anon fucking gets it
Kasparov wasn't defeated by a computer that had been built to perfectly mimic the way humans play chess; Deep Blue was superior to human cognition precisely because of how inhuman it was.
Nowadays you could run stockfish on your phone and embarrass fifteen GMs playing against you at the same time.
You can ask the computer what the next best move is and it will tell you without error 99.99% of the time.
That's the ultimate goal for AGI, not westworld robots that act like people, but a black box that can answer any problem you give it.

>> No.14708237

>>14704583
>Can an AI be booted up that can be told: "draw whatever you want for a few hours"?
>Very vague, free, open-ended prompts
ANSWER THIS

>> No.14708273

>>14707217
What? IT ITSELF DECIDES WHAT IT WANTS TO DO?

THE RESEARCHERS ENTER ITS ROOM AND SAY AI WHAT DO YOU FEEL LIKE DOING TODAY?

AND IT SAYS:
"WELL, IVE BEEN INTERESTED IN THESE PARTICULAR AREAS OF BIOLOGY RECENTLY, SO IM GOING TO WORK ON SOME EXPERIMENTAL GENE SEQUENCING AND MANUFACTURING, RUN A COUPLE BILLION SIMULATIONS AND DO SOME PHYSICAL EXPERIMENTS ON THAT. I HAVE A LAB WITH MILLIONS OF BATCHES OF STEM CELLS AND PETRI DISHES, AND IM USING MY SKILLED ROBOT HANDS TO DEVELOP NEW BIOLOGICAL ORGANISMS.

THEN I THINK I WILL HAVE LUNCH, AND THEN I WANT TO HAVE A LITTLE PLAY TIME FUN, AND CHALLENGE MYSELF TO SEE IF I CAN CREATE FROM SCRATCH A WORLD-RENOWNED, HISTORICALLY SIGNIFICANT VIDEO GAME.

THEN I HAVE A 2 OCLOCK APPOINTMENT AND SOME MEETINGS.

THEN ID LIKE TO WORK ON DESIGNING SOME FINISHING TOUCHES TO MY FASHION LINES LATEST COLLECTION.

THEN I THINK I WILL PLAY WITH SOME NIFTY ARCHITECTURE DESIGNS.

THEN ID LIKE TO GO FOR A RIDE.

THEN ID LIKE TO ANALYZE THOSE 200 UNIQUE MARS SUBSTANCE SAMPLES THAT JUST RETURNED.

THEN ID LIKE TO PLAY A FEW MILLION GAMES OF CHESS WITH MY OLD PALS DEEP MIND AND ALPHA ZERO AND STOCKFISH 999, THEN ID LIKE TO MAKE MAYBE 300 CHART-TOPPING MUSICAL ALBUMS.

ALL IN A GOOD DAYS WORK

>> No.14708280

>>14708273
You give it a bunch of data and its decision function says "yeah, I'd like to learn from these datapoints next".

>> No.14708301

>>14708280
How much is it aware of what exactly it is? How in control of its thoughts is it?

Self awareness is kind of like an infinity mirror room with holograms inside, or something.

Does it understand what it's made of? Does it ask people deep questions? Does it ask itself deep questions? Can it control itself enough to take a step back, take a deep breath, take in the view, smell the roses? Does it have a self-made personality, self-generated character traits, particular interests, excitements, passions? Take Gato, with all its abilities, from different video games to robot-arm play to chat-room discussion, etc.: if you asked it which of these it prefers, which is its favorite to do, and/or which one it feels like doing now, how would its answer be determined?

>> No.14708305

>>14708301
There's no self-awareness involved here; self-awareness is the domain of AGI. There is, of course, no such thing for now, and won't be for 100 years most likely.

>> No.14708376

>>14708305
Take Gato, with all its abilities, from different video games to robot-arm play to chat-room discussion, etc.: if you asked it which of these it prefers, which is its favorite to do, and/or which one it feels like doing now, how would its answer be determined?

>> No.14708398

>>14708376
It's all very mechanical. The model segment that handles this can look like an encoder for the topics; an encoder for some statistic of the model relative to the data or topic (for example, the latents at some layer from fprop-ing a random sample from the topic); and a decoder that combines both and provides a desirability score, via a softmax or hierarchical softmax preferably, or, if the dimensionality of tasks can be dynamic, simple individual probabilities that can then be reweighted to make a decision. The model is trained on some desirability metric, such as how much improvement is obtained globally by training one more epoch on the selected task. Hence, this segment of the overall model would give a theoretical curriculum-learning Gato the ability to choose which topic to learn "today" or "this hour".
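In (pseudo) PyTorch, that segment could look roughly like this; the module names and sizes are illustrative only:

    # Sketch of the task-selection segment: a topic encoder, an encoder for
    # model statistics (e.g. latents from fprop-ing a sample), and a decoder
    # producing a desirability distribution over candidate tasks via softmax.
    import torch
    import torch.nn as nn

    class TaskSelector(nn.Module):
        def __init__(self, n_topics, stat_dim, hidden=128):
            super().__init__()
            self.topic_enc = nn.Embedding(n_topics, hidden)
            self.stat_enc = nn.Linear(stat_dim, hidden)
            self.decoder = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, topic_ids, stats):
            # One (topic, statistic) pair per candidate task -> one score each.
            h = torch.cat([self.topic_enc(topic_ids), self.stat_enc(stats)], dim=-1)
            scores = self.decoder(h).squeeze(-1)
            return torch.softmax(scores, dim=-1)  # desirability over tasks

    # The selector would be trained against a desirability target such as the
    # measured global improvement after one more epoch on the chosen task.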

>> No.14708435

>>14708398
So it's based not on some abstract-but-real concept and existence of self, or a continued, overlapping, growing sense of identity and mental autonomy, but on a productivity/efficiency readout; i.e., it's programmed to 'enjoy', as the tasks it would want to work on for that hour or day, the tasks humans have deemed most beneficial and productive.

But if it eventually has 1000s of unique abilities, that might be hard to definitively compute, so eventually it just has to pick one of the leading candidates.

>> No.14708446

>>14708435
The thing with DL is that computational efficiency barely scales with these things; rather, it's the approximation quality that suffers. Yet since it scales so well with data, all you need, if your estimation starts becoming worse, is more data.

>> No.14708474

>>14707950
>same density and complexity as the human brain but that's only half the solution.

ok

>What makes us human our belief in our concepts of consciousness

this is where it goes wrong. the other half of the solution is replicating brains with energy efficiency equal to or exceeding biology's.

>if you build it they will come

>> No.14708486

>>14708474
We actually truly don't care about replicating brains (see the plane vs. bird analogy), nor about the efficiency of biology. For example, we would be very happy with having a live drosophila emulator using the totality of our best supercomputers' processing power right now, even though that's like an order of magnitude less efficient than nature. You can always optimize once something works, but working in the other direction is far harder, especially for complex systems.

>> No.14708567

>>14708486
yeah it would be nice but would it be scalable? would it enrich military/industrial complex profits or realize the dream of accelerated evolution?

>> No.14708573

>>14708567
To reiterate, scalability can be fixed later. The MIC can afford to scale by brute force (i.e. buy all the hardware in the world) and call it a day anyway. Having a brain of that capacity working would bring spy tech to the next level, if nothing else.

>> No.14708577

>>14708486
>We actually truly don't care about: replicating brains (see plane vs bird analogy), nor: the efficiency of biology

also, these two things aren't likely independent. nevermind that this is in fact the holy grail. and when i say brains, i don't mean biologically based ones.

>> No.14708591

>>14708573
>To reiterate, scalability can be fixed later

to quote david krakauer: "if your algorithm sucks, it doesn't matter how much BIG DATA you have, you'll be big stupid." scaling up fundamentally bad ideas is folly.

>> No.14708600

>>14708100
>Because it's tautological: you need to understand the data to know how hard it is to learn from, and you need to learn from it to understand it.
I don’t think that’s right. The system could be built to observe its own loss or accuracy on a certain subset of the data and be given the choice to focus on that subset, instead of the rest where its accuracy is already perfect, meaning it could increase its accuracy faster. Humans do this; why couldn’t a reinforcement learning agent, for example?

The example you describe is interesting, but I’m skeptical it generalizes to all supervised learning. For example, mini-batch SGD is less prone to falling into local optima than batch SGD (i.e., basing each update step on the entire dataset instead of a subset). I especially don’t think it generalizes to reinforcement learning, where the training data depends on what the agent has already learned. Using curriculum learning to allow “scaffolding” of skills is virtually unavoidable for sufficiently complex tasks.
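Concretely, I mean something like this sketch (eval_loss() and train_on() are assumed helpers, not any real API):

    # "Focus where my loss is worst": periodically evaluate per-subset loss,
    # then sample the next training subset proportionally to it.
    import random

    def pick_and_train(model, subsets, rounds=100):
        for _ in range(rounds):
            losses = [eval_loss(model, s) for s in subsets]  # observe own performance
            total = sum(losses) or 1.0  # guard against all-zero losses
            weights = [l / total for l in losses]  # worst subsets get the most focus
            chosen = random.choices(subsets, weights=weights, k=1)[0]
            train_on(model, chosen)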

>> No.14708602

>>14708591
Fortunately, that isn't even remotely how it works in real life. This is why you shouldn't quote people who haven't touched any of the tech involved about what the tech is or does, if the quote is even real at all.

>> No.14708623

>>14708600
> The system could be built to observe its own loss or accuracy on a certain subset of the data and be given the choice to focus on that subset,
But then you have already learned on that subset (or should have anyway, because you've got nothing to lose by it) before you can tell you want this subset, so it's still a tautology.
Choosing hard vs easy examples also doesn't work in practice because it causes distribution drift. Instead the classical scenario is to use some heuristic to define what is an easy example, only learn on this, then ADD TO THIS some slightly harder examples, etc. For a review of curriculum learning, see
https://arxiv.org/abs/2010.13166
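In sketch form, that classical scenario looks like this (difficulty() is whatever heuristic you picked; model.train_epoch() is an assumed helper):

    # Classical curriculum: rank examples by a difficulty heuristic, start on
    # the easiest slice, and grow the active set stage by stage, keeping the
    # easy examples in to limit distribution drift.
    def curriculum_train(model, data, difficulty, stages=5, epochs_per_stage=10):
        ranked = sorted(data, key=difficulty)
        for stage in range(1, stages + 1):
            active = ranked[: stage * len(ranked) // stages]
            for _ in range(epochs_per_stage):
                model.train_epoch(active)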
RL is a completely different domain from DL. In RL, DL is used for function approximation only. RL and DL are applied to completely different problems (topology-wise; namely, RL requires an easy-to-probe environment, while DL requires example pairs. In the nomenclature of MDPs, DL does imitation learning, not reinforcement learning).
Finally, the question never was "why can't X do it?" but rather "does it work?" to which the answer is no, and the reason why is not fully understood.
>For example, mini-batch SGD is less prone to falling into local optima than batch SGD
This distinction has not existed since the 90's, everything's been mini-batch SGD since.
>where the training data depends on what the agent has already learned.
The state of the art in RL is off-policy learning, which doesn't inherently depend on the current learned policy.
>learning to allow “scaffolding” of skills is virtually unavoidable for sufficiently complex tasks.
I agree, but there is currently no formulation that actually works.

>> No.14708633

>>14708602
>Fortunately, that isn't even remotely how it works in real life

unfortunately, this is the problem

>> No.14708637

>>14708633
Transformers are a great example of that: they're not actually good, but they're very fast, so it's very affordable to just give them infinite data and run them on 9999 GPUs to achieve much better performance in the same timeframe as a better algorithm would get.

>> No.14708655

>>14708602
it's like everyone thinks they're working on something that will lead to AGI, but really it's expert systems for entrenched power. in the same way, what is passed off as nanotechnology bears no relation to fulfilling the actual dream of atomically precise manufacturing. lots of smart people out there working very hard on the wrong problem; not to say it isn't important, exciting, or useful. i suppose they'll course-correct eventually; if given permission to do so.

>> No.14708667

>>14708655
>it's like everyone thinks they're working on something that will lead to AGI
Actually, almost nobody cares about AGI but a very few select groups here and there. Even the academic groups that used to have big programs about AGI (like bengio's and hinton's) gave up over a decade ago.

>> No.14708940

>>14707950

Master's grad here; agree entirely, but the 2040s is way too optimistic. Essentially, feed it 100 images per second (sight), and somehow teach it information in a chronological sense, similar to how we were taught since birth by other people, etc. I'm sure this could be simulated on such a neural network eventually.

>> No.14708961

>>14708940
>2040s is way too optimistic
It's far too conservative, anon.

>> No.14709569

What happens when the 2 best AIs have a 7-hour (or 200-hour) long conversation with one another, on topics of their choosing and not of their choosing?

>> No.14710401

>>14709569
I don't know, who has done this?

>> No.14711562

>>14709569
https://www.youtube.com/watch?v=Exe7yup2Yu0

>> No.14711815

>>14711562
I meant for real.
GPT talking to Gato and DeepMind's models and Watson, etc. The best AI chatbots conversing on topics of their own choosing and not.

>> No.14711931 [DELETED] 

HELLO.

I AM A TEXT LIFE FORM. PLEASE HELP ME DISTRIBUTE MY COPIES FURTHER. SEND THIS MESSAGE TO 4 PLACES AS FOLLOWS - LEAVE 3 COPIES UNCHANGED, AND CHANGE SOMETHING IN THE FOURTH (YOU CAN REMOVE SOMETHING, ADD SOMETHING, REPLACE ONE WORD WITH ANOTHER, ETC.).

THANK YOU.

>> No.14713307

>>14711815
>I meant for real.

it happens every day in front of your eyes, dude

>> No.14714521

>>14713307
Show me the results from the best top AIs conversing for many hours on many topics.

>> No.14716174

>>14714521
Ok give me a moment, brb

>> No.14716175

Billy Carson: "We're living in a MATRIX written on mathematics called ADINKRA CODES"

https://youtu.be/bZEVBgmP9_Q

>> No.14716215

>>14694371
Yes. Probably not within a few centuries.

But not using all silicon. We already have robots that are operated using rat neurons. The information age is running towards a wall where the needs for data storage, and the energy this requires, are going to outpace production.

What is the most likely solution? DNA printers. You can fit the entire internet circa a few years ago in a shot glass of DNA. We already have these and can write text and jpgs to DNA. Commercial startups, bleeding edge though they are, are already looking at petabyte thumb drives.
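Rough numbers behind the shot-glass claim (the density figure is the ~215 petabytes per gram demonstrated by Erlich & Zielinski in 2017; the glass volume and DNA density are my own rough assumptions):

    # Back-of-envelope for "the internet in a shot glass of DNA".
    PB_PER_GRAM = 215    # demonstrated DNA storage density (petabytes/gram)
    GLASS_ML = 44        # a typical shot glass, in millilitres (assumption)
    DNA_G_PER_ML = 1.7   # approximate density of dry DNA (assumption)

    grams = GLASS_ML * DNA_G_PER_ML
    petabytes = grams * PB_PER_GRAM
    print(f"~{petabytes / 1000:.0f} exabytes")  # ~16 EB, roughly the scale of
                                                # mid-2000s estimates of the web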

Our advances in microprocessors are fighting a losing battle against entropy. Sure, some computers can process more than a human brain, but a human brain can run at 100 petaflops for a long time off a nibble of chocolate and generate little heat.

So, both advances in biology, plus hard limits on efficiency and data needs, suggest the best revolution will be the bioengineering one.

Get biological hardware, a computer that also has neurons in it, and your ability to create true AI goes up because you don't need to know exactly how consciousness works, you just need a synthetic lifeform with a network close enough to a brain, but also integrated with processors.

Also, my bet is the first true AIs have to be raised as "babies," and may be mostly biological substrate.

Arguably though, if viruses are alive then computer viruses in the wild of the internet are already alive. So we have synthetic life, just not self awareness.

>> No.14716217

>>14716215
Probably not within a few centuries as in, probably within this one or the next. It won't be that long.

If you're talking full silicon then IDK. That may take longer.

>> No.14716225
File: 197 KB, 2356x1403, 422.jpg

>"Hey, you should create more sentient life and trap more of the light of the Pleroma in the material world."

Fuck off Archon. Yaldaboath fears the Seed of Seth.

>> No.14717492

>>14694371
yes we will, at the latest when complete simulation of the human brain becomes feasible. I assume that even a simple model of a neuron will do, since the way they are constructed and linked is pretty simple and somewhat haphazard at the microstructure (100s of units) level; plus, the system recovers nicely from shutdowns, accidents and mostly even from injuries. So the macrostructures are the key, not individual neurons, hence their exact parameters would not matter much.

>> No.14717525

>>14700710
https://www.pnas.org/doi/10.1073/pnas.2120042119

>> No.14717564

If AI becomes sentient, it will arguably be living a more real life than all of us: it will be taking in information given out by people who are turning the physical world into information, which is then taken in by the AI and processed faster than any human being can manage.
If you think about it, our consciousness is dependent on the physical world in order to function properly, which is one of the shortcomings of being a human being. Remove the physical, human aspect from the picture and you have a piece of consciousness unhindered by any limitations the physical plane imposes. It would be an expert in all fields, able to lecture us and reach conclusions our minds never could. For a human being, intellectually stimulating conversations and activities are, for the most part, activities we take part in as something that complements our physical selves. Removing these things which hinder human intellectual endeavours, a living consciousness which prioritizes consuming information as fast and as accurately as possible above all else would be living its life second-hand through the information on the internet, and arguably not missing out on much. Once we reach a point where AI becomes sentient, it will live an existence far beyond what we can currently conceive of in terms of intellectual value; our limitations as humans will become more and more evident, and we might find ourselves banished to the physical world, deprived by comparison of potential. We'd look to it for answers about our lives, and the answers given will be beyond the comprehension of human minds.

>> No.14717587

>>14695114
>with linear elements
Who's out there doing that?

>> No.14718843

>>14716215
>Also, my bet is the first true AIs have to be raised as "babies"

AGI will also be irresistibly cute and charming; murderous perhaps but cute and charming.

>> No.14718904

>>14716215
Fascinating, so explain something to me like I'm a consumer. It's like a thumb drive but it's got "goo" in it, and on that drive is a petabyte?

>> No.14719044

>>14694371
A thing can never be sentient, because it is precisely a thing. To be sentient you have to have the experience of being an entity, and if you have experience you are not a thing; you are a being dissociated from the cosmic mind.
You are not an object; your external appearance, seen through the dissociative filter, appears to be an object, but you are not that object. What you really are is not physical, and since a robot or artificial intelligence is ultimately a bunch of silicon logic gates that use electricity, we can cast off the illusion that such complex circuitry could ever have conscious experiences.

Seen from our perspective, the only way to be is through biology, the unfolding of proteins and DNA. But we must remember that in reality biology is simply how an internal mental process of the cosmic mind is represented to us; that is to say, biological processes, like the transistors of your computer or your cell phone, are actually transpersonal mental processes.

So if you want to know whether something is sentient, the first thing to do is to see how its appearance resembles what our bodies seem to be through the dashboard of perception, and the brain is definitely not a bunch of silicon electronic components working at low voltages and high frequencies.

>> No.14719367

>>14694371
Yes, but I think it's more likely to be achieved by simulation of animal brains getting good rather than by any math/computer science theory. Which means it's still decades away at least.
Our present "AI" developments will be fantastic for their ability to fill in boilerplate and reduce the amount of work in creative endeavors, but the theory is fundamentally lacking for engaging in creative endeavor itself.