
/sci/ - Science & Math



File: 18 KB, 677x342, AI.png
No.3198432

What is required to make artificial intelligence that is capable of learning?

Retarded question aside (the retardation, unfortunately, stays)...

I want to write a very basic program that will finish its own program; it should run on the premise of evolving into more than what it is.
Now, the very terminology (programming) might kind of contradict itself, but I believe that if I give the program the power to do everything the technology is capable of doing (which I believe is more than sufficient for an AI), and then measure and attribute the things it has done, it can come to its own random (and thus unique) conclusions about what is evolutionary, pleasurable, and beneficial to its success or existence.

An example...
Program is in a blank slate, it has access to a hundred thousand different base functions.
Its first directive is to generate a random number and then execute the corresponding function, so if DrawCircle() is function 22,557 and that number comes up, it will draw a circle.

It will then ask itself questions about what just happened.
What did I just do?
What changed?
Was it beneficial to me?
Should I do it again?
Should I do something else?
Was the second thing I did better than the first? Should I do it again?
Did it benefit me more or less than the previous thing I did?
etc.

And based on complex measurement algorithms that I have yet to think of, it should draw a conclusion and decide what to do next... and if it can't, just revert to the base directive and generate another random number.
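
Roughly, in code (a minimal sketch; the three functions, the benefit measure, and the 80/20 repeat-or-explore split are all placeholders I made up, not the real thing):

    import random

    # A stand-in "function table": the real thing would have a hundred
    # thousand base functions. These three are invented for the sketch,
    # and each returns a number representing what it changed.
    def draw_circle():  return 1.0   # pretend this was beneficial
    def do_nothing():   return 0.0
    def delete_stuff(): return -1.0  # pretend this was harmful

    FUNCTIONS = [draw_circle, do_nothing, delete_stuff]

    def measure_benefit(outcome):
        # Placeholder for the "complex measurement algorithms I have
        # yet to think of": here benefit is just the raw outcome value.
        return outcome

    history = []  # (function index, measured benefit)

    for step in range(100):
        if history and random.random() < 0.8:
            # "Was it beneficial? Should I do it again?" -- repeat
            # whichever function has accumulated the most benefit.
            idx = max(set(i for i, _ in history),
                      key=lambda i: sum(b for j, b in history if j == i))
        else:
            # Base directive: generate a random number, run that function.
            idx = random.randrange(len(FUNCTIONS))
        benefit = measure_benefit(FUNCTIONS[idx]())
        history.append((idx, benefit))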
I think that eventually it will start making decisions on its own, and that one day I will wake up to a prompt that says

"Hello, creator. Respond to me."
With a text input.
Pic related.

That's if it doesn't destroy itself, right?
I'm a bit of a crazy /sci/entist, but does this vision seem possible at all? Should I try?

>> No.3198447

http://scp-wiki.wikidot.com/scp-079

>> No.3198470

>>3198447
Maybe I shouldn't do this then.
That program really seems like it's hurt.

>> No.3198501

>>3198447
>>3198470
hurp de derp that's fake

I really hope that was an attempt at sarcasm.

Try to program that; I honestly know you won't be able to, because university teams have written millions of lines of code trying to make a fully adaptable, learning machine, and have never produced anything truly special.

So good luck

>> No.3198522

>>3198501
[citation needed]

>> No.3198525

Personally, I think what we need is massively parallel architectures and far more powerful computers in general. Modern desktops are scarcely powerful enough to mimic insect brains, so we'll need a lot more computational power to pull off something on our level.

>> No.3198546

>>3198525

A true, "hard" AI doesn't need to be smart, just capable of learning. If we start off with something capable of even the most rudimentary learning, we've opened the doorway for something bigger.

The point isn't to create something as smart as a human, the point is to create something capable of learning. Smart as us is what comes after.

>> No.3198607

>>3198546

But the problem is that a lot of learning involves remembering past situations and using that knowledge to solve the current one. We don't have computers fast enough or with enough memory to do anything like this.

>> No.3198810

>>3198607

Well, we do, just not personal computers.

Still, you could make it incredibly basic; limit its inputs and what it has to remember. Once you've got the basic system, you can expand the situations it's in.

This isn't a small project, far from it, but we could effectively "raise" a computer with enough time. Start with something that is essentially living in a two-dimensional space and teach it reward and harm. As time goes by, add to its database of capabilities and risk/reward responses, creating a hierarchy of risk/reward/harm, and a set of complications to harm. See the toy sketch below.

It's like evolution in fast motion.
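
A toy version of that "raising" process, just to make it concrete (the grid, the reward/harm squares, and all the numbers are invented; the learning rule is plain tabular Q-learning, which the post doesn't specify, just one way to do it):

    import random

    # A 4x4 grid with one reward square and one harm square. The agent
    # learns a value for each (state, action) pair from nothing but
    # reward/harm feedback.
    SIZE = 4
    REWARD_AT, HARM_AT = (3, 3), (1, 2)
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    Q = {}  # (state, action) -> learned value
    alpha, gamma, eps = 0.5, 0.9, 0.2

    def step(state, action):
        x = min(max(state[0] + action[0], 0), SIZE - 1)
        y = min(max(state[1] + action[1], 0), SIZE - 1)
        nxt = (x, y)
        if nxt == REWARD_AT: return nxt, 1.0, True    # pleasure
        if nxt == HARM_AT:   return nxt, -1.0, True   # pain
        return nxt, -0.01, False                      # mild cost of acting

    for episode in range(500):
        state, done = (0, 0), False
        while not done:
            if random.random() < eps:
                action = random.choice(ACTIONS)   # explore
            else:                                 # exploit what it learned
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
            state = nxt

Expanding the grid, the action set, and the reward hierarchy over time is the "add to its database" part.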

>> No.3198832

Take a look at http://en.wikipedia.org/wiki/Genetic_algorithm .
It's a way to solve problems with lots of variables that influence the outcome. It's been shown to be very effective when done correctly. It's not exactly AI, but it's a learning system that often outperforms human ability on complex problems.
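
A bare-bones example of the idea (the "all ones" target problem and every parameter here are toy choices of mine, not from the article):

    import random

    # Evolve 20-bit strings toward all ones: selection keeps the fitter
    # half, crossover and mutation produce the next generation.
    GENES, POP, GENERATIONS, MUT = 20, 30, 50, 0.02

    def fitness(ind):            # count of 1 bits: what we maximize
        return sum(ind)

    def crossover(a, b):         # one-point crossover
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    def mutate(ind):             # flip each bit with small probability
        return [1 - g if random.random() < MUT else g for g in ind]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    print(max(fitness(i) for i in pop))   # approaches 20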

>> No.3198841

>>3198810

>teach it reward and harm

But if you teach it reward and harm, then it's not learning by itself, is it? It's basically programming.

>> No.3198880

>>3198841

... Take a second and think about what you just said.

Humans learn by the same sort of feedback loop; we try out new actions. If they succeed, we repeat them. If they fail, we do them less or not at all. Risk/reward/harm is the basis for learning and, really, any action at all.

For something to be intelligent, it has to have imperatives/emotions that tell it when something it is doing is good or harmful.

Once it has reward/harm built into it, then it can be taught. By giving it new features to interact with and dynamically adjusting the reward/harm for its actions (and by adjusting the number of actions it can take, forcing it to make decisions), it will try to maximize its reward and minimize its harm by adapting to the parameters it is given.

This is essentially learning.
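
Stripped to its skeleton, that loop looks something like this (the action names, reward values, and mid-run rule change are all invented for illustration; it's a simple epsilon-greedy scheme, one of many):

    import random

    # An agent that tries actions, keeps a running estimate of how
    # rewarding each one is, and shifts its behavior when the "teacher"
    # changes the reward/harm assignments mid-run.
    ACTIONS = ["poke", "wait", "flee"]
    rewards = {"poke": 1.0, "wait": 0.0, "flee": -0.5}  # teacher's settings
    estimate = {a: 0.0 for a in ACTIONS}

    for t in range(2000):
        if t == 1000:                    # teacher changes the rules
            rewards = {"poke": -1.0, "wait": 0.0, "flee": 0.8}
        if random.random() < 0.1:
            a = random.choice(ACTIONS)   # occasionally try something new
        else:
            a = max(ACTIONS, key=estimate.get)  # repeat what has worked
        r = rewards[a] + random.gauss(0, 0.1)   # noisy feedback
        # Constant step size, so old lessons get unlearned when rules change.
        estimate[a] += 0.1 * (r - estimate[a])

    print(estimate)  # after the switch, "flee" ends up rated highest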

>> No.3198898

>>3198841
Human beings have reward systems built in neurochemically. Our biological imperatives are our programming. Our hardware is very complex, and we can change what we view as rewarding or harmful using our "higher" cognition, but the basics (physical pain/pleasure) are built in.