
/sci/ - Science & Math



File: 38 KB, 620x360, ratbot2-1360741943809-1360776782921.jpg
No.5577156

Regarding Connectionism and its use in AI

It seemingly works pretty well in various demonstrations, and I was initially very impressed by it. The more I read about it, however, the less elegant it seems, and the more it leans towards some variation of brute force. The whole process looks like learning to play the piano by bashing it in a hamfisted manner, watching the audience's reaction, and varying the bashing based on that, as opposed to refined study and practice.

>> No.5577164

What exactly do you mean by connectionism?

>> No.5577175

So it works, but because it's not how you would learn to play the piano, it's what, inelegant? There is so much wrong with that statement, I don't know where to begin.
Machine learning works. Neural networks, fuzzy logic, these things work. Save the poetry for your GF/BF.

>> No.5577182

>>5577164
>What exactly do you mean by connectionism?
The practice of making a bunch of 'neurons' with random weights, and then adjusting the weights until you're happy with the outcome.
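
Roughly, in code, that looks something like this (a minimal NumPy sketch; the toy XOR task, the network size, the learning rate and the step count are my own illustrative assumptions, not from any particular paper or library):

import numpy as np

# "a bunch of 'neurons' with random weights"
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)                # random start
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "adjusting the weights until you're happy with the outcome"
lr = 0.5
for step in range(20000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))   # should end up close to [0, 1, 1, 0]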

>> No.5577186

>>5577175
I mean that it's inelegant in a manner similar to making passenger aircraft out of million-cubic-metre aluminium blocks and a ten-axis mill.

That is, a terrible mismanagement of resources; it's miraculous that it works at all, and even more so that it works well.

>> No.5577187

>>5577182
So you're talking about neural networks. What model are you working with and what algorithm do you use to adjust the parameters?

>> No.5577200

>The whole process looks like learning to play the piano by bashing it in a hamfisted manner, watching the audience's reaction, and varying the bashing based on that
Welcome to Evolution. You may recognize that it already produced an AI at least once.

>> No.5577204

>>5577187
>What model are you working with and what algorithm do you use to adjust the parameters?

All neural networks are connectionist models.

>> No.5577211

>>5577200
>You may recognize that it already produced an AI at least once.
Yes, and it took 4 billion years. And it was a blind process.

With us as intelligent designers, purpose-made algorithms ought to be a quick way to surpass the slowness of evolution, yet the field seemingly is stuck on some unseen obstacles.

>> No.5577217

>>5577211
Rather, it took ~400 million generations, which is merely a matter of computing power.
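
For what it's worth, the evolutionary version of the same weight tuning is easy to caricature in code (a toy (1+1) hill-climber on the same kind of XOR task as above; the flat genome layout, mutation size, generation count and fitness function are invented purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w):
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]   # unpack a flat genome of weights
    W2 = w[12:16];            b2 = w[16]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w) - y) ** 2)   # higher is better

best = rng.normal(size=17)                      # a random genome
for generation in range(5000):
    mutant = best + 0.1 * rng.normal(size=17)   # blind variation
    if fitness(mutant) > fitness(best):         # selection
        best = mutant

print(forward(best).round(3))   # should drift towards [0, 1, 1, 0]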

>> No.5577224

Organic intelligence was "brute forced"; it seems fine that AI is produced in a similar way.

>> No.5577225

>>5577186
But you are not asking it to mill an airplane. Learning in humans and in human society is evolutionary. Learning in AI is task specific. If you want to abstract it further, you could network the AIs, so that a machine that sorts screws could talk to a machine that diagnoses refrigerator failures, but why? In inventing an algorithm that learns a common language between the two, what have you learned? What has it learned? Maybe it will start spouting poetic analogies too.

>> No.5577235

>>5577217
Good point. Nice observation.

>> No.5577261

>>5577225
>I have no imagination so everything that suggests something different or new is poetry and terrible!

Status quo apologists disgust me.

>> No.5577307

>>5577261
Actually, I was trying to point out that, as an AI yourself, you have inherited a language that developed from such diverse experiences that, to communicate your ideas, you have to use abstractions that are not far removed from the shared experiences from which they were formed. In order not to reinvent the language each time you wish to communicate, you use analogies, such as learning to play the piano by watching an audience, to describe the laborious process of optimizing a task through the repeated application of neural network algorithms, in this specific case. But that is part of your evolutionary learning network and that of your society's. I was trying to point out that the AI that sorts screws doesn't need to communicate how it does what it does; it just has to sort screws.
We did not take an efficient path to our intelligence because our intelligence was not designed - we have no more purpose than survival.
Now if you want to talk about putting a priori information into your neural network, above and beyond the goals of the network, then good luck. Since it did not form the rules itself, and the rules are not necessary for the operation of the network, the network will probably fail - or at least will ignore rules it did not create and amend.
Try putting your ego aside for a second and see the irony and poetry in the lesson.

>> No.5577369

Until we can map out the brain essentially neuron by neuron, we're going to have to brute-force our neural networks to find out what works.

You also have to bear in mind that any intelligence still has to learn as well. A baby couldn't do more than bash random keys on a piano. It needs to spend years trying things out and looking at people's reactions before it learns how to do things properly. Almost everything a human-level intelligence does is based on years of prior experience.

>> No.5577374

>>5577307
>as an AI yourself

I think the person you're talking to is a natural retard rather than an artificial intelligence.

>> No.5577413
File: 378 KB, 600x850, cortex column.jpg

>>5577182
Random weights on a neuron represent a particular nonlinear model of the inputs.
Tuning them is purposefully making that combination approach the desired answer.
Nothing brute force about it.
You are just exploring the infinitely multidimensional space of all possible algorithms given limited computational resources.
If you can start from the most basic examples and build from there, you can train your neurons to reach more complex algorithms sooner than by starting with the already complex problems. That's what learning piano does. It builds complexity. And neural networks have that ability too; it's just frequently ignored by the people working on them.
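
If it helps, here is what that "start from the basics" ordering looks like in code (a curriculum-style toy in NumPy; the sine-fitting task, the network size and the easy/hard split are invented purely to illustrate the ordering, nothing more):

import numpy as np

rng = np.random.default_rng(2)

def train(net, X, y, steps, lr=0.05):
    W1, b1, W2, b2 = net
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)              # forward pass
        err = (h @ W2 + b2) - y
        dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        dW1 = X.T @ dh / len(X);  db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return [W1, b1, W2, b2]

# 1 input -> 16 hidden -> 1 output, random starting weights
net = [rng.normal(size=(1, 16)), np.zeros(16),
       rng.normal(size=(16, 1)), np.zeros(1)]

X_all = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y_all = np.sin(X_all)
easy = np.abs(X_all[:, 0]) < 1.0              # the near-linear region

# basic examples first, then the full, harder problem
net = train(net, X_all[easy], y_all[easy], steps=2000)
net = train(net, X_all, y_all, steps=2000)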