
/sci/ - Science & Math



File: 428 KB, 471x470, holyshitcool.png
No.3158959

What systems would be in place for an artificial intelligence program?

So far I have these:

Symmetry detection
Object categorization (through symmetry)
Linguistics operator

The symmetry detection will be the system that allows the AI to start objectifying the world around it. I'll use a Tree as an example. After the AI has seen many trees the Object Categorization system will allow the AI to understand that even though there are many different types of trees, they are all Trees.

If we can imagine what systems the AI needs in order to successfully have some sort of problem solving, then we can begin to program it.
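To make the Object Categorization idea concrete, here is one rough sketch of how it could start, assuming objects arrive as numeric feature vectors (a big assumption); the class name, the threshold, and the nearest-prototype scheme are all invented for illustration:

```python
# Hypothetical sketch: an object joins an existing category if it is
# close enough to that category's running-average prototype, otherwise
# a new category is created. After seeing many maples, different
# maples all land in one "tree-like" category.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class Categorizer:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.categories = []  # list of (prototype, example_count)

    def observe(self, features):
        """Return the index of the category this object falls into."""
        for i, (proto, count) in enumerate(self.categories):
            if distance(features, proto) < self.threshold:
                # fold the new example into the running prototype
                new_proto = [(p * count + f) / (count + 1)
                             for p, f in zip(proto, features)]
                self.categories[i] = (new_proto, count + 1)
                return i
        self.categories.append((list(features), 1))
        return len(self.categories) - 1
```

Two similar "trees" would fall into the same category, and something far away (a rock, say) would open a new one.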

Want to help?

>> No.3158963
File: 26 KB, 500x333, naota_nandaba_from_flcl_furi_kuri-13230.jpg

>>3158959
What? Like a Turing Test?

>> No.3158962

a penis for raep

>> No.3158966
File: 649 KB, 900x1393, lhs.jpg

so far, i have your mother naked in my mind.

>> No.3158970

>>3158962

I am talking about pre-programmed abstract systems... not physical appendages

>> No.3158977

>>3158970
>implying a penis is just a physical appendage

>> No.3159023

One thing about this is that you've got to let the machine be wrong. Humans are wrong all of the time. There are things that I think are trees but are really bushes. You can't be writing code to index shit like that so it's always correct. You have to have the machine make an assumption, be told it's wrong, and then adjust its answer based on the reasoning behind why it was told it was wrong. No idea how you'd do this.

>> No.3159038

>>3159023

Well, what you just said is how I imagined it. Notice how I said "after the AI has seen many trees". You would have to teach this robot just like any other.

>> No.3159052

>>3159038
Yeah I'm not arguing with you, I'm just thinking out loud. Emphasis on the "No idea how you'd go about this".

>> No.3159063

Categorization? People only categorize sometimes; you could just as well implement an 'uncertainty module' which would allow the AI to *not* categorize the world, make mistakes, and still be able to function. I can't quite imagine how it would be made.

>> No.3159085

>>3159063
The idea isn't to create a pre-stumbling robot that is programmed to fail.

The idea is to create a program that is capable of exploring and understanding the world around it, and eventually, conversing, hypothesizing and problem solving with others.

Naturally, you would expect it to make mistakes like any other. I am not looking to create super-intelligence, rather, I am looking to give birth to machine intelligence.

>> No.3159088

Has OP heard of OpenCog?

>> No.3159092

>>3159085
my best guess is that you would have to have the machine attach a confidence-interval to each piece of data it gets, based on some metric of how authoritative a source it thinks it is receiving the data from, which itself would need another confidence interval.

Every piece of knowledge it has would have some sort of "55% sure this is true" attached to it and that number could be adjusted based on future engagements.
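A minimal sketch of that "55% sure this is true" idea, with every name and the update rule invented for illustration: each belief starts at the source's trust level and is nudged up or down by later confirmations or contradictions.

```python
# Hedged sketch of confidence-weighted knowledge. A belief's
# confidence moves toward 1.0 when confirmed and toward 0.0 when
# contradicted, by a fixed fraction of the remaining distance.

class BeliefStore:
    def __init__(self):
        self.beliefs = {}  # statement -> confidence in [0, 1]

    def learn(self, statement, source_trust):
        # first report: confidence is just how much we trust the source
        self.beliefs.setdefault(statement, source_trust)

    def reinforce(self, statement, confirmed, rate=0.2):
        """Adjust confidence based on a future engagement."""
        c = self.beliefs[statement]
        target = 1.0 if confirmed else 0.0
        self.beliefs[statement] = c + rate * (target - c)
```

So a fact learned at 0.55 from a middling source drifts upward each time the world agrees with it, and decays when it doesn't, without any single piece of code ever having to be "always correct".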

>> No.3159094

>>3159088

No I have not, googling it now though.

>> No.3159101

>>3159092

I disagree. The AI would operate most efficiently by constantly rebuilding its paradigm. In the beginning, after it sees its second tree (perhaps they were both maple trees), the AI knows without a doubt that Maple trees all have Canada-looking leaves. Obviously this is wrong, and the AI will learn so.
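One hedged way to picture "constantly rebuilding its paradigm": what the AI believes about a whole category is just the intersection of the properties of every example seen so far, so an early overgeneralization vanishes the moment a counterexample shows up. The function and property names here are made up.

```python
# After two maples, "all trees have lobed leaves" looks certain;
# one pine later, the paradigm is rebuilt without that belief.

def rebuild(examples):
    """Properties currently believed true of the entire category."""
    shared = set(examples[0])
    for example in examples[1:]:
        shared &= set(example)
    return shared

seen = [{"tree", "lobed_leaves"}, {"tree", "lobed_leaves"}]
after_two_maples = rebuild(seen)   # believes all trees have lobed leaves

seen.append({"tree", "needles"})   # a pine arrives
after_the_pine = rebuild(seen)     # only "tree" survives
```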

>> No.3159105

>>3158959
AI doesn't need to interact with the world to be AI.
It just needs to solve problems intelligently, in a way that if a person had solved the problem you wouldn't notice a difference in either the method or the result.

What system does AI need?
An intelligent problem solving system.

That's it.
/thread

>> No.3159118

You assume that AI can be created modularly from the ground up.

>> No.3159120

>>3159105

We envision different machines.

>> No.3159128

>>3159118

Rather than spontaneously appearing?

Then yes, I assume so.

>> No.3159142

>>3159128
No, I mean from the top down.

The brain has specialization, yes, but I doubt that you can "program" an AI in a modular ground-up fashion. Our brains are far too decentralized and emergent to be created under this paradigm. There are specialized regions, to be sure, but they all interact constantly. You can't cut out the visual cortex and have a "vision module" that functions independently.

>> No.3159152

>>3158959
It has to learn and adapt.
Individual programs do not change once written and thus cannot learn.
Therefore it will need a system for writing new code and spawning new threads to run the code.

>> No.3159160

>>3159152

>Individual programs do not change once written and thus cannot learn.

As long as you run them in an interpreter rather than compiling them they can self-modify.
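A toy illustration of that point, with the "learning" trivially hard-coded: in an interpreted language a program can hold its own source as data, rewrite it, and re-execute it at runtime.

```python
# Illustrative only: a program rewrites one of its own functions at
# runtime, the kind of self-modification an interpreter allows.

source = "def answer():\n    return 'bush'\n"
namespace = {}
exec(source, namespace)          # first version: it thinks it's a bush
first = namespace['answer']()

# the program is "told it's wrong" and regenerates its own code
source = source.replace("'bush'", "'tree'")
exec(source, namespace)          # rebind the function from new source
second = namespace['answer']()
```

The code that does the rewriting stays fixed; only the data it treats as a program changes, which is close to the database-vs-programs split discussed elsewhere in this thread.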

>> No.3159166

>>3159142
I agree that these subsystems cannot be independent.

The design of the program contains a database of objects that each have properties. Each of these properties has a unique detail variable of a specific subjective nature. Each system works with the other systems, screening the database of objects and properties. They will all call from this database, refer to it.
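One way to read that design, sketched very loosely in Python; the structure, names, and example entries are all guesses, not the actual design: a single shared database of objects with named properties, which every subsystem screens instead of keeping its own copy.

```python
# Hypothetical shared database: objects -> properties. Each subsystem
# queries it rather than holding private state.

database = {
    "maple": {"kind": "tree", "leaf_shape": "lobed", "height_m": 20},
    "oak":   {"kind": "tree", "leaf_shape": "lobed", "height_m": 25},
    "rose":  {"kind": "bush", "leaf_shape": "oval",  "height_m": 1},
}

def objects_where(prop, value):
    """Screen the database for objects whose property matches."""
    return sorted(name for name, props in database.items()
                  if props.get(prop) == value)
```

A symmetry detector, a categorizer, and a linguistics operator could all call `objects_where` against the same store, which is the "they will all refer to it" part.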

>> No.3159175

>>3159152

There are multiple programs that will not change; however, the database to which they refer will. The programs will be designed to be so fundamentally simple that they don't NEED to be altered for correction.

>> No.3159188

>>3159166
This is still ground-up, in my mind. We've been barking up this tree for decades, with very limited success.

I imagine that the first AI will have low-level symbols that are incomprehensible, just like it's very difficult to say "where is your memory of the smell of strawberries stored, and in what format". It exists, to be certain, and we can identify neurons that are involved, but the low-level structure you're imposing is very much unlike what the brain does.

>> No.3159204

>>3159188

The program itself cannot operate like the brain does. The brain, our brain, is biological and its means of comprehension and analysis are beyond me. The rendition of intelligence within a computer will no doubt be different on the lowest levels, but that is not what I am concerned with. I am concerned with the end result: IF the program can be made in this way, IF it can be programmed. I do not know if it CAN be programmed, but I believe that if it can, then it will be.

>> No.3159207

Essentially you might approach the problem in the same way the brain creates intelligence.
As far as we know currently, intelligence is an effect of the collective operation of our neurons.

I think we might just create a thread spawner that makes intercommunicating threads. Each thread could link to several others or even all the others to hold data based on a set of input threads.
Set that up and see what happens?
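A very literal, hedged reading of that proposal, showing only the plumbing and none of the intelligence; every name here is invented: worker threads wired together with queues, each passing its result downstream.

```python
# Minimal "thread spawner": each worker reads from an inbox queue,
# transforms what it gets, and forwards it to the next thread.
# A None sentinel shuts the chain down.

import threading
import queue

def worker(inbox, outbox, transform):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(transform(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=worker, args=(q1, q2, lambda x: x * 2)).start()
threading.Thread(target=worker, args=(q2, q3, lambda x: x + 1)).start()

q1.put(10)
q1.put(None)
result = q3.get()   # 10 doubled by the first thread, +1 by the second
```

Whether "set that up and see what happens" produces anything mind-like is of course the whole open question; this only shows that the spawn-and-intercommunicate part is cheap to build.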

>> No.3159225

>>3159207
Only if you're fine with pulling a random item out of the set of possible minds. Most likely it's horribly broken and stupid. And if it's not, who's to say it's friendly?

Creating AI should not be undertaken lightly, unless you're fine with killing your creation.
http://lesswrong.com/lw/x7/cant_unbirth_a_child/

>> No.3159234

>>3159204
>IF the program can be made in this way, IF it can be programmed. I do not know if it CAN be programmed, but I believe that if it can, then it will be.
Sure. It may be possible. But history isn't supporting the assumption so far, and our only example of human-level intelligence is nothing like this.

>> No.3159235

>>3159204
Look into bio-computing to see how much the line between biological and technological gets blurred.
As far as programming an AI, I think in a very real sense you are programmed on a daily basis.

It really gets down to the philosophy of mind and Determinism vs Libertarianism. If we are completely determined, which I think is about right, then our brains are not all that different from a computer and we are programmed daily by our experiences.
In that case, an AI may be designed very much like the brain and programming it would operate in the same way as getting someone to change their mind.

>> No.3159255

>>3159225
In a real sense, creating an intelligence is undertaken lightly every day around the world.

What most people fear, I think, is creating an intelligence and giving it some serious firepower.

But you may just be referring to the question of personhood, on which I would certainly agree that one should consider the ethical implications of creating and destroying any intelligence before undertaking such a feat.