
/sci/ - Science & Math



File: 265 KB, 950x711, 1311217359125.jpg
No.3456634

Hey /sci/,

can you tell me IN DETAIL what problems/obstacles are keeping us from creating strong AI?

>> No.3456640

Explaining to the AI the difference between "race" like a vehicle race, and "race" like a presidential race.

>> No.3456647

Define intelligence. You cannot create an artificial version of something if you don't understand the original.

>> No.3456677

Far too many variables, and minuscule differences in those variables make large changes to the situation, which is a pain in the ass to program an AI to adapt to.

>> No.3456680

Computer vision has still got a ways to go.

>> No.3456691

>>3456640

Is language in general a problem or just synonyms?

>> No.3456704

>>3456640
This could be solved by teaching the AI to use context clues. I wouldn't imagine it'd be terribly difficult.

>> No.3456709

Conventional methods of programming for EVERYTHING computer-related are not consistent with our own "processing" methods. You would essentially have to reinvent the entirety of computer programming.

>> No.3456720

A true artificial intelligence is not just a sophisticated program with a ridiculously large number of things programmed into it to make it seem human. An AI would have to be able to learn on its own. It would have to behave like a human brain, and if we ever successfully create AI, it will probably mimic the way a brain works.

Protip: The human brain is not a big hard drive.

>> No.3456723

>>3456691
Have you seen what AI that are supposed to converse are like? They're shit.

The problem is context - those aren't synonyms, they're homonyms, completely different meanings attached to the same sequence of letters. Whether "race" means a presidential race, some other election, a pseudo-ethnicity, something horses are made to do, something people driving do, or any number of other things, is dependent on the context in which it's used. It is, in fact, a very hard problem.

The fact that you ask what obstacles there are for strong AI is worrying.
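To make the disambiguation problem concrete, here's a toy sketch in the spirit of the Lesk algorithm: pick the sense of "race" whose hand-written gloss shares the most words with the sentence. The glosses are invented for this example, not taken from any real lexicon:

```python
# Toy word-sense disambiguation for "race": score each sense by how many
# words its gloss shares with the sentence (a simplified Lesk approach).
# The glosses below are made up for illustration, not from a real lexicon.

GLOSSES = {
    "contest_of_speed": {"car", "horse", "track", "win", "fast", "finish"},
    "election": {"presidential", "candidate", "vote", "poll", "campaign"},
    "ethnicity": {"people", "ethnic", "ancestry", "population", "heritage"},
}

def disambiguate(sentence: str) -> str:
    words = set(sentence.lower().split())
    # pick the sense whose gloss overlaps the sentence the most
    return max(GLOSSES, key=lambda sense: len(GLOSSES[sense] & words))

print(disambiguate("the candidate led the presidential race in every poll"))  # election
print(disambiguate("the horse lost the race at the track"))  # contest_of_speed
```

Real text breaks this kind of overlap counting immediately ("the race to the White House was a horse race"), which is exactly why the problem is hard.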

>> No.3456733
File: 26 KB, 252x270, 1270414367820.jpg

>>3456709

>> No.3456748

Well, for one you'd have to invent a completely new understanding of computation. It wouldn't be just inventing a new coding language or anything, it'd be redoing the idea BEHIND all that. Computers process things step-by-step, but in the brain all information is constantly connected to all other information.

>> No.3456761

>>3456704
Try it, seriously. Read up: http://en.wikipedia.org/wiki/Frame_problem
http://plato.stanford.edu/entries/frame-problem/

>>3456709
>>3456748
Parallel computing is uh, you know, not new

http://longnow.org/essays/richard-feynman-connection-machine/ You may enjoy this.

>> No.3456766

>>3456723

Oh yeah, homonyms. My bad.

>> No.3456808

>>3456761

Nice links, bro. Feynman is such a badass.

>> No.3456819

>>3456761
DUUUUUDE. Thanks.

>> No.3456858

>>3456808
But do note that the idea of the connection machine was not his, and that it was not the first parallel computer - just an especially interesting one.

Also? You don't need a new theory of computation to simulate a brain. The brain is a parallel "computer", yes, but parallel machines can't do anything traditional machines can't, in computability terms. They're both only Turing-complete, and each can emulate the other. This is, in fact, why you can post on 4chan while getting porn in another window - doing "two things at once". It's just slower than using a truly parallel machine.

And neural networks are very simple programs, seriously. Each individual neuron isn't much more than a glorified adding machine. The thought comes in when you have billions all connected.

And simulation is hardly the only route to AI anyway.
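The "glorified adding machine" claim above is easy to illustrate: a single artificial neuron is just a weighted sum of inputs plus a bias, pushed through an activation. A minimal sketch with hand-picked (untrained) weights:

```python
# One artificial neuron: a weighted sum of inputs plus a bias, passed
# through a step activation. Weights here are hand-picked for
# illustration, not learned.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# With weights (1, 1) and bias -1.5 this neuron computes logical AND.
print(neuron([1, 1], [1, 1], -1.5))  # 1
print(neuron([1, 0], [1, 1], -1.5))  # 0
```

The interesting behaviour only appears when billions of these are wired together, which is the post's point.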

>> No.3456890

Seems to me like AI could be accomplished today if we had the right people on it.

>> No.3456968

we can't recreate something we don't understand

>> No.3456976

>>3456968

cool fallacious logic bro

>> No.3457002

Computational power. Simulating the human brain would require a computer far more powerful THAN the human brain, which we don't have yet.

Secondly, we don't understand how the human brain works, so simulating it would be hard.

No one has written software like it yet; it's an entirely new thing that has to be developed.

This is assuming you mean strong AI as in a conscious AI.

>> No.3457009
File: 14 KB, 250x373, hrp-4c_female_robot_anime_real_2.jpg

my answer won't be in much detail but i guess it would be better with it than without it

1. what is intelligence? humans believe that they're the only intelligent spices on the planet, but is this really true....
because we don't have a clear definition of intelligence we would have to build a machine that thinks like a human.

2. emotions. these are the driving force that makes you do and think about things. we can emulate them but we don't know exactly how they're linked together and in what proportions they are in a person's mind (maybe we need more fear than curiosity, or the other way around)

3. learning algorithm and database. there are some things that the brain just does, such as gathering information and storing it in an efficient database. we don't know how to build/program something like this

there are different approaches to building an AI
another one is studying neuroscience and emulating what neurons do with a digital circuit
the main flaw there is that we don't know how the different parts of our brain are connected

>> No.3457065

Restricted Boltzmann machines could do it today if there were enough computing power. I would guess training 100 trillion connections on 100-megapixel images, using Contrastive Divergence to deep-learn many layers with tens of millions of neurons and some backpropagation for fine-tuning, would take 100 PetaFLOPS for 30 images a second. That's a rough estimate; please don't cite it as precise.
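For what it's worth, the order of magnitude above can be reconstructed with back-of-envelope arithmetic; the flops-per-connection factor below is my assumption, not a figure from the post:

```python
# Back-of-envelope reconstruction of the ~100 PetaFLOPS figure.
# Assumption (mine, not the post's): roughly 30 floating-point ops per
# connection per image, covering the forward pass, the Contrastive
# Divergence reconstruction, and the weight update.
connections = 100e12          # 100 trillion connections
images_per_second = 30
flops_per_connection = 30     # assumed

total_flops = connections * images_per_second * flops_per_connection
print(f"{total_flops / 1e15:.0f} PetaFLOPS")  # 90 PetaFLOPS, same ballpark
```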

>> No.3457089
File: 18 KB, 266x400, 4634819-young-happy-business-man-reading-newspaper-close-up.jpg

>>3457009
>what is intelligence? humans believe that they're the only intelligent spices on the planet, but is this really true....
>humans believe that they're the only intelligent spices
>only intelligent spices
>spices

>> No.3457098

We're so fucking lost about AI that we're resorting to brute-force simulation of physical brains now.

>> No.3457116

Processing power, and our understanding of the mind.

The latter solves many other problems.

>> No.3457146

supposedly china has a whole crew on this shit and a dude said there's probably gonna be a war over this shit... people on one side that don't want it... and people on the other side that do

>> No.3457299

There seem to be two camps of thought in this thread.

Lack of computing power.

Lack of understanding.

Or even more simply,

>Hardware vs. Software

>> No.3457334

>>3457098
General AI. We've made strides at specifics, like computer vision and walking without falling on one's ass.

>> No.3457365

>can you tell me IN DETAIL what problems/obstacles are keeping us from creating strong AI?
No, because to know in detail why we can't, we'd first have to know in detail what intelligence is, which we don't... which is why we can't.

>> No.3457386

P ≠ NP

>> No.3458590

>>3457365
I gave details

>>3457098
I didn't do brute force

>>3457065
>Restricted Boltzmann machines could do it today if there were enough computing power. I would guess training 100 trillion connections on 100-megapixel images, using Contrastive Divergence to deep-learn many layers with tens of millions of neurons and some backpropagation for fine-tuning, would take 100 PetaFLOPS for 30 images a second. That's a rough estimate; please don't cite it as precise.

>> No.3458661

Hey fuckers,
Lurking here and got two questions.
How sure are we AI will not fuck humanity up?
Why the fuck do we want AI when we could work towards making humans better?

>> No.3458684

>>3458661
>Why the fuck do we want AI when we could work towards making humans better?

One day you will be able to just download the AI to your PC and run it. Meanwhile, you can't make humans better without some expensive surgery or drugs.

>> No.3458693

Couldn't a kind of evolution be used to reproduce human-like intelligence?

>> No.3458703 [DELETED] 

Can we create conscious life by machinery?
Inb4 define consciousness (look it up, genius).

>> No.3458718

>>3456634
>can you tell me IN DETAIL what problems/obstacles are keeping us from creating strong AI?
Uhh, the simple one - we don't know how.

>> No.3458721

>>3456640
That's an easy problem. The hard problem is structuring it so that it can learn (almost) arbitrary data, and so that it has wants and desires and goals. Coupling these two is a bitch.

>> No.3458732

>>3458661
>How sure are we AI will not fuck humanity up?
We don't know and we don't care.
>Why the fuck do we want AI when we could work towards making humans better?
You can't work with humans when religion is still around.

>> No.3458743

We simply don't know how to do it. Also processing power is not quite there yet.

>> No.3458765

Current AI is like putting a baby in a dark box and giving it books it can't read.

>> No.3458767
File: 27 KB, 482x321, laughing-women-friendship-greetings.jpg

>>3458661

>He thinks humans will ever change of their own accord

>> No.3458771

You're talking about creating an improved version of a copy here. The copy being computers. Computers are basically human brains, sans certain regions.

So what's left to say? A copy of a copy is still a copy, and copies of copies come with inherent flaws, a la incest. So, yeah.

>> No.3458773
File: 3 KB, 126x126, 0-10.jpg

>programing a computer to detect certain changes, and respond accordingly is different from programming a self learning compuer.

>major difference is that biological brains are self learning

>> No.3458776

>>3458693
You could make a program that reproduced and mutated and died if it was not fit in its environment.

Though that is quite hard. Especially creating a complex enough environment. And it takes time.
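The reproduce/mutate/cull loop described above looks like this in miniature; the bit-string "environment" is a toy stand-in, nothing like the complex environment the post says would actually be needed:

```python
import random

random.seed(0)  # deterministic for the example

TARGET = [1] * 20  # the "environment": fitness = how many bits match

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # each bit flips with small probability
    return [1 - g if random.random() < rate else g for g in genome]

# random starting population; each generation the fittest half survives
# and reproduces with mutation, while the unfit half "dies"
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(p) for p in survivors]

print(fitness(max(population, key=fitness)))
```

Selection plus mutation reliably climbs toward the target here; the hard part in practice is exactly what the post says: designing an environment rich enough that the evolved behaviour is interesting.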

>> No.3458782

>>3458765

>baby in a dark box and giving it books it can't read.
>giving it books it can't read.
>books it can't read.

>Assuming infants can read at all.

>> No.3458796

>>3458782

Depends on your definition of reading. If reading is just looking at the words, sure, an infant can read, and placing a baby in a dark box prevents it from seeing the words.

If reading is seeing and understanding the words, an infant's brain doesn't have the development for that.

>> No.3458798
File: 148 KB, 581x480, 1291941309867.jpg

>>3458771

>> No.3458800

>>3458773

>programing a computer to detect certain changes, and respond accordingly is different from programming a self learning compuer.
>major difference is that biological brains are self learning

>a self learning compuer.
>biological brains are self learning

Congratulations!
You've contradicted yourself and failed miserably at explanation in only two sentences!

>> No.3458802

>>3458798
Makes sense to me. Are you an idiot?

>> No.3458804

>>3458771
>Computers are basically human brains, sans certain regions.

This could not be more wrong.

>> No.3458808

>>3458804

Based on human brains*
Think about it. How does a brain work? The transmission of electrical impulses from one area of the brain to another. How does a computer work? Much the same, am I right?

>> No.3458809
File: 4 KB, 144x142, 00.jpg

>>3458796

>If reading is just looking at the words, sure an infant can read, and thereby placing a baby in a dark box prevents it from seeing the words.

>Read:To examine and grasp the meaning of (written or printed characters, words, or sentences)

>I seriously hope you morons dont do this.

>> No.3458810

>>3458808
They are wildly, wildly different computing devices. A computer is a single-threaded, procedural device.

A brain is not. It is massively parallel, with a bunch of dedicated hardware for heuristics, like reading emotions from eye gaze, and for facial recognition.

Silicon computer hardware has no such thing.

>> No.3458813

>>3458809

I've met kids who believe that reading is just looking at words, you don't even know...

>> No.3458817

>>3458808

>Bears excrete in the woods in the same manner as Humans do.
>Hence we are the same.

>If you care about Humanity, you must kill yourself immediately.

>> No.3458819
File: 30 KB, 385x477, jim-carrey-dumb-dumber-c10102378.jpg

>>3458810
In that case I'm just going to herp a derp on over to the 'fuck off dumb guy' section.

>> No.3458822

The desire to copy human intellect is profoundly idiotic. We already -have- human intellect. Let's make something that supplements our abilities instead of doing the likes of teaching arithmetic to a horse.

>> No.3458825

>>3458817
Well, it's a shame I don't care about Humanity, isn't it? Useless bunch of heathen/zealous/murderous apes who like blowing stuff up, we are. Fragile and pompous, like the glass statue of an English king.

>> No.3458827

>>3458822

The desire to copy bird levitation is profoundly idiotic. We already -have- birds. Let's make something that supplements our birds instead of doing the likes of constructing airplanes.

>> No.3458828
File: 6 KB, 484x290, Prog_Fig3.gif

>>3456634
We still don't understand what algorithm is running inside of the human brain to produce consciousness, and that's the thing we need to produce to get a strong AI. In the future when we look back at present day attempts at creating a strong AI it will be like when we look back at the shitty flying machine attempts from the 1800s. People were trying to replicate the abilities of birds, but they didn't grasp the fundamental principles involved, so they were grasping at straws.

>> No.3458829

>>3458822

Why is it idiotic? It's genius. It's just not a wise course of action.

It's an attempt to outdo 'God'. If anyone succeeds, it's genius. It's preposterous and unwise, but hey.

>> No.3458831
File: 11 KB, 237x213, 000.jpg

>>3458825

>Psychopath ascribing his wild imagination to the entirety of Humanity.

>> No.3458833

>>3458831
At least he's poetic about it.

>> No.3458835

>>3458833
More poetic than you could ever be, you inane little shit.

>> No.3458840

Whoo-ho hoah, chill down here for a second. Keep in mind that we're deigning to respond to your inept attempts at socializing and horribly wrong guesses.

>> No.3458842
File: 88 KB, 500x375, 433819.jpg

>>3458835

>> No.3458844

>>3458842
My thoughts exactly. Methinks he's just a mite cranky because his ideas got shot down like a fighter plane in a field of AA.

>> No.3458845
File: 9 KB, 160x201, 1.jpg

>>3458835

>> No.3458851

Have any of you considered that maybe it just isn't God's will that we create artificial intelligence?

>> No.3458855

>>3458835

You seem lost, shouldn't you be at a Hawthorne Heights forum or something?

>> No.3458856
File: 39 KB, 408x500, jesus-lol.jpg

>>3458851
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAAHAHAHAHAHAHA
HAHAAHAHAHAHAHAHAHAHAHAHAHA times one million trillion billion.

>> No.3458857

>>3458851

Why wouldn't it be his will?
How do you know what he thinks of it?

>> No.3458859

>>3456634
>what problems/obstacles are keeping us from creating strong AI?
this is not a valid question within the context of actual computer science.

But if I were to reinterpret it as just some science-fanboy hipster asking it, then I'd assume you are generally referring to "how can we mimic human intelligence with computers?" To which the answer is simple, and indicative of the flaws in calling psychology a science: we as the human race don't understand/agree upon what intelligence actually is. Being a science-casual secondary, I'd encourage you to attempt distinguishing logic (logical), reason (rationality), smart(ness), aptitude, intuition, perception, comprehension, intelligence, wisdom, knowledge, understanding and, of course, consciousness; you might also be wise to include mood, emotion and attitude as well, since (human) intelligence does utilize these whether 'it' likes that or not. The amount of time you're willing to spend on understanding the differentiation between these definitions is directly proportional to your true concern about this issue, because there are a lot of people out there whose motivation is only to manipulate other people's perspective on the issue of human potential rather than their own.

>> No.3458860

>>3458857

That's very simple to figure out.

It's in the Bible.
Each and every thing that has happened in the history of the universe is his will; study the universe and you'll be studying God's will.

>> No.3458861
File: 189 KB, 500x500, bravo_RE_Daily_Food-s500x500-165465-580.jpg

>>3458859

>> No.3458866

>>3458829
Why is it idiotic? Let me count the ways.

It is shortsighted, wasteful, unimaginative, anthropocentric and redundant.

Computers as we have conceived them excel at very different things than we do. Whyever would it be considered brilliant to make them good at something they are very much not designed to do?

I suspect loneliness in the sense of desire for company. Which is stupid. Let's make tools and not toys.

>> No.3458870

>>3458859
The term "strong AI" is very badly defined. However, I would settle for this - Find me a machine which is incapable of teaching Theory Of Computation to a student. Give me a few months with said computer, so that I can verbally teach it Theory Of Computation. Then, let that machine teach Theory Of Computation to another human student. It's basically The Turing Test. If it can pass that, or anything like that, then we have strong AI.

>>3458860
No.

>> No.3458875

>>3458860

You have still failed to explain why he wouldn't want us to construct an AI.
This is your second failure, on top of your first.
Will you increase your score of failures, or will you end it?

>> No.3458887
File: 11 KB, 246x205, 11.jpg

>>3458866

>It is shortsighted, wasteful, unimaginative, anthropocentric and redundant.

>Assertion with five clauses without an explanation for even one clause.

>> No.3458893

>>3458870
Well, in that case your definition of strong is relative, but generally speaking, with that definition strictly in mind, we already have strong AI. Take Cleverbot for example: all it does is repeat what other people say, VERBATIM, yet the same model it works on has passed the Turing test many times over.

>> No.3458896

I have to be serious here. The main obstacle to creating strong AI is that humans don't have a gut instinct to recognise computers as fully independent agencies, on the human level.

The combination of assigning agency when none is there and refusing agency when it doesn't match up to expectations will mean it will be very difficult to spot just where strong AI happens.

>> No.3458900

>>3458893
No it doesn't. No way cleverbot could pass the Turing Test. I could try to teach it simple Real Analysis, and ask a question that might appear on a Real Analysis exam, and request that it show work. No way in hell it's doing that.

>> No.3458920

>>3458900
To repeat, that's the standard of "Strong AI". Take a machine which could not pass a Real Analysis test, spend some time teaching it Real Analysis, then have it take a midterm in Real Analysis, showing work.

Assuming not a malicious implementation where after a few days it just flips a switch to "understand", this is the standard.

>> No.3458946
File: 274 KB, 800x586, 2.jpg

>>3458900
>'there is an official Turing test'
Like I indicated, THE Turing test is THE outline for _A_ relative test, namely on the types of input & output a computer has, which has relative results, namely on who the "judge" is going to be. Also and furthermore, I never said CLEVERBOT passed THE Turing test, but when a Turing test is conducted is relative to any person competent of the definition of Turing test (which isn't saying much) watching another person 'use' a computer.

tl;dr _A_ Turing test is more dependent on relative aesthetics rather than strict functionality.

>> No.3458948

>>3458946
I merely meant that no reasonable person would conclude that cleverbot is a real person. Sure, they are surprised, but if anyone actually sat down and tried to disprove it, it wouldn't be that hard. Just try to teach it /anything/, and have it demonstrate understanding. Cleverbot would fail.

Of course there is no standard Turing Test, but at the same time the suggestion that cleverbot can pass the Turing Test is disingenuous.

>> No.3458949

>>3457002
We have, in fact, already simulated neurons of a rat brain. We have a very good understanding of how our brains work. All that is really left is enough man-hours and processing power, and we could very well imitate a complete brain right now with our understanding.
>>3458693
look up neural networking
>>3458866
I don't know what to say really, you are a bad troll.

>> No.3458954

how do you define intelligence?
is it something that can learn?
is it something that can play chess?
is it something that can solve mathematical problems?
if it's not any of these then what is it?

>> No.3458959

>>3458946 samefag
>when a Turing test is conducted is relative to any person competent of the definition of Turing test (which isn't saying much) watching another person 'use' a computer.

should be:

when a Turing test is conducted _it_ is relative to _when_ any person competent _to_ the definition of Turing test (which isn't saying much) _is_ watching another person 'use' a computer _and calling it a Turing test_.

>> No.3458965

>>3458954
the collected scope of understanding, reason, aptitude for things and so on and so forth. in short "mental capability".

>> No.3458972

>>3458949
>we could very well imitate a complete brain right now with our understanding
nope..
you should read some more about neuroscience to understand how limited our knowledge really is

>> No.3458977

In short?

Adaptation. Brain is connected to the environment through senses. It adapts to changes in environment, stores information, processes and integrates it. These changes are physical with new neural pathways being formed constantly.

Life has mechanisms for adaptation. Mutations, sexual reproduction, survival of the fittest etc.

None of those are attributes of computers. A computer can't improve itself or adapt. The program adapts, but the hardware stays the same.

Perhaps in the future we can have artificial intelligence that closely mimics BEHAVIOR, but it's only mimicry, performed in the end by the programmers.

Also I'd like to add that technological singularity is pseudoscientific fluff. People in 200 years will laugh at the whole concept.

>> No.3458980

>>3458965
do you know what understanding is?
how do you define mental capacity?
you should have clear definitions in order to recognize what is intelligent and what's not

>> No.3459007

>>3458977
it may not be in 200 years, but it will happen
it's not a question of if, but when

>> No.3459010

>>3458948
>I merely meant that no reasonable person would conclude that cleverbot is a real person.
That's because Cleverbot has BOT in the title.
>Just try to teach it /anything/, and have it demonstrate understanding.
That would be relative to the frequencies of who EXACTLY was interacting with the bot (not the original code creator) prior to a demonstration or Turing test: if you want it to give a demonstration physics lesson, then it'd probably be best to wait 2 years after it has daily 'chats' with both (a) top-tier physics professor(s) and a multitude of students with extreme variations in knowledge of the subject.

>> No.3459014

>>3459010
No way. I know roughly how it works. Cleverbot could never do something like prove the shell theorem, with steps, unless it was told how to do so verbatim.

A strong AI would be able to prove the shell theorem from being taught math, calculus, and the basic laws of physics, just like Newton did back in the day.

>> No.3459017

>>3459007

Nope.

People will adapt. Or at least some will.

>> No.3459018

I have to be serious here. The main obstacle to creating strong AI is that humans don't have a gut instinct to recognise computers as fully independent agencies, on the human level.

The combination of assigning agency when none is there and refusing agency when it doesn't match up to expectations will mean it will be very difficult to spot just where strong AI happens.

>> No.3459025

>>3458972
I'm a fucking M.Sc. in biotech and neural science. We need not know what all parts of the brain do in order to simulate them. Copying a brain today is very much possible.
And I stand my ground: we know a whole shitload more about how it all works than the average Joe thinks we do.
>>3458980
go to /x/

>> No.3459026

Too much human input is the problem.
>come on /sci/ misspell problem twice
>leave forever

>> No.3459030

>>3459017
for adaptation/evolution to work,
old hardware has to be discarded (that's the reason why people die)

and there is a limit to your adaptability
if you were immortal, how much time do you think would pass before the earth's environment became hostile to you?

>> No.3459038

>>3458825
the reason I'm leaving this thread

>> No.3459042

>>3458977
Why would an AI not be able to upgrade its hardware?
Passive adaptation is shit compared to active, conscious adaptation, something an AI would be very much capable of.
We already have neural networks which don't at all mimic our behaviour or brains, but still show promise as a possible AI someday.

Anything else you want to pull out of your ass?

>> No.3459044

>>3459014
So basically you're talking about discovery. Intelligence is like water: it flows along the banks of time and carves its path in chaotic ways, and it's impossible to tell where exactly it will ultimately collide with a sea of information or even create a lake of potential.

>> No.3459045

Hardware.

>> No.3459050

>>3459044
I have no clue what you're trying to say.

I could walk a human through how to prove the shell theorem, or something simpler if they have a simpler mathematical background. This is impossible with cleverbot. It will never be able to prove the shell theorem.

>> No.3459071

>>3459025
ok, then tell me: what percentage of neurotransmitters do we know about?
and how many types of neurons do you know?
i'm sure you're lame, i'm just not sure exactly how lame you are

>> No.3459072

>>3459018
another thing is that the progression towards it is probably going to be somewhat gradual, so that there won't be a moment we can point to and say "that's when strong AI happened"; it will be a learning program that eventually learns enough to satisfy most naysayers

>> No.3459078

>>3459072
Probably indeed.

>> No.3459081

>>3459050
don't you see that proving a theorem is just like solving an equation
you just have to know math to do it
if you teach a bot math then it will do it
if you don't then it'll be like making someone that doesn't know the math prove it..

>> No.3459086

>>3459081
>don't you see that proving a theorem is just like solving an equation
>you just have to know math to do it
I agree that providing a math proof is exactly analogous to solving an equation.

>if you teach a bot math then it will do it
Indeed.

>if you don't then it'll be like making someone that doesn't know the math prove it..
What? I lost you here.

I agree that so-called "strong AI" is a gradual slope. We can teach chimps to add, but we could never teach a chimp to prove the shell theorem. However, cleverbot even lacks the ability of a chimp to learn and use the learned knowledge. That's the crucial metric by which we measure "strong AI": the ability to learn "arbitrary" things and then apply them.

Some people are better than others at doing that. People in general are better than chimps. Cleverbot isn't even anywhere on the scale. It effectively lacks this ability altogether.

>> No.3459092

>>3459086
As an example:

Me> Ok, if I ask what is the sum of two sets of red apples, then answer their product. If I ask what is the sum of two sets of green apples, then answer their sum.

Me> What is the sum of 3 red apples and 5 red apples?

Cleverbot will never answer that, and a chimp maybe could.

>> No.3459097

Dreyfus writes extensively on phenomenological obstacles in creating humanlike AI.

>> No.3459099
File: 43 KB, 518x500, 1264791073777.jpg

>>3458949
>look up ___
anytime I see this I know that person doesn't know what I said or even what they intend to talk about. Such is life in a troll-filled post-modernist world.

>> No.3459100

>>3459097
Too bad Americans don't read Dreyfus because they can't stand Heidegger.

>> No.3459107

>>3459092
All it would take is to invent something just slightly new, like the rule to replace sums with products when talking about red apples. Cleverbot simply replies with something stashed in a database. It fully lacks the capabilities to understand the first command of "answer with products instead of sums", and thus it will never get the answer right. If it does - it's just luck, and that can be quickly found out by posing just a couple questions of this kind to it.
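That database-lookup behaviour can be sketched as a toy retrieval bot (an assumed architecture for illustration, not Cleverbot's actual code); whatever you type, it can only hand back a stored response chosen by string similarity, so a newly invented rule can never register:

```python
import difflib

# A minimal retrieval chatbot: it can only hand back responses already
# stashed in its database, chosen by string similarity to the prompt.
# (An assumed toy architecture for illustration, not Cleverbot's real code.)
DATABASE = {
    "hello": "hi there",
    "what is the sum of 2 and 2": "4",
    "do you like apples": "i prefer oranges",
}

def reply(prompt: str) -> str:
    # cutoff=0.0 so it always retrieves *something*, however poor the match
    match = difflib.get_close_matches(prompt.lower(), list(DATABASE), n=1, cutoff=0.0)
    return DATABASE[match[0]]

# The new red-apples rule never registers; the bot just retrieves the
# stored reply whose prompt looks most similar.
print(reply("what is the sum of 3 red apples and 5 red apples"))
```

However many times you repeat the instruction, the database never gains a rule, only more stored strings, which is the failure mode described above.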

>> No.3459118

>>3459092
a kid won't be able to answer that question either...

>> No.3459124

>>3459118
Fine. If you prefer:

Me1> If I say the word "red" again, I want you to scream. Red.

Me2> If I say the word "blue" again, I want you to scream. Purple.

A chimp could easily do that reliably. A kid could easily do that reliably. Cleverbot will not do it reliably.
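For contrast, passing this particular test takes very little machinery once a program can register new conditional rules at all. A hypothetical toy agent (entirely made up for illustration, nothing like a retrieval bot):

```python
import re

# A toy agent that can learn 'if I say the word W again, scream' rules
# from plain text, illustrating the arbitrary-instruction test above.
# (A hypothetical sketch, not any real chatbot's architecture.)

class Agent:
    def __init__(self):
        self.triggers = set()

    def hear(self, utterance: str) -> str:
        # learn a new trigger word if the utterance is an instruction
        m = re.match(r'if i say the word "(\w+)" again, i want you to scream',
                     utterance.lower())
        if m:
            self.triggers.add(m.group(1))
            return "ok"
        # otherwise, scream iff a trigger word appears
        words = set(re.findall(r"\w+", utterance.lower()))
        return "AAAAH" if words & self.triggers else "..."

a = Agent()
a.hear('If I say the word "red" again, I want you to scream.')
print(a.hear("Red."))   # AAAAH
print(a.hear("Blue."))  # ...
```

The point is not that this is intelligent (it matches one hard-coded instruction pattern); it's that storing and applying a freshly stated rule is the capability a pure retrieval system lacks.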

>> No.3459126
File: 12 KB, 210x190, cat.jpg

>>3456634
You're all idiots. Hofstadter wrote an eight-hundred-page book trying to explain the problems with AI (coming to the half-assed conclusion that human brains are like ant farms, and good luck building an intelligent ant farm anytime soon), and here you are trying to tackle this problem in a 250-word post on an infantile imageboard website.

Good grief.

>> No.3459135

>>3459124
A transcript:

>If I say the word "red" again, I want you to scream. Red.
>So you knew the victim personally?
>If I say the word "blue" again, I want you to scream. Purple.
>From outer space!

I could do anything I want to try and train it, but it would likely never catch on, as long as I subtly changed the commands. With some food rewards, I could get a chimp answering correctly in a matter of seconds.

>> No.3459174

>>3456634
Hugely inefficient computational methods. The scale of the inefficiency in current cognitive simulations is simply mindblowing.

>> No.3459209

Don't we already have supercomputers?

>> No.3459223

http://arstechnica.com/science/news/2011/07/artificial-neural-network-used-for-memory-created-using-dna-computing.ars

>> No.3459224

the problem of analog/continuous calculations

>> No.3459228

>>3459209
See: >>3459174

Power consumption of a human brain is about 25W.

Power consumption of a Beowulf cluster is about 100W multiplied by the number of processors. Simulating a brain using present methods needs a LOT of processing.

The problem is more than one of scale. The methods we are using are themselves not good enough.
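A back-of-envelope sketch of that scale mismatch. Every number below is a rough, commonly cited order-of-magnitude estimate (neuron count, synapses per neuron, firing rate, per-node FLOPS), not a measurement:

```python
# Rough estimates only: ~86e9 neurons, ~1e4 synapses each, ~10 Hz average
# firing rate, and ~100 GFLOPS per 100 W cluster node (a generous figure).

neurons = 8.6e10
synapses_per_neuron = 1e4
firing_rate_hz = 10.0
ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~8.6e15 "ops"

node_flops = 1e11       # ~100 GFLOPS per node
node_watts = 100.0

nodes_needed = ops_per_sec / node_flops
cluster_watts = nodes_needed * node_watts

print(f"~{nodes_needed:,.0f} nodes, ~{cluster_watts / 1e6:.1f} MW vs. 25 W")
```

Even if every figure here is off by a factor of ten, the conclusion survives: megawatts of cluster against 25 watts of brain.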

>> No.3459229

Until we understand exactly how the human brain works in minute detail, there'll never be AI.

>> No.3459233
File: 22 KB, 343x390, uuuuh.jpg [View same] [iqdb] [saucenao] [google]
3459233

>>3459223
>who doesn’t dream of being able to download information into your brain
Is the writer of this article retarded or am I missing something here?

>> No.3459236

>>3459233
You're retarded for not dreaming of downloading information to your brain, obviously.

>> No.3459515

The brain is not a computer, so a computer will never emulate a brain.

But a computer can perform better than a brain on many tasks, which doesn't make it a mind.

Seriously, stop spreading this confusion based on a metaphor. If you want to emulate the human mind, you have to re-create the whole thing, body and brain, and make it develop the same way humans do, which would be the dumbest thing to do, considering there is a much easier way of doing it naturally.

Which is not to say that trying to improve artificial emulators of intelligence won't lead to technological progress. It just won't lead to a mind.

>> No.3459562
File: 140 KB, 500x359, Watson_Jeopardy_Feb16news.jpg [View same] [iqdb] [saucenao] [google]
3459562

>keeping us from creating strong AI?

I beg to differ.

>> No.3459578

>>3459562
I didn't know Wikipedia qualified as strong AI.

>> No.3459591

>>3459578
watch the video and commentary, faggot.
Watson interprets the question based on all possible variables, then finds an answer based on that.

>> No.3459600

>>3459515

Bullshit. You don't have to recreate the mind biologically for a good AI. Human legs are not beams and motors, yet we can emulate legs with just those. Sure, the mind is complex: it takes in vast amounts of signals, processes them, and holds loads of information. In time it can be emulated appropriately.

>> No.3459607

>>3459236
The only way information can be transferred to the brain by DNA is through the formation of instinctive mechanisms while the brain develops in the womb. All your abstract thoughts are the result of environmental influences, so unless you are proposing we also engineer a retrovirus to convert parts of the brain into stem cells and implant the DNA in order to change someone's instincts, you must be full retard.

>> No.3460743
File: 135 KB, 240x240, 1308710063973.png [View same] [iqdb] [saucenao] [google]
3460743

>>3459562

>Watson
>Strong AI

Pick one

>> No.3461122

We don't even have the ability to make anything smarter than a retarded cockroach.

>> No.3461618
File: 17 KB, 300x300, technical-question-answered.jpg [View same] [iqdb] [saucenao] [google]
3461618

>>3458590
I answered the major doubts of Anonymous and friends in that post. It seems lost in the noise of anons asking questions I had already answered.

1. We know a general AI algorithm already: Restricted Boltzmann Machines.

2. It can learn arbitrary input data formats if there is structure in said data.
(>>3458721 >That's an easy problem. The hard problem is structuring it so that it can learn (almost) arbitrary data, and so that it has wants and desires and goals. Coupling these two is a bitch.)

>>3457065
>100 quadrillion floating-point operations per second.
3. It takes heavy processing power; we still don't have economical ways to do it and prove the doubters wrong. That is the obstacle.

>>3458851
Creating AI is like having a super-bright daughter who grows exponentially wiser by the nanosecond: God made us so we could procreate better.
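For reference, a minimal sketch of what the Restricted Boltzmann Machine algorithm actually does, trained with one step of contrastive divergence (CD-1). Sizes, seed, and learning rate are arbitrary illustrative choices, not anyone's production system:

```python
import numpy as np

# Minimal RBM trained with one step of contrastive divergence (CD-1).
# Illustrative only: 6 visible units, 3 hidden units, one training pattern.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def cd1_update(v0):
    global W, b_v, b_h
    # positive phase: sample hidden units given the data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # negative phase: reconstruct visibles, then recompute hidden probabilities
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # contrastive divergence update: data statistics minus model statistics
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

data = np.array([1, 1, 1, 0, 0, 0], dtype=float)
for _ in range(200):
    cd1_update(data)

# After training, reconstructing the pattern should recover its structure.
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print(np.round(recon, 2))
```

Note this is the unsupervised building block; the "Deep Learning" part the post alludes to comes from stacking such layers, which this sketch doesn't show.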

>> No.3461651

>>3461618
This is brute force: you set up a general algorithm (artificial neural networks are Turing-complete) and let it run until you come up with a solution. The problem is that it doesn't give any actual insight into how the mind works. Thanks for knowing what a Boltzmann machine is, though, even if I'm not sure why the hell you were talking about images.

>> No.3461675

We don't know how the brain works.

That's pretty much it.

>> No.3461751

>>3461618

Perhaps you don't realize that, sometimes, we aren't looking for resolution.

>> No.3461754

God, so much fail here...
There are a few things we need to do.
1st: find out how biological neural networks work. We use backpropagation now, but that may not be what real brains use.
2nd: replicate that on a computer.
3rd: advance our computer technology so we can simulate big, complex virtual worlds populated with beings that use our virtual neural nets to think.

If this doesn't work, then there's something else about the mind and I'll become a dualist or something. If I misspelled anything, it's a typo; I'm on my mobile...

>> No.3461768

>>3461754
Backpropagation in ANNs shares its name with a biological process (backpropagating action potentials), though biologically that is of course not the whole story.
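Since the thread keeps invoking backpropagation, here is a minimal sketch of the ANN version: a tiny 2-4-1 sigmoid network learning XOR by full-batch gradient descent. Sizes, seed, learning rate, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# Minimal backpropagation demo: a 2-4-1 sigmoid network learning XOR.
# All hyperparameters here are illustrative, not tuned.

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
lr = 0.5

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # typically converges near [0, 1, 1, 0]
```

The "propagate layer by layer" step is the whole trick: the output error is pushed backwards through the weights to assign blame to hidden units, which is exactly the part whose biological counterpart is debated.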

>> No.3461985

>>3461618
^right I answered OP
>>3461651
^wrong. Artificial neural networks are algorithms, not computing machines; Turing-general computation is a property of machines.
No brute force. You are ignorant. The Restricted Boltzmann Machine ALGORITHM can take any structured, useful data in arbitrary format (hence the images I talked about, but it could be sounds, words, 4chan) and actually learn what every part means and how it relates to the rest, through the Deep Learning ALGORITHM.
I am repeating myself: >>3457065

>>3461754
^wrong. Sage doesn't know that we don't need all the biological mechanisms. We have Restricted Boltzmann Machines that learn it all fast. Research could build on that and make faster ALGORITHM architectures for learning and processing data.

>> No.3462028

>>3461985
>algorithms, not computing machines
A "computing machine" is an algorithm. That is why we can write emulators in software.

Admittedly I'm referring to boring old feed-forward networks as Turing-complete, but I doubt Boltzmann machines aren't.

Anyway, what does your method tell us about the structure of the mind and brain?

>I am repeating myself:
Yes. I meant that I don't know why you mentioned images there. Is the idea that you hook it up to a camera and wait for it to be sapient?