
/sci/ - Science & Math



File: 28 KB, 600x600, Chinese_Room.gif
No.2725709

Anyone else think the Chinese Room Argument is horrible?

>> No.2725721

>>2725709

no, it makes perfect sense

are you stupid?

>> No.2725723

The system as a whole is aware, adding a human only confuses the argument.

>> No.2725727

Every argument against AI is stupid, actually.

>> No.2725742

>>2725723
Yesterday my philosophy professor (required credit) was explaining the Chinese Room Argument. I pointed out a number of things wrong with it; the fact that the human has little to do with the "consciousness" of the system was one of them. It's like saying the human mind isn't conscious because the physical laws that dictate what happens in our brain don't themselves understand anything.

The argument has so many things wrong with it that I don't know how it has gotten this far. Why wasn't this thing laughed at from the start?

>> No.2725744

>>2725723

>The system as a whole is aware

fullretard.jpg

>> No.2725746

The biggest problem is it seems to count humans as something other than biological robots.

It's the entire "do we know anything, or are we merely following programming" philosophical question. What's stupid is the argument seems to be saying robots will never truly equal humanity, which isn't true in the least.

>> No.2725754

>>2725744
It can converse in Chinese.

>> No.2725755

>>2725742

you mentioned nothing wrong with it so far in that post.

you offered a false analogy similar to me saying once I put on a hat, the hat becomes part of the system and understands Chinese.

Sorry, nothing in our experience correlates to your stupid analogy. The system as a whole isn't understanding anything, it's producing an output.

Producing an output isn't evidence for understanding. Hitting a gong produces an output, the gong doesn't do this by understanding shit, it just does it.

>> No.2725762

>>2725746
>What's stupid is the argument seems to be saying robots will never truly equal humanity, which isn't true in the least.

and yet no one can refute it, and it makes perfect sense.

itt butthurt nerds.

>> No.2725767

>>2725755
>The system as a whole isn't understanding anything, it's producing an output.
Just like your brain.

Output is everything.

>> No.2725771

What do you mean by horrible?

Illogical or unpleasant?

>> No.2725774
File: 36 KB, 400x400, furby.jpg

>Furbies are now sentient beings because they laugh when you tickle them.

>> No.2725776

>>2725771
Unpleasantly illogical.

>> No.2725777

Are we biological robots? I'm pretty sapient right here, and I'm not a philosophical zombie; either I'm a robot only under some semantically twisted definition of "robot", or I'm not a robot.

>> No.2725782

>>2725762
>and yet no one can refute it, and it makes perfect sense.
fullretard.jpg
The reasoning is ridiculously close to that used by free will advocates.

Thing is, the model doesn't explain why the Chinese guy is different from the AI, yet it concludes that the AI can't do what he's doing. There are so many problems with the axioms of that model that Searle doesn't address - at best, all it can prove is that we don't have a good definition of "understanding", and more to the point, that we can't prove we truly understand anything instead of simply reacting in deterministic patterns, a complicated chemical reaction.

>> No.2725783

If you think it's horrible disprove it

checkmate, guys who had CS in high school

>> No.2725786

>>2725762
A.I. is possible because it isn't physically impossible. We can look at ourselves and see that we have intelligence (some of us). The same way you could tell flight was possible by looking at birds.

>> No.2725790

>>2725777
You are defining sapience as not a product of physical interactions. You are defining yourself to be right. EVERYTHING is the product of physical interactions, thus you are a robot.

>> No.2725791

Why would someone want to get locked in a room with a Chinese person?
All they ever do is try to take credit for all the scientific advances of humanity.

>> No.2725794

the problem with the argument is that it refutes simple "question and answer" responses.

giving an A.I. a cue, and looking at the response is not going to give an indication of whether or not it is actually sentient.

no matter how complex and real the responses, it could very well be the result of extremely elaborate responses.

the true measure of sentience, really, is open for debate. self-awareness, abstract thinking, and many other concepts are much better judgements of sentience, but how do you measure them without looking at "stimulus and response" scenarios?

>> No.2725799

>>2725794
If it gets elaborate enough you can't tell it isn't sapient, why isn't it sapient? If it is functionally sapient it is sapient.

>> No.2725800

Do you not see that the room could just be a database of (nearly) all possible discussions? Do you honestly believe the Turing test is a test for consciousness? Materialists are retarded.

>> No.2725802

>>2725790
So you went for "semantically twisted definition" then.

>not a product of physical interactions
Everything is a product of physical interactions, you are redefining "robot" to make yourself right.

You're talking to me like I'm a creationist or something, so also fuck you.

>> No.2725803

>>2725799
if it looks real it is huh?

i guess mannequins are humans too then.

>> No.2725810

>>2725794
>extremely elaborate responses.

just realized this is more accurately said as:

>extremely elaborate programs.

>> No.2725811

>>2725799
THIS

If an argument against strong AI doesn't make observable predictions then it amounts to little more than redefining the mind. It's like "AI cannot have a mind because I define the mind to be something that AI cannot have".

>> No.2725812

>>2725803
>if it looks real it is huh?
Yes, if you deny observation you deny the foundation of science.

>i guess mannequins are humans too then.
If you made one indistinguishable from a human in all ways, yes, your artificial person would be a real human.

>> No.2725815

>>2725803
>i guess mannequins are humans too then.
Are you actually dimwitted or are you trolling?

>> No.2725817

>>2725767

>Just like your brain.
>Output is everything.

No because there's a subjective experience connected to that brain-state that is aware of the state and also has a recursive self-awareness--the machine doesn't have this.


Output is how we judge the internal state of a being. When you hit someone we assume they feel pain based on their output. The external indicates the internal. But not always.

Robots will give off false positives, they will create the output response that we assume indicates an internal state but it isn't there.

I can make a toy robot that says "ouch" when you hit it, does this mean it has an internal state of feeling pain? Of course not.

If it automatically responds correctly in chinese, does it understand chinese? No. It has no awareness, it understands nothing.

Google isn't aware of shit. It just searches via algorithms.

If you had an algorithm for awareness, and self-awareness, maybe we would talk then...but no one has made that

>> No.2725829

>>2725802 I'm the guy you originally replied to but not the last guy.

When someone says "robot", they're generally thinking of a computer program inside a mechanical body. When I said "we're all robots", I meant we're all biological programs in carbon meatsuits - the differences are superficial, just a matter of complexity and composition.

In the context of this thread, my comment was intended to address the Chinese Room argument, which implies the chinese guy has free will, while the robot doesn't.

>> No.2725830

>>2725799
Those are the same reasons I assume other people are sapient I suppose. But there are distinct physical differences between a biological brain and a computer.

A computer works digitally in 1s and 0s: 1s are always 1s, 0s are always 0s, one calculation at a time. Neurons have synapses of varying strength and their pulses are irregular.

>> No.2725842

>A computer works digitally in 1s and 0s: 1s are always 1s, 0s are always 0s, one calculation at a time. Neurons have synapses of varying strength and their pulses are irregular.

Why does this matter?

>> No.2725843

No, confusing the brain with the Chinese Room is horribly wrong.

One of the fundamental points made was the distinction between syntax and semantics.

Syntax is the ordering and manipulation of signs, which computers can do. Semantics is the actual intuitive and conscious understanding of meaning. Only humans seem to do that.

Proof of a computer inherently understanding meaning might be possible with advanced learning machines that could learn a word like "bank", for example, and understand it in a sentence that uses it in three different senses, but a fixed program such as Watson works without meaning.

>> No.2725845

if consciousness is continuous, which it is, then it can't be digital

so it might be the case that digital consciousness is impossible even for God to make.

>> No.2725852

>>2725811
fullretard.jpg
Humans have a mind (me at least)
Submit further evidence if you'd like to claim otherwise
Why not start with a mind detector
And prove why it cannot have false positives

>> No.2725854

>>2725812
no, it is not so simple. just because something appears so, doesnt make it so.

you need to look closer. an illusion that makes people think it is real does not make the illusion real.

remember, the truth is independent of belief.

the computer might very well use responses that indicate a level of intelligence and thinking equal to a human, but it could very well be the result of predictive formulas.

remember watson?

he gives clear, complete sentences, but is in no way a full AI.

he appears sentient until examined closely.

im not saying AI is impossible, im simply pointing out the concept of the argument.

intelligent responses are not an adequate measure for sentience.

something like, say, a computer solving abstract problems it was never programmed to solve, by coming up with novel ways to do things, is a much better indicator.

>> No.2725864

To me, the Chinese box shows nothing. That particular system wouldn't have consciousness. It wouldn't understand the words. That in no way implies though that there could not exist a system which has additional capacities to respond with the right words and understand them in a broader context. It's like the difference between someone who understands triangles and squares enough to recognize them but doesn't realize that two triangles can make a square and so on.

>> No.2725869

>>2725829
I guess it boils down to the definition of robot then. I got mad because he said "defining yourself to be right", yet we don't have a strict scientific definition. There might be no difference between a brain cell and a processor designed to act like a brain cell, with the same complexity of possible states as determined by the quanta of energy exchanged between it and neighboring artificial brain cells.

>> No.2725874

>>2725842
because it shows a level of complexity far greater than "true/false", which is how computers work.

the brain uses very intricate, varying signals to determine responses, a computer does not.

it might very well be impossible to build a sentient computer that uses the binary system, we dont know yet.

>> No.2725879

>>2725817
1) The same argument can be made against your mind.
2) You speak of this internal state as if it is magic instead of physical interactions.

>> No.2725881

>>2725869

whats the algorithm for feeling pain?

hahah thought so, impossible

never gonna be sentient AI...

>> No.2725885

>>2725881
>thought so
>MAXIMUM TROLL

>> No.2725889

This thread is all kinds of retarded. A couple anons posted reasonable refutations to the original argument, and a bunch of dumbasses that don't understand the argument are talking about irrelevant matters of chemistry and encoding.

Consciousness is the sum total of a combination of learned and innate responses to environmental stimuli. Searle's argument is stupid because it claims understanding is somehow acausal, because the AI's causal system is obviously incapable of truly understanding. He further demonstrates his stupidity by placing the unnecessary english-speaking human in the system.

The word sapience itself is probably faulty in its assumptions; it's only useful as a label, setting an arbitrary dividing line between the animate and the inanimate. There is no fundamental difference between a sapient being and a rock, other than that the sapient being's interactions with the environment are more complex.

/thread

>> No.2725895

>>2725881
if (painReceptors.sensePain()) {
    feel(Pain);
}

>> No.2725897
File: 36 KB, 583x467, dr-walter-bishop.jpg

I don't think it's going to matter. WHEN we get to the point where an AI says it's "aware" then we should treat it as if it was. Why would you be any more skeptical of a superintelligent AI than you would be of a human?

In short, philosophical arguments are a waste of time. AI will eventually become self-aware, and we need to treat them that way.

>> No.2725898

>>2725879

>You speak of this internal state as if it is magic instead of physical interactions.

the structure is important, im fine with consciousness being a product of neural interactions + biological brain matter--because that's what it is.

but it can't be the product of a digital program anymore than testosterone can be the product of digital testes

>> No.2725900

the argument is useless until the hard problem is solved.

>> No.2725904

>>2725889
>Consciousness is the sum total of a combination of learned and innate responses to environmental stimuli.

Nope.jpg
Stopped reading there, horrible and uneducated definition of consciousness.

Everyone else can carry on.

>> No.2725910

>>2725898
>but it can't be the product of a digital program anymore than testosterone can be the product of digital testes

that's brilliant, so many people are missing the point that consciousness depends on a specific structure which depends on the most complex object in the universe (the brain)

not only does it require complexity but it requires actual neural nets and synapses and brain chemistry...just like how Testosterone requires actual chemistry to produce it

just because something is physical doesn't mean it's reducible to programming, god you guys are dumb as shit

>> No.2725915

>>2725889
Expanding on that, I wouldn't be surprised if Searle is religious. Even if he isn't, his argument is going to be a favorite among believers. Why? Because the English-speaking human, representing inanimate AI, lacks understanding, which can only come from other life (which, ironically, is the algorithm feeding the English guy Chinese cards). Following it to its natural conclusion, the argument is an argument for intelligent design; the inanimate can never give rise to the animate, and understanding can only come from understanding.

>> No.2725919

>Consciousness is the sum total of a combination of learned and innate responses to environmental stimuli.
[citation needed]

>> No.2725920

>>2725910
>just because something is physical doesn't mean it's reducible to programming

the only limit to emulation is processing power and the laws of thermodynamics. there is nothing to suggest that the brain is fundamentally too difficult to emulate in a computer.

>> No.2725923

>>2725904
You mean self-awareness?
That's included in
>combination of learned and innate responses to environmental
because what is self-awareness but the knowledge of the difference between self and environment?

Self-awareness itself is a reaction.

>> No.2725924

>>2725910

Then I ask this question: at what point would a human no longer be a human? Or at what point does your consciousness become "not consciousness"? We'll get to the point where our brains become more and more synthetic. When will we lose ourselves?

>> No.2725928

>>2725915
> I wouldn't be surprised if Searle is religious. Even if he isn't, his argument is going to be a favorite among believers.

confirmed for being an obvious troll.

derp. im going to try and swing my argument in an obviously retarded direction just to invoke religion. derpdederp.

>> No.2725938
File: 17 KB, 460x288, mfw (14).jpg

>>2725928
>derp the herp, butthurt /sci/fag ignores the actual argument

>> No.2725940

>>2725897
> AI says it's "aware" then we should treat it as if it was

i can program a computer to say "hello".

should you treat it as real?

do you really think believing something, just because it looks on the surface to be real, is a good thing?

fullretard.jpg

>> No.2725944

>>2725940

My point went over your head entirely. Your program isn't an AI.

>> No.2725946

>>2725944
and neither is one that just says it is aware.

>> No.2725949
File: 62 KB, 300x300, trisomy21b.jpg

>>2725946

Then you and i disagree.

>> No.2725953

>>2725920
>the only limit to emulation is processing power and the laws of thermodynamics. there is nothing to suggest that the brain is fundamentally too difficult to emulate in a computer.

no you are also limited by structure

digital structure doesn't equal biological structure

I can't eat digital food bro. No matter how well you program it on your iMac.

fucken retard

>> No.2725956

>>2725924
try it on yourself
it's the only sure way to find out that we know
empiricism or gtfo /sci/

>> No.2725962
File: 14 KB, 300x300, thom+yorke_855_19017413_0_0_7006291_300.jpg

>>2725956

so you have no idea.

>> No.2725965

>>2725898
What is your definition of consciousness?

>> No.2725973

>>2725953
Stop trolling
please
stop it

>> No.2725975

>>2725949
example:

i build a computer that responds to sentences with a sentence of its own, from a database of sentences i put in it.

its a shit program though, and you quickly show me that argument x shows that the computer is not actually intelligent.

i write sentences for it to use when shown argument x.

now you say argument y shows it is not intelligent.

i make another. and another. i continue making responses for it to use until you cannot find an argument that makes it give an answer that shows that it is not intelligent.

now, no matter what you ask, you will get an intelligent response, as if it was actually aware.

you say "are you alive", it says "yes, and i am scared of dying" adn all kinds of other shit.

question: is this computer now actually sentient?

NO. it appears to be, but it is not.
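
(For illustration only: a minimal sketch, in Python, of the kind of canned-response program described above. The table, the respond() helper and the replies are invented for this example; the point is just that matching inputs to stored replies involves no understanding of either side of the conversation.)

# A toy canned-response bot: it maps recognized inputs to prewritten
# replies and knows nothing about what either sentence means.
CANNED_REPLIES = {
    "are you alive": "yes, and i am scared of dying",
    "do you understand me": "of course i understand you",
}

def respond(sentence: str) -> str:
    # Normalize the input and look it up; fall back to a stock dodge.
    key = sentence.strip().lower().rstrip("?")
    return CANNED_REPLIES.get(key, "that's an interesting question")

print(respond("Are you alive?"))  # -> yes, and i am scared of dying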

>> No.2725979

>>2725854
>he appears sentient until examined closely.

Then we need to devise very good tests. Perhaps the search for AI is also a way to robustly define what makes sentience special.

>the brain uses very intricate, varying signals to determine responses.

So do modern processors. Computer logic is a fascinating course.

>>2725910
I have no idea what you're trying to say, except that an inadequate model is inadequate. What purpose would a digital model of testosterone have, other than to predict chemical interactions? How is this inadequate for its purpose? What a terrible strawman.
>not only does it require complexity but it requires actual neural nets and synapses and brain chemistry
you're basically saying it's too hard for you, so it's too hard for everyone.

>>2725898

>but it can't be the product of a digital program anymore than testosterone can be the product of digital testes

oh, and what if the 'digital testes' are connected to a chemical synthesizer? Hell, what if it's just the control system for a chemical plant. What the fuck are you even trying to say here?

>> No.2725982

Chinese Room thought experiment is not horrible, it gets people thinking about what consciousness is.

>> No.2725985

>>2725973
> massive cognitive dissonance
that man is entirely correct
think for yourself

>> No.2725989

>>2725973

so you just realized how digital AI commits faulty reductionism?


digital phenomena aren't real phenomena:(
they are only simulations.

next?

>> No.2725991

>>2725975

Humans are absolutely no different. But the "programming" has been done by evolution.

>> No.2725993

>>2725975
When you come up with an algorithm that can determine responses to sentences that well, tell the AI community. I think they'll be excited.

>> No.2725996

>>2725975
If an illusion is consistent then it may as well be real.

>> No.2725999

How can we prove that other humans are really conscious, and not just p-zombies?

Well, we have a prejudice. We consider other humans to be conscious agents because we just do by nature. A human being needs to go out of their way to prove themselves to be incompetent of being a conscious agent.

>> No.2726000

>>2725989
That may be true at this point in time, but that doesn't mean "strong AI" is impossible, which is exactly what the argument states.

>> No.2726005

>>2725953
Can you give a single example of something that is not possible to simulate in a computer?

I am actually unsure if computer simulations of the brain would experience consciousness, but if the brain follows the laws of known physics, there should be no reason why we wouldn't be able to make a perfect simulation given a powerful enough computer.

>> No.2726007

>>2725953
>I can't eat digital food bro. No matter how well you program it on your iMac.
>poorly constructed output is poor.

no shit. What, you think I can't program a computer with a bastardized version of english, just because it uses 1s and 0s? You think a computer can't output a continuous waveform, just because its internal processes are discrete?

>> No.2726016

>>2725979
>Then we need to devise very good tests.

agreed. this is what i have been saying all along.

all the chinese room argument shows is that stimulus and response tests are NOT an indicator for true sentience.

a complex predictive program will be sufficient to fool people into thinking that it is actually aware, when it is not.

as for the modern processors, i know that computers are more complex than just "ones and zeros" and that it is a horrible strawman, but it makes the point that computers arent as complex as the brain.

besides, the true/false system is not used by the brain. computers use an expansion of true/false to generate programs, whereas the brain uses many connections that all have to work together in order to make a single neuron fire and continue the signal.

>> No.2726020

>>2725996
if you believe that, then you should go die in a fire.

>> No.2726022

>>2726016
>many connections that have to work together
But those can be represented in terms of true or false, which is something you are NOT getting.
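
(For illustration only: a minimal sketch, in Python, of how graded synaptic strengths and irregular firing times can be represented with ordinary binary floating-point values; the numbers and variable names are made up for this example.)

import struct

synaptic_strength = 0.7342          # a graded, non-binary quantity
spike_times_ms = [1.3, 4.75, 9.01]  # irregular firing times

# Even "analog" values like these are stored as patterns of 1s and 0s.
bits = format(struct.unpack("<Q", struct.pack("<d", synaptic_strength))[0], "064b")
print(bits)                          # the 64-bit binary encoding of 0.7342

# And graded signals can be combined on binary hardware just fine:
inputs = [0.20, 0.90, 0.05]
weights = [0.5, -1.2, 3.0]
activation = sum(x * w for x, w in zip(inputs, weights))
print(activation)                    # a continuous-valued weighted sum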

>> No.2726024

>>2725991
that is such a horrible, horrible strawman.

you obviously do not understand how evolution works.

>> No.2726025

>>2726005
>Can you give a single example of something that is not possible to simulate in a computer?

thats the thing, it's only a simulation, a trick, an appearance, it lacks all the structural properties of the object.

>but if the brain follows the laws of known physics, there should be no reason why we wouldn't be able to make a perfect simulation given a powerful enough computer.

because if consciousness depends on biological structures then your simulation is impotent.

simulated food isn't edible
simulated consciousness isn't aware

>> No.2726028

>>2726000
that's not the point the argument makes
the point of the argument is to counter the rampant idiocy in the AI community

>> No.2726035

>>2726022
only on a more complex scale. in essence, while computers work on a binary system (0,1), human brains operate on the order of THOUSANDS of connections per neuron, and on the variations of how many and which ones are firing.

comparing the two is silly.

>> No.2726038

>>2726024

Descent with modification. Next question.

>> No.2726039

>>2726025
Butting in here.

I see where you're coming from, but intentionality is not defined as a biological trait. Until you produce a concrete definition for "consciousness" there's no way to say it would be impossible for a robot to possess consciousness.

>> No.2726042

>>2726016
The underlying machinery might not matter, as long as the proper heuristics are followed.

>>2726025
funny, because I can consume simulated signals pretty well.
>>2726025
>>2726028
Nothing can counter the rampant idiocy of your argument.

>> No.2726045

>>2726025
A simulation is not a trick or appearance, if we give the simulation the same properties as the real thing.
and simulated food is edible, IN the simulation. a virtual mouse can eat virtual cheese.

>thats the thing, it's only a simulation, a trick, an appearance, it lacks all the structural properties of the object.
Give an example of a structural property a computer could not simulate

>> No.2726046

>>2726025

But simulated weather can be accurate enough to predict real weather. And there is no reason to assume that given enough data, it couldn't predict it unerringly.

Consciousness is not like food. What comes out of food that makes it food is that it fuels and builds our bodies. What comes out of consciousness that makes it consciousness is nothing solid and physical. It's more like calculation or computation. You can simulate a calculator and still get calculations from it. The same could be true of consciousness.

>> No.2726047

>>2726028
Actually, that's the exact point of the argument. You may want to review, Searle stated that in his conclusion almost verbatim.

>> No.2726051

>>2725777
you're a dumb robot whose outputs might be "i don't want do, iwant feel good, put hand on phallus"

>> No.2726057

>>2726025

moreover there's no way to access another consciousness and fiddle around with it

we can build things that we observe objectively, from a 3rd person perspective...we can analyze the angles and the parts of the object

but you can't do that with consciousness, you only have access to your own, it's completely 1st person, and you operate through it, you can't stand back and analyze it objectively, see it's parts and what makes it work

so what hope do you have of creating something you can't even observe or understand?

>> No.2726058

>>2726025
simulated food isn't edible because simulated food doesn't have chemical bonds in it that living things can use for energy.

there. that's why simulated food doesn't work. care to explain why simulated consciousness can't, now?

>> No.2726061

>>2726038
entirely wrong.

lrn2biology.

evolution is a complex concept, and comparing it to manual programming of computers is simply retarded.

only a troll would compare the two.

>> No.2726064

>>2726061

>It's complicated, so you cant compare the two.

Now who's trolling.

>> No.2726068

>>2726045
>simulated food is edible, IN the simulation. a virtual mouse can eat virtual cheese.

ok then a virtual person might believe a simulated AI is aware

but no rational real person would be so retarded

just like no rational real person would try to eat simulated food


the whole argument was that simulated objects = real
now you've admitted they are only real to other simulated beings lol

fucken dumb

>> No.2726070

physics does not explain consciousness
there is one theory above all
THIS IS A SIMULATION
THE WORLD IS NOT REAL, AND WHAT I SEE ARE ONLY ILLUSIONS IN MY CONSCIOUSNESS

>> No.2726071

>>2726061
lol

>> No.2726074

>>2726061
What's ridiculous is that you're saying the end result of evolution and the end result of programming can't be compared because the two methods are different.

>> No.2726077

>>2726058
>that's why simulated food doesn't work. care to explain why simulated consciousness can't, now?

because simulated consciousness doesn't have an actual nervous system and the neural chemistry required for self-awareness and sensation (sentience)

done. next?

>> No.2726078

>>2726077
Why do you need an actual nervous system and specific neural chemistry for consciousness?

>> No.2726081

http://www.youtube.com/watch?v=kDqPnI-DdI8&feature=related

/thread

>> No.2726083

>>2726064
but you cannot compare the two.

how the two are established come from very different methods.

now how about we get back on track.

>> No.2726085

>>2726077
>neural chemistry required for self-awareness and sensation

bullshit.
Your entire argument is a strawman. That simulated outputs are shitty because you don't know how to reproduce them adequately. Emphasis on you.

>> No.2726089

>>2726074
i said comparing the METHODS was retarded, not the result.

read what i was replying to.

>> No.2726090

>>2726089

The methods would be no different

see exhibit >>2726081

>> No.2726091

>>2726070
I think there are multiple consciousnesses in this simulation (unless you are here just to deceive me). We should investigate your brain to see if we can trick the simulation into exposing itself.

>> No.2726100

>>2726089
I followed the entire conversation. The other guy said there's no difference between humans and robots, for humans
>the "programming" has been done by evolution.

Unless you're way off base and criticizing his comparison and not the proposition that "humanity is effectively a robot", you're doing exactly what I stated, saying the differing methods make it impossible to compare the products.

>> No.2726102

>>2726071
yeah, its really funny.

some idiot thinks that the analogy i made is the SAME as evolution.

read:
>>2725975

if you think, in ANY way, that that is how evolution works, then you are beyond contempt.

simpletons half-understanding evolution and trying to apply it where it doesnt belong are the reason the public doesnt understand it.

>> No.2726105

>>2726100
read:

"if you think, in ANY way, that that is how evolution works, then you are beyond contempt.

simpletons half-understanding evolution and trying to apply it where it doesnt belong are the reason the public doesnt understand it."

this is all i was saying.

you want to go down "humans are robots too", im not going to stop you.

>> No.2726111

>>2726068
I was trying to prove that there is nothing real that cannot be simulated given a large enough / fast enough computer. Is there a reason why we ourselves couldn't be in a simulation and be unaware of it?

I wasn't trying to prove "simulated objects = real"; what does that even mean? Simulated objects, by definition, are not real objects.

>> No.2726113

>>2725881
I was just speculating, I said "might". omg....

>> No.2726114

computers arent simulations of anything. They are just as real as the brain, and their output is just as real as output from a human.

>> No.2726119

THE TURING TEST WOULDN'T WORK

IF YOU HAD A BOOK WITH LITERALLY INFINITE KNOWLEDGE.

Well no shit, Searle.

>> No.2726120
File: 8 KB, 406x381, 1295640663375.png

>>2725746
> biological robots
You are one dumb motherfucker.

>> No.2726123

>>2726100
by the way, just so you are clear:

"Humans are absolutely no different. But the "programming" has been done by evolution."

this is the original argument made against what i said. you are going to defend this nonsense?

how humans developed sentience, and how we program computers, are two vastly, vastly different things.

>> No.2726128

>>2726114
>ignores every post in thread
>makes baseless assertion without any evidence

cool story bro.

>> No.2726129

>>2726120
No, you are because you didn't understand what that person was trying to convey. Don't be so trigger happy with the insults.

>> No.2726136

>>2726078
>Why do you need an actual nervous system and specific neural chemistry for consciousness?


because neural maps and structures that are conditioned by experience are unique to each individual, kind of like the immune system.

They all form differently and have unique structures, you can't just generalize them into the rubric of "consciousness" and translate them into code. You can't program an immune system, there isn't a general one to code, and your simulated immune system won't actually produce real anti-bodies.

Neural nets have a feature of degeneracy which means that the same output can be reached in a whole range of different ways. And each way is again unique to the individual, the mapping isn't a building block you can emulate, it only works in that person with those experiences for reasons we can't explain, etc...

>> No.2726139

Life began with chemical reactions which triggered further chemical reactions, which then triggered more. Occasionally more complex molecules would become involved and the new molecules would be more efficient, or capable of absorbing other molecules, or able to self replicate more times. Over billions of years these reactions grew and continued to grow, until they combined with other ongoing reactions, and formed cells. The same thing happened with cells- darwinian evolution- until some of them joined together by chance and became more effective than a single cell, and therefore able to replicate more efficiently. The process once again continued with the number of cells building up exponentially, and the potential configurations and methods of creating new reactions accelerating also- note that there is no intent to replicate, but that the most common organisms are those that replicate most often, and others are either stopped by the common life or eradicated by natural forces.

>> No.2726140

>>2726139
At some point in time two different organisms swapped DNA somehow and formed a radically different being better at self replication, and once again the form of life changed. Reactions continued and separate organs for processing information and chemicals came into existence, along with better sensors such as eyes, to allow beings to avoid other organisms, or attack them, or simply regard the environment. Organisms capable of keeping a record of their general area were better at navigating it that others, so less likely to die and more efficient users of energy, so they became common place. Organisms that could process the information quickly also had an advantage, with those who kept memory and how to use it in the same place being most successful of all.
Memory and ability to process information continued to expand, with the most efficient method of communication between cells, electrochemical transmission, being used to "talk" between brain cells and sensory systems. By this point the organisms that passed on the urge to reproduce when they created more organisms were already the norm, as they recreated themselves more often and were also more able to respond, collectively, to problems around them. The urge to reproduce, however, was still a result of a preset configuration of brain cells and chemical stimuli though, and nothing more.

>> No.2726143

>>2726139
why are you posting a short summary of abiogenesis?

>> No.2726144

>>2726136
>bullshitbullshitbullshit
>for reasons we can't explain

Ah, I see.

>> No.2726146

>>2726105
In the example you gave, you kept reprogramming a computer until it gave responses that simulated a human's responses.

The only difference between that and humanity is you were inputting data as time went on, rather than it naturally acquiring them over time through generations of natural selection - there was only one robot and one programmer, not thousands of competing programs and no programmer.

Reproduction aside, how is that different from an evolved and sentient animal? Each time you reprogrammed it, you were terminating an inferior "species" (if you will). Each time you added code to the program, you were imitating mutation. In a way, you and the beta tester were the environment, and the program was the lifeform. And the end result was a being that was able to react to its environment in a way that enabled its survival.

The point you were arriving at was the wrong one; that program is not conscious because it has no way to program itself, but it's not non-conscious merely because it "simulated" correct responses.

>> No.2726152

>>2726140
and why are you continuing?

also, there are a number of oversimplifications in there.

>> No.2726156

The Chinese Room argument is just a big argument from incredulity filled with red herrings.

"But no part of it understands Chinese lol"
Of course not, dipshit. Just like none of your individual neurons understand English, and no individual transistor in your computer is running itunes.

"But it's just a room with a guy and some books lol."
That's true. Although, to accurately "simulate" a human mind, it would have to be a room larger than the Earth, and it would take a billion years to answer a simple question. So saying it's "just" that is misleading. No realistically sized Chinese Room could ever actually converse in Chinese.

>> No.2726159

>>2726140
Organisms that were looked after by their parents normally survived longer than those that were left alone, leading to the first social elements of life. Organisms that stuck together, even though they weren't related, were also more capable of coping in difficult times and of looking after new organisms than those that didn't. The things we know now as "instincts" were passed down not because the animals with them fought for superiority- not any more than others, at least- but because others were less efficient and were therefore more likely to die before reproducing.

The social situations mentioned earlier led to competition between animals, to whom being in charge gave a greater chance of reproducing, so their minds evolved to be socially capable, and to store the structures of packs. Animals that had greater ability managed to stay in control and reproduce while the less capable did not and so intelligence became normal.

>> No.2726169

>>2726146
holy shit, i am not going to type up the 15 million different things wrong with that analogy.

the most glaringly obvious is that evolution has no end goal.

it doesnt modify itself purposefully.

i mean, wow...

>> No.2726170
File: 13 KB, 250x250, trolling.jpg

ITT:
Genetic Algorithms don't exist.
Adequate actuators don't exist.
Abstraction Layers don't exist.
Life is magic that we will never fully explain, because we can't completely explain it with today's technology.
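
(For illustration only, since genetic algorithms came up: a minimal sketch in Python that evolves a random string toward a target by selection and mutation. The target string, population size and mutation rate are arbitrary choices for this example, not anyone's actual system.)

import random

TARGET = "consciousness"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    # Randomly change each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Keep the fittest half, refill with mutated copies of the survivors.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
print(generation, population[0])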

>> No.2726172

>>2726144
>bullshitbullshitbullshit
>for reasons we can't explain
>I see

it's actually from leading neurobiology theories

check out Gerald Edelman, Ramachandran, etc

>> No.2726179

>>2726170
>Consciousness is magic that we will never fully explain, because we can't observe or analyze it empirically

fixed that for you bro
np :)

>> No.2726181

>>2726169
I love you VP
Don't listen to these faggots
They are all engineers and can't stand a straight man

>> No.2726187

>>2726169
Evolution does not work in the face of artificially induced selection. You heard it here first.

Shit, where'd all my antibiotics go?

>> No.2726190

>>2726152
I don't know if this is correct, I'm not a biologist. If you could point out the problems with it, I'd give you all of my internets


As the brains got more and more complex, they became capable of remembering more and also of being able to use their environment more effectively- flint is sharp and cuts through skin, to eat other animals you must cut through their skin, therefore flint could be used to do this. A stick could hurt more than a fist, so it must hit harder. Flint on a stick must hit harder than normal flint, etc. Note that STILL, no objectively measurable change between life and non life has happened, because still the animals are just masses of simple organisms, which are in themselves masses of chemical reactions using simple molecules.

>> No.2726197

>>2726169
How the hell is that relevant? Unless you're misinterpreting
>Each time you added code to the program, you were imitating mutation.
and
>In a way, you and the beta tester were the environment, and the program was the lifeform.
as being directly connected. I realized immediately after posting it that I worded it poorly.

I wasn't saying mutation is purposeful. I had two points;
1) your analogy is flawed, because the computer isn't truly adapting, and
2) the word "simulation" is frankly derogatory, it's reacting according to the demands of its environment.

I believe that program would lack self-awareness, and it would lack the ability to spontaneously mutate, but that's not what you were pointing out.

>> No.2726211

>>2726129
I know exactly what he was trying to convey. He just chose the most idiotic way of expressing it. Thus, he is a dumb motherfucker. And so are you.

>> No.2726213

My senior thesis in high school was on why the Chinese Room Argument is bullshit.

>> No.2726215

>>2726179
>because if we can't do it today, we never will.

ftfy some more.

>> No.2726222

>>2726187
Of course it works. The most resilient bacteria live on while the others perish, and bacteria that evolve to become more effective live more efficiently. The bacteria are reacting to artificially induced environmental pressures by evolving exactly as they would otherwise.

>>2726190
life carried on, and evolution did too. leaves held rain off, rain was bad, leaves could therefore be used to make shelters. Shelters made of stones were stronger than shelters made out of earth. Some groups used fire while others didn't, those who did not dying out due to illness slightly more often than the others. Clay ovens worked better than open fires and could be used inside better. Bigger fires worked better. Eventually, someone put a rock containing some iron into a fire and it melted into something hard. As a result, iron started to be smelted. Iron meant more tools could be made, better buildings could be built, and social groups could expand far beyond what they could do before. Someone somewhere else realised that if you kept boar in a pit and fed them then they would make more boar by reproducing, therefore creating more food. Someone in yet another place realised that if you put something on tree trunks and push it it's easier to move.

>> No.2726225

>>2726215

technology is irrelevant,

can't observe it in principle because it's 1st person subjective not a physical object

if you somehow transform it into an object or observe it, it isn't consciousness anymore, catch 22

>> No.2726228

anonymous responses have been using a CHATBOT within this thread....

INTERESTING......

>> No.2726236

>>2726225
So you're basically saying
>we can't know if another being or robot is conscious
Not exactly a new philosophical argument there, but whatever.

That doesn't mean a robot can't be self-aware and conscious.

>> No.2726237

>>2726225
>if you somehow transform it into an object or observe it, it isn't consciousness anymore, catch 22

construct the consciousness commutator, and I'll believe what you say.

>> No.2726240

>>2726197
i believe, after reading this response, you might have misunderstood what i WAS saying.

>1) your analogy is flawed, because the computer isn't truly adapting, and

this is actually the point i was making. the computer is not sentient. the process of direct manipulation of a program is as far removed from evolution as you can get.

the computer in my analogy would not be sentient, yet it would APPEAR to be on the surface. that is the one and only point i was trying to make in my post.

>2) the word "simulation" is frankly derogatory, it's reacting according to the demands of its environment.

i believe you are trying to say it uses the "stimulus and response" to produce answers. and this is my point. a sufficiently complex program can appear to be sentient, much in the same way watson does. however, that does not mean it IS sentient.

computers may very well be sentient one day, but as pointed out, testing their responses to questions and statements etc. is not actually a viable way of testing for it.

>I believe that program would lack self-awareness

this is what i was actually saying all along...

>and it would lack the ability to spontaneously mutate

this is a given.

>> No.2726242
File: 19 KB, 300x300, 1296130580535.jpg

VP is in so much anal pain.

>> No.2726244

>>2726222
we exponentially expanded, and created more and more refined and advanced technology. Population levels rocketed and money was invented to make bartering more efficient. Roads were built to make moving things easier. Eventually, Newtonian physics and electricity were discovered, the industrial revolution kick-started, and the present day was reached

1) at no point did the non life of chemical reactions become something materialistically different
2) at no point did the mind capable of storing information and using it to make better decisions in the future become more than just that

Consciousness and life are not objectively different from unconsciousness and non-life, ladies.

If anyone feels like pointing out any problems with this I'd be grateful.

>> No.2726246

>>2726240
Ok, then I was misinterpreting what you said. You spent so much effort explaining how it was a simulation, it sounded like you were saying "because it's a simulation, it's not conscious".

My bad.

>> No.2726247

>>2725843
Wrong! Wrong! Wrong!

The human brain is a system of syntax, not semantics! Just like the Chinese Room, no one part of the human brain understands and comprehends language. By using syntactical processes, the whole brain is able to understand language (or anything else).

>> No.2726251

>>2726247

And that's why the next generation of AI will be a network of computers that simulate the function of neurons. They're building one now.
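
(For illustration only: a minimal sketch, in Python, of one common way "simulating the function of neurons" is done in software, a leaky integrate-and-fire model. The constants and names are illustrative, not the parameters of any real project.)

# Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# accumulates input current, and emits a spike when it crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0, v_rest=0.0):
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        v += dt * (-(v - v_rest) / tau + current)  # leak + drive
        if v >= threshold:
            spikes.append(step)                    # record the spike time
            v = v_rest                             # reset after firing
    return spikes

print(simulate_lif([0.15] * 50))  # constant drive -> a regular spike train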

>> No.2726261

>>2726139
i would like to go through some of the points in your posts, but due to other people responding, balancing the two will be difficult.

there is a GREAT video explaining abiogenesis here:

http://www.youtube.com/watch?v=U6QYDdgP9eg&feature=channel_video_title

in fact, his entire series of "origin of x" provides a decent summation of events.

let me know if anything in particular gives you trouble though.

>> No.2726262

>>2726251

the CHATBOT response to your post was:

I am woman.

>> No.2726265

>>2726261
Thanks, watching now.

>> No.2726271

>>2726236

well how can you build it if you dunno what it is or how it works lol

it's not a simple thing you can build by accident

thus you can never build it

>> No.2726275

>>2726271
>don't understand it != will never understand it
Holy hell, you're embarrassingly stupid, and I'm not even the guy you're replying to.

>> No.2726331

If you could emulate the processes of the human brain in a computer with ample sensory input for it to process, it would be 100% conscious. Why? Whether the system is made from cells or silicon does not matter; if the reactions necessary for consciousness are there, then the system is conscious. Saying otherwise would be as ridiculous as saying the calculations derived from a slide rule are different from the calculations for the same equation on a calculator because they are made of different materials.

>> No.2726338

>>2726261
This almost makes me regret dropping Biology at college level. Metric fucktons of science have been witnessed today

>> No.2726348

>>2726275
> don't understand it, cling onto hope simulation is as good as the real deal
sure is rational

>> No.2726355

>>2726348
>implying attempts at simulation aren't instructive.
>implying scientific discovery isn't an iterative process.

go fuck yourself.

>> No.2726362

this is an amazing thread

>> No.2726366

>>2726338
biology is certainly fascinating.

never too late to learn though. just be careful, as there is a lot of simplification to help people understand.

>> No.2726380

>>2726366
mm, definitely in agreement there. Do you know of any good books for someone who doesn't mind not instantly understanding everything?

>> No.2726409
File: 16 KB, 300x300, 1278621650549.jpg

Guys...after reading all of your arguments, i can only say this for certain:

This has been an intelligent thread. On 4chan.

>> No.2726414

>>2726409
Sorry about that. We'll try harder next time.

>> No.2726419

>>2725709
Yep. But I still think AI is impossible.

>> No.2726441

>>2726380
thats a good question. textbooks should be your ideal go-to source for information, but they are insanely boring to read. while i can put up with it, i dont expect others to.

ironically, i dont know of any good popular-science books about biology, but i can point you in the right direction.

go here:
http://www.youtube.com/user/UCBerkeley

U.C. Berkeley was kind enough to upload entire series of lectures covering a variety of subjects.

i strongly urge you to watch them all. having a good base knowledge of many areas of science will give you a completely new outlook on life.

>> No.2726445

>>2726419
Why? What makes it so impossible?

>> No.2726459

>>2726445
Intelligence is a meaningless judgement call. We're not talking about any "measure" of intelligence, just whether or not computers will at any point possess a certain intelligence-ness. It's a squishy, pointless distinction.

>> No.2726461

>>2726441
Many thanks, I'll give 'em a watch now. You must be one of those mysterious not shitty tripfags I've heard so much about

>> No.2726469

>>2726261
Holy fuck that was so cool. I am a chem major atm though I still haven't begun taking chem classes yet.

Will I learn about awesome shit like this or do I have to go into Bio?

>> No.2726471

>>2726459
Then you have altered the meaning of intelligence to make it impossible. That's like saying that a person on an airplane is not truly flying because he is relying on machine power rather than actual wings.

>> No.2726496

>>2726471
I haven't altered it, there wasn't some definition just lying around to apply. There never has existed a criterion for intelligence for things that aren't human-like.

>> No.2726521

Today is a good day. Thanks for this thread /sci/ducks.

>> No.2726568

>>2726469
i am not entirely sure how your course structure works, as it varies from university to university.

stuff like evolution and abiogenesis should be covered in first year bio courses.

when i did my medsci bachelors, it was compulsory for everyone at the university i attended to take both chemistry and biology in the first year, and biochemistry and microbiology in the second, if you were doing ANY science degree offered by the university.

we also had a total of 12 units of electives we could take, and they could be whatever we wanted them to be, as long as they were of a sufficiently high level and we met the pre-reqs.

imo, a chem degree isnt complete without some form of bio in it. check your course structure and tailor it to your liking.

>> No.2726624

Could you imagine being a dog? Well, I can, sort of. I'd be walking on all fours and I'd be very much driven by instinct, and I'd be colorblind. I wouldn't think very much, but I'd be perceiving and acting the entire time, because that's what a dog does. A dog has a consciousness.

What about a machine that is programmed to behave like a human? All its "brain" consists of is the code for behavior. This robot isn't actually perceiving anything. It doesn't see, it records and analyzes, because that's all it can do - record and analyze. This robot has no consciousness. You can't imagine being this robot, you might as well imagine being a calculator. That is the point of the argument.

This doesn't mean that a conscious AI is impossible to code, but that it would require its own programming, it can't just spontaneously arise from behavior-based code.

>> No.2726660

>>2726624
Actually I know exactly what it would be like to be a machine that is programmed to behave like a human. My brain is also just my code for my behavior. The stuff it's made of is irrelevant.

>> No.2726690

Intelligent, humanlike and even better AI can be created. The only thing that would limit the intelligence of computers is thermodynamics and the total amount of energy in the universe.

Individual atoms can be simulated. Collisions between atoms can be simulated, along with their movement. Chemical bonds can be simulated and data can be stored about them.

We have in our hands all the things needed to create a perfect copy of a human brain cell, a whole brain, or a whole human, one atom at a time. We don't need to program intelligence; we can program the universe and let the intelligence form naturally.

Once we have the brain simulated we just hook it up to real information input and collect its responses. Now we can have intelligent conversations and do everything we can do with regular intelligence.

Now it's only a matter of philosophy whether it really is intelligent and what true intelligence is. But to science this is regular intelligence, and there is no reason to assume it's not humanlike intelligence; perhaps it could be even more.
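
(For illustration only: a minimal sketch, in Python, of the kind of physics-level simulation the post gestures at, two point masses joined by a spring and advanced with the velocity Verlet integrator. The masses, stiffness and step size are arbitrary.)

# Two point masses on a line, joined by a spring, integrated with velocity Verlet.
def spring_force(x1, x2, k=1.0, rest=1.0):
    stretch = (x2 - x1) - rest
    return k * stretch, -k * stretch          # force on particle 1, particle 2

def step(x, v, m=1.0, dt=0.01):
    f = spring_force(*x)
    # half-kick, drift, recompute forces, half-kick
    v = [vi + 0.5 * dt * fi / m for vi, fi in zip(v, f)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    f = spring_force(*x)
    v = [vi + 0.5 * dt * fi / m for vi, fi in zip(v, f)]
    return x, v

x, v = [0.0, 1.5], [0.0, 0.0]                 # start slightly stretched
for _ in range(1000):
    x, v = step(x, v)
print(x)                                       # the particles oscillate about the rest length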

>> No.2726707

>>2726624
There is software for automated model generation. A program utilizing such basic software to tease out the workings of the world and social interactions would essentially be conscious

>> No.2726720

>>2726660
Except that it isn't just the code for your behavior, that's the point. Consciousness is more than that. That's why you can't imagine being a robot because a robot is just an automaton. In this regard it is no different from a highly sophisticated rock. There is no "mind" to jump into, so-to-speak. Not from the aggregate of simple if-then statements that tell it to scream when it "feels" pain stimulus or cry when it "sees" human death. Human brains don't work that way.

>> No.2726723

>>2725709

The same argument can be used to prove that all other humans save for oneself are philosophical zombies, so yes it is horrible.

The truth is qualia will probably never be directly observable. Thus our only objective criterion is an object's reactions.

>> No.2726744

>>2726723
But as a practical matter we generally assume that other humans are as conscious as ourselves, that they do have other minds. Artificial "brains" are different, because we know exactly what we're putting into them. Does the experience of being "someone" just arise naturally if you code reactions? All machines developed in such a way are philosophical zombies.

>> No.2726753

>>2726723
>Thus our only objective criterion is an object's reactions.

ya but you don't want to sell yourself short and just assume everything that has the semblance of intelligence is intelligent

guys in this thread are saying the room as a whole is sentient because the answers are coming out of it lol

that's like us sending a radio signal to some planet and getting a response and then concluding the entire planet is conscious.

seriously you guys gotta do better, no one here has made one valid point against Chinese Room Arg.

not 1.

>> No.2726755

>>2726720
Using the same logic, you can say people are highly sophisticated rocks. We are wired to feel pain in response to certain stimuli, and pleasure in others.

I'm not saying the animatronic pirates at Disneyland are committing willful acts of violence or understand the meaning of their prerecorded threats, but a computer of sufficient sophistication could be called intelligent.

>> No.2726756
File: 75 KB, 298x297, LOL_Face.jpg

>>2726753
>that's like us sending a radio signal to some planet and getting a response and then concluding the entire planet is conscious.


lmao

>> No.2726770

>>2726753
That's because very few people understand the Chinese Room Argument. It's not arguing over whether or not the room is intelligent, it's arguing over whether or not the room understands Chinese, which it does (at least to the level that anything can truly be 'understood'). And many good arguments have been made, you just choose to ignore them.

>> No.2726773

>>2726755
No, you CAN'T. That's the point. You, sitting at your computer, are more than a highly sophisticated rock; you are someone conscious and perceiving. A rock doesn't do that. Neither does a machine. When you feel despair, you actually FEEL it. A machine just acts like it does.

>> No.2726792

ITT:
>a bunch of fags who just looked up the word sapient on dictionary.com and the chinese room on wikipedia, and are now arguing about it. and you only saw the word abiogenesis in that video.

>> No.2726794

>>2726773
Actually yes I can say that people are just highly sophisticated rocks. We're essentially made of the same stuff. The only difference is in the arrangement of the atoms, which is clearly where consciousness comes from. If this process can be emulated at all, regardless of the medium, the system would be intelligent, whether the system is biological or not.

Are you John Searle by any chance? Your arguments are very familiar.

>> No.2726809

>>2726744

Provide objective proof that you experience qualia. Provide objective proof that computers don't. Or trees or rocks for that matter.

Whether we know what we're putting into it or not makes no difference.

>>2726753

I don't have to make an argument against it; I can distill the question down into something much simpler.

I'm also not claiming that any particular thing is or isn't conscious. You're claiming that some things are and others aren't, so the burden of proof lies on you.

>> No.2726837

>>2726773
>it's arguing over whether or not the room understands Chinese
>which it does
>which it does


the room understands chinese, just like how the hat that covers my head understands chinese

fullretard.jpg

>> No.2726848

>>2726794
No, I'm not John Searle. But I understand what he means. I'm not saying that machine consciousness is impossible; indeed, once we learn more about the brain we may be able to emulate such a consciousness. Behavioral code in and of itself is not a sufficient emulation, because human and animal brains consist of more than that. If consciousness arises spontaneously from behavioral programming, that implies you could start with a simple pain-sensing robot and just keep adding features (holding conversations, recognizing humor, needing sleep) and somewhere along the way it would have a mind. The pain-sensing robot doesn't have any mind; it doesn't actually feel any pain. It just detects what we tell it is a painful stimulus and then spits out an appropriate response.
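
To be concrete, the robot I'm talking about is nothing more than something like this (a made-up Python sketch; the stimulus and response names are placeholders I picked, not anyone's actual design):

# Minimal stimulus-response automaton: a lookup from detected stimuli
# to canned behaviors. Nothing here feels anything; it only maps inputs
# to outputs that were chosen in advance.
RESPONSES = {
    "pain": "scream",
    "human_death": "cry",
    "joke": "laugh",
}

def react(stimulus: str) -> str:
    # Return the pre-programmed behavior; do nothing if unrecognized.
    return RESPONSES.get(stimulus, "do nothing")

print(react("pain"))         # scream
print(react("human_death"))  # cry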

>> No.2726855

>>2726809

as an objective observer of the thread, I'd say it's about QED.
we are robots; it's settled, and has always been settled.
i'd say, by definition of the word (as i just read it, nice word btw) we do experience qualia

>> No.2726857
File: 367 KB, 500x455, 1297192543284.png [View same] [iqdb] [saucenao] [google]
2726857

>>2725746
>>2725746
>>2725746
>>2725746
>biological robot
This is the greatest online name ever.

>> No.2726869

>>2726809

you're clearly lost and confused

no one believes rocks and trees are conscious, your point is vague and irrelevant

and because no one believes simple objects are conscious, they don't believe a toy or a Furby is conscious either: they know what they put into it, they know its structure, and no awareness algorithm or capacity for sensation was ever put into it

it can't be put into machine form because we don't understand what consciousness is or how experiences are structured

output is irrelevant, what matters is the nature of the internal state... we can't recreate those states, we don't know how, we can't even analyze them empirically

any output can fool a sufficiently stupid person, even into thinking a room understands a language

but that isn't the issue, strong AI is about qualia, internal states... you can't program subjectivity, we don't even know how it works

>> No.2726870

>>2726837
If your hat is vital to the system as a whole knowing Chinese, then yes, it understands Chinese as long as it is part of the system.

>> No.2726873

>>2726837
Which part of the human brain understands language? Which neuron knows how to tie your shoes? Which brain cell remembers your sixteenth birthday? The point I'm making is that the Chinese Room as a system 'understands' Chinese, even though no one part of it knows what it's doing. In this regard, it is similar to your brain. No one part of your brain comprehends anything, but as a whole, it is able to 'understand' the world around it.

I'm assuming that when you called me a retard you were simply projecting, and I forgive you.

>> No.2726891

>>2726837


Just throw away the book and internalize the room: memorize the rules and run them in your head. You still don't know Chinese.

Anyone who has written a paper in the social sciences knows that all you have to do is write a few pages on shit you don't understand but that sounds good. And bang, perfect grades all day every day.

>> No.2726918

>>2726873
So what you're saying is, we already have AI! Google KNOWS how to translate from English to Russian, so it has a mind.

>> No.2726934

>>2726848
Then you are simply arguing that we do not yet have the technology to develop a hard AI, which I'm sure everyone here will agree with. However, I do believe that consciousness is an emergent property. There is no gene for consciousness in our biology; it is simply the function of many interacting systems. So yes, you can just keep adding features to a robot and it will eventually be conscious. Hell, this is kind of what nature has done already. Life started out as unthinking cells, and through slight modifications over the years 'intelligence' emerged through the gradual addition of new features. A better example is development in the womb: you start out as a single cell, and as you develop your central nervous system becomes more complex until you gain 'consciousness'.

>> No.2726936

Who's winning?

>> No.2726941

>>2726869

>no one believes rocks and trees are conscious

You clearly know nothing about religion. Animism believes exactly that, and it's a view still held by hundreds of millions of people.

The rest of your post more or less hinges on this inaccuracy, so I won't bother replying to it directly.

So I ask again: can you prove that some things are conscious and others are not?

>> No.2726944

>I'm assuming that when you called me a retard you were simply projecting, and I forgive you.

This made me laugh so hard! +10000 internets!

>> No.2726959

>>2726918
You are truly an idiot. I'm not saying that Google, or the Chinese Room, or whatever has human-level intelligence. No one in this thread aside from you has said that. Stop being retarded.

>> No.2726960

I bet 50 years from now, we'll look back and laugh at our earliest attempts to classify intelligence as organic or synthetic.

>> No.2726962

what's the evidence that AI is even possible?
bots? lol

>> No.2726970

>>2726962
The human brain is a pretty good example.

>> No.2726973

>>2726936

I just opened this thread. I won't read it, but so far I've heard Searle refute pretty much every reply. And even if I don't think his Chinese room is that impressive (and I don't want to sound like a dick, but I see no reason why a computer scientist would know how the mind behaves), the really interesting stuff is how he attacks analytic behaviorism, identity theory AND functionalism.

>> No.2726978

Maybe consciousness is too complex for us to fully understand.

>> No.2726984

>>2726960
But it's a deductive argument.

durrr.

>> No.2726987

>>2726973
> analytic behaviorism, identity-theory AND functionalism.
I hate it when I get hit with a combo of words I don't understand.

>> No.2726989

>>2726973
We don't even need to know how the mind works; we just simulate a local universe containing the brain, feed it information, and collect the reactions. And we have created intelligence.

>> No.2726997

>>2726978
I thought the dualism thing was dead and buried.

>> No.2727001
File: 265 KB, 1638x800, Chinese room rebuttal.jpg [View same] [iqdb] [saucenao] [google]
2727001

Chinese Room Argument is for people who have no idea about neuroscience or anything relating to how the shit really works.

YES, YOUR INTELLIGENCE IS BASED IN SMALLER MECHANISMS THAT BY THEMSELVES ARE NOT INTELLIGENT. PRETTY CRAZY HUH.

>> No.2727017

>>2726970
oh yeah, because computers work exactly like the brain does

>> No.2727026

>>2726941
>You clearly know nothing about religion. Animism believes exactly that, and it's a view still held by hundreds of millions of people.

I meant no one who matters or is intelligent believes that.

>> No.2727027

>>2726973
I've been writing some of these replies, and the only thing of Searle's I've read is the Chinese room. I'm working from the assumption that animism isn't correct, and that other humans have conscious thinking minds but that certain things (like rocks) do not. I can't prove that anything has or doesn't have qualia, because (at least with current technology) it's impossible. At that point we're arguing a slew of possibilities that ranges from solipsism to panpsychism or what have you. I'm considering this separate from those arguments. Now if you believe a calculator has a consciousness and understands basic arithmetic, then you and I are just going to have to disagree.

>> No.2727049

>>2726987

In short:

> analytic behaviorism
Mental states just are behavior, and mental-state talk can be translated into behavior talk. "John doesn't like it when it rains" would become something like "If the window is open and it starts to rain, John will close it" (see the little sketch after these definitions). Abandoned in favor of identity theory.

> identity-theory
Brain = mental states. Having mental state X is the same as having neuron-firing pattern X. Abandoned by everyone but hobby psychologists in favor of functionalism.

> functionalism
Mental states are a function of the brain. Just as driving is a function of a car's wheels and engine, consciousness is a function of the brain. A somewhat messy theory, mainly held by computer scientists in the form of computationalism, as far as I know.
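
Roughly, the behaviorist translation of the John example looks like this (a toy Python sketch of my own, made up just to show the shape of the move, not anything from the literature):

# Analytic behaviorism, caricatured: the mental-state claim
# "John doesn't like it when it rains" is cashed out entirely as a
# behavioral conditional; no inner state is referenced at all.
def john(window_open: bool, raining: bool) -> str:
    if window_open and raining:
        return "close the window"
    return "do nothing"

print(john(window_open=True, raining=True))   # close the window
print(john(window_open=True, raining=False))  # do nothing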


>>2727001

> YES, YOUR INTELLIGENCE IS BASED IN SMALLER MECHANISMS THAT BY THEMSELVES ARE NOT INTELLIGENT. PRETTY CRAZY HUH.

Are you a retard? That completely misses the point of the Chinese room, and it's Searle's own fucking position.

>> No.2727056

>>2726970
Irrelevant. An abacus works differently from a pocket calculator, but they can still achieve the same results.

>> No.2727066

>>2727049
>Actually explaining those for me.
You're awesome. Seriously, I get lost in the jargon sometimes, and some of the articles on those skip explanation and jump right into defenses of their credibility. Thanks for the clarity.

>> No.2727068

>>2727049
>That completetly misses the point of the Chineese room, and are Searle's fucking own position.

Not the person you're responding to, but I think his post adequately explains why the Chinese Room is ridiculous and Searle's position is wrong.

>> No.2727070

>>2727027

Then we won't. Even though I don't agree with his biological naturalism (I'm an E-type dualist/epiphenomenalist myself), his Chinese room argument is sound.

>> No.2727091

>>2725852
The human mind is a construct of our dualistic Western society. The idea of the mind came from ancient philosophy mixed with ideas of a soul. This mind-body picture is dualistic in the sense that it separates what it is to have a mind from what it is to have a body. In fact there is no mind; the mind is a social construct, and what we feel and have is part of our body. There's also this idea that we are the brain, and that's FAR from the truth. The brain is part of us. Just as we are not our hearts, we use our brains to function but we are not our brains.

>> No.2727101

>>2727070
>E-type dualist/Epiphenomenalist
I was under the impression dualism was not credible, and that the mind is essentially what the brain does. What is this theory of yours and how is it different from dualism?

>> No.2727103

>>2727056
>ai is possible because you use an abacus and get the same results as the calculator
nice logic

>> No.2727108

>>2727068

Copying Searle's own position on how consciousness emerges isn't exactly a refutation of the Chinese room. The dude is just begging the question.

>> No.2727127

>>2727108
You obviously don't know Searle's position on consciousness. Searle believes that a non-biological system (the room) can NEVER be conscious because no one part of it understands anything. Learn2Searle

>> No.2727143

I think the argument is ignorant and wrong.

inb4 u mad?

>> No.2727159

>>2727101

>E-type dualist/Epiphenomenalist
> I was under the impression dualism was not credible, and that the mind is essentially what the brain does. What is this theory of yours and how is it different from dualism?

I gtg, but in short: the mind is an emergent property caused by lower-level neurons working together, so it isn't a different substance from the brain. However, the mind has no causal power.

Imagine it a little like you and your own shadow. Your physical attributes will change the shadow, but the shadow can't do anything to change your attributes.

Searle believes the mind has causal powers.

>> No.2727166

>>2727127

And what was the response? Just stating how the brain works. l2read

>> No.2727176

>>2727166
The point is that the Room and the brain are not so different. No one part of the Room understands language, no one part of the brain understands language. Simple as that. Learn2think.

>> No.2727177

>>2727056
i don't think animals understand how they work, and neither do humans, yet we're conscious

>> No.2727192

>>2727159
Thanks. I'll think about this.

>> No.2727208
File: 939 KB, 1623x1258, 235 Go Ye Therefore.jpg [View same] [iqdb] [saucenao] [google]
2727208

Computers can't abstract therefore no AI is possible

>> No.2727243
File: 66 KB, 450x373, tropicthunder.jpg [View same] [iqdb] [saucenao] [google]
2727243

>>2727159
>the mind has no causal power.

>> No.2727315

>>2727159
If the mind has no causal power, it would not have evolved.

This...this is bewildering stupidity.

>> No.2727431

Put simply, the Chinese Room argument comes down to one question:

Does consciousness require a biological or organic structure to exist?

This can be answered in two ways:
- no known non-organic/non-biological structure is conscious, so consciousness of this sort cannot be reproduced outside of a brain (weak argument IMO)
- with current computational power, programming skill and finite time we have (so far) been able to simulate everything we have discovered, so given enough computing power and enough time/resources conscious AI will be possible

I swing towards the second answer; essentially, the brain is an extremely advanced computer.

>> No.2727499

>>2727431
THIS

>> No.2727619

>>2726045
Reality? Every simulation is grossly imperfect, unless you've figured out every factor, structure and force in the universe.

Read Baudrillard, dumbass.

>aspies who think their simulated heroic life on WOW is real.

>> No.2727683

>>2727619
read
>>2727431

>> No.2727702

yes, because it implies human beings have free will. humans are biological computers

>> No.2727843

Searle just assumes that he could execute the program without coming to understand Chinese, but I think that depends on how the program works. If the program worked like a human brain, associating the characters with concepts and then associating those concepts with other concepts, then by working through the program to produce his replies he would eventually come to understand the Chinese writing. If the program were just an encyclopedia with every possible Chinese sentence matched to a canned reply, he wouldn't, but that's not the only way it could be built.
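
To make the contrast concrete, something like this (a toy Python sketch; the characters, concepts and replies are invented purely for illustration):

# Version 1: pure lookup. Every possible input sentence is matched to a
# canned reply. This is the "encyclopedia" version of the room.
LOOKUP = {
    "你好吗": "我很好",   # roughly "How are you?" -> "I'm fine"
}

def reply_lookup(sentence: str) -> str:
    return LOOKUP.get(sentence, "听不懂")  # "I don't understand"

# Version 2: concept association. Characters map to concepts, and
# concepts link to other concepts; the reply is built from those links,
# closer to the associative structure described above.
CHAR_TO_CONCEPT = {"好": "well-being", "吗": "question"}
CONCEPT_LINKS = {("well-being", "question"): "report own well-being"}

def reply_conceptual(sentence: str) -> str:
    concepts = tuple(CHAR_TO_CONCEPT[c] for c in sentence if c in CHAR_TO_CONCEPT)
    intent = CONCEPT_LINKS.get(concepts, "unknown intent")
    return f"respond to: {intent}"

print(reply_lookup("你好吗"))      # 我很好
print(reply_conceptual("你好吗"))  # respond to: report own well-being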

>> No.2728038

>>2727431
The computer is a human creation, like a clock or a TV. Is your brain a TV?

The brain doesn't really work like a computer, because it's made of living matter, which works in far more complex ways than chips do.

It's more likely that a bunch of faggots who keep sucking Turing's dead dick believe that if they know maths, then reality is maths. Kinda like that.

>> No.2728093

>>2728038
It's a fallacy to suggest that machines cannot simulate human intelligence because they work differently. That's the whole purpose of a simulator.

>> No.2728129

>>2728038
The medium doesn't matter jack shit as long as you are able to reproduce the results. Intelligence is not an intrinsic property of anything, it is the end result of many interacting parts, kind of like flight. A helicopter, a jet airplane, and a duck can all fly, but they don't use the same parts to achieve this effect. Saying that you cannot build an intelligent machine is like saying a helicopter can never truly fly because it isn't a biological organism.

>> No.2728151

>>2728038
but math is reality you nondeterministic dumb fuck. The brain is made of matter, it follows certain laws, it is possible to replicate what happens in the brain.

>> No.2728181

ITT: there are no good arguments against the possibility of AI.

>> No.2728237

>>2728151
>>2728093
>>2728129

I'm not the guy you're replying to.

But if human behavior is emergent from our biology, then how could human behavior be emergent from a computer? If we create a computer that thinks, it won't think like us. It won't have emotions like us. It won't behave like us. However it behaves, that behavior will be emergent from its parts. And how can we know how to build all the right parts required for silicon and electricity to start writing symphonies? Simulations aren't life forms.

Secondly, neurons are not bits. Human memory isn't stored like computer memory. This is where transhumanists lose touch with reality. You have people who dedicate their lives to neurology and understanding how the brain works, and they still don't have all the answers. Then you have some comp-sci faggot come along and go "well we can just simulate the human mind on a supercomputer lol." It's fucking amateur and non-science.

>> No.2728261

>>2728237
No one is saying that we can simulate the human mind on a supercomputer. We're arguing that it's theoretically possible. It's important to talk about theory, because that leads to many great innovations. The pursuit of hard AI is absolutely scientific.

You're right, in that a computer won't think, have emotions, or behave "like us". The question is whether the computer can think, have emotions, and behave under its own capabilities. And FWIW, I think symphony-writing is more of a software problem than a hardware problem.

>> No.2728271

I think the only thing keeping certain people from believing in AI, other than "it's too complex", is that if "AI" exists they can't get past the fact that it's a human creation, and that's it.

>> No.2728273

>>2728237
Just because the resulting system wouldn't be entirely human doesn't mean the system wouldn't be intelligent. Nobody is saying you can program a human mind in C++; everybody knows that neurons and microchips work differently. We're not saying that the first AIs will be binary, we're saying that biology does not necessarily have a monopoly on intelligence.

>> No.2728313

>>2728261

Can thinking be reduced to 1 and 0? Really, that's the basis of the whole thing with the technology we have on hand. And even with quantum computing, can it be reduced to two basis states and a probability amplitude, or whatever it's called?

That's what it will take to create cognizant digital life. Can we use a limited set of primitives to facilitate an environment where behavior emerges?

And I use a symphony because creativity, along with the ability to do science, is a human benchmark. So that would be my goal for AI.

>> No.2728319

Wow. I have never seen this before but this exact thing has bothered me about A.I. since I was a kid.

Really what it's saying is that a truly sentient computer would have to be a set of hardware with no software on it that spontaneously does what it does without being programmed.

We could probably program robots to do some amazing things, but they won't be intelligent.

And actually what I guess I am saying is that we would have to give them free will.

>> No.2728348

>>2728313
Everything can be reduced to 1 and 0. The only question is whether we have enough 1s and 0s, and whether we can manipulate them fast enough. There's nothing mystical about thought. It's an emergent phenomenon based, fundamentally, on very simple systems. What we've found is that when you link a lot of simple systems together, you get highly complex systems. But those simple systems are still dealing in basic units of information, which we choose to represent using 1s and 0s.
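
For what it's worth, here's the trivial version of that claim (a throwaway Python illustration; the sentence is arbitrary):

# Any piece of information a computer handles is already stored as bits;
# here a sentence is shown as the 1s and 0s that actually represent it.
text = "I feel pain"
bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits[:32], "...")   # first 32 bits of the encoding
print(len(bits), "bits")  # 88 bits for this 11-character string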

>> No.2728383

http://www-formal.stanford.edu/jmc/chinese.html

>Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example.

>> No.2728397

this thread made me so much dumber reading it

all the retards saying NO BUT IT'S NOT REALLY REAL IN REAL LIFE and NO ONLY HUMANS CAN THINK BECAUSE BRAIN MAGIC

I have to assume it was just a bunch of really dedicated trolls...

>> No.2728402

>>2728348

>Everything can be reduced to 1 and 0.

Not anything qualitative.

>> No.2728408

>>2728397

It's almost as bad as all the science fiction nerds going HERP DERP COMPUTER MAGIC I'VE HAD THREE DECADES OF MOVIES SHOVING ROBOTS DOWN MY THROAT SO IT'S TOTALLY HAPPENING.

Trolls indeed.

>> No.2728411

>>2728402
Qualitative is just a way of saying not yet able to quantify.

>> No.2728415

>>2728402
>>Not anything qualitative.

I'm actually curious as to what you have in mind?

>> No.2728430

>>2728402
Yes they can. Qualities can be assigned numbers like anything else.

>> No.2728458

>>2728397
>Look, Ma! If I oversimplify something I can make it sound stupid!

It is you who is the troll! Either contribute or go suck a dick in the corner.

>> No.2728477
File: 123 KB, 580x658, 1296623329941.jpg [View same] [iqdb] [saucenao] [google]
2728477

>>2728383

>The Chinese Room Argument can be refuted in one sentence:

>Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example.

Holy shit who is this guy?

>> No.2728481

Why are we arguing about being able to tell if robots can have sentience? We can't even tell if our own mothers are sentient or just robots controlled by some outside presence.

>> No.2728482
File: 38 KB, 432x272, pain-scale.gif [View same] [iqdb] [saucenao] [google]
2728482

>>2728430

Assigning numbers is arbitrary. I guess this image really is pain quantified, lol.

>> No.2728490

I thought the Chinese Room argument was supposed to highlight that we cannot ever truly know one way or the other, which seems to be exactly what everyone here is arguing for.

>> No.2728515

>>2728482
Negative.
Pain can be quantified by the number of pain neurons firing and the frequency at which they fire. Only people's perceptions of pain are arbitrary.

>> No.2728526

>>2728482
No, that's pain quantified. Qualified is more like assigning tags to something in a database so that they can be sorted and relationships can be found.

>> No.2728538

>>2728477
the more relevant question is clearly:

HOLY SHIAT! SIZAUCE< NIZAO!!!

>> No.2728561
File: 60 KB, 363x380, I'm an oldfag summer, lol.jpg [View same] [iqdb] [saucenao] [google]
2728561

>>2728538
Fenny Argentinita

>> No.2728565

>>2728515

Fallacy. You've changed the definition of quantified. We were talking about describing something qualitative with numbers. Now you're talking about measuring things. It's not the same thing. You can measure an erection but 8" isn't the quantification of arousal.

>> No.2728583

>>2728565
Anything quantifiable is by definition measurable.
Arousal: specific neurons firing in a certain quantifiable pattern (frequency, number, cascade effects).
Or you could just keep moving the goalposts.

>> No.2728648

>>2728583

By definition, things that are qualitative are not quantitative. Good isn't a number; it's not a measurement of anything. Can you measure neurons firing during an experience that can be called good? Yes. But I can't put it any more plainly: the measurement isn't a description of a qualitative value.

>> No.2728664

>>2728583
if {quantifiable} then {measurable} does not imply if {measurable} then {quantifiable}

>> No.2728670

>>2728648
>By definition things that are qualitative are not quantitative.
Not necessarily true.

>> No.2728676

>>2728664
>nope.avi
quantifiability and measurability are biconditional.

>> No.2728678

>>2728648
>Good isn't a number
Sure it is. You take a list of objects, each with their own properties, and determine if those properties imply "Good". Then you set the value at the address for your "Good" flag to 1 if it is, 0 if it isn't.
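
Something like this, say (a throwaway Python sketch with made-up objects and a made-up rule for "good"):

# A "Good" flag derived from an object's other properties. The rule
# itself is ours to define; the point is only that the result is a bit.
objects = [
    {"name": "steak",   "edible": True, "poisonous": False},
    {"name": "hemlock", "edible": True, "poisonous": True},
]

for obj in objects:
    obj["good"] = 1 if obj["edible"] and not obj["poisonous"] else 0

print([(o["name"], o["good"]) for o in objects])  # [('steak', 1), ('hemlock', 0)]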

>> No.2728691

>>2728648
Your bad argument is measuring 4.7 MegaHovinds.

>> No.2728693
File: 247 KB, 650x550, Mordin indeed.jpg [View same] [iqdb] [saucenao] [google]
2728693

>>2728678
You can also create a 'spectrum' of 'good' by adding more bits, and then defining larger numbers to be more good, smaller numbers to be less good.
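
For example (a toy Python sketch; the scoring rule is arbitrary, the point is just the clamped 0-255 range):

def goodness(calories: int, toxicity: float) -> int:
    # With 8 bits instead of 1, "good" becomes a 0-255 score; larger
    # means better under whatever ranking rule you picked.
    score = calories / 10 - toxicity * 100
    return max(0, min(255, int(score)))

print(goodness(calories=700, toxicity=0.0))  # 70
print(goodness(calories=700, toxicity=0.9))  # 0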

>> No.2728730

>>2728678

Arbitrary assignment. Good is not a number. GOOD IS NOT A NUMBER VALUE.

Am I in /sci/? Where am I? I think I stumbled into /g/.

>> No.2728773

>>2728730
It is a property which can be assigned a number value. Objects are either good or they are not, or they are more or less good than each other according to a ranking system. In this situation, "good" becomes a number value because those numbers represent something other than just numbers. Computers can do more than numeric calculations, you know.

>> No.2728778

>>2728730
Good is arbitrary insofar as it depends on the organism defining it as good. A steak is good for a human being but bad for a cow. This means that it is -your- fault that it is arbitrary, not the quantity's.

>> No.2729076

It might be too late; this thread might already be dead. But I wanted to ask whether we can distinguish first-order from second-order consciousness or understanding. A computer program that converses in French likely "understands" French, but it does not understand that it understands French. I believe this has some significance for our problem.

>> No.2729083
File: 49 KB, 560x319, kb.jpg [View same] [iqdb] [saucenao] [google]
2729083

>> No.2729105

Here's my two cents: You're all fucking retarded for getting so worked up about this.

>> No.2729126

>See this thread
>Look up the Chinese Room Argument
>See nothing wrong with it
>Read this thread
>Feel like a retard

>> No.2729162

The only reason we consider other humans to have minds is by analogy and observation. We don't actually have a way to prove that anyone else, humans included, has a subjective mind as we do.

For this reason it makes little sense to me to discount AI as an impossibility because the existence of private mental states belonging to the AI cannot be outwardly proven.

For example, consider an isomorphic mapping from human biological components to artificial components, along with a simulation of their inputs and outputs. I don't see any basis for denying that such a human-in-metal would possess private mental states, accepting that real humans do.

But then you have a machine with private mental states and the fundamental "machine AI is impossible because it's machine" argument falls apart.

>> No.2729165

>>2729105

Sounds like you're the one getting worked up, son.

>> No.2731124

>>2725767

>Just like your brain. Output is everything.
>Confused output with understanding once again.

When will /sci/ people ever learn?

>> No.2731128

> mfw this thread is still alive?

Jesus fucking Christ, what's the big deal? All I've seen (even though I've just taken a quick look) is hurr durr system reply, and that's begging the fucking question.

I thought people were joking when they said that computationalism was like a fucking religion.

>> No.2731146

>>2731128
at least one has been steered away from fruitless idiocy
my job here is done