/lit/ - Literature


File: 882 KB, 752x571, Aborted Orc.png
No.1545236

(No /phi/ board, but this is close enough.)

This is directed toward those who think that an AI program capable of passing the Turing test must therefore be sentient.

Assume that you are locked in a room. On the other side of your door, a person slips you pieces of paper written in Chinese. Assume you don't understand Chinese, but you have a comprehensive rule book which tells you how to respond to each piece of paper with another piece of paper in order to simulate a conversation with the person on the other side of the door. You don't understand what you are saying, but because you have the rule book, you are capable of making a syntactical connection (hypothetically) equal to that of a person fluent in Chinese.

Now let us assume that we have an AI program capable of making conversation. Substitute that program for the man in the room with the rule book. It can carry on a conversation because it can follow a set of rules, but in the same way that a calculator isn't conscious even though it follows a set of rules, that AI program isn't conscious either. It is capable of manipulating symbols, of making a syntactical connection, but not of understanding the semantics of that connection.
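
A sketch of the rule book as a bare lookup table might make the point concrete (the rules and the Chinese strings here are invented for illustration; a convincing rule book would be astronomically larger, but no less syntactic):

```python
# The rule book as pure string-to-string lookup. Nothing below stores or
# consults meaning; it maps one symbol sequence to another, exactly like
# the man in the room. (Rules invented for illustration.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def respond(note: str) -> str:
    """Return the reply the rule book dictates, or a stock fallback."""
    # The lookup succeeds or fails on the shape of the string alone.
    return RULE_BOOK.get(note, "请再说一遍。")  # "Please say that again."

print(respond("你好吗？"))  # -> 我很好，谢谢。
```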

Thus, an AI program can pass the Turing test without being conscious.

QED

>> No.1545241

>an AI program capable of passing the Turing test must therefore be sentient.

Dude, nobody thinks that.
I don't think you understand the Turing test.

>> No.1545246

>>1545241
>has clearly never heard of the Turing test

>> No.1545247

Anybody who thinks that an AI program capable of passing the Turing test must therefore be sentient already knows the Chinese Room, bah.

>> No.1545249

>college sophomore learns of Searle's Chinese room

>> No.1545256

>>1545241
Are you a person with speshul needs by any chance?

>> No.1545257

>>1545249
>high school sophomore responds vacuously in kind

>> No.1545261
File: 27 KB, 314x269, Glasses Breaking.jpg

>>1545241

>> No.1545270

Turing never said that passing the TT is either a necessary or sufficient condition of intelligence.

John Searle doesn't understand how 'understanding' is used as a term.

>> No.1545277

>>1545270
Passing the Turing Test clearly isn't a necessary or sufficient condition for intelligence or awareness, as the OP's analogy shows.

>John Searle doesn't understand how 'understanding' is used as a term.

Please explain what you mean.

>> No.1545278

>>1545277
It's fine. D&E can't into mathematical logic.

>> No.1545291

>>1545278
Is that so?

>> No.1545298

>>1545257

I bet you thesaurus.com'd to find the word "vacuously." That, or it was on your latest vocab test in English 102 at your community college.

>> No.1545302

>>1545298
I'll take the compliment.

Next.

>> No.1545303

>>1545277
Searle's argument, in my view, hinges on how we use a term like 'understanding'; he concludes, to paraphrase, that an A.I. doesn't in fact understand what it processes, in the same way that a man who receives Chinese inputs and writes Chinese outputs from his translation manual doesn't actually 'understand' Chinese. But here's the thing: that's not in fact how we use a term like 'understand' in ordinary life. If you want to see whether someone understands something, you give them some sort of test that demonstrates to you whether they know it or not. This is mirrored in our daily life: academic tests, for example, are designed to let us demonstrate our understanding. You'll notice that in something like this, there's never the question "but do you actually understand what you're writing?" (which, in my opinion, is what Searle's argument amounts to), because it doesn't apply.

Similarly, if someone tells you they understand French (or that they TRULY understand French, even), but never speaks or writes a word of it, how could you possibly ever know whether they did understand it? You can't, so you give them some sort of test with whatever conditions you consider appropriate to determine, for whatever purposes you have, whether they have that understanding or not. As far as I'm concerned, this is how it stands with computers; Searle's thought experiment does not mirror how the term is used in real life, nor is the thought experiment viable on these grounds.

>> No.1545321

>>1545303
Whether or not the term "understanding" is readily applicable to real-world situations has no bearing on the thought-experiment, because a thought-experiment is inherently hypothetical. In this case, understanding only means 'knowing Chinese.' The man in the room does not know Chinese. Therefore, he does not understand. This subtends and encloses the definition of "understand" for the analogy at hand.

The implications of this, while not relevant now, have to do with how we'll approach ethical questions regarding AI programs on the basis of whether or not they're aware.

>> No.1545325
File: 24 KB, 300x366, Abraham-Lincoln-2.jpg

>Confuses mathematical comprehension and linguistic understanding

You are out of your league, D&E. Stay out. Remember Lincoln.

>> No.1545329

Arguing someone else's opinion with someone else's analogy gives me no reason to believe your 'self' is sentient.

>> No.1545333

>>1545303

This is too coherent for D&E. Did you copypasta this, or is it from a paper you had already written?

>> No.1545335

>>1545333
It's also wrong.

>> No.1545337

>>1545335

GTFO you nonsense-spouting motherfucker, unless you have a substantive refutation.

>> No.1545344

>>1545337
Butthurt much?

>> No.1545345

I think James is right. I'm tempted to build a formal proof of the OP argument after class and possibly a proof of Deep's argument to see if either is sound.

>> No.1545351

>>1545321
cool, I don't think you've attempted to engage in any way with what I've said, because you would "understand" (forgive the play on words here) that something like
>The man in the room does not know Chinese.
would only be knowable through
>Therefore, he does not understand.
which would itself only be knowable through some arbitrary predetermined test. Now, Searle's thought-experiment frees him of this crucial requirement, which is key to how the term is used in the first place, and in doing so he presents a situation that could have no parallel in reality. The thought-experiment is useless.

But whatever, this is probably the last thing I am going to say about the issue because I haven't fully followed the conclusions of this line of thought out yet myself.

>> No.1545355
File: 77 KB, 296x360, I-dont-think-you-appreciate-just-how-not-mad-I-am.jpg

>>1545344

>> No.1545365

>>1545303
>But here's the thing; that's not in fact how we use a term like 'understand' in ordinary life.
But that's exactly how we use the term "understand" in ordinary life. You think a calculator "understands" addition? Does a student with access to the teacher's notes during an exam "understand" the material?

Actually I don't care whether you think a calculator understands math or not, because semantics isn't the point. The point is the difference between functionality and conscious experience.

>> No.1545369

>>1545351
>doing so it means he presents a situation that could have no parallel in reality

Oh, so you haven't actually read Searle. Fascinating.

>> No.1545376

>But whatever, this is probably the last thing I am going to say about the issue because I haven't fully followed the conclusions of this line of thought out yet myself.
How bout you try actually reading some Searle while you're at it.

>> No.1545387

>>1545303

Jaron Lanier makes an argument along these lines in "You Are Not A Gadget" (which could also be titled "Cool Your Jets, You Fucking Nerds"), except going in almost the opposite direction: he says the Turing-Test-Means-Sentient crowd is wrong for the same reason, in that they assume something that passes an arbitrary test for a thing IS that thing, and then go on to draw erroneous conclusions from that, when all you can really say for sure is that it meets whatever definition of the thing you've chosen to test for.

>> No.1545390

>>1545365
>You think a calculator "understands" addition? Does a student with access to teacher's notes during an exam "understand" the material?
I haven't ever needed to test whether a calculator understands anything, so I've never needed the term for it. As far as the test is concerned, he understands it. I could always add or subtract conditions for what I count as understanding if I was worried the test didn't accurately reflect whether he understood it or not; I could have him searched before the test, and so on. This all arises from your inability to construe a term such as 'understanding' as completely flexible to whatever purposes we might need it for.

>>1545369
>>1545376
cool, but these are not arguments

>> No.1545392

>>1545365

> chinese room
> semantics isn't the point

>> No.1545393

>>1545387
Cheers, I'll be sure to check it out, I'm really interested in this line of thought.

>> No.1545396

>>1545390
Once again, the definition of understanding is not the point.

The point is the difference between functionality and conscious experience.

>> No.1545398

>>1545333
It's just something I've been thinking about. I'm taking a class in the philosophy of A.I., although I have to admit I'm not all that interested in it.

>> No.1545408

>>1545396
>the definition of understanding is not the point.
It may not be the point, but that is what it boils down to.

>difference between functionality and conscious experience
This difference arises, as I've tried to demonstrate, from a misunderstanding, so to speak, of understanding.

>> No.1545426

>>1545408
Alright, I can agree to that. But just out of curiosity about your understanding of "understanding", what about this scenario:

I write (in Chinese) and slip into the room this note:
"hey buddy, you can stop copying all this stuff now. I'm going out for a coffee, you want one?"

>> No.1545435

If a brain is capable of displaying intelligence, then there exists the possibility of a computer doing it one day. This computer may be very similar to or very different from a brain. But if a brain is a mechanical system (as long as you aren't some mystical mind-body dualist), then a machine may one day imitate it. More likely, the machine will display something completely other than intelligence that will make us reconsider will and consciousness.

>> No.1545446

>>1545426
I'm not quite sure I understand what you want me to explain in this example, can you be a bit more precise for me?

>> No.1545447

>>1545426
I love you anon. But sadly it's still incomplete. The hard-coded answer can be No.

>> No.1545452

>>1545426

"Anonymous, you seem to have forgotten that you are locked in a room and that room is locked inside a thought experiment. And besides, I don't drink coffee, I'm a computer."

>> No.1545466
File: 688 KB, 200x150, 1289929966273.gif

>On the other side of your door, a person slips you pieces of paper with incompleteness theorems on them

>> No.1545698

If you think the Chinese Room metaphor concerns the meaning of 'understanding' instead of the presence of consciousness in AI programs, you clearly haven't read the first post of this thread.

Moreover, you probably haven't read a Searle book/paper, in which case I recommend you do so, if only to familiarize yourself with all aspects of the metaphor.

>> No.1545704

>>1545698
another guy who couldn't be arsed to look at what I've said and how I've shown how it actually DOES boil down to understanding. Whatever, let the fucktards keep coming.

>> No.1545705

>>1545466
>ignorance of not only the problem but also of common sense

The computer will be programmed to reply that it can't be solved.

>> No.1545720

>>1545705
For any consistent formal system, there are sentences that are true but which the formal system cannot prove.
We humans can prove these sentences. Therefore, for any formal system, humans can do something that the system cannot do. Therefore, no formal system (e.g. an AI) is equivalent to the human mind.
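
Schematically (a paraphrase of this Lucas-style reading of Gödel, with F ranging over consistent formal systems that include arithmetic):

```latex
% Premise 1 (Gödel's first incompleteness theorem):
\forall F \,\bigl[\, \mathrm{Con}(F) \;\rightarrow\; \exists G_F \,\bigl( \mathrm{True}(G_F) \,\wedge\, F \nvdash G_F \bigr) \,\bigr]
% Premise 2 (the contested step): a human can see that G_F is true.
% Conclusion: no such F is equivalent to the human mind.
```

The standard objection lands on premise 2: seeing that G_F is true requires knowing that F is consistent, which Gödel's second theorem puts out of F's reach, and arguably out of ours.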

or so the argument goes

>> No.1545736

>>1545720
Great use of Google. Let me make it clear. You have understood NOTHING of the theory behind mathematical logic.

>> No.1545749

>>1545736
thats because d&e didnt pass higher maths :3

>> No.1545751
File: 17 KB, 470x227, Stupid.jpg

D&E, do you ever read the threads you post in?

>> No.1545755

>>1545736
that's got nothing to do with anything I've said

>> No.1545759

>>1545755
>respond to post about something you said
>that's got nothing to do with anything I've said

Just to make sure this is the original Deep and not a doppelganger trying to act stupid, has this always been Deep's tripcode?

>> No.1545760

>>1545755
.....

Have you actually ever seen a truth table layout of a Turing problem, boi?

>> No.1545763

>>1545759
>>1545760
These two things have nothing to do with what I've said

>> No.1545765

>>1545760
truth tables aren't d&e's speciality ^^

if you'd been here a bit longer you'd know that.

>> No.1545772

>>1545763
See, D&E.

Let me make this clear to you. You are solving a problem from a field that is technically beyond your education.

I mean technically.

It's not a reading comprehension problem. I had 780 out of 800 in my GRE for verbal section. That did not make me an expert in English lit. It just proved that comprehension is a generic skill.

So be a nice little faggot and run off to bed. And try not to cry to sleep.

>> No.1545776

>>1545763
.

>> No.1545777

>>1545772
That has nothing to do with what I've said

>> No.1545778

>>1545720
Congrats, you're an idiot. Well done on taking a statement about mathematical formalism and applying it in the dumbest and most obtuse way possible to humans.

>> No.1545779

>>1545772
For THE verbal section.

>> No.1545780
File: 136 KB, 428x510, 1293652156654.png

>>1545777
now post 'i am the best'

>> No.1545781

>>1545777
jackpot

>> No.1545783

>>1545777
That has nothing to do with what he said.

>> No.1545784

>>1545780
inb4 circular quotes and faggy statues

>> No.1545790

>>1545779
4 d verbil sexshun

>> No.1545791

>>1545778
>>1545781
>>1545783
>>1545784

Stop getting angry because I'm the smartest, most well-read person in this thread who no-one has successfully argued anything against

>> No.1545794
File: 17 KB, 265x290, 4b454bed_1a37_1cc7..jpg

>>1545791

>> No.1545795

Deep&Edgy kind of reminds me of myself when I was in HS. It's creepy.

>> No.1545798

>>1545791
This doesn't have to do with anything we said.

>> No.1545799

>>1545798

>> No.1545802

>>1545303
understanding as a "mental state" is okay. searle's problem is assuming that the chinese speaker isn't actually operating a set of semantic rules. a simpler system can indeed give out the same responses as a more complex system, if the latter is studied behaviorally. so the CRA doesn't actually show that the native chinese speaker isn't operating a system, albeit a more complex one

>> No.1545804

>>1545791
An invalid claim, since it presumes that one need necessarily argue with you in order to be as "smart" or "well-read" as you, when there may well be people as "smart" as you who agree with you. In any case, your post has nothing to do with this conversation.

>> No.1545808

hurr me i got syntax and semantics confused

>> No.1545811

>>1545794
>>1545795
>>1545798
>>1545799
>>1545804

Stop getting angry because I'm the smartest, most well-read person in this thread who no-one has successfully argued anything against

>> No.1545819

>>1545811
'

>> No.1545825

>>1545811
>cries back to sleep

>> No.1545826

>>1545811
I successfully handed you your ass when you were bullshitting about Schopenhauer. Other people have done similar every time I've been on.

>> No.1545831

>>1545802
>>1545819
>>1545825
>>1545826

Stop getting angry because I'm the smartest, most well-read person in this thread who no-one has successfully argued anything against

>> No.1545836

>>1545831
whoops, sorry onionring, you're cool, although I think his argument is that syntax by itself is neither sufficient for nor constitutive of semantics, if that makes a difference

>> No.1545844

>>1545836

So are you French?

>> No.1545850

>>1545844
No, he's just retarded.

>> No.1545857

>>1545426

>I write (in Chinese) and slip into the room this note:
>"hey buddy, you can stop copying all this stuff now. I'm going out for a coffee, you want one?"

Just answering this because it's simple, not out of my league, and was glossed over by everybody else as they focused on the deeper issues ITT.

If you wrote that (as the person who understands Chinese) and sent it through to me (who has been giving you responses based on the number of messages I received), one of two things would happen.

If I was told which squiggly line to look for, I would not recognize what you put, and would reply with some form of error message or not respond at all (depending on what parameters I was given at the start of the experiment).

If I was passing messages back based on the number of messages you gave me (i.e. I was told to send this message through first, this one second, and so on), I would send back something that had nothing to do with what you said, and therefore every message sent after that would be delayed and things would be ruined unless you skipped a question to get back on track.

>> No.1545861

The problem with the notion that a system can be studied behaviorally is that it affirms the consequent: If P then Q, Q; therefore P (if the program thinks this way, then it means this; the program means this, thus it must think that way).

Look, we can't prove whether or not a program can comprehend our input, because we have no access to whatever first-person ontological experiences such a program would have. Whereas the behaviorist looks only at the input-output sequence, the computationalist wants to know what first-person mediation occurs between input and output. Nobody here has touched upon biological naturalism, which lies at the root of Searle's philosophy and, I think, of his arguments. He is neither a materialist nor a dualist; he believes instead that mental states are caused by brain states in the same way that digestion is caused by the intestines and other organs. One's mind is neither ontologically reducible nor ontologically identical to the neurons in one's brain; rather, one's mind is causally reducible to those neurons, and should be studied and comprehended as such. This is a very general outline of the thesis, and anyone would do well to read his work in order to really understand him, but I think I've explained it at least marginally well.

>> No.1545864

>>1545861
That said, I think if we look at the Chinese Room as a comprehensive system rather than one man manipulating symbols, we come to inklings of an answer. Searle believes that the system can't understand the semantic exchange, only the syntactic one, because he views the system as identical to and coterminous with the man in the room. But if we see the system as not just one man, but the room and the rule-book as well, then we have a functional process that does, in fact, understand Chinese.

Think about it this way. No single neuron in your brain understands English. Your understanding of English emerges from the interactions of those neurons; it is not reducible to them. Your mind is the whole, not the parts. Similarly, Searle's metaphorical system understands Chinese even if its parts (the human, the book, the room) do not individually understand it. Think of each as a neuron, and of the process that emerges from them as a mind.

I realize I've essentially turned Searle's biological naturalism against him, but that's because I think his kernel of an idea is valid. Where he goes wrong is in applying that kernel to all systems. If a program is by all rights indistinguishable from a human mind (and rest assured, we have a long way to go to get there), then I think it should be treated as such. Can we prove it's both sentient and sapient? Well, no, but we can't 'prove' that about each other either.

>> No.1545869

>>1545826
I think I saw that thread. I would say well done, but it wasn't that big of an accomplishment given the size of the fish you fried, if you know what I mean.

>> No.1545873

>>1545864
The reason is that sapient and sentient are artificial categories, not natural ones. Sentience is an emergent phenomenon, reducible not to roots but to interactions. It has no 'per se' basis.

I would hold NOTHING against Searle, though, because these arguments were only shown to be correct through years of research in both cybernetics and neuroscience over the last few decades.

>> No.1545878

>>1545869
I know. I don't even think I know that much about Schopenhauer either, so maybe that says something.

>> No.1545887

>>1545873
>Sentience is an emergent phenomenon, reducible not to roots but to interactions.

Precisely my point. I do agree that sentience and sapience are muddy terms, though at times they're the only terms that will do. And regarding Searle, I hold respect for the man as a philosopher and recognize that many of his ideas came out thirty-something years ago.

>> No.1546148

The Turing Test is a bad test.

The Chinese Room effectively debunks it. 100%, straightforward, uncontroversial success.

This means nothing for strong AI because the Turing Test is a bad test.

>> No.1546153

>>1545270
It's not about intelligence; it's about whether they can think independently of human programming. Essentially, whether they can transcend their components.

They can't. Deal with it.

>> No.1546154

um isn't it more like this

the Turing test requires three participants: a human judge, a human, and a machine

they are in separate rooms, and the judge holds a conversation with each

if the judge cannot tell which is the machine and which is the human, then the machine has passed the Turing test
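
something like this, as a sketch (the judge/agent interfaces here are made up for illustration):

```python
import random

def imitation_game(judge, human, machine, rounds: int = 5) -> bool:
    """Bare-bones imitation game. `human` and `machine` are callables
    mapping a question string to an answer string; `judge` is an object
    with ask(i), receive(answers), and guess_machine() methods (invented
    interfaces). Returns True if the machine passes, i.e. the judge
    guesses wrong."""
    parties = {"A": human, "B": machine}
    if random.random() < 0.5:            # hide who is behind each label
        parties = {"A": machine, "B": human}
    for i in range(rounds):
        question = judge.ask(i)
        judge.receive({label: agent(question) for label, agent in parties.items()})
    return parties[judge.guess_machine()] is not machine
```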

>> No.1546160

>>1546154

The point: X
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Your head: X

>> No.1546164

>>1546160
http://plato.stanford.edu/entries/turing-test/#Tur195ImiGam

I guess it's way over Stanford's head too

>> No.1546168

>>1546164

Yes, you know what the imitation game is.

This doesn't matter.

The post you made is irrelevant.

Has no purpose.

Betrays not knowing how to go on.

Is nonsense.

>> No.1546175

>>1546168
how is it nonsense?

OP claims that the Chinese Room was somehow a type of Turing test, which it is not

and if it is not, it collapses his entire argument :\

>> No.1546179

>>1546175

The Chinese room....

Get this!

Is an argument against the Turing Test.

This means you're an idiot.

:(

Sorry!

>> No.1546191

>>1546179
so you are saying that the Chinese Room is an argument against the Turing test when they are completely different things? the Chinese Room requires one person and one machine, while the Turing test requires one judge, one human participant, and one machine

I don't see how a machine that passes or fails the Chinese Room would have anything to do with the Turing test

>> No.1546209

>>1546191

The Chinese Room is not about passing or failing. It is about showing that external criteria (for example, participating in a conversation competently) are not sufficient to demonstrate the occurrence of anything internal (for example, understanding).

The computer in the Turing Test... is a Chinese Room. It passes the Turing Test? Don't fucking matter, BECAUSE OF THE INSIGHT THAT THE CHINESE ROOM GIVES YOU.

Really, I find it scandalous that whoever taught you about Turing didn't teach you about the Chinese Room.

>> No.1546228

>>1546209

> whoever taught you
> wikipedia

>> No.1546235

I actually didn't know anything about either the Turing test or the Chinese room and instead I just wikipedia'd it and read the first few sentences umad?

anyways.

You are still missing one thing. I don't see why it would not be possible to teach a machine a language through experience in the same way that you teach a child language through experience. If this can be done, then when the machine interprets or speaks the language it will have experiential reference and therefore understanding/intelligibility

>> No.1546238

>>1546235

nope.png.tiff.exe.rar

Your test has nothing about looking for internal criteria. Only behavior.

You fail.

Even when you admit your failings.

>> No.1546240

>>1546228
>>1546235

give medal for being a wizard y/n

>> No.1546242

>>1546240

yes.

Yes.

YES.

YESYESYESYYESYEYSYEYSYEYSYEYS

>> No.1546252

>>1546238
what internal criteria are you looking for then if not experiential reference (as opposed to syntax input/output)? I hope you're not looking for a soul

>> No.1546256

>>1546252

Different guy, but the Turing Test doesn't test for internal criteria.

In theory, a very smart chatbot could pass the Turing Test just by having a very lengthy and complex algorithm of parsing and responses. Sentience this does not make. That's the point of the Chinese room as a thought experiment.
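
A toy version of such parsing-and-responses (the rules here are invented; a bot that actually passed would need enormously many more of them, but only more of the same kind):

```python
import re

# ELIZA-style responder: a handful of regex rules and canned templates.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def reply(utterance: str) -> str:
    """Return the first rule-matched response, else a stock deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."

print(reply("I feel trapped in a room."))  # -> Why do you feel trapped in a room?
```

Scaling the rule list up changes the conversational quality, not the kind of processing involved, which is the Chinese room's point.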

>> No.1546257

>>1546252
>wildly invent terminology
>the only alternative is a SOUL

I want you to think about how much alcohol I've imbibed, and how I'm still a much better philosopher than you.

The answer to whether something can think is not in observation of its behavior, but in observation of what its behavior is grounded in. That is, we need to look at mental processes, not behavior, in order to understand what humans are really thinking. Similarly, we need to look at the internal structure of an AI, rather than its behavior, to make the claim that it is intelligent.

So, we can see that an AI appears to be able to be taught. THIS IS NOT ENOUGH.

>> No.1546258

>>1546256
yes, I realize this, but what do you think of >>1546235 ?

>> No.1546271

>>1546257
then tell me what criteria you are looking for if it is not experiential reference linked to language/behavior

there is nothing that prevents the creation of a machine that mirrors the mental processes of a human brain

and if you are not referring to some extra-physical soul or what have you, then I implore you to tell us what makes a human mind a thinking mind, and not the result of mathematical processes in reference to experience

>> No.1546279

>>1546257
also, if the machine's behavior stems from experience, please explain why this would not be a mental process

>> No.1546286

Here's what it boils down to:

Either A) humans and other sentient organisms have something special that robots do not, or B) humans are in fact very advanced robots.

>> No.1546295

>>1546271
>there is nothing that prevents the creation of a machine that mirrors the mental processes of a human brain

Yes. Good boy! Yes, you're a good boy, aren't you.

>then tell me what criteria you are looking for if it is not experiential reference linked to language/behavior


You're still using your personal invented terminology. I'm going to pretend that "experiential reference" means "watching shit happen", and that "language/behavior" is some sort of activity that can be observed.

Understanding happens for reasons, not causes. If you watch some shit happen- say some utterance of "I understand"- you have no proof that there is a REASON for the utterance. To know that there is a reason you need to look deeper than the behavior- look at the mental process- and then you know if you have found JUSTIFICATION.

MENTAL PROCESSES... they involve neurons... this is not behavior, in the ordinary sense of the term.

My skin feels electric.

>> No.1546317

>>1546258

Let me boil your argument down:

>You are still missing one thing. I don't see why it would not be possible to teach a machine a language through experience in the same way that you teach a child language through experience. If this can be done, then when the machine interprets or speaks the language it will have experiential reference and therefore understanding/intelligibility

=

1) IF: a machine can be taught language like a child
2) THEN: it will gain experiential data
3) THEREFORE: it will become intelligent

Here's the thing though.

You need a machine of that intelligence to teach it language in the first place.

If we accept your claim that having learned language, and having absorbed all that experiential data MAKES the machine become intelligent . . . and now that we have a machine intelligent to teach it language . . .

. . . well now we're going in circles, aren't we?

>> No.1546318

>>1546295
I believe you are misinterpreting me, or not seeing all of what I am saying

let me define my 'personal terminology', although the terms are rather self-explanatory

when I say a machine has experiential reference I mean that the machine performs actions or communicates or processes information by referring to its learned experiences as a point of departure instead of merely calculating predefined data in predefined parameters

and naturally the next question to ask is whether the human mind and its processes can be reduced to mathematics -- if they can, I do not see why a machine could not do the same

>> No.1546320

>>1546317

*intelligent enough that we can teach it language . . .

>> No.1546321

Protip: /sci/ is the go-to /phi/ board, not /lit/.

>> No.1546324

>>1546317
You are correct, but not correct about my argument

IF: a machine can, through experience, learn
THEN: a machine can learn language through experience
THUS: a machine can engage in conversations or speech with words that the machine understands, because the machine can relate these words to its experience, and the way in which the machine would link language with experience is the same way that humans do, and how children learn language

>> No.1546327

>>1546318
>when I say a machine has experiential reference I mean that the machine performs actions or communicates or processes information by referring to its learned experiences as a point of departure instead of merely calculating predefined data in predefined parameters

Yes- but how do you know the machine is doing these things?

Observing it being taught is not enough.

You need to look at what it's made of, and how those parts move.

Maybe we agree.

Why am I ever sober? This is better. Tracking where the flickering line is... is difficult.

>> No.1546331

>>1546324
also, my point is that the machine is not just 'intelligent enough to learn language'; rather, it is intelligent enough to understand language through experience, not merely to produce a string of syntax

>> No.1546335

>>1546324

Right but...your end-goal is still...bah.

How do I put this?

You're saying if it can do that, it will be as intelligent as a human child. ...but being as intelligent as a human child -- that is, enough so to learn language -- IS THE PREREQUISITE OF YOUR FIRST STEP.

You're in a logical loop, buddeh. Your argument. No sense it makes. What trying you to express, eh?

>> No.1546346

>>1546335

Necessary logical input for this hypothetical scenario: a machine which can, through experience, learn
> IF: a machine can, through experience, learn
> THEN: a machine can learn language through experience
> THUS: a machine can engage in conversations or speech with words that the machine understands, because the machine can relate these words to its experience, and the way in which the machine would link language with experience is the same way that humans do, and how children learn language.
Logical output of this hypothetical scenario: ...a machine which can, through experience, learn.

>> No.1546351

>>1546327
maybe I'm not explaining this thoroughly; if so, try to think a few steps ahead of me

we are not just 'observing the machine being taught', I think it is entirely something phenomenal

Take for example-

Machine A: it cannot learn from experience, and is entirely dependent upon executions of mathematics and code

Machine B: it can learn from experience, and the things that it does or the things that it learns in the future will also be tied to its previous experience


you ask Machine A and B: "Hello, how are you doing today?"
Machine A answers "Hello, I am doing fine"

Machine A says it is doing fine not because of any condition of its wellbeing that it has observed today, but because its behavior is predefined so that it must respond that way to "how are you doing"

Machine B answers: "Hello, I am doing fine"

Machine B answers this not because it must, but because Machine B examines its experiences to see what it means to 'be doing fine': it compares its current day to its other days to see how today measures up. If Machine B were doing poorly on this day, it would answer differently, because it understands what 'poorly' means not by its syntax but by measuring 'poor' against its previous experiences.
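
In code, the contrast might look like this (a sketch; the well-being scale, the threshold, and the wording are all invented for illustration):

```python
from statistics import mean

def machine_a(question: str) -> str:
    """Machine A: the reply is wired to the prompt; nothing is consulted."""
    return "Hello, I am doing fine."

class MachineB:
    """Machine B: answers by measuring today against stored experience."""

    def __init__(self) -> None:
        self.past_days: list[float] = []  # hypothetical well-being scores

    def how_are_you(self, today: float) -> str:
        # "Fine" is judged relative to remembered days, not hard-coded.
        if not self.past_days or today >= mean(self.past_days):
            answer = "Hello, I am doing fine."
        else:
            answer = "Hello, I am doing poorly today."
        self.past_days.append(today)  # today joins future experience
        return answer
```

Both machines can emit the identical sentence; they differ only in what produced it internally, which is why the other anon keeps saying you have to look inside rather than at behavior.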

>> No.1546357

>>1546346

Logical input: God is a monkey.
> If: god is a monkey
> Then: god is a monkey.
> Thus: god is a monkey.
Logical output: God is a monkey.

See what I'm saying? That doesn't prove god is a monkey. It's just circular thinking.

>> No.1546371

>>1546346
my point: a machine that can learn through experience means that when the machine communicates, it links its language with its experience, and thus the machine understands language outside of syntax - the machine understands it through experience since the assumption is that the machine can learn through experience

IF: a machine can learn through experience
THEN: a machine can, through this experience, understand language in a way that transcends syntax

>> No.1546376

>>1546357
but that's not what my statements are

IF: god is a monkey
THEN: god can feel hunger
THUS: god can feel hunger because it is a monkey

>> No.1546380

>>1546351
>we are not just 'observing the machine being taught', I think it is entirely something phenomenal

Phenomenal... yes, your sense perceptions are like that. Just observing just is phenomenal.

>stuff about machine A and machine B just doing different things

Mathematics and code... one machine learns from that.

The other learns from experience.

Mathematics just is experience. If it is complex enough.

We agree. If you think that your Machine B... is a slightly more complex Machine A... then the only way to tell the difference is to look for the reasons for behavior, not just AT behavior. The reasons are internal, observable, but not observable with human eyes. They see at a certain level of detail.

My vision is blurry now. If you're right, maybe I'm wrong.

>> No.1546387

WTF?! God is a monkey?!
Damn. Shit. Why nobody ever tells me anything?!
Seriously. Someone should have told me. This is not funny.

>> No.1546389

>>1546380
We both agree on everything, with the exception that you say that there is something internal, but what is this internal thing you are speaking of?

if experience is just mathematics, which it very well may be, and the human brain is composed of biological and physical substances, which it is, then what internal thing is it that you mention if it is also not mathematics? if it is mathematics, then it is cause and effect, and if it is cause and effect, then it is the same as a machine which experiences

>> No.1546396

>>1546389

Yes.

The problem is- how do we know? That the machine is intelligent.

That is why the internal is important.

The internal is- something you cannot see with your eyes. That you need- at least- a microscope for seeing it.

Just look at that, and then you know whether a machine is intelligent.

We agree :3.

>> No.1546397

>>1546389
also, a big difference between machine and human mind is that since a human mind is biological, it can rewire itself and completely change its structure, create new connections, replace connections, etc etc etc, and I don't know much about neuroscience, but if I am correct, I believe that this re-wiring and re-creating of itself is what leads to being able to experience

but it is not impossible to create a machine which can do this

>> No.1546403

I don't like the Chinese Room.

The human working with all that Chinese does not understand what the fuck is going on, but he doesn't stop getting more papers and answering them, for he was told to, and that is an understanding of its own. The brain of this human might not be understanding, but the whole system (rule book, outside guy, papers, his ability to write down whatever it was supposed to say) does comprehend. I know it sounds weird, but the human brain in this case is just a single part of the entire system (which is functioning).

The Chinese Room doesn't prove the AI is not sentient; it only states that the programmed skill of searching a database in its "brain" for answers is not the same as understanding those answers (just like the human searching the rule book). But the point is that, in this case, the "robot" or whatever that AI is, is the entire system of the Chinese Room all together.

The incredible thing about the Chinese Room, as an argument, is that it escapes the individual; it brings intelligence to an outside level so as to prove that deep down (in this case, at the human level) there is no understanding. That's what puzzles us.

It's like I said "man can fly, look at that plane he built" and the Chinese Room was there to say: "the plane flies, but the man inside the plane cannot fly". Well, that's pretty much saying the motor of the plane can't fly, and indeed it's worthless without the rest of the plane. Do you see what I mean?

This is my interpretation and this is why I believe the Chinese Room is not very strong.

>> No.1546410

>>1546403 here
I read the thread, more or less, and I believe no one thought of it this way; otherwise I've misinterpreted you or missed something you guys said. Sorry if that's the case.

>> No.1546412

>>1546396
are you trolling me?

I am following you when you say it is internal, and that you can't see it with your eyes but it is observable in the laboratory; but what is it that you're looking for? you're going to have to give more information than just 'it is internal'

say that you do look for this 'internal' in the human mind in a lab

then you find it

then you make a machine which does the same thing as this 'internal'

>> No.1546415

>>1546403

Sense experience.

If I have the sense experience of flying..

No weight. Rushing. Free. Falling, never, fast, fast.

Understanding experience- seem to contain all of understanding at the same time.

It's harder than understanding "to fly".

>> No.1546418

>>1546376

> If a machine can learn
> Then a machine can learn language
> Thus if a machine can learn, it can learn language

But the whole reason you've been making this argument is to justify the hypothetical possibility of a machine which is intelligent enough to authentically learn and interpret language in the first place. Which is what you require to engage the thought experiment at all. Which you only have because the thought experiment proves it. Which head explode.

>> No.1546420

>>1546412
> you can't see it with your eyes, but it is observable in the laboratory

We agree.

We are done.

I love

you

>> No.1546432

>>1546418

1) a machine can learn through experience
2) a machine can learn language through experience, since it can learn through experience
3) a machine can communicate words which the machine understands, because it has learned language through experience, since it can learn through experience

Inquiry: X can A?

1) X can Y
2) X can Z, since it can Y
3) X can A, since X can Z, since X can Y
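
In propositional form the schema is a chained hypothetical syllogism, which is valid, not circular:

```latex
(Y \rightarrow Z),\; (Z \rightarrow A),\; Y \;\vdash\; A
\qquad \text{(no premise assumes $A$)}
```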

at this point you have to be trolling me; I don't see any way in which this can be interpreted as circular

>> No.1546436

>>1545236
hey op, you go from "a person" to "the man" in paragraphs 2 and 3. I, uh, wasn't expecting that jump, ya kind of got me there.

>> No.1546462

>>1546415
Not the point, bro...

I'm talking about how the Chinese Room is tricky.

In order to communicate, you need two things exchanging shit around. I'm talking to you. My brain makes shit up, it goes to my mouth, I say it, you hear it, you take it to your brain and you get it. Both of us understand what's being said, right? But does our mouth understand? Our ears? No, and we don't even think about questioning those things because they are part of us. We just say "I understand" as in brain/mouth/body/penis/everything.

In the Chinese Room, the knowledge goes through the human without him being aware of what's going on. The difference between him and the AI is that the AI is both the thing that writes down and the rule book, it's both brain and mouth, just as wings, engine and pilot.

I don't see how the sense of flying relates to it. In my plane thing, I was trying to illustrate how it depends on the level you're dealing with. The plane flies, but everything inside it doesn't fly. I understand, but my mouth doesn't. The "rule book/outside guy/guy writing it down" system understands, but the writing guy doesn't. The Chinese Room implies that the robot might make the association and pass on the information, and yet not understand what's going on. It's like checking all the bits of metal on a plane, labeling each one "doesn't fly", and assuming the plane won't fly because of it.

>> No.1546750

lol this kid

>> No.1546786

Let the record show that this thread went to shit when I left.

>> No.1546890

This thread went to shit around where people started trying too hard to troll me

>> No.1547243

>still not quite sober.png.tiff

>> No.1547260

>>1546890
D&E you were so very fucked in this thread!

>> No.1547272

Sure is BLINDSIGHT IN HERE.

it's a very good book....can't believe no one brought it up : (

>> No.1547361

OP's pic is distracting.

>> No.1549548

we really do need a /phi/ and an /oc/ board