
/sci/ - Science & Math



File: 91 KB, 926x501, Chinese room experiment.jpg
No.4376090

What's /sci/'s opinion on The Chinese Room thought experiment? I feel that it falls under the category of Science, because of the implications it can have for potentially sentient machines. I know it's just crazy bullshit, but I still feel like it could garner some good discussion of AI, its future, and how smart these machines really are.

For those who don't know what the Chinese Room experiment is:

http://en.wikipedia.org/wiki/Chinese_room

>Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.

(1/2)

>> No.4376091

>The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[6] Searle calls the first position "strong AI" (see below) and the latter "weak AI".[7]

>Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. As the computer had passed the Turing test this way, it is fair, says Searle, to deduce that he would be able to do so as well, simply by running the program manually.

>Searle asserts that there is no essential difference between the role the computer plays in the first case and the role he plays in the latter. Each is simply following a program, step-by-step, which simulates intelligent behavior. And yet, Searle points out, "I don't speak a word of Chinese."[8] Since he does not understand Chinese, Searle argues, we must infer that the computer does not understand Chinese either.

>Searle argues that without "understanding" (what philosophers call "intentionality"), we cannot describe what the machine is doing as "thinking". Because it does not think, it does not have a "mind" in anything like the normal sense of the word, according to Searle. Therefore, he concludes, "strong AI" is mistaken.

(2/2)

>> No.4376097

>>4376091

Oh, and Strong AI is defined as:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[16]

>> No.4376098

AI is probably possible. It's happened once before; it's just a matter of us figuring out how exactly that happened.

>> No.4376100

So let me get this straight. To refute the idea that computers could be programmed in a way that they think like humans, and therefore have "brains" like humans, this argument is presented to show that such a machine doesn't actually think in any way?

I got that right?

>> No.4376113

it takes a brain to be a human

that's why an artificial machine will never be ''the same''

>> No.4376116

>>4376113

>implying AI wouldn't be very similar to a brain

>> No.4376120

Is the human mind truly too complex to duplicate with machines?

>> No.4376121

>>4376113
>implying we want an AI to be human in the first place

>> No.4376123

why not just have a separate brain functioning on the basis of realizing the discarded information and translating it objectively whilst displaying it for both participants?

>> No.4376127

>>4376090
Goddamn I wrote a paper on this last semester... let me see

I don't quite agree with Searle's assertion that computer syntax is so fundamentally limited that it will never be sufficient for genuine human-like intelligence. At the rate technology is developing, it's only a matter of time.

What about programs that are capable of adapting to unique situations, like Deep Blue, or whatever that chess-playing program was called. It's not a perfect example, but I think the components are there.

>> No.4376129

Which do you think is more unlikely:

A machine ever passing the Turing test, or

A human passing an equivalent, opposite test for machines?

>> No.4376131

>>4376113
Well, what makes a brain? Could a martian be said to "think" even if it had a brain made of silicon and ammonia?

>> No.4376139

>>4376127

I don't think Searle's point is that it won't reach the level where it can duplicate human thought and action. That's the premise of the experiment, that a theoretical program can do such a thing as pass off as a human being writing in Chinese. However, he asserts that it does not actually "know" Chinese any more than he would if he were sitting in a theoretical room with a Chinese/English dictionary writing out Chinese and passing the same test.

It's not that the AI won't be able to pass off as human, it's that it won't actually fundamentally understand what it is saying or doing; it's just executing a programmed syntax or algorithm or whatever. It doesn't actually KNOW anything, it's just doing.

>> No.4376141

>>4376139
This is very important for original thought and the ability to produce new ideas. If one does not have fundamental understanding of concepts, but instead can just refer to tables without understanding, they are useless in dealing with data without context (which is how modern research and science works!)

>> No.4376161

>>4376139
You are aware this is just what humans do? People seem to give understanding some sort of higher pedestal but it is literally the same thing.

>> No.4376168

This kinda gets at a notion which we've (by we, I mean the scientific community attempting to reach AI) already realized we have to reach with AI in order for it to truly work: it has to learn. What Searle is talking about is something already possible, it's just parroting. But in order for true AI to work, it doesn't necessarily need to understand anything immediately; rather, it needs to be able to learn and adapt to new information that isn't entered via the programming language, but through programmed "senses"; i.e., it speaks to someone, and through that conversation learns new ideas and concepts, or it can 'watch and learn'. This is what will give it the fundamental capability to understand, because understanding itself is a learned process.

this notion of how AI works has existed since... Geez, at least the early nineties?

>> No.4376175

This argument would fall apart if it were used to prove something rigorous in complexity theory. Just because something can be done in more than one way doesn't mean that all ways of doing it are wrong.

Say the human way of understanding Chinese involves subconsciously turning Chinese words into concepts, memories, and patterns of thought. Suppose a human can consciously compute the patterns of thought that a human goes through to understand Chinese. Suppose I do that for a Chinese speaker's brain. I don't understand Chinese; therefore, neither does the Chinese person.

The argument is flawed.

>> No.4376174

>>4376168
NO BUT U CANNOT IN2 QUALIA IT IS MYSTIC AND UNKNOWABLE!!!

>> No.4376184

>>4376127
>I don't quite agree with Searle's assertion that computer syntax is so fundamentally limited that it will never be sufficient for genuine human-like intelligence.

The syntax is irrelevant. The point is that semantic content cannot be inferred from syntactical manipulation.

>> No.4376190

>>4376175
That's not the argument. The argument is that you can't infer that the AI has intelligence like a human mind from the appearance of equal results.

>> No.4376209

This is just as stupid as the concept of a philosophical zombie.
>let's conceive a being that is exactly like a human, except it doesn't have a sooouuuuullll
>hurrdurr can't be called a human

These thought experiments always fail to rigorously define and defend the concepts that are supposed to differentiate "us" from "them". In short: dualistfags.
Is it really so hard to accept that there is no difference and humans don't need magic to function in a physical world?

>> No.4376216

It's misleading. It acts as though in a computer program the symbols at the level of machine language are the same as at the semantic level, which is not the case. Whether or not our brains are computers, the 1s and 0s aren't expected to be the level which counts as "Chinese".

>> No.4376222

>>4376209
Indeed, it always gives some intangible "humonisity" that all humans have that nothing else can have! You can't measure for such a thing and there's no difference between having and not having such a thing, but it's there and you can't assume it's not!

>> No.4376226

>>4376209
>hurr durr philosophy is hard i'll just ignore the hard problem of consciousness and pretend like I solved it

Doesn't work that way.

>> No.4376236

>>4376226
There is no hard problem of consciousness. This is why no one takes philosophy seriously: you just assert ideas that sound good in your head with no backing. You are the reason science was held back so long.

>> No.4376239

reading a book on this currently. the argument put forward by the author is that what we think of as consciousness or "intelligence" or whatever is pretty far removed from the algorithms that mostly dominate AI research currently.

the author is convinced that "intelligence" requires memory and prediction based on those memories to be truly intelligent. also he tries to show that at every level of our brains/nervous system this memory/prediction model is working, and the total hierarchical structure working together produces this emergent property that we call intelligence.

I think the chinese room is interesting. It seems to me it would be possible to make a chinese room that "learns" the translation rules on its own. But the rules for learning the translation rules are also just rules that the chinese room follows. I suspect that if you dig deeply into this type of philosophy you will find that you are playing what Wittgenstein called language games and that you actually aren't learning or proving anything about the nature of intelligence. maybe that's a lazy way of looking at this though, i dunno.

>> No.4376240

>>4376236
>>hurr durr philosophy is hard i'll just ignore the hard problem of consciousness and pretend like it doesn't exist

Ok, that should be a little more accurate now.

>> No.4376243

>>4376240
I have yet to see any objective evidence to suggest that this vaunted 'hard problem' exists at all. Present evidence to support your position or go to /x/.

>> No.4376249

>>4376240
You're making a positive assertion: prove to me it exists. And you can't presuppose any form of intrinsic human essence, you can't presuppose anything intangible, and you can't presuppose anything that is beyond testing.

>> No.4376259

>>4376243
>I want objective evidence of subjectivity

>>4376249
>Problems arising from the limits of my investigation characteristics must be demonstrable only within the limits of my investigation characteristics.

The two of you, you understand nothing. I weep for the sorry state of your educations.

>> No.4376262

By Searle's logic, a Strong AI is possible. However, it would only become obvious by one method:

If you were in a room with the book, and in the process of using it, it taught you how to speak Chinese, and you learning to speak Chinese was both a mandatory and assumed side effect of using the book, then the same could be said to have been necessary for the computer.

Of course, that would not be the book of the final runtime program, but rather the book of the test cases used to prepare the Chinese-speaking computer in the first place. But it should still hold true by his logic.

>> No.4376265

>>4376259
I'll take empty circular reasoning for $800, Alex.

>> No.4376269

>>4376265
Technically, what you've done is already empty circular reasoning.

>> No.4376270

>>4376259
At least I understand that I understand nothing, unfortunately for you and your fellow philosophical ignoramuses daydreaming about how the universe works without any care for truth.
You should take a class in humility and understand why science is so much more effective than philosophy.

>> No.4376272

>>4376090
>Posting this shit
You can interpret the argument two ways:
1. Appearance of intelligence != intelligence
2. Another lame attempt at promoting Cartesian dualism

So it's either: obvious or stupid.
/thread

>> No.4376273

It is based entirely on the assumption that humans aren't just a chemical computer whose response network is built over time through result analysis. We learn EVERYTHING we know how to do, all through response assessment; we are 'programmed' to breathe and respire and process nutrients, but other than that it's all learnt.

>> No.4376276

>>4376100
No; the computer is RESPONDING like a human would, but only because it is programmed to do so, from specific input.

The key difference is that it is NOT thinking.

>> No.4376287

>>4376161
it is not what humans do.
Specifically, we are saying that humans actually have an understanding of the words, where the computer merely knows how to respond to the given words.

There are definitely modes of behavior where people do nothing more than respond, unthinkingly, to various blunt stimuli.
But the distinction we are making is about the kinds which require thinking.

>> No.4376289

I think it really only applies to programs like Eliza, which wasn't actually made to be intelligent in the first place. If you study the patterns between the questions and answers, you could learn some of the structure of the Chinese language, but with no information on what they represent, it's unlikely that you could figure out much that is not purely abstract. To answer most questions relating to the real world, it would in fact be more practical to use such a method. For instance, to answer questions involving the cardinal direction of one place relative to another, it would be easier to provide a map and instructions on how to use it to find the answer than it would be to list every combination of departure point and destination. Studying such a set of instructions, I don't think it would be feasible to figure out what the Chinese words for cardinal directions are. Even better would be if you were instructed to perform measurements, either on a simulation or with instruments connected to the real world. There are fields of AI, machine learning and robotics that use such methods, but chatbots never do because the Loebner Prize is a contest in hipster bullshit and trolling.
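For anyone who hasn't seen how little is going on inside Eliza-style bots, here's a rough Python sketch of the idea (the patterns and templates are made-up toys, not Weizenbaum's actual script): the program matches surface patterns and fills in canned templates, and nowhere does it represent what any of the words refer to.

import re
import random

# Toy Eliza-style rules: surface pattern -> canned reply templates.
# Everything is syntactic; no word is connected to anything it denotes.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)\?", re.I),      ["Why do you ask that?", "What do you think?"]),
]

def respond(line):
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(respond("I feel lost"))  # e.g. "Why do you feel lost?"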

>> No.4376293

>>4376259
You're exactly right. The easy problem is easy: we understand mechanisms of sensory processing and learning. But EXPERIENCE, that's something different. Can a computer tell me that a flower is beautiful? Can it get goosebumps when listening to Vivaldi? Can it express genuine love?

I'm obviously trolling. You're a deluded tool if you think any of this exists outside of a physical brain... and all formulations of the hard problem reduce to these trite concepts of "genuine love and beauty."

>> No.4376304

Riddle me this, dualist faggots: if the mind is more than the brain, at what point does it kick in, and how does it know to kick in? I think we can all agree none of us recollect being a sperm, or a gamete, or a foetus, or even a baby for the most part. If the soul is in your body at any of those ages, why couldn't you do calculus or speak fluent languages then? Or does the "soul" age with you? In which case, how much more do you want to invent for your religion?

>> No.4376307

Let's say you have a guy in a room who is following a Chinese Room-style set of operations in order to respond to chess moves. The guy moves one card to one location, another card to another location, etc., all based on predefined simple rules, and at the end a card tells him what to play.
Is the guy playing chess? No.
Is the Chinese Room guy speaking Chinese? No.
Is the system consisting of the room + the cards + the guy playing chess? Yes.
Is the system consisting of the room + the cards + the Chinese Room guy speaking Chinese? Yes.
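If it helps, here's a crude Python sketch of that setup (the rule tables are obviously toy placeholders, not real chess or Chinese competence): the operator's job is identical in both cases, and whatever competence exists lives in the table he was handed, i.e. in the system as a whole.

# Toy "room": the operator mechanically looks symbols up in whatever rulebook he is given.
def operator(rulebook, incoming):
    return rulebook.get(incoming, "??")  # pure lookup, no understanding required

CHESS_RULEBOOK = {"e4": "e5", "Nf3": "Nc6"}            # placeholder opening replies
CHINESE_RULEBOOK = {"ni hao ma": "wo hen hao"}         # placeholder question -> answer pair

print(operator(CHESS_RULEBOOK, "e4"))          # "e5"        - the operator isn't playing chess
print(operator(CHINESE_RULEBOOK, "ni hao ma")) # "wo hen hao" - the operator isn't speaking Chinese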

>> No.4376311

>>4376293
And before you jump on the definition of the hard problem. I know exactly what it is. The only delineation between the easy and the hard is an artificial one made by humans to express just how "genuine" their own experiences, and thus emotions, are.

>> No.4376313

>>4376175
>Just because something can be done in more than one way doesn't mean that all ways of doing it are wrong.
No, you've misunderstood all of it:
We are saying that programmed responses are different from thinking.
When people use intuition, invent comparisons, are creative, etc, they are doing something beyond what they have been told to do by others.
That is the critical distinction, and it is very relevant to almost every kind of communication, advancement, or behavior.

>Say the human way of understanding Chinese involves subconsciously turning Chinese words into concepts, memories, and patterns of thought. Suppose a human can consciously compute the patterns of thought that a human goes through to understand Chinese. Suppose I do that for a Chinese speaker's brain. I don't understand Chinese; therefore, neither does the Chinese person.
What you describe there is EXACTLY what it means for you both to understand Chinese.

>> No.4376332

>>4376209
>Is it really so hard to accept that there is no difference and humans don't need magic to function in a physical world?
It wouldn't be for me, but that's immaterial, because we are describing a real, genuine difference in concepts.
it's not invented stuff to make either side feel better; there is a very strong, important distinction being made.

If you don't get it, I understand, but please stop posting that none of the rest of us are talking about anything.

>> No.4376333

>>4376313
You've just agreed computers can understand, because all the other guy did was simulate what the Chinese man's brain was doing, recollecting memories and thought patterns; so a computer can understand.

>> No.4376340

>>4376222
>Indeed, it always gives some intangible "humonisity" that all humans have that nothing else can have! You can't measure for such a thing and there's no difference between having and not having such a thing, but it's there and you can't assume it's not!

That isn't what is being said at all, and if you had any idea what the topic was, you'd know it doesn't claim that machines cannot have real thinking.
We are making a distinction, maybe a difficult one to see at first, but a clear, specific, well-defined and VERY IMPORTANT distinction about types of responses.

>> No.4376346

>What's /sci/'s opinion on The Chinese Room thought experiment?
I agree with the conclusion, but it's not the best way to argue about this.

>> No.4376359

>>4376332
To put it simply: the only difference is to what degree we can apply our very deeply ingrained mechanism of "empathy." You can't imagine a machine "thinking" like your brain does, so it's not genuine. I can imagine (i.e. look up "mentalization") another human's thought processes, so his conclusions seem genuine to me.

This is a clear-cut case of your own mental biases creating a dilemma where there is none.

>> No.4376370

>>4376332
They don't get it, they're not going to get it no matter how precisely you explain it to them. It's a lost cause.

>> No.4376375

>>4376270
>At least I understand that I understand nothing,
You may be clueless and misbehaving in this thread, but I don't even believe this claim; you likely understand much, we're just trying to get you to see the difference we are talking about.

>unfortunately for you and your fellow philosophical ignoramuses daydreaming about how the universe works without any care for truth.
Philosophy cares deeply about truth, and it is the ultimate goal of every issue in philosophy, but the truths it seeks are a different kind -- they are not all about factual prediction, etc.

>You should take a class in humility and understand why science is so much more effective than philosophy.
You just demonstrated the opposite: your lack of humility got in the way of your accepting that someone else had an idea.
None of the discussion was about 'effectiveness' in any case, so you missed your goal there, too. And no one chided science in the first place, or lauded philosophy over it; that, also, was a failure of yours.

Seems like everyone is getting it in the thread but two people who want to troll.

>> No.4376379

>>4376370
Whatever idea helps preserve your ego when you fail at defending your position.

>> No.4376381

>>4376313
No, there's no such distinction being made. If a person can make inferences, then the machine, given the same input, would have to be able to make inferences. Otherwise it's indistinguishable from a person.

>> No.4376385

>>4376359
They don't understand that to learn you have to program your mind; that's how you understand: by programming yourself through experiences. And AI are even now being programmed to be capable of self-teaching. The words you learn growing up are taught (programmed into you) by your parents, and you're programmed to know which responses are valid to which questions by observing.
TL;DR: Teaching is programming.

>> No.4376392

I meant to say "distinguishable"; a machine that can't reason at all is distinguishable from a person.

>> No.4376410
File: 1.82 MB, 2976x1860, 1262793879752.jpg

>>4376091
>Searle argues that without "understanding" (what philosophers call "intentionality"), we cannot describe what the machine is doing as "thinking".

Searle misses the whole fucking point of the experiment, then uses shitty concepts.

The Chinese thought experiment proves that thinking is "mechanical". The box speaks and understands Chinese, just like we all speak and understand our native tongue. There is no fucking difference between the box's perceived "thinking" and our "thinking"; both are just algorithms, both are mechanical.

/thread

>> No.4376412

Didn't read the thread, it stinks of dualism so much, I'm out of here.

>> No.4376413

>>4376311
>I know exactly what it is. The only delineation between the easy and the hard is an artificial one made by humans to express just how "genuine" their own experiences, and thus emotions, are.
Nope, you missed the whole thing!

You're assuming the entire concept is flawed, and trying to describe something separate (because you give it little importance?)!
the distinction between the strong and weak AI is not at all about emotions.
the distinction between programmed response and thinking responses is a huge one, not at all about emotion
none of the argument is about soul, or belief, or philosophy, or perspective IN ANY WAY.

but it seems you are determined not to accept that anyone has any idea what we are talking about, so maybe you'd be happy just quitting the thread?
Not everyone can have deep thoughts, you seem to be unable to get there.

>> No.4376429

>>4376359
Nope, this is entirely a distinction between types of responses and kinds of thought.
It is a specific definition, but you keep trying to make it about something ethereal and unreal, some imagined hope that people have to be different.

If you knew what you were talking about at all, you'd know the topic doesn't even make that distinction; it includes the possibility that machines can actually think.
It's just talking about what would be necessary in order to determine when that happens.

>> No.4376432

>>4376385
Absolutely. The only difference between the two is the dopamine released when they receive a set of signals two which they can generate a response that is consistent with some reward criteria (useful, novel, etc.)

Your neurons are just as "mis-understanding" as the chinese translator is, but your body gets an emotional reward for generating coherent responses. So it feels real. Nothing they've presented distinguishes the two otherwise.

>> No.4376438

>>4376333
God-dammit, that is what EVERYBODY is trying to say;
you're the one who keeps insisting it's about man trying to pretend there is something different and magical!

>> No.4376439

>>4376432
Wow, I actually said "two", my bad.

>> No.4376447

The way I see it the AI wouldn't know what it's doing, it would just be doing. So it would be a tool, like a dictionary or google translate.

>> No.4376453

>>4376379
>Whatever idea helps preserve your ego when you fail at defending your position.
You clearly don't mean that, since you keep fighting,
but the point is clear to just about everyone;

you seem to have decided it's about proving humans are magically-souled creatures, and that isn't what anyone has said.
YOU made it an ad hominem attack, not the others.
YOU failed to understand the distinction presented, and it seems you are trying to poke at everyone else for being smarter than yourself.

You keep proving it by not seeing the topic of the discussion (at all!) and attacking the people that DO get it.

>> No.4376467

>>4376304
is this the same ignorant guy?

No one is talking about dualism!
No one is talking about 'mind more than brain,' no one is talking about 'soul,' and no one is talking about a magical difference between people and machines!

>> No.4376469

>>4376413
If you mean hard-programmed response, where the machine can take the input and produce output then return to exactly the same state, suppose there is such a thing as hard AI. Freeze the state of a hard AI, and have it, as a program with a particular state, run on the Chinese inputs, then flush state and return to that original state. Now it's a weak AI. It doesn't understand, because it's following a set program. It doesn't learn, because there's no change in state that could contain any learned knowledge. It's reduced to weak AI.

Do the same thing with a brain. But the brain, by supposition, understands.

Now, go back to the strong AI. Hard-code every bit of the brain's functionality, but return to state every time. It's even a deterministic program. Perhaps you allow the state to persist. It's not a hard AI, because it's just following a set of rules to generate Chinese characters and then changing state.

None of these things are at all different, and yet one's a real brain, the golden standard of understanding, one's a hard AI, and one's a hard-coded set of responses and state changes.
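Concretely, the "flush state" move is just this (a Python sketch; the TallyBot internals are a made-up stand-in, the only point being that wrapping ANY learner this way turns it into a fixed input-output mapping):

import copy

class TallyBot:
    # Stand-in "AI" whose state changes with every interaction.
    def __init__(self):
        self.seen = 0
    def answer(self, prompt):
        self.seen += 1
        return "reply #%d to %s" % (self.seen, prompt)

class FrozenWrapper:
    # Snapshot the wrapped bot's state, answer, then restore the snapshot:
    # no interaction leaves a trace, so the wrapped bot is a fixed input -> output mapping.
    def __init__(self, bot):
        self.bot = bot
        self.snapshot = copy.deepcopy(bot.__dict__)
    def answer(self, prompt):
        reply = self.bot.answer(prompt)
        self.bot.__dict__ = copy.deepcopy(self.snapshot)  # flush whatever was "learned"
        return reply

frozen = FrozenWrapper(TallyBot())
print(frozen.answer("hi"), frozen.answer("hi"))  # identical both times: "reply #1 to hi"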

>> No.4376485

Hey guys, I know this isn't relevant, and I apologize, so I'll put this here and peace out.

Do you think 4chan should have a Psychology board?

Answer heeeerreeee

http://freeonlinesurveys.com/app/showpoll.asp?sid=z7gtjfbs62zcap84617&qid=4617

If the results are good, I'll email it to Moot and hopefully we'll get one! Pass the link, thanks

>> No.4376486

So this sounds like a specialized problem in computer science. Can someone with knowledge of the subject confirm or deny?

>> No.4376489

>>4376381
>No, there's no such distinction being made. If a person can make inferences, then the machine, given the same input, would have to be able to make inferences. Otherwise it's indistinguishable from a person.

Yes, there is EXACTLY that distinction being made: and you are describing it, mostly.
But you're not seeing the difference we are defining.
The issue is about that inference part: if the machine and human are both given the same input factors, they may have different results, because actually knowing the concepts involved is different from having the concepts programmed in.

It really is: this is inarguable. We are making a definitive description of a specific concept, so it CANNOT BE WRONG (it's a definition!)

Yes, it may be difficult for you to perceive,
and yes, it doesn't apply to all kinds of input, or knowledge, or response. That doesn't mean it isn't a true one.

>> No.4376494

>>4376469

>set of responses and state changes.

But...you just defined a brain.

>> No.4376498

>>4376453
But we already argued, effectively, that there's no way to distinguish the two things, not even when you crack open the code and look at it. We did it again and again. So the only difference is some magical factor which you have yet to describe. Either admit there's no reasonable distinction between them or clarify your position.

>> No.4376501

>>4376413

My god, for the last time.

I generate responses based purely on my biology and previous experiences. Either way, they're programmed into my brain. Machines do the same. I operate under this illusion of "understanding"(or misunderstanding!) that my emotions and reward circuitry reinforce.

If I know a lot, I "understand" the topic better, and I generate better responses. If a machine knows more, it generates better responses. Does it "understand"? The only fucking reason you would use that word is that you are trying to project your own emotional interpretation of "understanding".

Can I empathize with a machine? No. Can I empathize with someone that understands a subject? Yes. Can I empathize with someone (i.e. the translator) who does NOT understand a subject? Yes, and that's when I say he "does not understand" a subject. Your empathy aside, they are processing machines based on their programming, and to say one "understands" more than the other means you are projecting your own feelings of "understanding" onto the other object. Learn some fucking social psychology.

>> No.4376504

>>4376494
Yeah, but I also defined a computer program, which means you just made my point for me.

>> No.4376506

>>4376359
>To put it simply: the only difference is to what degree we can apply our very deeply ingrained mechanism of "empathy."
>You can't imagine a machine "thinking" like your brain does, so it's not genuine.
>I can imagine (i.e. look up "mentalization") another human's thought processes so his conclusions seem genuine to me.

>This is a clear-cut case of your own mental biases creating a dilemma where there is none.

I understand COMPLETELY the situation you are referring to: I get it.
I agree that happens, please accept that.

BUT IT STILL ISN'T WHAT WE ARE TALKING ABOUT!

We're not talking about empathy, or wanting to believe we are special, or hoping our thoughts are different.
We're talking about how someone EXPECTING to make a real, thinking AI can determine that he has got it done.
See? We're discussing the OPPOSITE of what you are saying we are doing.
We're talking about a specific kind of process that happens OUTSIDE programming, that would reveal when true thought is going on.

>> No.4376507
File: 73 KB, 700x574, 1267602419674.jpg

>>4376489
Troll or retarded?

>> No.4376518
File: 45 KB, 593x581, 1277339339798.jpg

>>4376506
>We're talking about a specific kind of process that happens OUTSIDE programming, that would reveal when true thought is going on.

Is living in a complete fantasy fun?

>> No.4376523

>>4376506
But you can't have a process outside of programming take place in a program, or even in a brain. A brain works a certain way, no more and no less. If there's something beyond programming, it's an intangible, a soul, and I reject that as a way to meaningfully distinguish between anything because I've never seen a soul and I don't know of anyone who has come up with an empirical test for a soul.

I think you should try saying what you wanted to say, but again and much more carefully.

>> No.4376528

>>4376410
>Searle misses the whole fucking point of the experiment, then uses shitty concepts.

>The Chinese thought experiment proves that thinking is "mechanical". The box speaks and understands Chinese, just like we all speak and understand our native tongue. There is no fucking difference between the box's perceived "thinking" and our "thinking"; both are just algorithms, both are mechanical.

This is a weird experience for me:
I'm arguing with a guy who, because of a very tiny amount of understanding and a huge amount of cynicism, figures everyone in the thread AND all the famous experts have missed all the points they themselves created, and just insists that all processing is simple programming.

I hope you regret that ridiculous statement someday, when you learn what learning is, what programming is, and what thought is; you are demonstrating that you cannot be right.

>> No.4376536

>>4376506

Answer me this: is a mentally handicapped person that can only drool and make grunts an understanding, thinking entity?
Equivalently, is a bro that will predictably make a "so's your mom" response a thinking entity?

>> No.4376540

>>4376489
Oh, one more thing, before everyone else in the thread has harped on you for it - a definition can be wrong, if it is inconsistent with the axioms. You can't define f(x) to be the set of all numbers not in f(x), because it doesn't make any damn sense.

Likewise, you can't define one program to be a set of responses and the other to be something that organically acquired knowledge because any knowledge that could be acquired could be hard-coded in, and the hard-coded knowledge could be indistinguishable from any other sort of knowledge.

There is literally no way to distinguish between these two concepts as they've been described in this thread.


>> No.4376541
File: 106 KB, 489x400, 1293495531215.jpg

>>4376506

>> No.4376545

>>4376432
>Absolutely. The only difference between the two is the dopamine released when they receive a set of signals two which they can generate a response that is consistent with some reward criteria (useful, novel, etc.)

>Your neurons are just as "mis-understanding" as the chinese translator is, but your body gets an emotional reward for generating coherent responses. So it feels real. Nothing they've presented distinguishes the two otherwise.

Still, entirely wrong.
What we are doing is distinguishing a difference between a new response and a programmed response.

They must, by definition, be different.
They must, inherently, be relevant to thinking and learning.
To deny that there is a distinction there is foolish, stupid, ignorant,
and I suspect two of you are doing it because you anticipate the discussion going in a direction you don't like:
where people presume people are more special than machines.

We've said it many times, but once more:
that isn't what the discussion presumes.
We aren't saying people must be more special than machines.

What we are talking about is one way we would know when machines have that special ability, not that they cannot reach it.

and, once more, we are really talking about a specific, rational, and real difference between programming and learning: there is one, and people do not learn by programming.
Teaching is not programming, not even metaphorically.

>> No.4376550

>>4376447
>The way I see it the AI wouldn't know what it's doing, it would just be doing. So it would be a tool, like a dictionary or google translate.
Possibly, and that brings up a separate topic we haven't been discussing: sentience, awareness, consciousness.

What we are discussing doesn't require any of that.
(Well, the two you-know-nothings are skirting around it, but they haven't figured out what the topic is, and are railing against a different thing!)

>> No.4376557

>>4376545
>Teaching is not programming

Utter retard or troll confirmed.

>> No.4376558

>>4376545
FUCK, DO YOU NOT EVEN READ?

TAKE A PROGRAM THAT HAS LEARNED.

TYPE IN EVERYTHING IT HAS LEARNED.

THIS IS A PROGRAM THAT HAS BEEN PROGRAMMED TO ACT EXACTLY LIKE A MACHINE THAT HAS LEARNED.

THERE IS NO DIFFERENCE IN ANY WAY.

THIS IS A FUCKING COUNTEREXAMPLE TO YOUR POST, YOUR DEFINITION, YOUR VERY WAY OF LIFE.

>> No.4376559

what is learning, if not programming?

>> No.4376572

So some people define an AI's strength by observing its functionality, and others disagree and think that the process itself is important. Clearly the scientific view is the first. Philosophically, both matter, but scientifically, the only precise one is the first.

Also, assuming that we understand the brain well enough and have the hardware capabilities of simulating a brain, we can go back to the text of the program, ask whoever doesn't speak Chinese to simulate the behavior of a Chinese brain, and he won't understand Chinese either but should still be able to pass the Chinese Turing test (maybe not within a lifetime, but whatever, the Chinese room thing seems to ignore that fact as well).

So:
1) Either we will never be able to simulate the thought process of a human brain because it is intrinsically "greater" than whatever a program can do (but really, does /sci/ believe that?).
2) Or time is relevant. You pass the Turing test if you answer fast enough. Simulating with a huge time loss isn't enough. Then that Chinese room thing was crap. And anyway, this answer isn't satisfying.
3) Or simulating a thinking process IS thinking, and that Chinese room thing was crap.
4) Or we don't think at all, and that Chinese room thing was crap.

So I do think that this Chinese room thing is crap, because I don't believe 1 is true. And anyway, it assumes that we can pass the Turing test, which means 1 is false. So I don't think there's any way it could NOT be crap.

>> No.4376578

>>4376498

>But we already argued, effectively, that there's no way to distinguish the two things,
No, you argued ineffectively that simple reaction was the same in both.
You failed to make the distinction that is critical, so you may not even see the different kind of response necessary to the discussion.
Instead, you got insulting, assuming if you didn't see the difference then no one can.

>not even when you crack open the code and look at it.
That's the first point: humans do not have code, and programming code must be limited to what is preconceived, and programmed in.

>We did it again and again. So the only difference is some magical factor which you have yet to describe. Either admit there's no reasonable distinction between them or clarify your position.
We keep doing it, you keep ignoring it and pretending we don't have any idea what we're talking about.
As long as you are determined not to try to understand the distinction, you will never be able to get it: the discussion is about that distinction.
You seem determined only to assume you already know what it is.

>> No.4376580

>>4376545
>we are really talking about a specific, rational, and real difference between programming and learning: there is one, and people do not learn by programming.
>Teaching is not programming, not even metaphorically.

This is exactly what everyone is calling you out on, and you just keep saying that they're talking about something else.

They're not, man.

Just because you repeatedly assert that there is a difference does not mean there is one!

>> No.4376593

>>4376578
Wrong. Humans run on physics and the code of DNA and take as input all the sensory inputs and all the matter ingested and surrounding.

Besides, I was talking about the code of a strong AI versus the code of a weak AI.

Stop contradicting posts containing no content and start answering things that have meaningful arguments you can't explain away with "that's not what I meant, and you know it."

>> No.4376601

>>4376501
I think I might have it with this one: read carefully, please:

>I generate responses based purely on my biology and previous experiences. Either way, they're programmed into my brain.

NO!
There is the distinction: you are NOT programmed. Where you have read that before, it was a METAPHOR. People CANNOT be programmed; they can be taught to learn.
(Okay, one aside; our psychology does permit us to be programmed to respond to stimuli in given ways, but only after we've acquired the abilities we're talking about.)

>Machines do the same.
No; machines without AI are literally limited to their programming.
Anything they are NOT programmed to do, they simply cannot do.
See the difference?
A person can try; use other techniques to approach, experience, learn, play, intuit, and reason in ways that the machine CANNOT do with regular programming. Artificial intelligence is what is needed, and the topic of this thread is how to distinguish WHEN we have reached artificial intelligence.

>I operate under this illusion of "understanding"(or misunderstanding!) that my emotions and reward circuitry reinforce.
No! people literally do have understanding -- because they do not have to be taught each and every step, and because they literally CANNOT be programmed.

>> No.4376617
File: 47 KB, 720x480, maximum_trolling.jpg

>>4376601
Think a little more deeply about programming and education, my friend.

>> No.4376621

>>4376601
Take a strong AI.

Program all of the code that makes up the AI. Input all the data learned by the AI, as stored in the computers.

This has been programmed, so it's a weak AI. It's following a set of instructions, it's accessing a set of data, and that's all it's doing.

But it behaves EXACTLY like the strong AI. In fact, there's absolutely NO WAY to distinguish it from a strong AI because it's completely identical in every respect.
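Roughly, in code (a Python sketch; the "learning" here is a trivial stand-in that just accumulates a lookup table, but the point survives with any learning mechanism):

# Version A: "learns" its associations from experience.
class LearnerBot:
    def __init__(self):
        self.memory = {}                      # state that changes with experience
    def observe(self, prompt, reply):
        self.memory[prompt] = reply
    def answer(self, prompt):
        return self.memory.get(prompt, "?")

a = LearnerBot()
a.observe("ni hao", "ni hao!")
a.observe("zai jian", "zai jian!")

# Version B: the exact same memory typed in by hand - "merely programmed".
class ProgrammedBot:
    memory = {"ni hao": "ni hao!", "zai jian": "zai jian!"}
    def answer(self, prompt):
        return self.memory.get(prompt, "?")

b = ProgrammedBot()
# Behaviorally identical on every input; nothing in the I/O can tell them apart.
assert all(a.answer(p) == b.answer(p) for p in ["ni hao", "zai jian", "anything else"])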

>> No.4376629

>>4376621
Stop replying to internet strawmen to make yourself feel good, and respond to this argument.

You haven't argued a single logical thing in this whole thread and I'd like to see you try, because either I'll finally understand what you're trying to say or you'll fail miserably.

I would take immense pleasure in either one, really. Either way is fine with me.

>> No.4376640

>>4376621
>>4376629
In case you're confused, strawman-man, these are the same person, me, and all three of these posts are begging you, the person who keeps stating there is some difference between weak and hard AI, to respond to an actual argument, which you have not done.

I'm doing this under the theory that if I put enough posts up, you'll eventually read one of the copies of the argument.

>> No.4376647

I'm gonna go ahead and jump into this discussion without reading most of the thread.

'Intelligence' and 'understanding' are concepts that really don't apply to anything. By reducing things down to their most basic level, it could be said that even a Chinese human does not 'understand' Chinese, because he is simply following the algorithm ingrained in his mind to communicate in Chinese. The source of the information (in the case of the Room, an algorithm; in the case of the Chinese man, his experience) is irrelevant. Just because the algorithm gets more complicated does not mean it has any greater level of 'understanding': the Chinese man still takes his mood, the other person, the tone of voice, his history, the environment, etc. as input, and outputs an appropriate response. This all falls apart if you consider the universe non-deterministic.

In my mind, the debate boils down to the existence of free will and the notion of the conscious mind. Which probably turns into a spiritual debate. Funny.

>> No.4376653

>>4376647
Yup, that's basically 80% of the thread, with the other 20% being some idiot going "No, there's a difference between having learned it and having that knowledge just _programmed in_."

Fuck, I need a hug.

>> No.4376655

>>4376601
Say I write a program in some higher level language and compile it.

The compiler makes all sorts of optimizations and reorganization (i.e. decisions based on the input). The structure of the implementation will be very different from how I programmed it.

Would you consider that I "programmed" this program?

Now say I recompiled with some new libraries. What if one math function is implemented to give the incorrect answer every once in a while? What if it approximates the answer really well once in a while?

Would you still consider that I "programmed" this program?

>> No.4376665

>>4376501
>If I know a lot, I "understand" the topic better, and I generate better responses.
Yes, certainly. Memory, experience, and previous proper responses guide future responses; of those only memory can be 'programmed' (taught), the rest are learned.

>If a machine knows more, it generates better responses. Does it "understand"? The only fucking reason you would use that word is that you are trying to project your own emotional interpretation of "understanding".
No, there is no emotional definition: understanding is different from programming.

Assume a very good robot (programmed) and a person (taught), both shown how to push a volleyball over the ground with a stick.
Next, put them both in front of a billiard table, give them a cue.
A person may be able to see the connection; might even realize the balance issue and use his other hand, might even push the balls with the point, certainly will perceive a goal in getting balls into the pockets.
Until it is programmed, the robot would do none of that. It simply wouldn't intuit the new environment as correlating, wouldn't perceive the pockets as goals, wouldn't see the cue as a 'sweeper' if it had only been shown 'pusher.'

>Can I empathize with a machine? … Your empathy aside, they are processing machines based on their programming, and to say one "understands" more than the other means you are projecting your own feelings of "understanding" onto the other object. Learn some fucking social psychology.

It has nothing to do with empathy; nothing to do with claiming humans are unique.
In fact, we're assuming machines _can_ acquire this ability.
I have an Associate's in Philosophy and a minor in Sociology, have worked in a programming facility and with children in a Williams/Downs facility.

>> No.4376669

>>4376665
Read: I worked NEAR people who programmed, and I'm therefore an expert.

>> No.4376672

What a stupid distinction between learning and programming.

Yes, we have the capability to synthesize new ideas. No shit. Our brain is wired to do that.

>> No.4376673

>>4376559
>what is learning , if not programming?

Oh, wow; that's so fundamental to this topic.
Learning is when YOU go out and try something no one has taught you how to do, and figure out how to do it.
It happens when kids play in the sand, seeing how to mold it, make it stack, make it flow, put in textures, use it as a base for other things, use it as support, use it as a covering, and fill their pockets with it.
Children don't have to be shown ANY of that, and yet they learn how to use sand.

Programming is defining that material for a machine, then giving it parameters beforehand of what it might do in various situations and with various tools around: you, the programmer, have to know it might encounter sand, have to preconceive what tools and purposes might be available or needed, and then how to do each step: including each movement of each joint, quantified, and with conditions and limits of usefulness.

>> No.4376685

>>4376673
You have a very very primitive understanding of programming.
>seeing how to mold it, make it stack, make it flow, put in textures, use it as a base for other things, use it as support, use it as a covering, and fill their pockets with it.
Yes because they have been programmed with ideas of molding, and stacking, and flowing, and texturing, etc, and this play will reprogram their brain with new ideas.

Answer this: >>4376655

>> No.4376688

>>4376673
And yet then you go back into the child's mind, and you find that perhaps it was programmed to like certain things, to do things that produced those results, and to try new things at least some of the time. Sometimes those things have interesting results. Can't I program a computer to try moving around, give it some sort of criterion that makes something interesting, and reward internal descriptions that model something observed, no matter how it's modeled or what is modeled?

Besides, respond to my other argument, already. This shit is getting as old as Betty White.

>> No.4376697

>>4376593
>Wrong. Humans run on physics and the code of DNA and take as input all the sensory inputs and all the matter ingested and surrounding.

>Besides, I was talking about the code of a strong AI versus the code of a weak AI.

>Stop contradicting posts containing no content and start answering things that have meaningful arguments you can't explain away with "that's not what I meant, and you know it."

You are using a metaphor and calling it the same thing.
Teaching is NOT programming.
(and I have explained why in several posts)
Teaching includes indoctrination, recitation, facts for memory, proper response types, proper response formats, kinds of processes to acquire a result -- but the point is to train independent thought and learning, not program anyone.

If teaching were programming,
you would never have been ALLOWED to give your own answers (instead you were encouraged always to do so).
Everyone would be expected to accomplish the same thing, in the same way, with the same rewards (none of those should happen in teaching).
You would never have been allowed to choose a topic of study, or subject for reading, or presentation, or research; it would have been prescribed.
Your assessment would have amounted to a checklist of items, not grades revealing variations in performance.
You would all have come out with the same range of knowledge and facts, the same responses to stimuli, and the same maximum potential to do anything further. Because programming does not encourage any new approach; it defines all expected approaches.

>> No.4376703

Just jumping into the thread to give my uneducated opinion

to me, it seems as though the Chinese room wouldn't be able to simulate a human mind due to the static nature of the rules. If I give it input A, and get output B, then B will follow A no matter how many times I input it.

However with a human, if I ask you the same question over and over again, you will give me different answers, mostly because you are getting annoyed.

I don't know how complex the rules can be, or if this is even a valid criticism of the thought experiment. But it appears to me that any strong AI that runs off of rules must also grow and change these rules in order to properly replicate a human brain, lest it be simple I/O with no "understanding".

>> No.4376716

Searle is a goddamn dumb fuck
/thread

>> No.4376717

>>4376703

Let's say I write a program that says "fuck off" if you press a particular key 3 times in a row. It was never programmed to say that when I pressed "Q", but indeed, if I press "Q" 3 times it will do so.
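Something like this, say (a Python sketch; the key names and the exact rule are just illustrative):

# Respond to ANY key pressed three times in a row; no key is named anywhere in the rules.
def make_watcher(threshold=3):
    state = {"last": None, "streak": 0}
    def on_key(key):
        state["streak"] = state["streak"] + 1 if key == state["last"] else 1
        state["last"] = key
        return "fuck off" if state["streak"] >= threshold else None
    return on_key

watch = make_watcher()
outputs = [watch(k) for k in ["Q", "Q", "Q"]]
print(outputs[-1])  # "fuck off" - even though "Q" never appears in the program's rules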

>> No.4376719

>>4376697
Holy fuckballs, can you not READ THE GODDAMN SCREEN IN FRONT OF YOU, YOU RETARDED TROLL JEW SPIC NIGGER CUNT? I said STOP REPLYING TO POSTS WITH NO CONTENT.
Nothing you said in that post actually responded to anything that was in the post you replied to.
Nothing you said in that post actually responded to anything I said in the post before it.
Never have you replied to the simple argument that the act of programming a device, of taking all the code and data acquired through learning from a strong AI, and typing it into a computer, reduces a strong AI to a completely indistinguishable weak AI.

>> No.4376721

>>4376717
every three times? Like clockwork?

>> No.4376726

>>4376721
No, it uses a random number based on atmospheric conditions.

>> No.4376727

>>4376721
It's called a counterexample. It's an example of a program that breaks the thing that the person suggested. It doesn't solve every problem, or we'd already have perfect human-mimicking AI. Stop expecting it to.

>> No.4376733

>>4376647
>'Intelligence' and 'understanding' are concepts that really don't apply to anything.
Intelligence does not; understanding does.
Understanding in this context means that there is structure to the relationship between a word and the various things it relates to; programming means it lists only a definition.

>By reducing things down to their most basic level, it could be said that even a Chinese human does not 'understand' Chinese, because he is simply following the algorithm ingrained in his mind to communicate in Chinese.
But no, people are not given algorithms. They certainly are not given algorithms presupposing each possible inquiry and designating their potential responses (that is what programming is).
They learn by context and experience, and it is often flawed, but it also permits relationships between ideas that help us build connections with other concepts and remember related facts.

>The source of the information (in the case of the Room, an algorithm; in the case of the Chinese man, his experience) is irrelevant.
Irrelevant to the result, but not to the way they are acquired; that is the point.
Switch them; the machine does not learn from experience, the man may be able to follow the algorithm (programming) just fine. (That is what Searle described.)

>Just because the algorithm gets more complicated does not mean it has any greater level of 'understanding':
Correct, but there is potentially a way to get there: when we figure out how to make a machine learn. That is the topic of the thread: how can we recognize when we have accomplished that?

>> No.4376736

>>4376727
I suppose. Perhaps I simply can't grasp how complex these rules can get.

>> No.4376746

>>4376733

>people are not given algorithms
...

You remind me of a creationist that says microevolution is okay but cannot conceive of macroevolution.

This is like shooting fish in a barrel.
/thread

>> No.4376748

>>4376733
We can't distinguish between a program that was programmed to do something and a program that learned to do it, because in the heart of EVERY BRAIN AND COMPUTER, there's a set of instructions for how to change the brain and how to act given an input. If a strong AI exists, it exists as some computer program, which is a list of instructions and some data. If you were to type those instructions in and enter that data, there's now no meaningful distinction between something that was programmed and something that learned.

Jesus, have you ever learned anything about computers?

From this I, unfortunately, gather that philosophy majors (if, indeed, you are one) argue endlessly about things they understand not at all.

>> No.4376751

>>4376655
>Say I write a program in some higher level language and compile it.
Discussion over the word 'program'? OK:

>The compiler makes all sorts of optimizations and reorganization (i.e. decisions based on the input).
>The structure of the implementation will be very different from how I programmed it.
Structure, yes, but it shouldn't make any changes to the way it will respond, and it won't have had any input yet.

>Would you consider that I "programmed" this program?
Yes (although obviously there are shoulders you are standing on).

>Now say I recompiled with some new libraries. What if one math function is implemented to give the incorrect answer every once in a while? What if it approximates the answer really well once in a while?

>Would you still consider that I "programmed" this program?
If that is your intention (or the person who wrote the libraries) then yes, it is still a predetermined response (even if variable).

You preconceive the situation, you write some kind of response to it, you give those algorithms to a machine: that is programming.

>> No.4376755

>>4376501
To say so about the unknown with such certainty is clearly the mark of a fool

>> No.4376757

>>4376669
the discussion actually requires an understanding only of the definition of the word and its range, not being an expert in programming itself.
Nevertheless, I am a programmer, too, although not expert.

>> No.4376766

>>4376672
>What a stupid distinction between learning and programming.
How would you correct it?

>Yes, we have the capability to synthesize new ideas. No shit. Our brain is wired to do that.
Uh, yes... don't you understand that has been the topic all along, how to wire a machine brain to think?

That's why I had to define the difference above; people kept insisting there wasn't a difference there:
are you saying you understood this difference all along?

>> No.4376767

>>4376748
my opinion of philosophy is shared by Archer.

http://www.youtube.com/watch?v=xvicEn5nXSU

Either way, the program described in the Chinese room experiment sounds more like a virtual intelligence: nothing more than a set of cleverly defined programs simulating artificial intelligence.

>> No.4376776

>>4376685

>You have a very very primitive understanding of programming.
It was a general definition, not a description of my understanding.
You have a habit of putting up straw men in your ad hominem attacks.

>>seeing how to mold it, make it stack, make it flow, put in textures, use it as a base for other things, use it as support, use it as a covering, and fill their pockets with it.
>Yes because they have been programmed with ideas of molding, and stacking, and flowing, and texturing, etc, and this play will reprogram their brain with new ideas.

But that's the point: they don't!
the capability to play and learn from it is innate;
it may be necessary to put some of this ability into machines before we can have AI

>> No.4376784

>>4376688
>Can't I program a computer to try moving around, give it some sort of criterion that makes something interesting, and reward internal descriptions that model something observed, no matter how it's modeled or what is modeled?

You mean give it the ability to play or learn or experiment or toy with things?
and the ability to find something new in that experience, or correlations, or comparisons, or appreciation?

Yes, we'd call that learning on the way to AI.
and no one has said it isn't possible;

Once more, the topic of the thread was 'how do we know when the machine is doing that?'

>> No.4376792

>>4376736
How about a rule that tests out bits of code that generate responses, and every time I say "Good computer!" it upvotes those pieces of code? Those bits of code can combine in literally endless ways, but maybe the things it says don't make much sense, so sometimes it acquires bits of code that filter the output, to check for sense.

Now let's say I write this program to do this, only I hard-code some basic things in: there's a speech center that tries to make models of Chinese speech, and it tries all sorts of computational models and rates each one by how good it is at really understanding. I read it bedtime stories with simple grammar, and the rules that it's learned get ingrained, and then as I read more and more, it tolerates more and more complex models. Then it has some sort of semantic parsing which connects sets of tenuous concepts together, like images and 3-D models of objects, and video clips of actions, with the extraneous details filtered out. Let's say the computer filters and re-evaluates its models and tries to come up with new ones that fit the data, and let's say that it has huge numbers of things that it tries to fit models to. Let's say this machine learns math, learns that there are strict rules of inference and that there are various objects, and let's say that the machine internalizes these as concepts related to numbers. Then we ask it, "What is a formula you haven't seen yet?"
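For the "Good computer!" upvoting part, here is a rough Python sketch of the kind of thing I mean (toy code, invented response snippets, not any real system): each candidate "bit of code" carries a weight, and the external reward bumps the weight of whatever produced the last output.

import random

# Toy sketch: candidate "bits of code" are just response functions here.
# Each one carries a weight; "Good computer!" bumps the weight of whatever
# produced the last output, so reinforced bits get chosen more often.
candidates = {
    "echo":    lambda s: s,
    "reverse": lambda s: s[::-1],
    "shout":   lambda s: s.upper() + "!",
}
weights = {name: 1.0 for name in candidates}
last_used = None

def respond(prompt):
    global last_used
    names = list(candidates)
    # weighted random choice: heavier weight -> picked more often
    last_used = random.choices(names, weights=[weights[n] for n in names])[0]
    return candidates[last_used](prompt)

def good_computer():
    # external feedback ("Good computer!") reinforces the last choice
    if last_used is not None:
        weights[last_used] *= 1.5

print(respond("hello"))
good_computer()          # reward whatever it just did
print(respond("hello"))  # the reinforced behaviour is now more likely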

>> No.4376794

>>4376733
>Understanding in this context means that there is structure to the relationship between a word and the various things it relates to; programming means it lists only a definition.
But isn't such a definition a structure as well? And certainly a program capable of what is described in the OP contains things that would satisfy your criteria for understanding. As you describe it, the difference between the two is only one of complexity. I don't think that adds anything new that could be interpreted as the elusive 'understanding'.

>But no, people are not given algorithms. They certainly are not given algorithms presupposing each possible inquiry and designating their potential responses (that is what programming is).
>They learn by context and experience, and it is often flawed, but it also permits relationships between ideas that help us build connections with other concepts and remember related facts.
They are not given algorithms; they construct them. This is a result of brain chemistry and depends on all the things you listed and many more. Although none of us could possibly write the algorithm down, it does exist (if you believe in a deterministic universe)

Relationships between ideas are still valid programming concepts.

>Irrelevant to the result, but not to the way they are acquired; that is the point.
>Switch them; the machine does not learn from experience, the man may be able to follow the algorithm (programming) just fine. (That is what Searle described.)
I would guess that it is the same, since learning itself could be broken down into an algorithm based on brain function. Humans -- so long as you don't believe in spirituality -- have to be considered on the same level as machines, and do not possess anything machines do not beyond massive complexity. There is no fine line between programmed and intelligent, because the two are synonymous.

>> No.4376798

>>4376792
It knows what a formula is; a formula is something with letters and numbers; it's arranged according to some complex model that formulas it's seen are arranged, and it has some proof.

It understands the concept of seeing; it can't see, but we always use the word "see" to talk to it about it getting information. It knows how to arrange the sentence "What is a formula you haven't seen yet?" syntactically, and it understands each bit. Then it looks for a formula which it hasn't seen, maybe by trying some proofs, because every time it listens to us we reward it with a "Good computer!" and those models and algorithms get a boost.

Let's say this program could exist, and that it was refined over millions of years.

This would be a strong AI. It would emulate every action of the human brain, with conceivable algorithmic approaches.
But it's a weak AI. All we ever told it to do is rate various models, parse words, try to come up with models that let it rearrange those words, tell it "Good computer!" when it could point out the ball for us, show it a textbook on algebra, let it parse the book, let it collect pairs of characters and concepts, words with definitions and uses and other interesting information. This program follows a set program. I could type all the data in and all the program in, and I'd have hard-coded my strong AI. There's still no meaningful distinction between "programmed" and "self-taught" because there's no way to distinguish the two.
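If it helps, the "type it all in" point is literally just this (toy Python sketch; the learned state is made up): whatever the learning run produced is data, and data can be written down and hard-coded, giving something indistinguishable from the learned version.

import json

# Toy sketch of the "type the learned state back in" point: whatever a
# learning run produces is just data, and data can be written down and
# hard-coded into a new program.

learned = {"ball": "round thing you throw", "formula": "letters and numbers"}

# dump the learned state...
dumped = json.dumps(learned)

# ...and "type it in" as a literal; the two dictionaries are identical,
# so nothing downstream can tell which one came from learning
hard_coded = json.loads(dumped)

print(learned == hard_coded)  # True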

>> No.4376799

>Correct, but there is potentially a way to get there; when we figure how to make a machine learn. That is the topic of the thread: how can we recognize when we have accomplished that?
This is a good question, but I think you would draw the wrong conclusion from the answer. Learning wouldn't make a machine intelligent, because intelligence doesn't exist in a deterministic universe (The entire universe, then, could be described as pre-programmed to create one outcome). But it could be said that humans are only machines.

Unless you believe in a soul. That would render this entire discussion meaningless. Consciousness and the concept of self are the only remaining issues here.

>> No.4376802

>>4376757
The definition requires that little, but understanding the subject requires something which you seem to lack.

>> No.4376812

>>4376784
You are a philosopher, or so you claim, and you don't understand computers, or so your posts indicate, but you don't want to accept that the answer is "there's no reasonable distinction" because, logically, there's no way to distinguish between your two ill-defined ideas.

>> No.4376824

>>4376719
>Holy fuckballs, can you not READ THE GODDAMN SCREEN IN FRONT OF YOU, YOU RETARDED TROLL JEW SPIC NIGGER CUNT? I said STOP REPLYING TO POSTS WITH NO CONTENT.
I saw that; I include meaningful, relevant explanations in every post, and I have not been as childish about it.

>Nothing you said in that post actually responded to anything that was in the post you replied to.
>Nothing you said in that post actually responded to anything I said in the post before it.
If you ask me to clarify one of the statements, I certainly will, but I think I responded specifically to what you wrote, in exactly that previous post cited.

>Never have you replied to the simple argument that the act of programming a device, of taking all the code and data acquired through learning from a strong AI, and typing it into a computer, reduces a strong AI to a completely indistinguishable weak AI.

You mean this:


You made a statement against 'dualistfags' that thought 'soul' was an imaginary element put in to magically distinguish humans from machines, right?
Then,
>But we already argued, effectively, that there's no way to distinguish the two things, not even when you crack open the code and look at it. We did it again and again.
This is vague, but it seemed you were claiming you'd already made a position clear about examining code. I don't know which point you meant, but we're talking about the result of genuine AI programming, aren't we?

>So the only difference is some magical factor which you have yet to describe. Either admit there's no reasonable distinction between them or clarify your position.
I have clarified my position many times; you may not be able to see which are my posts.
There is a very reasonable and specific distinction being made between programmed responses and AI; how we recognize it is the question we are addressing.

>> No.4376831

Stored-instruction computer - a device that takes instructions and runs them.

Program - a list of instructions.

Programming - the act of writing down a list of instructions (usually with the intent that they actually do something in particular, but I don't want you to get distracted)

If a computer does it, it's a program. If it's a program, it can be programmed. If a computer can learn, then it does so by executing a program. That program can be written down, so it can be programmed into a computer. There is no meaningful distinction between something which is or can be "programmed" and anything else it's possible for a computer to do.
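Here is about the smallest Python sketch of those three definitions I can manage (toy instruction set, invented ops, not any real machine): a "program" is a list of instructions, the "computer" fetches and executes them, and "programming" is writing the list down.

# Toy stored-instruction machine: it fetches instructions from a list
# (the "program") and executes them, nothing more.
def run(program):
    acc = 0                      # a single accumulator register
    for op, arg in program:      # fetch...
        if op == "add":          # ...decode...
            acc += arg           # ...execute
        elif op == "mul":
            acc *= arg
        elif op == "print":
            print(acc)
    return acc

# "Programming" is just the act of writing the list of instructions down.
program = [("add", 2), ("mul", 10), ("print", None)]
run(program)  # prints 20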

>> No.4376838

>>4376824
No, I mean this:
>>4376831

>> No.4376840

>>4376798
"What is it that breathes fire into the equations and makes a universe for them to describe?" - Stephen Hawking

>> No.4376846

>>4376746
>>people are not given algorithms
>You remind me of a creationist that says microevolution is okay but cannot conceive of macroevolution.
I don't know why you think this is hard:
an algorithm is a specific list of specific things to do in a specific case, allowing no variation and attempting to encompass all relevant situations.
People are barely capable of following lists at all; they certainly are not given lists of steps to follow in all future examples, programmed for that specifically.
They are trained, instead, to acquire techniques and tools, and then practice them on situations that exercise those, so that in the future they can recognize a proper situation and apply the tools.
'Programming' is a metaphor for what teaching is about, it is not a descriptor.

>This is like shooting fish in a barrel.
That metaphor is about things that are extremely easy; you didn't get that, either?

>> No.4376868

>>4376748
>We can't distinguish between a program that was programmed to do something and a program that learned to do it, because in the heart of EVERY BRAIN AND COMPUTER, there's a set of instructions for how to change the brain and how to act given an input.
until you see the difference between vague training (education) and real programming, you won't be able to understand this topic at all.
I keep writing it: teaching is not programming, that is a metaphor, and a very flawed one.

>If a strong AI exists, it exists as some computer program, which is a list of instructions and some data. If you were to type those instructions in and enter that data, there's now no meaningful distinction between something that was programmed and something that learned.
YES! We're at the topic, finally!
Given a strong AI machine, how do we identify when that machine is going outside its programming, demonstrating that it UNDERSTANDS the topic, rather than being programmed for its reaction?

>Jesus, have you ever learned anything about computers?
I started in 1978, and I wrote earlier that I programmed then and now.

>From this I, unfortunately, gather that philosophy majors (if, indeed, you are one) argue endlessly about things they understand not at all.
No, that one was for an associate's degree only, and no, I am trying persistently to get back on topic with at least one person (I think a different one from you) who kept trying to make it about something else, something he was very much against.
I hope you do not think I don't understand this at all, because above I just agreed with you.

>> No.4376888

>>4376868
I've been waiting for someone to reply to that argument for the whole damn thread.

You say "How do we know a computer has gone outside its programming?"

But computers execute instructions.
Programs are lists of instructions.
Lists can be entered.
Programming is the act of entering programs into a computer.
Therefore, ANYTHING A COMPUTER CAN DO IS PROGRAMMABLE. Anything at all. It doesn't matter what it does, it can't exceed its programming. Maybe its programming allows for sophisticated idea correlation that is very similar or identical to human understanding, but it still needs those instructions to tell it what to do.

I mean, humans don't exceed their physical capacity when learning. They're still constrained by the brain.

>> No.4376905

>>4376868
Basically, I'm concerned that people keep saying things that make no sense. There's no such thing as a computer acting outside its programming because doing what its programming says is all a computer can possibly do. It's a machine that runs programs.

The confusion in this thread is so fundamental that I didn't realize it until now. Half the posts here seem to address the problem "How do we know if a machine is truly learning, whatever that means?" and define learning in a way that makes no sense to me, and much of the other half is me, trying to say that what the OP posted, and what people keep posting, makes no sense to me, without addressing the question you want answered, which is "How do we know if computers are learning?"

>> No.4376916

>>4376905
It's not like they could become a mystery to themselves.

>> No.4376919

>>4376905
Computers are already learning. We give a computer inputs and it creates a model for those inputs. We've come up with algorithms that do this, and some education researchers have come to believe this is fundamental to our learning process. If we can create some algorithms that create certain kinds of models, what happens when we create algorithms that create models for each of the things we learn to model? Each of those is just an algorithm, but when we give it information the computer produces a new model. What if we just have an algorithm that creates models like the ones in our brains?

What if the models in our brains are determined entirely by a fairly complex state in each neuron and the connections between neurons? Then a computer could generate these models and adjust them like the human brain does.
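A toy example of "give it inputs and it builds a model" (Python sketch, made-up data; the "model" is just a line, updated a little after each example):

# Toy sketch: an algorithm that builds a model from the inputs it is given.
# Here the "model" is just a line y = w*x + b, nudged after every example
# it sees (a least-mean-squares style update).
w, b = 0.0, 0.0
rate = 0.01

def observe(x, y):
    global w, b
    error = (w * x + b) - y     # how wrong the current model is
    w -= rate * error * x       # nudge the model toward the data
    b -= rate * error

# feed it examples of some unknown relationship (here y = 2x + 1)
for _ in range(2000):
    for x in (0, 1, 2, 3):
        observe(x, 2 * x + 1)

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0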

>> No.4376925

>>4376794
>As you describe it, the difference between the two is only one of complexity. I don't think that adds anything new that could be interpreted as the elusive 'understanding'.
Maybe my use of 'relationship' was too simple. If the meanings of words can have correlations built between them which are not part of the original programming, how about that?
Any machine that can make new correlations is learning; continuing, those correlations become stronger or weaker. Having correlations to divert to in situations where new processes are needed but not programmed is responding with originality: that is what we seek.
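Rough sketch of the kind of correlation-building I mean (toy Python; the sentences and counts are invented): the program only says "count which words appear together"; the particular correlations it ends up with come from what it is fed, not from the programmer.

from collections import defaultdict
from itertools import combinations

# Toy sketch: the code only says "count which words appear together";
# the actual correlations come from the input, not from the programmer.
cooccur = defaultdict(int)

def learn(sentence):
    words = set(sentence.lower().split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1   # strengthen the correlation each time

learn("the dog chased the ball")
learn("the dog caught the ball")
learn("the cat ignored the ball")

# 'dog' and 'ball' are now more strongly correlated than 'cat' and 'ball',
# and nothing in the program ever mentioned dogs, cats, or balls.
print(cooccur[("ball", "dog")], cooccur[("ball", "cat")])  # 2 1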

>[Children] are not given algorithms; they construct them. This is a result of brain chemistry and depends on all the things you listed and many more. Although none of us could possibly write the algorithm down, it does exist (if you believe in a deterministic universe)
Yes; now, if we could actually do that, we'd have an AI we could recompile for a new machine. The point is, what a child does is well beyond what a machine without AI does.

>Relationships between ideas are still valid programming concepts.
Of course, and particularly in the AI field, which is our topic.

>> No.4376926

>>4376916
Well, if they have no understanding, they're already mysteries to themselves.

>> No.4376931

>>4376794
>Irrelevant to the result, but not to the way they are acquired; that is the point.
>Switch them; the machine does not learn from experience, the man may be able to follow the algorithm (programming) just fine. (That is what Searle described.)
I would guess that it is the same, since learning itself could be broken down into an algorithm based on brain function.
Right; now, while we have heuristics, we don't seem to have learning down yet. I have read of theoreticians actually looking for the key to 'play,' as some perceive that is an effective way to independently learn.

>Humans -- so long as you don't believe in spirituality -- have to be considered on the same level as machines, and do not possess anything machines do not beyond massive complexity...
I can work with that; under that statement, humans seem to possess something in that complexity that allows learning (and through that understanding) beyond what our machines currently do.

>> No.4376940

>>4376925

>Maybe my use of 'relationship' was too simple. If the meanings of words can have correlations built between them which are not part of the original programming; how about that?
These exist. This is the current research in AI; someone next door to me is working on it right now.
>Any machine that can make new correlations is learning; continuing, those correlations become stronger or weaker. Having correlations to divert to in situations where new processes are needed but not programmed is responding with originality: that is what we seek.
We've had plenty of methods that do just that for years. You should see what's going on in this field. It's just that computers are still not so obscenely powerful that you can't understand what's going on.

>Yes; now, if we could actually do that, we'd have an AI we could recompile for a new machine. The point is, what a child does is well beyond what a machine without AI does.
But it's only a matter of scope. What a computer does is only smaller because we haven't made the progress in 50 years at developing model-constructing instruction sets that nature has in a hundred million.

>> No.4376947

>>4376798
>every time it listens to us we reward it with a "Good computer!" and those models and algorithms get a boost.
Which we have to acknowledge as ongoing programming: it is an external source of correction.
If it can do that internally, then it is teaching itself.

>Let's say this program could exist, and that it was refined over millions of years.

>This would be a strong AI. It would emulate every action of the human brain, with conceivable algorithmic approaches.
Whoa, wait: you haven't approached 'every action of the human brain' -- you have only emulated some of the learning process.
(to be clear: play and experiment are two popular notions in AI development)

>There's still no meaningful distinction between "programmed" and "self-taught" because there's no way to distinguish the two.
You mean by looking at the current code set? No, you may not be able to examine the code to see which was programmed and which was self-taught, but that isn't necessary any more:
the first landmark is to accomplish 'self-taught.'

>> No.4376953

>>4376931
The only thing that humans possess that computers do not is additional processing power and highly-tuned programs for learning. If we could adequately mimic human processes we could get fantastic learning systems. As it is, neural nets do interesting things. A car was driven across the country given only a collection of images and which way the steering wheel should turn, and the computer did the rest with a neural net. That was, what, 15 years ago? And that's with perceptrons, which are a weak, pathetic attempt at mimicking human neurons.
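For scale, a single perceptron is about this much code (toy Python example; it's trained on the AND function purely for illustration, nothing to do with the driving system):

# Toy perceptron: weighted sum + threshold, with the classic update rule.
# Trained here on the AND function just to show how little machinery it is.
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

def train(x, target, rate=0.1):
    global bias
    error = target - predict(x)
    weights[0] += rate * error * x[0]
    weights[1] += rate * error * x[1]
    bias += rate * error

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):
    for x, target in data:
        train(x, target)

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]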

>> No.4376955

>>4376090
I'm late to the party, so I might be repeating what someone's already said.

First things first, Searle is certifiably retarded and should never speak again.

Secondly, we are machines and machine intelligence is possible in the form of strong AI. Anyone who has ever read pretty much any of Douglas Hofstadter's books would know this. Anyone who's read Steven Pinker knows this. Anyone with an IQ over 115 knows this.

/thread

>> No.4376967

>>4376947
>Which we have to acknowledge as ongoing programming: it is an external source of correction.
>If it can do that internally, then it is teaching itself.
This is nonsense! We get external stimuli that give our ideas positive or negative feedback, and you call this "learning". Yet I pet a computer and tell it it's a good boy, and that is "programming" because I wrote down that it should do things more often when I tell it it did a good job. Meanwhile people have simply evolved so that smiling at a baby reinforces its learning process, but that's "learning" because nobody went and wrote it into the baby's brain when it was born?

That's an innate trait of babies.

>> No.4376971

>>4376888
>I've been waiting for someone to reply to that argument for the whole damn thread.

>You say "How do we know a computer has gone outside its programming?"
...
>Therefore, ANYTHING A COMPUTER CAN DO IS PROGRAMMABLE.
Yes; but here's the issue: is everything a machine can do LIMITED BY its program?
We assume 'no,' that some AI is possible, so how do we tell if that happens?


>Anything at all. It doesn't matter what it does, it can't exceed its programming.
No, that's a separate assertion: yes, everything it can do can be programmed, but if its program does NOT include a particular function, can it still learn to do it?
Key in this answer is whether a machine can build 'understanding' about functions, which I have largely called 'correlations between processes and facts.'

>Maybe its programming allows for sophisticated idea correlation that is very similar or identical to human understanding, but it still needs those instructions to tell it what to do.
That's right; and that programming would be called AI.

>> No.4376974

>>4376947
So why have you let me say a million times that if a strong AI exists the two are indistinguishable, and yet you never responded? Because in this context, you're agreeing that the code would be indistinguishable, which makes sense because it's IDENTICAL, and yet you've never said "wow, that means that any code that was created by learning could have also been written down to mimic something created by learning"

>> No.4376976

>>4376971
>Yes; but here's the issue: is everything a machine can do LIMITED BY it's program?
What the HELL does this actually mean?

>> No.4376981
File: 486 KB, 1750x3200, bondage jebus.jpg
4376981

>Build a computer large enough to simulate a human brain in its entirety, integrating with a robot with visual/audio/haptic feedback/command that can be linked as input
>Scan the brain of the designer into the simulation and run it
What now?

>> No.4376985

>>4376905
>Basically, I'm concerned that people keep saying things that make no sense.
>There's no such thing as a computer acting outside its programming because doing what its programming says is all a computer can possibly do. It's a machine that runs programs.
Then you would be among those who believe that AI is not possible; all responses must be programmed.
Most of the discussion assumes it is possible, so a lot won't make sense to you.

>The confusion in this thread is so fundamental that I didn't realize it until now. Half the posts here seem to address the problem "How do we know if a machine is truly learning, whatever that means?" and define learning in a way that makes no sense to me, and much of the other half is me, trying to say that what the OP posted, and what people keep posting, makes no sense to me, without addressing the question you want answered, which is "How do we know if computers are learning?"
Circular questions, huh?
I'll try to simplify:
If a computer can learn (build correlations, develop processes or responses not yet in its program) then we can say it has 'understood' the things it has correlated, and the situations it did not previously have programming for.
If it cannot learn, then the only things it can respond to are stimuli it recognizes and has a response programmed for. Those may be generalized (roughly round object in path, does not move with 10N, 100N, 200N pressure), but they would always have limited ranges.

Note that we can already build algorithms that take a decent experimental approach to new situations, and which can apply those to their programming on-the-fly, but the range is not as impressive as we might hope for.
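A crude sketch of what I mean by an experimental approach to an unprogrammed situation (toy Python; the stimuli and the feedback function are invented stand-ins): the agent has a fixed table, and when a stimulus isn't in it, it tries things and keeps whatever worked.

# Toy sketch: a fixed response table plus a crude "experiment" step.
# When a stimulus isn't in the table, the agent tries every action,
# keeps whichever one scored best, and so acquires a response it was
# never given explicitly.
responses = {"greeting": "hello"}            # the pre-programmed part
actions = ["hello", "push", "wait", "run"]

def feedback(stimulus, action):
    # stand-in for the environment's reaction (purely made up here)
    return 1.0 if (stimulus, action) == ("obstacle", "push") else 0.0

def react(stimulus):
    if stimulus not in responses:            # nothing programmed for this
        best = max(actions, key=lambda a: feedback(stimulus, a))
        responses[stimulus] = best           # remember what worked
    return responses[stimulus]

print(react("greeting"))   # programmed answer: "hello"
print(react("obstacle"))   # acquired answer: "push"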

>> No.4376990
File: 20 KB, 500x362, 1324963417728..jpg
4376990

I've actually taken Searle's course on Philosophy of the mind (uc berkeley ocw) and I might be of some help to this thread.

Alongside his conjecture that the AI in the room is just a Turing machine in the sense that it translates symbols, there should also be a definitive explanation of what constitutes intention; intentionality is what creates consciousness, according to Searle.

However, there are big debates as to what makes consciousness different from a Turing process, and I don't think Searle quite nails it. Epiphenomenalists make pretty good arguments basing conscious judgements on qualia.

I would say one of the stronger arguments is Mary's Room:
en.wikipedia.org/wiki/Mary's_room
Feel free to discuss.

>> No.4376991
File: 100 KB, 894x1220, fry finally comes.jpg
4376991

ITT: philosophy morons that don't know of the pseudo-AI that already dynamically evolves its root code based on input.

>> No.4377003

>>4376985
>Then you would be among those who believe that AI is not possible; all responses must be programmed.
>Most of the discussion assumes it is possible, so a lot won't make sense to you.

You're conflating two very different things. One is that computers can't do more than their program tells them to, which is exactly what computers are and exactly what they do, and the other is that computers can't create new pieces of program that they could then do, which a computer could conceivably do even using this model you think is naive and restrictive, which is

COMPUTERS TAKE INSTRUCTIONS FROM MEMORY.

COMPUTERS EXECUTE THOSE INSTRUCTIONS.

COMPUTERS DO THINGS TO MEMORY

COMPUTERS DO MORE INSTRUCTIONS.

That is what a computer IS. That is ALL a computer DOES. You seem to think that a computer needs to be more than this to create new code or new ideas and I don't think it does, but if it does then you're right, there's no way for a computer, which is a device that executes commands, stored in memory, then stores the results back to memory, to achieve AI.
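And even inside that strict model, a program can write new instructions into memory and then execute them. A toy Python sketch (invented instruction set, not any real machine):

# Toy sketch of the same strict model, where one of the instructions
# writes NEW instructions into memory, which then get executed too.
# Still nothing but fetch / execute / store, yet the program ends up
# running code nobody typed in.
memory = [
    ("set", 7),
    ("extend", [("add", 5), ("print", None)]),   # writes new code
]

acc = 0
pc = 0
while pc < len(memory):
    op, arg = memory[pc]          # fetch
    if op == "set":               # execute...
        acc = arg
    elif op == "add":
        acc += arg
    elif op == "print":
        print(acc)
    elif op == "extend":
        memory.extend(arg)        # ...store new instructions back to memory
    pc += 1                       # move to the next instruction

# prints 12, produced partly by instructions the program added itself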

>> No.4377005

>>4376981
What happens if the robot could interact with the computer running it?

>> No.4377010

>>4376991
That does seem to be the problem in this thread. Philosophy majors and philosophers are arguing about the mind and trying to come up with ways in which it's better than computers, and meanwhile computer scientists are making computers do the things that philosophy majors are only just identifying as some problem that computers have not yet accomplished. By the time they get around to replying in this thread, we'll probably have achieved true AI, too, and then they'll be looking silly.

>> No.4377014
File: 114 KB, 526x353, witide-6.jpg
4377014

>>4377010
>philosophy ever not looking silly

>> No.4377015

>>4377005
It already would. That's the control interface. Or do you mean brain autosurgery? 'Cause I'll be the first to tell you, that never ends well.

>> No.4377016

>>4376919
>Computers are already learning. We give it inputs and it creates a model for those inputs. We've come up with algorithms that do this, and some education researchers have come to believe this is fundamental to our learning process.
Yes, perception (exploring), categorization (sorting), and modeling (pretending) all contribute to our understanding, even of our own learning processes. (You might perceive I am a proponent of play as a model for early human learning.)

>If we can create some algorithms that create certain kinds of models, what happens when we create algorithms that create models for each of the things we learn to model? Each of those is just an algorithm, but when we give it information the computer produces a new model. What if we just have an algorithm that creates models like the ones in our brains?
Iterative, dependent, and logarithmic modelling relationships? The only one I have seen sorted results into narrower ranges of good responses. That is, rather than branch into a huge number of potential next-step responses, the field narrows to very few. The failure seems to be in giving priority to some kinds of responses, and not letting unusual choices bear more fruit. But that might just mean someone needs to write better model-judging routines or more flexible models.

>What if the models in our brains are determined entirely by a fairly complex state in each neuron and the connections between neurons? Then a computer could generate these models and adjust them like the human brain does.
I have seen a few things called 'neural networking' and related in AI and they mostly seem like buzzwords rather than actually using any such principles.

>> No.4377019

>>4376955
>Secondly, we are machines and machine intelligence is possible in the form of strong AI.
That only means the definition of strong AI is that; you aren't showing it is possible.

>Anyone who has ever read pretty much any of Douglas Hofstadter's books would knows this.
Hofstadter doesn't impress me, but you might quote something relevant to recognizing AI when we find it?

>Anyone who's read Steven Pinker knows this.
>Anyone with an IQ over 115 knows this.
Knows that AI is possible if a strong AI is possible? I think we're clear on that.
But, the discussion is assuming AI is possible; it is asking how we would know when we got there.

>/thread
you're closing the thread after just arriving?
You don't even know if you're on-topic or not!

>> No.4377020

>>4377016
>Iterative, dependent, and logarithmic modelling relationships
>mostly seem like buzzwords
>see pic
Just admit that you don't know shit from sugar when it comes to algorithms or data structure processing (specifically Bayesian networks).

>> No.4377022
File: 99 KB, 400x302, 106031782321.jpg
4377022

>>4377020
screw captcha

>> No.4377025

>>4376098
Stop being an idiot, there is AI all around us. Ever played a video game?

>> No.4377027

>>4376974
>So why have you let me say a million times that if a strong AI exists the two are indistinguishable, and yet you never responded?
The two WHAT are indistinguishable?
I checked back through your cites, and it doesn't show me what you are talking about.

>Because in this context, you're agreeing that the code would be indistinguishable, which makes sense because it's IDENTICAL, and yet you've never said "wow, that means that any code that was created by learning could have also been written down to mimic something created by learning"

There is no way or reason code created by a machine through learning and code created by a very intelligent team of insightful programmers would be identical; I cannot agree to that.

But if you are suggesting 'if we could get all the 'code' (metaphor) from a person and compile it into a machine', then yes: we haven't changed the code, just the compilation, and yes, that machine (given all the appropriate abilities to act, behave, move, sense, etc. like a person) would behave with the same understanding, intelligence, and insight as the original person.

But I don't know why you're suggesting it: there is no 'code' inside people, there is no set of rules by which we know a person will always respond, there isn't even a very narrow range of perceptions in people, and however much we don't like it, emotion is always part of the equation.

>> No.4377032

If the brain can do it then so can sufficiently advanced circuitry.

>> No.4377034

>>4376976
Well, AI assumes we can give a machine the ability to choose responses in situations we do NOT program in; it has understanding, judgment, and can create its own reaction.

So, if you believe AI is possible, then a machine is not limited by its programming.
If you do not believe AI is possible, then a machine can always respond only within the range of responses given to it, and not judge anything on its own. The Searle question called it 'understanding.'

>> No.4377037

>>4376981
>Scan the brain of the designer into the simulation and run it

we cannot scan brains to any useful degree;
we don't know how anything works even if we had that kind of mapping
there is no reason to think building the same thing would work similarly
'into the simulation' means nothing

seems you've watched a lot of science fiction.

>> No.4377047
File: 82 KB, 750x600, full_retard.jpg
4377047

>>4377037
Welcome to the 1960's, where our computers use vacuum tubes and can't interface with mouse brains and drive motorized carts around... oh wait, it's 2012 and we have shit like that.

>> No.4377048

>>4377025
Yes, they are so stupid, I always win on expert mode. It's a waste of time to play with them unless you want to practice something.

>> No.4377052

>>4377037
we already have computer-controlled robotic limbs for disabled people

this isn't a question of whether we can make AI anymore, it's a question of how long it is going to take to make a full neural mapping of a human brain

>> No.4377053

>>4377048
Go play Operation Flashpoint (the original) and turn the AI to maximum.

Warning: You might as well bend over because you're getting your shit pushed in.

>> No.4377066

>>4377003
>You're conflating two very different things. One is that computers can't do more than their program tells them to, which is exactly what computers are and exactly what they do, and the other is that computers can't create new pieces of program that they could then do,
I see that those are different from each other, but they are not outside the realm we had for them: that the range of their responses was limited to what they already had in programming.
I'm not claiming that requires an explicit subroutine for each response first; the ability to create a new piece of code must be critical to AI, which I already suggested is possible.

>which a computer could conceivably do even using this model you think is naive and restrictive, which is

That is a very simplified sequence, which is odd since you stated you wanted to show that a computer modifies its own code.

>That is what a computer IS. That is ALL a computer DOES. You seem to think that a computer needs to be more than this to create new code or new ideas and I don't think it does, but if it does then you're right, there's no way for a computer, which is a device that executes commands, stored in memory, then stores the results back to memory, to achieve AI.
Wait a minute: I didn't suggest anywhere that AI wasn't possible.

I wrote that I think it needs to be able to learn in order to be called 'understanding.'
I said that making new correlations and finding new responses are critical to that.

>> No.4377070

>>4377047
What the fuck this picture is shit. Don't ever use it again.

>> No.4377079

>>4377066
"understand" as you put it is a biological evolutionary advantage from iterative generations, genetic algorithms can already select for advantages so you could easily develop a machine who's original code is to take a condition(or add another) and change it and simulate both

you're (frankly pseudo-science philosophy) definition of 'understanding' is so moronically grey that it's not even worth using
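A toy genetic algorithm, since the topic is up (Python sketch; the bit-string fitness target is an arbitrary stand-in for whatever advantage is being selected for):

import random

# Toy genetic algorithm: mutate candidate bit strings, keep the fitter ones.
# Fitness here is just "how many bits match a target".
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate):
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] ^= 1                      # flip one bit
    return child

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    # breed mutated copies, then keep the best 20 of parents + children
    children = [mutate(random.choice(population)) for _ in range(20)]
    population = sorted(population + children, key=fitness, reverse=True)[:20]

print(max(fitness(c) for c in population))  # usually 8, a perfect match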

>> No.4377081

>>4377010
>>ITT Philosophy morons that don't know of the pseudo-AI that already dynamical evolve their root code based on input.
>That does seem to be the problem in this thread.
I can see we've got you very confused and frustrated; see if you can follow this:
I showed that I was aware of AI developments; it just isn't the topic.
(and the 'ITT' poster made 3 errors in his description, so you may not want to ride his bandwagon)

>Philosophy majors and philosophers are arguing about the mind and trying to come up with ways in which it's better than computers,
I see no one _trying_ to come up with ways the mind is better than computers; I see two who justifiably have demonstrated that.
I also see one annoyed attacker who is arguing the wrong points and decided ad hominem was better than thinking.

>and meanwhile computer scientists are making computers do the things that philosophy majors are only just identifying
You assume because I didn't post this thirty years ago, that I didn't think of it then?
You are wrong, and a fool, for deciding something you had zero information about.

>as some problem that computers have not yet accomplished. By the time they get around to replying in this thread, we'll probably have achieved true AI, too, and then they'll be looking silly.
Well, I agree someone does.
I acknowledged we have some AI today; you don't seem to know that.
I demonstrated knowledge of this from previous decades; you may not even have been alive.
I also demonstrated I can stick to the topic; it is not _if_ we could attain AI, but _assuming_ we can, how would we know when it is accomplished?

>> No.4377083
File: 88 KB, 293x400, You-must-be-new-here2.jpg
4377083

>>4377070

>> No.4377089

>>4377020
I don't approve of buzzwords and locking labels in for dynamic subjects; it prevents flexible thinking.

To wit: your post, discarding my idea because I didn't use a buzzword you know.

Why didn't I? Because we are trying to INCLUDE people who are not expert in the field.
Why did you? I guess because you wanted to PRETEND you had immense knowledge on a field by using terms that would limit understanding.

>> No.4377091

>>4377032
But we're still assuming that machines can achieve AI.
Why do so many posters keep claiming AI can be achieved, when it is the first assumption of the whole thing?

>> No.4377099

>>4377047
>Welcome to the 1960's, where our computers use vacuum tubes and can't interface with mouse brains and drive motorized carts around...oh wait its 2012 and we have shit like that.

I remind someone that 'scan brain, insert into simulation' is nonsense, and you sarcastically make fun of ME?

>> No.4377096

>>4377053
That doesn't prove the AI in that game is smart, only that you fail to beat it.

>> No.4377101

>>4377081
You are taking the assumption of AI to be defined as a boolean, which only shows how ignorant you are. Intelligence (and 'understanding', and thought) are gradient concepts; they develop to degrees over time. There is no "this is AI" and "this isn't AI".

ITT: the ramblings of a moron creating a false dichotomy.

>> No.4377103

>>4377025
Simple heuristics is STILL not what we are talking about,
and I STILL emphasize that the discussion isn't claiming AI isn't possible.

>> No.4377105

>>4377099
No, simply providing a conceptual argument that contradicts the base case of your statement.

>> No.4377107

>>4377052
>its a question of how long is it going to take to make a full neural mapping of a human brain

Actually, I don't think many people in AI consider that a technique at all.
people in psych research would like it, sure, but AI is progressing just fine without a brain model.

>> No.4377116

>>4377079
>you're (frankly pseudo-science philosophy) definition of 'understanding' is so moronically grey that it's not even worth using
I was trying to keep it applied to this thread specifically, because several people are having a LOT of trouble.

Fine, show me your version, without just copying something?

>> No.4377119

>>4377096
>That doesn't prove the AI in that game is smart, only that you fail to beat it.
Worse, it doesn't prove AI at all, which was the point of those two bringing it up.
And it still didn't make any damned sense, since almost everyone in the thread agrees AI is possible and I even said there was good current progress in _real_ AI.

It seems some children will put more value in what they read on game packaging than in the authority of people who actually know the field; several such authorities were cited.

>> No.4377120

ITT: tards flail around spastically trying and failing to argue against a 30 year old idea they don't understand, eventually resorting to irrelevant tangents in an attempt to make themselves seem relevant.

>> No.4377124

>>4377089
>Have programmed simulations that have been able to learn and 'comprehend' derived Sanskrit-based language inputs with solid accuracy
I comprehend the subject and the limitations of computation better than your root-eating troglodyte brain can even understand. You piggyback on intellectual philosophers, but you are so intellectually impoverished that it's impossible for you to understand what you are arguing against or for.

tl;dr - You're a fucking pseudo-intellectual and should stop talking because your depth of knowledge is laughable.

>> No.4377131

>>4377116
Someone already has:
>>4377101

>> No.4377134

>>4376990
>Feel free to discuss.
I think you should use more jargon. Your post didn't quite have enough to confuse me to death; I am only in a mild coma.

>> No.4377141 [DELETED] 

Dude, srlsy if u cant beat AI in a game, quit gaming. You noob

>> No.4377142

>>4377101
>You are taking the assumption of AI to be defined as a boolean, which only exponentially shows how ignorant you are.
No, I didn't. My use of if-then statements was a routine conditional of logic for the application of this discussion alone. My description of any example AI system wasn't meant to limit them, just provide an example for consideration.

>Intelligence(and 'understanding' thought) are a gradient concepts, they develop to degrees over time. There is no "this is AI" and this "isn't AI".
Actually I wrote that intelligence wasn't limited to the discussion in any way, and I cited several steps in making progress toward AI, which I have constantly and consistently said is possible and actively developed.
However, I am not OP, who asked the question which is (so frustratingly for you) Boolean.

>ITT the ramblings of a moron creating a false dichotomy.
Won't you stop with the ad hominem bullshit?
You must really be confused by this to keep attacking.

>> No.4377144

>>4377119
Dude, srlsy if u cant beat AI in a game, quit gaming. You noob

>> No.4377147

>>4376905
>The confusion in this thread is so fundamental that I didn't realize it until now. Half the posts here seem to address the problem "How do we know if a machine is truly learning, whatever that means?" and define learning in a way that makes no sense to me, and much of the other half is me, trying to say that what the OP posted, and what people keep posting, makes no sense to me, without addressing the question you want answered, which is "How do we know if computers are learning?"

And this is precisely the question that cannot be answered by any objective metric. Thus the conundrum.

>> No.4377151

>>4377105
>No, simply providing a concept argument that contradicts the base case of your statement.

Oh. Well, then you completely failed.
That poster suggested we do it NOW, to accomplish said AI. It is not possible, so it is foolish fantasy, and ignorant.

No one said future development of impressive technologies isn't possible, in fact the entire thread assumes it definitely is, so it's amazing you thought a childish simplistic example of a tech advance would make fun of my point.
Yep; you screwed up, and I can't see why you attacked at all.

>> No.4377160

>>4377131
You must really be confused in here:
this doesn't define 'understanding' at all, certainly not as well as I did;
he is merely reminding us (badly) that it's not a black-and-white issue, which no one claimed in the first place, and he misunderstood every statement I made about intelligence and understanding being merely key to the topic.

>>You are taking the assumption of AI to be defined as a boolean, which only exponentially shows how ignorant you are. Intelligence(and 'understanding' thought) are a gradient concepts, they develop to degrees over time. There is no "this is AI" and this "isn't AI".

Maybe it sounded good to the people who were confused, but he is really just agreeing with me, and doesn't understand what he writes.

>> No.4377162

>>4377134
and some moron chastised me for not using enough; he assumes that if I don't use a term he already knows, I don't know it either.

>> No.4377168

>>4377147

thank you; you may be the only person in many posts who remembers the topic isn't challenging _me_.

I must admit, however, that I was the one who suggested learning methodology was the key to accomplishing AI. The topic before that was about demonstrating 'understanding,' and was confusing several people.

>> No.4377175

>>4377142
1) Excuse me if I'm addled by staggering obtuseness that creates arguments so intentionally broad in nature that they can't possibly be refuted or confirmed.

2) Throwing desultory words in your post that look right but don't add anything isn't a good practice.

3) It's not ad hominem when your contumacious stupidity corrupts the argument.

4) Relocating the goalposts, it's what you're doing.

5) Sidenote: You aren't smart, you aren't profound, you don't even know what you're arguing.

>> No.4377177

>>4377168
>I must admit, however, that I was the one who suggested learning methodology was the key to accomplishing AI.

I'd suggest something along the lines of the evolutionary process, myself. If everything that a given system can "learn" is merely something deduced from its initial programming, then it's not really "learning", is it?

>> No.4377189

>>4377175
>1) Excuse me if I'm addled by staggering obtuseness that creates arguments so intentionally broad in nature that they can't possibly be refuted or confirmed.
In many posts I wasn't trying to isolate my own idea, but to get someone else to see what the topic was. Often that requires broad language so they can find the road in. They also should never be refutable.
Can you show me I didn't lead anyone to the topic that way?

>2) Throwing desultory words in your post that look right but don't add anything isn't a good practice.
No, of course not. Which one?

>3) It's not ad hominem when your contumacious stupidity corrupts the argument.
If that is what I did, OK; show me it was my fault, my misunderstanding, or my error.

>4) Relocating the goalposts, it's what you're doing.
In which post? You must have one to show...

>5) Sidenote: You aren't smart, you aren't profound, you don't even know what you're arguing.
Show me something I have wrong: you failed to do so here.

>> No.4377190

>>4377177

Correct, and someone did cite that programming complexity today might be mistaken or interpreted as self-induced programming by a machine.

But I suspect there can be genuine learning by a machine, in the way we are discussing (or trying to). I can't say it also progresses to awareness or understanding, but I believe it might.

>> No.4377207

>>4376090
I am a reductionist. That is my opinion of the Chinese room thought experiment.

>> No.4377209

>>4377175
Waiting, and I promise to be honest and correct myself if you can show me one of those.

>> No.4377224

>>4376090
How exactly is this different from any normal interpretation of a Turing test?

>> No.4377249
File: 17 KB, 270x400, ¸AD FHJK.jpg
4377249

God I fucking hate "define what 'is' is" mincing sophist faggotry.

All roads that lead to the same destination are equally valid. Searle is a faggot being a dick about terms.

>> No.4377259

>>4377224
It's the opposite:

http://en.wikipedia.org/wiki/Turing_test#The_Chinese_room

>> No.4377266

>>4377259
Sounds just like a Turing test to me. Combinations of English letters are not particularly more complex than the meanings of certain Chinese symbols. Since the symbols have definite and contextual meaning, it should be a relatively simple matter to program or teach the machine the possible meanings of certain symbols and combinations of symbols, to form conceptual representations according to those meanings, and to make an answer.

I don't see how it is different from asking it a question in English or any other language.

>> No.4377274

>>4377259
I mean, the argument seems to be that the Turing test itself is a flawed premise, because if a person manually executing a program is just following instructions without understanding the meaning of the program, then that person doesn't have a mind.

So the Chinese room experiment seems absurd to me because to pass that test you would still need to pass a standard Turing test by constructing a computer complex enough to interpret the task and form an understanding of it like any human would. If you "taught" the machine to understand Chinese characters like you would for any other language, then I don't see what the test is trying to assert about the machine.

Is he trying to say that if the person or machine doing the interpreting doesn't "think" about the task they're doing that they don't have a mind? It just seems like a null hypothesis without any possibility for confirmation at all.

>> No.4377278

I can't believe this thread is still here.

2 Simple questions:

First, do you believe that the responses humans give to stimuli are a result of the particular configuration of neurons in their brain?

Secondly: do you believe that the changes in these configurations (neglecting e.g. drugs and impact) are a result of processes undergone by the previous state?

>> No.4377283

>>4377266
>Since the symbols have definite and contextual meaning, it should be a relatively simple matter to program or teach the machine the possible meanings of certain symbols and combinations of symbols; forming conceptual representation according to those meanings; and making an answer.

Yes, but

>Sounds just like a turing test to me.

The Turing Test has as its implication that the appearance of intelligence is sufficient to infer intelligence. The Chinese Room has as its implication that the appearance of intelligence is not sufficient to infer intelligence. Thus they are opposites.

>> No.4377284

>>4377266
Don't worry, you're probably just a bit dense.

The Turing test says that if the machine fools a human, it's intelligent.
The Chinese room says that the machine can easily fool a human, but with this setup, would it really be intelligent?

Both theories fall under the category of "philosophical bullshit."

>> No.4377285

Needs some kind of gradient of cognition and eloquence to really seem human; like, depending on how many other tasks it is doing, its response will change.

Kind of like a human: if I'm in a sour mood and you ask me how my day went I may say, "fuck off".

If I'm in a good mood I may say, "it went well, thanks for asking".

But what would be the point of making it seem more human? It would be some kind of decompression algorithm for allocating resources when performing multiple tasks.

>> No.4377287

>>4377283
So what is the merit of the chinese room experiment then?

>> No.4377290

>>4377278
And I just came back so sorry if this has been covered, but I'd like to find some common ground with the Searle guy.

>> No.4377292

>>4377285
Not human emotion, human intelligence. Think of it more like an incredibly blank personality.

>> No.4377303

>>4377274
>Is he trying to say that if the person or machine doing the interpreting doesn't "think" about the task they're doing that they don't have a mind? It just seems like a null hypothesis without any possibility for confirmation at all.

Searle isn't saying anything about whether the machine has a mind or not. Searle is saying that the appearance of understanding can be had without the substance of understanding, and so the appearance of understanding is not sufficient evidence to infer the presence of understanding.

>>4377287
What do you mean, "what is the merit"? It is a critique of the Turing Test idea.

>> No.4377305

>>4377287
The "machine" tricks the people outside into believing that it knows chinees, but it doesn't really.. Fuck, stop thinking that you've stumbled onto some new common ground between these rivaling theories.

>> No.4377310

>>4377303
merit as in applicability or usefulness in contributing to our understanding of the subject.

If it's just a statement of "oh well this particular task can pass a Turing test but it's not REALLY intelligent", then how is the test different from throwing something out for being too perfect?

>> No.4377313

The general intelligence algorithm is just a program being run on a Turing machine. Make no mistake that it gives every appearance of intelligence that a person could; it does not overfit to data.

Whether it's being run on a computer or a brain, it does the exact same thing. The exact same thing can be said about a human being. Say you had an expert on the Chinese language. All he has is a series of associations between the images of the characters and their meanings. There are infinitely many such associations he wouldn't have, for instance approximations of a given character using a waveform of a certain type.

>> No.4377314

>>4377310
Its contribution to our understanding of the subject is to help us not bullshit ourselves about what's really going on. Personally, I find that quite merit-er... meritful.

>> No.4377315

>>4377310
It doesn't help us understand anything, it's philosophical bullshit. The barriers those fuckers pull up have never ever stopped science in its path.

>> No.4377316

>>4377303
Of course it isn't sufficient, because the test doesn't require understanding on any level, since there is no room for expression of understanding. If the test were expanded to include a scenario of asking for extrapolation based on a response, and interpretation of the metacognition required to give said response, then I would yield that the test is worth considering.

It just does such a poor job of explaining what he means by "understanding".

>> No.4377323

>>4377316
>there is no room for expression of understanding. If the test were expanded to include a scenario of asking for extrapolation based on a response and interpretation of the metacognition required to give said response then I would yield that the test is worth considering.

That's all included in the whole 'capable of responding to questions in the language' thing. It's not ruled out at all.

>It just does such a poor job of explaining what he means by "understanding".

That's because he doesn't explain it. He's assuming that you understand what it is like to understand something.

>> No.4377331

>>4377303

Searle is a moron looking to gain his immortality by cannibalizing the bones of his betters.

The effective appearance and function of intelligence to beings of equal or greater intelligence NECESSITATES intelligence.

And in case the word hasn't lost all meaning to you yet, intelligence intelligence intelligence.

>> No.4377335

>>4377323
Well that's a dumb assumption to make since it's easy to understand what a word means like:
"Plane:
1
a : to make smooth or even : level b : to make smooth or even by use of a plane
2
: to remove by or as if by planing —often used with away or off ..."

and to understand the conceptual context that the word could have, given its usage. There are certainly people around who understand things in a purely concrete manner because they lack or have lost the neurological connections to make abstract associations when hearing or seeing the word. If this is the type of "understanding" he means, then how could this not be achieved by building this type of mechanical association into your Turing machine?

>> No.4377336

>>4377331
>The effective appearance and function of intelligence to beings of equal or greater intelligence NECESSITATES intelligence.

But that is precisely what he shows to be incorrect.

>> No.4377340

>>4377336
So would you consider an autistic savant to not be intelligent?

>> No.4377341

>>4377336

He SHOWS nothing, he's asking US to show it FOR him but we CAN'T because his PREMISE IS FLAWED

>> No.4377343

>>4377335
Building the mechanical association into the machine is the equivalent of programming, or the symbol manipulation rules. It does not require understanding of meaning.
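To make that concrete - a toy sketch, where the table entries are made-up placeholders rather than any real program: the whole "room" reduces to a lookup from input symbols to output symbols, and meaning is never consulted anywhere.

# Toy sketch of "symbol manipulation without meaning": the rulebook is just a
# lookup table from input strings to output strings. Entries are invented
# example placeholders.
RULEBOOK = {
    "你好吗": "我很好，谢谢",
    "你是谁": "我是一个说中文的人",
}

def chinese_room(message):
    # Follow the rulebook mechanically; no meaning is consulted anywhere.
    return RULEBOOK.get(message, "请再说一遍")  # fallback: "please say it again"

print(chinese_room("你好吗"))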

>> No.4377348

>>4377340
Non sequitur. "Cannot infer intelligence" =/= "is not intelligent".

>>4377341
No, he shows it. He gives the example of a thing being done without the doer having understanding.

What premise is it you think he has which is flawed?

>> No.4377351

>>4377348

The premise that intelligence necessitates understanding.

>> No.4377353

>>4377336
He doesn't show anything, he brings up a bullshit way of looking at it THAT DOESN'T MATTER IN THE REAL WORLD.

Fuck your philosophical assholes!

Why are you even considering the dude in the room as the "machine" in this experiment? Shouldn't the entire room be the "machine"? Then obviously it's just as intelligent as any other turing machine doing a particular task - it's just a more mechanical way to translate input to output. Doesn't matter how it's done. What you fuckers are saying is that the chinese room is not intelligent because the dude inside is not aware of what he's doing. Are you aware of what your brain is doing? Didn't fucking think so. Philosophical. Assholes! Nothing of this MATTERS

If you want to define intelligence (which really is what you're doing with these thought experiments), then either let the turing test be IT or just accept that you need to define some sort of self awareness, which some say is the ability to reflect on oneself etc etc. No matter what you decide, it WONT. MATTER. to the actual development. At best you'll be able to rationalize treating theoretical androids like slaves, justified or not.

>> No.4377358

>>4377351
That is not a premise he holds. In fact, quite the opposite, since the intelligent man in the example does not understand chinese.

>>4377353
>Why are you even considering the dude in the room as the "machine" in this experiment? Shouldn't the entire room be the "machine"?
Doesn't matter. Have the man memorize the rulebook and act from memory. The man is then the entire "room", and yet he still doesn't understand chinese.

>What you fuckers are saying is that the chinese room is not intelligent because the dude inside is not aware of what he's doing. Are you aware of what your brain is doing? Didn't fucking think so. Philosophical. Assholes! Nothing of this MATTERS
Not to you, perhaps, because you don't care about knowing what's going on.

>If you want to define intelligence (which really is what you're doing with these thought experiments), then either let the turing test be IT or just accept that you need to define some sort of self awareness, which some say is the ability to reflect on oneself etc etc. No matter what you decide, it WONT. MATTER. to the actual development. At best you'll be able to rationalize treating theoretical androids like slaves, justified or not.
The point of TCR is that the Turing Test is not a good indicator of intelligence as we understand it. And no, it doesn't matter to the development of the algorithm. It matters to the question of what the algorithm *is*.

>> No.4377360

Can the guy that was arguing about "programming" vs. "learning" please answer this?
>>4377278

If I prompt google "Give me Times Square guitars" and it gives me some blogs of people who play guitars on Times Square, then I will say google didn't understand me. If I clarify "Give me guitar shops by Times Square" and it gives me a map to the music district, I have no problem saying that google understands me.

The Room and brains are intelligent as black boxes.

The machine or the man inside the Room are NOT intelligent with regards to the blackbox's processes, just as much as individual neurons are not intelligent with regards to the brain's thinking as a whole.

This whole dilemma plays upon your bias to empathize with the man inside the box, and the frustration you would feel in not being able to add your two cents to the process. That is irrelevant to the decision making process as a whole, just like trying to pretend you're a neuron would lead you to extrapolate that the brain does not understand. This is an absurdly false dilemma based purely on psychological empathy biases.
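For what it's worth, the behavior I'm calling "understanding" in the google example can be sketched as nothing fancier than keyword-to-intent matching; the intents and keywords below are invented for illustration, not anything google actually does.

# Toy sketch: score a query against hand-made intents by keyword overlap.
INTENTS = {
    "guitar_blogs": {"times", "square", "guitars"},
    "guitar_shops": {"guitar", "shops", "times", "square"},
}

def best_intent(query):
    # The intent with the most words in common with the query wins.
    words = set(query.lower().split())
    return max(INTENTS, key=lambda name: len(INTENTS[name] & words))

print(best_intent("Give me guitar shops by Times Square"))  # -> guitar_shops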

>> No.4377363

>>4377358

While the Turing Test may not be optimal, I have difficulty considering this thought experiment a test AT ALL. It's more like a half-wrought musing from which an idiotic conclusion is drawn.

Then again, PHILOSOPHY!

>> No.4377364

>>4377358
>It matters to the question of what the algorithm *is*.
Which is good for what exactly? Oh yeah, I already told you.

>> No.4377365

>>4377358

the other guy is right, you can't even define the kind of intelligence you are looking for, and until you can it has no meaning. You won't be able to either, because any type of understanding a person has, a computer can have just as much...

The only thing left for humans is the raw feels of consciousness... not emotions because those are part of the equation too... but the feel of emotions, colors etc..

>> No.4377369

>>4377360
I'm not him, but
>I have no problem saying that google understands me.
Well, not everyone is willing to play fast and loose with terminology.

>>4377363
I agree that Searle has constructed a thought experiment very difficult for most people to conceive properly. Perhaps because I am... possibly autistic... and have had to learn to mimic social interactions without ever understanding them, it is easier for me to do so.

>>4377364
It's good for knowing what's going on. Truth, and all that.

>>4377365
Qualia, right.

>> No.4377379

>>4377369
With regards to the "google understands" post:

Really? That's all you've got to say about that post? I elaborated on that. I'm using the term just as abstractly as the concept exists. Give a clear definition instead of moving the goalposts. Again, you only say "nuh uh" because it doesn't conform to your empathy biases.

>> No.4377382
File: 211 KB, 720x540, sci-iamdisappoint.jpg [View same] [iqdb] [saucenao] [google]
4377382

>Saw this thread, shit philosophy brain melting
>Went to bar, hit on a bunch of girls
>Got rejected by a bunch of girls
>Come back and this thread is still going.
I still think I had a more productive night than this ass pile of a thread.

>> No.4377386

>>4377382
hey man what town?

>> No.4377387

>>4377379
Your use of the term strikes me as being significantly more lenient than normal, or at least figurative in its application. If you said it to me, I would assume you were using a figure of speech, rather than suggesting that google has an actual mind that can understand what you want.

>> No.4377388

>>4377386
Redneckville, Alberta
Intelligence isn't their strong point here.

>> No.4377390

>>4377369
>It's good for knowing what's going on. Truth, and all that.
The problem is that philosophy doesn't give you truth, it rationalizes your view of things. It makes you feel good about your beliefs and opinions. Stop that.

Like I said, defining intelligence only allows us to feel good about how we treat other beings, mechanical or not. There's the ability to perform complex tasks, the ability to reflect on oneself, the ability to feel emotions, etc. You're considering the first of these, which probably is of the least importance when deciding how to treat others.

>> No.4377393

>>4377390
>The problem is that philosophy doesn't give you truth, it rationalizes your view of things. It makes you feel good about your beliefs and opinions.

The thing you're talking about, that's not philosophy.

>Like I said, defining intelligence only allows us to feel good about how we treat other beings, mechanical or not. There's the ability to perform complex tasks, the ability to reflect on oneself, the ability to feel emotions, etc. You're considering the first of these, which probably is of the least importance when deciding how to treat others.

However the first of those is the only one with objectively perceptible results. That's kind of the point.

>> No.4377406

>>4377393
>The thing you're talking about, that's not philosophy.
You're free to expand on that. Please tell me how philosophy has ever let us see some greater truth.

It's nothing but mind masturbation. They make some half-assed claims, spin off on that and draw a batshit conclusion. Just look at pascal's wager for one. I think you'll have a hard time giving me an example of where philosophy has not just been a way to rationalize thinking. Being rational is good, rationalizing is not.

>However the first of those is the only one with objectively perceptible results. That's kind of the point.
So? It should be clear to you now that this alone doesn't give you a definition of intelligence that we all can agree on.

>> No.4377413

>>4377406
>Please tell me how philosophy has ever let us see some greater truth.

Epistemology, one of the three major traditional branches of philosophy, deals with knowledge and how we know what we know. For one thing, it led to the scientific method, and I suspect you'd be rather happy about that. More to the point, it is the means by which one judges what we know and what is true. When you look at some proposed idea - say, empiricism - and think about whether or not it is true, that's philosophy you're engaging in.

>Just look at pascal's wager for one.

Ok. What about it?

>It should be clear to you now that this alone doesn't give you a definition of intelligence that we all can agree on.

It is.

>> No.4377416

>>4377387
I don't think I'm being "lenient" at all, I think I'm acknowledging that "understanding" is a structure that emerges on a spectrum from the ever-increasing complexity of harmonious processes that generate results consistent with ever more facts or experiences. We'd like to think our brains are so brilliant that they can come up with "new, clever" responses, but those are just results that came through some problem-solving circuitry and happen to be consistent with a wider range of (sometimes unexpected) observations. When a machine gives us good results, we just play it off as "programming" that doesn't "understand". There is no delineation, and nobody can get past their psychology and ego to acknowledge it.

>> No.4377420

>>4377416
Except TCR shows an example of how good results can be had without the understanding you're assuming ought to be there.

>> No.4377429

>>4377416

This. We would call a unicorn an act of creativity but it is really just a recombination of pieces of things we have perceived. I understand both how to program something to do this, and how to program something to want to do this as a byproduct of human like motivation.

>> No.4377430

>>4376090
I'm a reductionist. I hold "science works" axiomatically, and I hold "I'm not special" axiomatically. From that, since I have a mind, I can conclude that other people have minds. Thus there is something about the configuration of the human brain that gives rise to minds. Moreover, it's likely other animals have minds too.

As for the Chinese room, the human has a mind, but does the room as a whole have a mind? Does it experience qualia? Dunno.

A more interesting question is does the Chinese room have moral rights? I'm strongly tempted to say no. I guess that implies I think the Chinese room doesn't experience qualia, which I think comes from my belief that the Chinese room lacks the proper physical configuration to have a mind. Of course, I'm really pulling shit out of my ass now, but you asked OP.

>> No.4377434

>>4377420
My god, I've addressed that here after the google part.
>>4377360

But let me paraphrase:

The man in the room does not understand what is happening any more than a neuron understands what's happening in the brain. That doesn't mean the "Room" understands any less than a "brain" does on the whole.

Your attempt to empathize with the *man's* misunderstanding (instead of the "Room's") is as false and absurd (in this context) as trying to empathize with a neuron instead of a brain.

>> No.4377437

>>4377434
And your objection is dealt with in having the man memorize the rulebook and work from memory. He is now the room, and still does not understand chinese.

>> No.4377443

>>4377413
>Epistemology
Yes, but is it really true? Or does it just make me feel good about accepting it? Empiricism is something so fundamental that we are born to deal with it. Long before we knew anything about the solar system, we expected that the sun would rise tomorrow as well because we had empirically concluded that it rises every morning. Sure you can slap a philosophy sticker on it if you like, but then why not just say that any form of thinking is philosophy. Am I being a philosophical faggot right now? Perhaps, you decide.

>deals with knowledge and how we know what we know
Nothing of which has changed how we learn.

>Ok. What about it?
Pascal's wager, a philosophical argument concluding that it makes sense to believe in god because doing so doesn't hurt anyone. That's the kind of truth philosophy gives you.

>> No.4377444

>>4377437
Yes! Now you're getting it!

How does that in absolutely any sense contradict the fact that in that case he IS STILL A NEURON in the Room structure?

You're still falsely empathizing with him as the "understander", as opposed to the "neuron" which is the role he is actually playing.

If "the book" gives me the right answers, I say it understands. The fact that now it has become an abstract entity without a box surrounding it is a stupid differentiation between the two. What the fuck does the wall have to do with anything?

>> No.4377448

>>4377443
>Yes, but is it really true? Or does it just make me feel good about accepting it?

Those are epistemological questions.

>Nothing of which has changed how we learn.

Incorrect. As I mentioned, it developed the scientific method, which was a significant change in how humans collectively acquired knowledge.

>Pascal's wager, a philosophical argument concluding that it makes sense to believe in god because doing so doesn't hurt anyone. That's the kind of truth philosophy gives you.

And the arguments against it are philosophical as well. You're cherry picking something you don't like to use as an example to characterize philosophy. It doesn't.

>> No.4377453

>>4377444
>How does that in absolutely any sense contradict the fact that in that case he IS STILL A NEURON in the Room structure?
If he's the room/brain, then he's not a neuron, because he's the whole brain.

>You're still falsely empathizing with him as the "understander", as opposed to the "neuron" which is the role he is actually playing.
No, the CR explicitly says he's not the understander.

>If "the book" gives me the right answers, I say it understands.
And yet the man can give you the right answers without understanding. So your understanding detector doesn't work so well.

>The fact that now it has become an abstract entity without a box surrounding it is a stupid differentiation between the two. What the fuck does the wall have to do with anything?
Because when the man memorizes the book, and continues to give the right answers without understanding chinese, it invalidates your point. The man is now the book, has the right answers, and does not understand.

>> No.4377455

Imagine the following:
Now Searle gets in the room with a dictionary and an English version of the program. He first mechanically translates, word by word, the paper he's given, then inputs it into the computer, gets the result, re-translates it into Chinese, and sends it out.
Now he has understood the message (in an ideal situation where the dictionary translation is perfect) and given an answer that he understood, in a language that he didn't understand.
What happens here is no different from what a computer actually does: it translates the message into bits (electric impulses), then interprets it, gives an answer in bits, and then translates it back into Chinese.
Just an opinion.
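Roughly like this, as a sketch (the word lists are invented stand-ins, not a real lexicon):

# Mechanical word-by-word translation in, answer formed in English,
# mechanical translation back out. Dictionaries are made-up placeholders.
ZH_TO_EN = {"你": "you", "好": "good", "吗": "?"}
EN_TO_ZH = {v: k for k, v in ZH_TO_EN.items()}

def to_english(chars):
    # Pure lookup; this is the only "translation" step Searle performs.
    return [ZH_TO_EN.get(c, "?") for c in chars]

def to_chinese(words):
    return [EN_TO_ZH.get(w, "?") for w in words]

incoming = ["你", "好", "吗"]
english = to_english(incoming)   # Searle understands the English version
reply_en = ["good"]              # he forms his answer in English
outgoing = to_chinese(reply_en)  # and re-encodes it mechanically
print(english, outgoing)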

>> No.4377458

>>4377448
So to conclude, all thinking is philosophy. I guess we're done here.

>> No.4377465

>>4377458
>So to conclude, all thinking is philosophy.

That's an erroneous conclusion. We can be done, though. It's a little frustrating trying to explain phil101 to someone who has decided the subject is their enemy without understanding what it is.

>> No.4377469
File: 133 KB, 400x307, 1230932694158.png [View same] [iqdb] [saucenao] [google]
4377469

>>4377453

If something can function as if it understands then science says it understands.

Anything else is just mincing sophist faggotry. Take it up with /lit/ if fighting over the intrinsic implications of words is what you're looking for.

Philosophy ain't science, folks. Sorry. As a rule of thumb, if something doesn't involve math in some shape or form, it probably isn't science either.

>> No.4377474
File: 57 KB, 646x536, CarlSaganB.jpg [View same] [iqdb] [saucenao] [google]
4377474

>>4377444

With these trips I retire, but I'll summarize the point.

The "understanding entity" is not the hardware, but the software that it is running. We're used to associating one body (hardware) with one mind (neuron configuration), so calling abstract entities like "programs" the object that is "understanding" does not sit well with us. Thus we can play games like the Chinese Room and place new software onto a biological entity, which has its own personal software running at the same time. But realize, this is only meant to conflate two "programs" that are actually entirely separate. We are judging the understanding that the "Chinese software" inherently posses, against whether the man's own personal software is able to process this information coherently.

For the "Chinese Program", the man plays only the role of the hardware. For his "Personal Program", he is both the hardware and the software. The Personal Program is not the same as the Chinese Program. If we ask if "he" understands chinese, then we say no, just as much as an individual neuron does not "understand" and google's CPU's don't "understand". However configuration of neurons "understand" and google's algorithms "understand".

The beauty of a living thing is not the atoms that go into it but the way those atoms are put together. -Sagan

>> No.4377477

>>4377474
...being a response to:
>>4377453

>> No.4377476

>>4377469
>If something can function as if it understands then science says it understands.

Which is a hasty conclusion, as TCR demonstrates.

>Anything else is just mincing sophist faggotry.

Actually, it's an attempt to prevent you from jumping blindly into unthinking dogma, like
>If something can function as if it understands then science says it understands.
...because science says no such thing. That kind of conclusion is a philosophical one.

>> No.4377482

>>4377476

Ugh. Fine.

If something can function as if it understands, then whether it can truly understand or not as defined by stoned philosophy majors is as irrelevant as anything else stoned philosophy majors ever contributed to the world. Actually even less so, since all of us have at one point at least eaten a poptart without toasting it first, which is modern philosophy's greatest contribution to human understanding.

>> No.4377486

>>4377474
I don't see why you're supposing two separate entities in one. I don't see any evidence or necessity for it (except to justify your conclusion). The man's personal program integrates the chinese book program. He's not a single neuron, he's the universe of neurons, containing all the neurons in their various arrangements that run every program in him. There's no reason to assume that the chinese book program remains somehow insulated from him although inside of him. The hardware that runs the book program is the same hardware running the personal program.

>> No.4377487

>philosophy: I really want to act superior to other people and refer to real scientists as 'my contemporaries'...But I don't want to learn long division.

>> No.4377493

>>4377482
Sorry, didn't realize rigorous logic was going to hurt your butt so much. Maybe you can go play with the stoner philosophers you made up... they probably won't care about your lackadaisical thought processes.

>> No.4377497

>>4377482

>Actually even less so, since all of us have at one point at least eaten a poptart without toasting it first, which is modern philosophy's greatest contribution to human understanding.

Whoah whoah WHOAH.

That is WAY too much credit you're giving them right there.

>> No.4377498

>>4377465
I've told you over and over again what philosophy is. It's rationalizing thought and how to think. It's not based on any kind of truth, it's merely us as a species trying to feel good about some common ground we have. Our brains function in a certain way, we perceive our surroundings in a certain way, our conscience (also something in our heads) tells us how to treat others and such. Philosophy takes all this, puts it into words, and more often than not rapes it along the way.

Any time I make an argument, you call it philosophy. Fine, do so. I don't mind. I'm not saying all philosophy is wrong, I'm saying it's unnecessary and often destructive.

In particular, in this thread we see people trying to define intelligence (defining an abstract concept is obviously something philosophical) in order to know how to treat others. Why is this necessary?

>> No.4377506

>>4377493

At least you don't dispute the fact that philosophy is a self-perpetuating dead end that's contributed nothing to the world except more and more useless permutations of itself.

Honestly, if philosophy is a 'science', then it's the science of giving things labels. Although even that is probably giving it too much credit; I've probably already pissed off a handful of zoologists.

>> No.4377516

>>4377498
>I've told you over and over again what philosophy is.

And every time you were wrong.

>Any time I make an argument, you call it philosophy.

Not true. This argument you are making right now, it's not philosophy.

>>4377506
>At least you don't dispute the fact that philosophy is a self-perpetuating dead end that's contributed nothing to the world except more and more useless permutations of itself.

It isn't. I so dispute.

>Honestly, if philosophy is a 'science', then it's the science of giving things labels. Although even that is probably giving it too much credit; I've probably already pissed off a handful of zoologists.

It's not a science in the modern sense of the term. But seriously, if you can't handle the precision of the philosophical issues in this thread, you're not ready for science.

>> No.4377519

>>4377516

>if you can't handle the precision of the philosophical issues in this thread, you're not ready for science.

Chemist here. Philosophers and scientists should drink from separate water fountains, there I said it.

>> No.4377520
File: 26 KB, 120x89, 1265923757331.gif [View same] [iqdb] [saucenao] [google]
4377520

If it is a perfect imitation, it is the same thing.

The idea that there's some sort of divine "soul" or that we all AREN'T just strong AI loaded onto meat mechas is a bit obsolete.

>> No.4377524

>>4377516

>But seriously, if you can't handle the precision of the philosophical issues in this thread

>Hah, if you're not good at hackey-sack then why are you hanging around the /sci/ board, amirite guys?

We are not your friends. We do not respect you.

>> No.4377525

>>4377486

oh god, ok.

When I talk about "him" I'm referring to his personal software. You're still conflating his personal program's understanding with the chinese program's understanding. Do you say that Firefox is an integral part of Windows' identity just because they're both running on my laptop?

You don't have evidence that the configuration of neurons in your brain affects its decisions... indeed IS the decision-making entity? Do you dispute that I could (theoretically) replicate the configuration of neurons in my brain and grow a new one in a vat, and that it would make the same decisions?

>> No.4377533

>>4377027
THE TWO WHAT DO YOU FUCKING THINK, YOU FUCKING IMBECILE.

THE TWO CONCEPTS, OF STRONG AND WEAK AI. THEY ARE IDENTICAL IF THERE IS STRONG AI, BECAUSE THE DEFINITIONS AREN'T RIGOROUS AT ALL.

>> No.4377535

>>4377516
>And every time you were wrong.
Enlighten me. And don't give me any shit about not being able to summarize a phil101 class. If you can't explain it simply, you don't understand it well enough. Take that for philosophy.

I'm not sure if you completely understand what I'm saying. Science deals with universal truths; they are true no matter who you are or where you are. Philosophy is a bit like that, but it's trapped in our minds and it's completely subjective to us as a species. It doesn't really matter in anything but how we behave, what we do. I.e. it's pointless. It's even a bit egocentric, blowing our own importance up to unreasonable proportions.

Oh and I'm still waiting for that one philosophical theory that has in any way mattered even in our own development as a global society or whatever. Something that doesn't take our very nature and puts it into words. Something that people simply must know before utilizing its importance.

>> No.4377536

>>4377525
No, I'm saying I'm not seeing the evidence or reason to see the book program as being somehow always distinct from the personal program.

>Do you say that Firefox is an integral part of Windows' identity just because they're both running on my laptop?

Do you say that your laptop is incapable of browsing the web because firefox is its own program and should be somehow considered not a part of the laptop?

>> No.4377538

>>4377519
This. Empirical observation and/or pragmatism > 100% logic with NO attempt of testing ideas in reality.

>> No.4377539

>>4377027
Also, no, you fuckstick, I said if you had a strong AI, and reprogrammed a computer with exactly the same state, you've now programmed a computer with exactly the same configuration as the "strong AI" and yet it's a weak AI because it was programmed that way, which is why the distinction makes no sense. I've said this like seven damn times. You just don't read.

>> No.4377540

>>4377524
If you can't handle the rigor in this thread, then your respect would be meaningless, as you wouldn't be cut out for either philosophy *or* science. It's not like this is abnormal stuff here, you know. The way people in this thread have responded to attempts at clarity and precision is shameful. You're all lazy, sloppy thinkers.

>> No.4377543

>>4377540
Welcome to 4chan

>> No.4377546

>>4377538
Non sequitur, which you could have avoided if you had a slight education in philosophy.

>>4377535
>Enlighten me.
Philosophy is the love of wisdom, literally, and the pursuit of wisdom through reasoned thought practically. Traditionally it has three major divisions: metaphysics, the study of what exists; epistemology, the study of knowledge; and ethics, the study of how people should behave. More recently, politics and aesthetics are considered divisions of philosophy as well. In terms of this thread, the issue is one of epistemology, namely, "what is our justification for believing a turing test passing computer to have understanding or intelligence?" If you need more clarification than that I'll have to ask for specific questions.

>Oh and I'm still waiting for that one philosophical theory that has in any way mattered even in our own development as a global society or whatever. Something that doesn't take our very nature and puts it into words. Something that people simply must know before utilizing its importance.
Empiricism, democracy, perspective (in art)... Oh wait, you can take anything you want and claim it's "our very nature" and wiggle out of the problem that way. Nice sophistry.

>> No.4377547

>>4377538
Immanuel Kant said much the same thing in his Critique of Pure Reason. Guess which class teaches from this work. That's right, Philosophy.

>> No.4377548

>>4377540
You think you have rigor? You don't have rigor. You don't even know what rigor is. You've defined two concepts, one of which makes no sense and one of which has no useful meaning. You claim that a human brain does not follow an algorithm that produces a particular output at a particular time, and yet you have never given us evidence that the human brain isn't essentially a probabilistic computation device with bounded computation power running some abstract set of instructions whose result is a movement of atoms and therefore a change in state, and presumably sometimes input or output.

>> No.4377550

>>4377548
>You claim that a human brain does not follow an algorithm that produces a particular output at a particular time

I have not claimed this. This is an example of your lack of rigor.

>> No.4377551

How to archive this thread so I can read it later?

>> No.4377552

>>4377551
ctrl + s

>> No.4377553

>>4377550
Oh, I forgot, there's a subtle fucking distinction between arguing as if something is false without ever saying it, and actually stating something as your premise before arguing as if it is false.

>> No.4377555

>>4377548

Buddy, all those terms involve WAY too much math for me to argue against without blushing. I barely had the confidence to employ the term 'sophistry' after someone else had used it three or four times.

>> No.4377556

>>4377553
OK, how does the chinese room necessitate that the human brain not follow an algorithm that produces a particular output at a particular time?

>> No.4377557
File: 79 KB, 350x218, 1229498305273.jpg [View same] [iqdb] [saucenao] [google]
4377557

>>4377553

PHILOSOPHY!

I bet you're regretting pursuing that chemistry degree now, NERD!

>> No.4377558

>>4377556

It doesn't, hence 'flawed premise'. The whole thing is a tedious, leading puff of bongsmoke that started with a conclusion and worked its way back from there.

>> No.4377559

>>4377556
If the brain is a probabilistic computer with bounded computational power, then its very function could be programmed into a computer and run to simulate it completely, something I think you've said is impossible. If you can do that, the whole original question breaks down because then you've "programmed" a computer that acts like the gold standard for learning, a human brain.
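For the sake of argument, that kind of machine is trivially simulable - a rough sketch, with arbitrary made-up states and probabilities standing in for whatever the brain's actual dynamics are:

import random

# Very schematic "probabilistic computer with bounded power": a finite set of
# states with chance-weighted transitions on each input. All values are
# arbitrary placeholders.
TRANSITIONS = {
    ("rest", "stimulus"): [("alert", 0.9), ("rest", 0.1)],
    ("alert", "stimulus"): [("act", 0.7), ("rest", 0.3)],
}

def step(state, inp):
    # Pick the next state according to the (made-up) transition probabilities.
    options = TRANSITIONS.get((state, inp), [(state, 1.0)])
    states, weights = zip(*options)
    return random.choices(states, weights=weights)[0]

state = "rest"
for inp in ["stimulus", "stimulus"]:
    state = step(state, inp)
print(state)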

>> No.4377561

>>4377558
>It doesn't

Then I didn't claim it. Thanks for admitting your lie.

>> No.4377563

>>4377561
That wasn't me.

>> No.4377570

>>4377561
You should really get a trip, so we can distinguish your stupidity from others' stupidity. Just for the thread.

>> No.4377573

>>4377546
>love of wisdom
This is vague bullshit of no importance, and you know it.

>reasoned thought practically
This is the unnecessary rationalization I'm speaking of.

>Empiricism, democracy, perspective
All of which came before this reasoning about it. And the reasoning has not "improved" on any of them.

I say again, mind masturbation. There's no truth in any of it. It's merely different schools of thought, some which you feel good about and others which you disagree with. Subjective nonsense.

>> No.4377574

>>4377559
>If the brain is a probabilistic computer with bounded computational power, then its very function could be programmed into a computer and run to simulate it completely, something I think you've said is impossible.

I haven't, although I think Searle does. He seems to believe that there is something special about organic neurons that cannot be replicated in silicon, though I cannot recall if he was ever specific about what it was.

>If you can do that, the whole original question breaks down because then you've "programmed" a computer that acts like the gold standard for learning, a human brain.

The question does not "break down", and it has nothing to do with learning. It doesn't matter if you copy exactly the "programming" of the brain (even though this is now a new scenario and not TCR). If programming can be run in a human brain without understanding of the subject, then successfully run programs in a computer DO NOT INDICATE UNDERSTANDING OF THE SUBJECT. Seriously, unless you've just come back from a lobotomy this should not be so hard for you to understand.

>>4377570
You need the mental exercise.

>> No.4377576

I think it's absolutely absurd, and its proliferation is one of the main reasons I dislike modern philosophy.

If any proponent of it had ever explained to me how a child learns to talk, and then the differences between that and the Chinese room, I might be slightly less cynical.

>> No.4377578

>>4377570

Because CLEARLY you're the one true bastion here of real science and the rest of us are just soulless mathematician fucks to be guided by your fucking useless pretend-science.

God, I can practically SMELL your fucking soul patch.

>> No.4377579

>>4377573
>This is vague bullshit of no importance

If you don't think knowing what's going on in reality is important, that explains a lot of your behavior thus far.

>This is the unnecessary rationalization I'm speaking of.

So now you're against reason? Great, I wish you'd mentioned this before I wasted my time trying to reason with you.

>more bullshit

Yeah, I'm done with your remedial education. You're too dogmatic to actually learn anything.

>> No.4377580

>>4377574
>The question does not "break down", and it has nothing to do with learning. It doesn't matter if you copy exactly the "programming" of the brain (even though this is now a new scenario and not TCR). If programming can be run in a human brain without understanding of the subject, then successfully run programs in a computer DO NOT INDICATE UNDERSTANDING OF THE SUBJECT. Seriously, unless you've just come back from a lobotomy this should not be so hard for you to understand.

But if you copy exactly the programming of the brain, you've replicated its function, which includes learning. Or are you arguing that learning is not a function performed by the brain, and is therefore something outside of the brain?

Seriously, these arguments would make instantaneous sense to anyone who actually knew any computer science, but you're arguing about something you don't really understand.

>> No.4377583

>>4377580
Learning is irrelevant.

>If programming can be run in a human brain without understanding of the subject, then successfully run programs in a computer DO NOT INDICATE UNDERSTANDING OF THE SUBJECT.

It's that simple.

>> No.4377586

>>4377557

You do know that practically everything we use in our society uses chemistry -- along with our own bodies -- right? As I am lacking in the subject, would you be so kind as to inform me of what role philosophy has played for the average joe or jane within the last five or more decades? Please note that this is, in no way, an attempt at either forming a debate or argument. I simply wish for my question to be answered...

>> No.4377589

Everyone is getting trolled, because nobody intelligent enough to operate a computer believes the Chinese room argument

/thread

>> No.4377590

>>4377583
But the program that is run in the human brain IS capable of understanding. That's my point.

>> No.4377591
File: 1.54 MB, 276x225, 1328937446884.gif [View same] [iqdb] [saucenao] [google]
4377591

>>4377586
There's this thing called trolling...

>> No.4377594

>>4377590
But the brain can run a program that it does not understand. That's Searle's point.

>> No.4377595

>>4377586
That wasn't even trolling going on, it was sarcasm, and you were still sucked in. That's the fucking joke. There's chemistry, on one hand, insanely relevant, very useful, and there's philosophical wankery on the other hand, and that was the joke. End story.

>> No.4377598

>>4377579

You're not explaining anything, let alone what's going on in reality. You are slapping a label on it and claiming credit for everything involved. You might as well say Columbus INVENTED America. Actually it might even be dumber than that, since you're effectively saying America didn't exist until Columbus 'invented' it. Fucking hell.

>> No.4377600

>>4377579
>knowing what's going on in reality
You keep saying this, but it's so vague that there's just no way for me to respond to it. What goes on, where exactly? You're still not gaining any truth about what is going on about anything; you sit on your chair and rationalize about it. Kinda like when we didn't know about the solar system. Clearly we must be floating in space on the back of a giant turtle. Gee, now I feel so much better about my existence. PHILOSOPHY!

>So now you're against reason?
To quote myself from earlier (it's easy to get confused around here): Being reasonable is good, rationalizing is not. Philosophy is rationalizing what you don't understand.

>> No.4377602

>>4377600
>Philosophy is rationalizing what you don't understand.

Well, you don't understand philosophy, and your statement above is a rationalization of what you don't understand, so... urk

>> No.4377604

>>4377594
Different kind of program. On the one hand, the program in question is the natural laws as they apply to the brain, creating a machine that has state and responds to input; on the other hand there is a program, a set of instructions, being executed in the consciousness of the brain (which itself is already a program).

The system of a person consciously executing a program which totally mimics a brain's response to stimuli has understanding, even while the brain doing the computation needn't have understanding.

>> No.4377606

>>4377602
For once I rather like your argument, but it kinda falls flat on the fact that I actually understand what philosophy is, and you're just trying to get an easy way out of this by denouncing me instead of going after my arguments. What is "going on" that you gain this truth about?

>> No.4377608

>>4377602
So obviously that statement is used by people who can do philosophy but don't understand it, which implies that it could be done even by weak AI.

>> No.4377618

>>4377606
I'm not so much looking for an easy out as I am no longer able to take you seriously. I have a great capacity for giving people too much credit, but your proud ignorance and nonsense have exhausted my sobriety. Well done, you're a star.

>>4377604
No, same kind of program. Why would they be different?

>The system of a person consciously executing a program which totally mimics a brain's response to stimuli has understanding, even while the brain doing the computation needn't have understanding.

You're assuming your conclusion. There's no reason to treat the function as though it possesses understanding somehow separate and almost hidden from its host.

>> No.4377629

>>4377618
And again you refuse to explain what this "goin' on" business is all about. Since you've mentioned it several times, I just suspected that you'd have at least some idea.

>> No.4377632

>>4377629
Is the computer intelligent, conscious? Does it understand?

>> No.4377633

>>4377618

>proud ignorance

This post is soaked with so much irony I very nearly needed dialysis to process it.

>> No.4377637

>>4377633
It's ok, I don't expect you to understand, given the mess you've made of it all so far.

>> No.4377638

>>4377618
Unfortunately, I can't help BUT assume that, because that's true of computers. You can't divorce their functionality from their state, which includes their programming. Wipe the state, and the computer can do anything it's possible for a computer to do. Wipe the state of the human brain, and the brain can do anything it's possible for a brain to do. You can't brusquely say that a brain has understanding, because if it's wiped, it no longer has understanding. It's inherent to brains that the understanding comes from the state of the brain, and not the original configuration of the brain, and it's inherent to computers that understand, if they can exist, that the understanding comes from the state.
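A minimal sketch of that point (the stored associations are placeholders): the same machine, with its state wiped, can no longer do what it just did.

# The machine's ability to respond lives entirely in its state; wipe the state
# and the bare machine can't do it anymore.
class Machine:
    def __init__(self):
        self.state = {}  # everything it can "do" lives here

    def learn(self, prompt, reply):
        self.state[prompt] = reply

    def respond(self, prompt):
        return self.state.get(prompt, "<no response>")

    def wipe(self):
        self.state.clear()

m = Machine()
m.learn("你好", "你好！")
print(m.respond("你好"))  # works: the capability is in the state
m.wipe()
print(m.respond("你好"))  # gone: the hardware alone can't do it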

>> No.4377641

>>4377638
Which doesn't follow since brains can clearly have states that produce results that give the appearance of understanding without actually having the understanding.

And then you respond with something like
>the state which gives the appearance of understanding has understanding
which is an unfounded assertion

>> No.4377642
File: 63 KB, 155x202, 1318995202810.png [View same] [iqdb] [saucenao] [google]
4377642

>>4377637

No, I understood (what rock-soup substancelessness there was to understand), it just blows my fucking mind from here to eternity that you could have so little self-awareness. Honestly, you're like a cartoon character to me at this point.

>> No.4377643

>>4377642
It's nice of you to write out for me what I would have written out about you. Thanks.

>> No.4377644

>>4377637
Sorry, I'm not that into Pokemon.

>> No.4377646

>>4377641
What doesn't follow? Tell me exactly what you think doesn't follow and from what you don't think it follows.

>> No.4377651

>>4377646
It does not follow that (a program which exhibits the appearance of understanding) has understanding within (a system that does not have understanding of the thing the program exhibits the appearance of understanding of).

>> No.4377654

>>4377651
Or that the program should be treated as a distinct entity from the system.

>> No.4377655

>>4377651
I don't really know what the two of you are discussing, but for the love of god, don't use parentheses in that way ever again.

>> No.4377656

>>4377643

Ah yes, the elegance of the 'no u' response.

Sublime.

>> No.4377658

>>4377651
From what does that not follow?

Besides, all I said was that if a brain has understanding, it possesses that understanding in its state. I think you think I meant if something appears to have understanding, it possesses that understanding in its state.

>> No.4377660

>>4377656
I was stunned by my good fortune.

>> No.4377666

>>4377658
>I think you think I meant if something appears to have understanding, it possesses that understanding in its state.

That is how I understood (or perhaps misunderstood) you, yes.

Correct me?

>> No.4377684

>>4377666
First, you must separate the idea of a machine, which can be in many different states, not all of which can understand something (for sake of coherence, call it Chinese), from a particular state, which can have understanding (say, the state of a Chinese speaker's brain).

Because a brain could be in so many different states, and many of those states are insane and many states have never been expressed by any human brain, you can't ever say that a brain has understanding, but merely that the state has understanding. This is why I say that you must divorce the idea of understanding from the physical machine layout.

In addition, the state of something is its programming. If a brain has a state (the state of a particular Chinese man's brain, for example) then that IS the programming. It completely determines the next state, given the inputs and perhaps some well-defined randomness, and that's what a program does.

If, then, the programming of the brain of the Chinese man itself, and here I'm not referring to a program handed to the Chinese man but the very workings of his brain, if that is what must possess the understanding, then why would that program not possess understanding when it is run on a computer?

>> No.4377694

>>4377684
Because the brain can have a state which results in the appearance of understanding without the actuality of understanding.

>> No.4377695

>>4377694
...and therefore a program can too.

>> No.4377701

>>4377695
Sorry, to try to stay consistent with your terminology, the machine running the program can have a state that etc etc

>> No.4377708

>>4377695
>>4377694
But how can you legitimately state that a brain has understanding, then? Maybe the Chinese man only appears to have understanding.

>> No.4377719

>>4377708
That's the problem (problem of other minds).

>> No.4377727

>>4377719
Having rejected the currently accepted notion of equivalence between computer systems (what's usually called programming models) as carrying understanding with it, you've stopped making this a question about computers and made it instead a question about what you want computers to be. You've completely divorced the question from reality. There is no physical basis for your question, no abstract basis for your question. It no longer has any meaning at all.

>> No.4377738

>>4377727
It doesn't stop being a question about computers and reality just because it doesn't assume your assumptions.

>> No.4377755

>>4377738
They're mine and everyone else's, because if a computer executes a series of instructions, ANY device that emulates that series of instructions does EXACTLY the same thing, so the hardware no longer matters. It ceases to be a question about reality because there's no test for whether something is executing something or merely acting like something else that's executing something.
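Spelled out as a sketch, with a made-up toy instruction set: two completely different "devices" stepping through the same instructions give identical output, so the hardware drops out of the question.

PROGRAM = [("push", 2), ("push", 3), ("add", None), ("print", None)]

def emulator_a(program):
    # Straightforward iterative stack machine.
    stack, out = [], []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            out.append(stack[-1])
    return out

def emulator_b(program):
    # A differently built device (recursive, immutable lists), same semantics.
    def step(i, stack, out):
        if i == len(program):
            return out
        op, arg = program[i]
        if op == "push":
            stack = stack + [arg]
        elif op == "add":
            stack = stack[:-2] + [stack[-2] + stack[-1]]
        elif op == "print":
            out = out + [stack[-1]]
        return step(i + 1, stack, out)
    return step(0, [], [])

print(emulator_a(PROGRAM), emulator_b(PROGRAM))  # identical output: [5] [5]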

>> No.4377760

>>4377755
Welcome to the hard problem of consciousness. There is no objective test for subjectivity. This does not remove the issue from the realm of reality, merely from the realm of the objective and empirical.

>> No.4377782

>>4377760
So something exists, and it's something that a physical thing can have, but there's no way of determining whether it exists, so instead you argue about whether each thing has it?

I think you have your answer.

>> No.4377795

>implying humans aren't weak AI

>> No.4377796

Are you guys new to this thread? You've rehashed some dumb points that have already been covered and debunked, and now you've gone full circle to the hard problem of consciousness, which was discussed near the top.

>> No.4377798

>>4376090
The chinese room experiment fails to take into account that our brains don't work too differently from this. That is how people can recite entire plays, poems, and books, or have photographic memories, yet understand little about the information they hold.

I would love to have a photographic memory able to recall any second of my entire existence, but I would never sacrifice even 1% of my processing power to attain it.

>> No.4377827

>>4377796
No, a hard problem is SAT. SAT is a hard problem. This is more like decidability, except without even the definition. It's literally impossible, for one because it has no definition.

>> No.4377832

>>4377782
So instead of assuming something will have it we examine the logic behind the assumption and find it to be faulty.

>> No.4377837

>>4377832
Assumptions aren't based on logic.

>> No.4377845

>>4377837
They can be evaluated in terms of how they fit with more fundamental assumptions, to examine consistency. Also, it's not so much a formal assumption as it is an uncritical one.

>> No.4377849

>>4377827
SAT?

>> No.4377856

>>4377849
Oh, I see, you don't actually know anything about computers. Why are you in this thread?
>>4377845
Assumptions about assumptions, the young learner now makes.

>> No.4377878

>>4377856
'Cause I woke up and needed something to do. What is SAT?

>> No.4377956

>>4377760
There's no "objective" without subjectivity either. So it makes sense that there's no objective test for subjectivity that isn't validated by another subjectivity.

>> No.4377958

>>4377878
The set of satisfiable boolean formulas - i.e., the problem of deciding whether there's an assignment of truth values to a formula's variables that makes it true.
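For anyone who wants it spelled out, a brute-force sketch (the example formula is arbitrary): deciding satisfiability means searching the 2^n truth assignments for one that makes the formula true, which is why it's the canonical hard problem.

from itertools import product

# Brute-force SAT check over a formula in conjunctive normal form, written as
# lists of literals ("x" is a variable, "-x" its negation). Arbitrary example.
clauses = [["x", "y"], ["-x", "z"], ["-y", "-z"]]
variables = sorted({lit.lstrip("-") for clause in clauses for lit in clause})

def satisfied(assignment):
    # A CNF formula is true if every clause has at least one true literal.
    def truth(lit):
        val = assignment[lit.lstrip("-")]
        return (not val) if lit.startswith("-") else val
    return all(any(truth(lit) for lit in clause) for clause in clauses)

for values in product([True, False], repeat=len(variables)):  # 2**n assignments
    assignment = dict(zip(variables, values))
    if satisfied(assignment):
        print("satisfiable:", assignment)
        break
else:
    print("unsatisfiable")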

>> No.4377977

>>4377956
Objectivism, motherfucker.