
/sci/ - Science & Math



File: 912 KB, 493x304, 1339651116095.gif
No.5242197

Why has the Turing test not been passed yet?

>> No.5242220

It all boils down to the simple fact that machines still aren't complex enough to create what we would like to call "free will". Everything in a computer, even things that you would like to call "random" are actually sudo-random, and true spontaneity can never be achieved in binary. If you would like, you can dedicate your life to proving Turing wrong, but good luck my friend.

>> No.5242219

You're wrong. Most humans pass the Turing test.

>> No.5242244

>>5242219
It's true. If 10 people were to take the Turing test, only 1 would pass. The other 9 would score lower than him and not pass.

>> No.5242255

It has http://www.sciencedaily.com/releases/2012/09/120926133235.htm

>> No.5242262

>>5242197
I wondered this when we discussed it in school along with free will and souls and whatnot. I have come to the theory that it is because most people think consciousness is magical.
We had arguments in class where we'd compare a machine to a human at doing X task, and we'd imagine that the machine had been programmed to behave exactly like a human being. And most of my classmates would respond by requesting that the computer do something it was never programmed to do, which is a silly way to disprove it, seeing as the machine was only ever 'built' to do one task.
I believe you could replicate the same kind of consciousness as humans have in a machine if you made it complex enough, but programming isn't magical, it will never be able to do anything it wasn't programmed to do, just like there are probably things in existence humans will never be able to comprehend because we were never 'programmed to'. Of course we can, by using math, describe things, but that doesn't necessarily mean we can comprehend them.

>> No.5242267

>AI
>not a pipedream

gtfo
>>>/g/ >>>/a/ >>>/x/ are that way

>> No.5242277

Why can't we simulate something so simple that nature has managed to randomly cook it up from a mess of liquids and dividing rubber bands that absorb shit?

>> No.5242278

>>5242220
>true spontaneity can never be achieved in binary

>>5242262
>it will never be able to do anything it wasn't programmed to do


ITT: Babby's first programming course.

>> No.5242281

>>5242262
>souls
>>>/x/

>> No.5242288

>>5242262
consciousness should be censored on this board

>>5242197

write a program for each section of the human brain and give it a simulated environment until it's smart enough to understand what we understand. don't know how the human brain works? then how would you create a program to act like one?

thread over

>> No.5242312

>>5242281
Not implying it exists, just making a point that in a philosophical debate about consciousness and Turing tests you have to look at what explanations for human behavior have been made throughout time. Also it's not exactly been disproved, but there's not really any merit to it either.

>> No.5242351
File: 15 KB, 460x276, tumblr_mcb0vf61TK1qlkxsp.jpg

>>5242220
>even things that you would like to call "random" are actually sudo-random
>sudo-random
>sudo

>> No.5242389

>>5242277
It took nature billions of years to make those things (which are very, very complex), mankind has only been programming computers for maybe 70 years. Give us some more time.

>> No.5244445

>>5242220
That... I... what? Did you post in the wrong thread?

>> No.5244621
File: 6 KB, 500x250, CA_rule30s.png

>>5242220
Wait, I'm pretty sure that you can get randomness from simple programs. I've been reading Stephen Wolfram's book A New Kind of Science; while I'm only about 200-ish pages in, I'm definitely convinced that you can get complex and random outputs from very simple systems.
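The image is rule 30, and the claim is easy to reproduce. A minimal sketch of the automaton (grid width, step count, and the wrap-around boundary are arbitrary choices here, not Wolfram's exact setup):

```python
RULE = 30  # Wolfram's rule number: its bits form the update table

def step(cells):
    # Each new cell depends only on the three cells above it; the
    # 3-bit neighbourhood indexes a bit of the rule number.
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 63
row = [0] * width
row[width // 2] = 1  # start from a single black cell

centre_column = []
for _ in range(20):
    centre_column.append(row[width // 2])
    row = step(row)

# Despite the trivial rule, the centre column shows no apparent pattern.
print(''.join(map(str, centre_column)))
```

The centre column of rule 30 passes many statistical tests of randomness, and Wolfram used it as a pseudo-random generator, which is exactly the "complex output from simple systems" point.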

>> No.5244626

>>5242288
> consciousness should be censored on this board
What? Why would you censor consciousness? Just because it is more complex than humans can understand at this point in our inquiry into the natural world does not mean we can't muse about it...
Unless we get a philosophy board and you'd rather discuss it there, but until then, as long as we stay away from /x/ theories it should be discussed here.

>> No.5244633

>>5244626
>as long as we stay away from /x/ theories it should be discussed here

Let me rephrase what you just said: "As long as we stay away from /x/ theories, /x/ theories should be discussed here."

>> No.5244641

>>5242220
This rests on the wrong assumption that life is "true" randomness

>> No.5244647

>>5244633
Are you implying that consciousness is paranormal/supernatural?

>> No.5244655

>>5244647
It is not testable and has no evidence. That's pretty much the opposite of a scientific theory. If you want to believe in invisible non-interacting entities, you are free to do so on >>>/x/

>> No.5244661

>>5244655
Fine, but I believe there is worth in a discussion on consciousness, and /x/ is not where I would put it

>> No.5244672

>>5244661
For sure /sci/ is not the right place for baseless claims. If you don't want to discuss it on /x/, there's still /b/ or reddit.

>> No.5244673

>>5244661
Don't let an anon tell you where to make your threads

>> No.5244680

>>5242262
Here's a thought experiment.

Suppose you put a person who speaks (and reads and writes) only English in a room by themselves, with a rulebook written in English and a set of stamps, one for each Chinese character. Every day you pass two sheets of paper under the door: one with a message in Chinese on it, and the other blank. The person looks at the message, then reads the rulebook, which tells them how to convert that message into a different one using the blank sheet of paper and the stamps. Does the person understand Chinese? The sheets of paper? The stamps? The rulebook? No part of the system understands Chinese, but it can read and write Chinese, maybe even as convincingly as a Chinese person.
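Mechanically, the room (this is Searle's Chinese Room) is nothing more than a lookup table. A toy sketch; the entries below are invented placeholders, not a real rulebook, and an actual one would need vastly more rules:

```python
# The rulebook: match the shape of the incoming symbols, copy out the
# listed reply. The operator never needs to know what either side means.
rulebook = {
    "你好": "你好！",                      # placeholder entries only
    "你会说中文吗？": "会，说得很好。",
}

def operator(message):
    # The person in the room: pure pattern matching, no comprehension.
    return rulebook.get(message, "")  # unlisted messages get silence

reply = operator("你好")
print(reply)
```

The thought experiment's force comes from scaling this table up until the replies are indistinguishable from a speaker's, while the operator's situation is unchanged.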

>> No.5244702

>>5244680
Define "understand".

>> No.5244705

>>5242219

I'm a machine.
Prove me wrong.

>> No.5244711

>>5244680
The person is running a virtual machine on which the thing doing the understanding is running. It's a mind within a mind.

I hate this thought experiment, and the guy who posed it originally; it's fairly easily refuted, and he just replies with "nuh uh, that's wrong".

>> No.5244715

>>5244711

So it is your contention that computers have minds?

Why don't the vast majority of people agree with you?

>> No.5244718

>>5244715
Jesus Christ Carl you are fucking dumb. Your first response to that guy was a retarded strawman, your second was an appeal to the mob.

Fucking idiot.

>> No.5244722

>>5244715
No no, I'm asserting that just dismissing the entire system as being a bunch of parts is silly when clearly the whole is more than the sum of its parts.

A more complex system that gave the man a rulebook that allowed him to respond to any situation in a fully human way without understanding the underlying system would be a near-sentient being, with the man, rulebook, response system and papers working as the being's brain.

>> No.5244724

>>5244718
>argumentum ad hominem

>> No.5244725

>>5244718
Why do you keep replying to that underage shitposter? It's not worth your time to even bother with his posts

>> No.5244727

>>5244718

So YOU think computers have minds?
What about analog devices that do the same task?

Do you see where this is going?

>> No.5244729

>>5244725

Pssst.... hey buddy.. there might be more than ONE Carl....

>> No.5244730

>>5244727
I think you are mistakenly assuming that the only system that could produce sentience must involve squishy meat. As opposed to a sufficiently advanced system being able to produce the same result.

Anyway, I'm out.

>> No.5244731

>>5244680
It's 'the system' which understands Chinese.

This argument is retarded and easily refutable.

It's rather like asking, are the ions in your brain conscious? Or is it the neurotransmitters? Or is it the electromagnetic field itself?

It's a bullshit question, there's no reason to think that one single part of the system must be responsible for all of the system.

If you replaced each neuron in your head with a Chinaman who had a list of instructions and the phone numbers of some neighbouring Chinamen, you could emulate a consciousness that way. Asking which Chinaman is the one causing consciousness is as retarded as asking which neuron is the one causing consciousness.

It's obvious that if you made a brain out of Chinamen, or wood, or anything else, it wouldn't fucking matter. What matters is the FUNCTIONS of all of those parts.

To suggest otherwise is to suggest that proteins and water and ions and fats etc. are what's necessary for consciousness, and that's extremely dumb.

>> No.5244735

>>5244730

Actually... I don't know.
But Roger Penrose and I find the "Chinese Room" puzzle rather difficult to refute.

>> No.5244736

>>5244727
Carl... Carl, listen.

>Why don't the vast majority of people agree with you?
is not a question. In the 1600s most people believed the earth was flat.

>analog devices that do the same task
What do you mean by "analog"? Do you really get the point of the discussion?

>> No.5244737

>>5244725
>doesn't comprehend higher science education
>resorts to insults

How about you go back to reddit?

>> No.5244739

>>5244735
That's because Penrose is a shit-tier philosopher. He's a mathematician and physicist; he is awful at philosophical arguments.

>> No.5244742

>>5244729
OH MY GOD THEY CAN REPLICATE

>> No.5244744

>>5244736
>is not a question.

It is. It starts with "why" and ends with a question mark. Also it's very relevant. Reality is agreed on by consensus. Not agreeing with reality is delusion. Are you delusional?

>> No.5244743

any system that performs a task 'understands' the task?

I know what I mean when I say I understand it...
and I ASSUME people who report similar experiences are having similar experiences.

Now if you want to tell me they aren't......

we're going to slip into solipsism pretty damn quick...

>> No.5244740

>>5244727
Saying that a computer could potentially have a mind is not the same thing as saying that computers have minds.

You are a fucking moron for thinking otherwise.

>> No.5244749

>>5244739

I suspect he's better than you.

>> No.5244745

>>5244735
Roger Penrose (I assume you mean the guy who asserted it) just responds to criticisms with "nuh uh!"

As I said before, the response is that the system as a whole is the mind responding. Just like you can't say "this brain isn't aware!" by pointing out that all the individual neurons aren't the center of the consciousness. Honestly it's a hideously fallacious argument.

>> No.5244747

>>5244702
Putting symbols into the correct order isn't comprehension/understanding, it's just following rules. Following rules isn't understanding. I realise that this isn't a definition of what understanding is (actually it's a definition of what understanding is not) but it's the best I can do.

>>5244711
The only valid refutation I've seen is in the book I got the example from, which is AI: A Modern Approach:
>The real claim made by Searle has the following form:
>1. Certain kinds of objects are incapable of conscious understanding (of Chinese).
>2. The human, paper, and rule book are objects of this kind.
>3. If each of a set of objects is incapable of conscious understanding, then any system constructed from the objects is incapable of conscious understanding
>4. Therefore there is no conscious understanding in the Chinese room.
>While the first two steps are on firm ground, the third is not. Searle just assumes it is true without giving any support for it. But notice that if you do believe it, and if you believe that humans are composed of molecules, then either you must believe that humans are incapable of conscious understanding, or you must believe that individual molecules are capable.
Which is a fair point. I just like the thought experiment.

>>5244722
Not necessarily. If the system that gave the man the rulebook is also the system that wrote the rulebook and all of the questions, it would know in advance what questions it was going to ask, and would only need to write rules for those questions. Therefore, the system that wrote the rulebook would at most need to be a bilingual human.

>> No.5244767

>>5244749
Except he finds the Chinese Room argument 'hard to refute', and is thus automatically awful.

The reason such an intelligent person is so crap at this is actually quite simple: his brain matured before the current paradigm of systems. When he was growing up, the entire idea of a computer, of software, hardware, and the abstract concept of a disembodied 'system', did not exist in the public psyche. That's why he fails at basic intuition about the subject.

>> No.5244774

>>5244747
>The only valid refutation I've seen is in the book I got the example from, which is AI - A Modern Approach:

It's funny that you had to read this to know it, instead of just using basic scepticism and rational thought to instantly see for yourself why Searle's argument is nonsense. I'd read this stuff and rejected it for exactly the same reason as your textbook at the age of 11 for fuck's sake.

You will never discover anything new or have an original, autonomous thought. All you are capable of doing is being told what the correct answer is by imagined 'authorities'.

>> No.5244785

>>5244774
Not him, but was there any point to this post beyond saying "I'm smarter than you"?

>> No.5244791

>>5244785
It just pisses me off that he was apparently incapable of answering this question for himself by simply applying his own brain to the argument, which would have yielded results quickly.

Instead he deferred judgement until he found some 'authoritative source' to tell him what the 'correct' counterargument was.

This is basically the definition of anti-enlightenment and anti-science; trusting authority and dogma more than one's own intellect.

He's probably as intelligent as me. It's just that his approach to knowledge is backward and pernicious.

>> No.5244796

>>5244774
>>5244791
When did I say I believed or disbelieved the argument, or that the refutation had shifted my position, or anything of the sort? You just assumed all that. I don't find the deductive argument given in the book to be equivalent to the thought experiment, so it doesn't shift my opinion on strong vs. weak AI.

>> No.5244804

>>5244791
There's no shame in deferring to others for the proper words, or a superior method when necessary. You claim that he could have thought of this himself to find results quickly, but you do so without any knowledge of his background or personal biases.

"Question everything" is all well and good when you have all the time in the world and no one to answer to, but in real life if you plan to derive every equation before you use it, then you will find yourself quickly out of a job.

>> No.5244873

>>5244680
Here are some other thoughts:

1. Separation of syntax from semantics. The system in the thought experiment is just generating a string of symbols. It can only distinguish between different strings according to what they look like; it doesn't know what they represent. It might learn the syntax of Chinese, i.e. certain symbols only follow certain other symbols and such, but it would never learn to deduce the meaning of a string of symbols. In other words, the machine is only operating on syntax, not semantics. For example, suppose the man is asked what colour his shirt is, and follows the rules to produce the answer "red", but his shirt is really blue. The rulebook doesn't have to give correct answers. The system doesn't have to understand the question to give an answer. It just has to follow the rules. Then again, I guess you could argue that a human is the same.
2. Suppose the system is passed a sequence of symbols in an order that isn't defined by the rulebook, but is still syntactically valid. The system couldn't produce an answer. It'd be like passing a syntactically valid program that has an undefined symbol to a C compiler. It couldn't look at the name of the symbol and figure out what value you wanted it to contain, even if the name of the symbol was "five". A human could. In other words, the system has no predictive capability or intuition, because it can't produce answers to questions that aren't pre-defined by the rulebook. This argument would then reduce to "computers can't solve problems creatively".
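Point 2's compiler analogy can be reproduced in a few lines, with Python's `eval` standing in for the rule-following system: the name `five` is syntactically fine, but its spelling carries no meaning the evaluator can use.

```python
# Defined symbols evaluate normally.
print(eval("3 + 2"))  # prints 5

# "five + 2" parses (valid syntax) but names an undefined symbol;
# the evaluator will not guess that "five" should mean 5.
try:
    eval("five + 2")
    resolved = True
except NameError:
    resolved = False  # the name's spelling conveys nothing to eval
print(resolved)  # prints False
```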

>> No.5244971

>>5244724
>He's not arguing
>You really are just dumb as fuck.