
/lit/ - Literature



File: 338 KB, 1200x1663, whatthefuckareyoudoinginmyhouse.jpg
No.10164335

What's your favourite example of a philosopher or cognitive scientist BTFO'ing the computational theory of mind and amodal representations?
I've read Searle's Chinese Room and Harnad's symbol grounding papers, but I'm looking for more inspiration.

>> No.10165035
File: 25 KB, 400x386, 1407520883877.jpg

>>10164335
>He still thinks The Chinese Room is real philosophy
https://link.springer.com/content/pdf/10.1023%2FA%3A1008255830248.pdf

>> No.10165180

>>10165035
That paper has 4 citations. Are you Larry Hauser? Is that your paper?

>> No.10165580

>>10165035
Thanks for the link. I found it here:
http://cogprints.org/240/
Just from reading the intro, dude sounds mega butthurt, like this whole paper is his way of getting revenge for the time he walked in on Searle gangbanging his mom with Rumelhart and McClelland.

tl;dr??

>> No.10165590

>>10164335
pretty much anything written by Peter Hacker

>> No.10165677

>>10165580
Reading through the paper now, this really takes the cake:
>Computers, even lowly pocket calculators, really have mental properties - calculating that 7+5 is 12, detecting keypresses, recognizing commands, trying to initialize their printers - answering to the mental predications their intelligent seeming deeds inspire us to make of them.

I can't make this shit up.
>>10165180
A paper only its mother could love. He might actually be Larry Hauser...

>> No.10165812
File: 110 KB, 1920x1080, david chalmers.jpg

physicalists btfo!

>> No.10165831

>he still thinks that perfect imitation of consciousness is qualitatively different from real consciousness
Brainlets, they never learn.
>DUDE WHAT ABOUT MY QUALIA
How many times does this ancient meme need to get BTFO before you see the light?

>> No.10165833

>>10165831
*starts a forest fire by simulating one on a computer*
pssh, nothing personal....kid

>> No.10165845

>>10165035
wow an article in an obscure book by a >literally who

you really showed him with that google search anon

>> No.10165855
File: 117 KB, 680x788, tfwtointelligentforchairs.png

>>10165035
>John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

>> No.10165864

>>10165180
>Of course, the professor was of no help at all either. On the personal side, Larry Hauser is a man who looks like James Carville and sounds like Professor Frink from the Simpsons. He is constantly shoving his republican ideals down the throats of his students. Most professors that I've had, even if it's obvious that they are conservative or liberal, usually refrain from sharing their beliefs because they know that it is not the proper forum. Hauser, on the other hand, purposely constructs logic problems around his political beliefs, and I had a hard time answering them simply because I don't agree with anything he believes in. But for the grade, I forced myself to.

>> No.10165870

>>10165833
>*starts a forest fire by imagining one in his brain*
Please, don't engage in big boy discussions before your pubes have fully grown.

>> No.10165875

>>10165870
do you even know what computation is?

>> No.10165930

>>10165875
Do you even know what a valid argument is?

>> No.10165955

>>10165930
what is computation?

explain it to me

and then explain how that computation could result in consciousness

>> No.10165967

>>10165955
Again, you're welcome to make any argument pertaining to the topic at hand anytime. For your educational needs, refer to online resources and your nearby community college.

>> No.10165969

>>10165967
computation is syntactic manipulation

a simulation of a conscious mind would be...a simulation of a conscious mind
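
if it helps, here's a toy sketch of what "symbols in, symbols out by fixed rules" looks like in practice (purely illustrative; the rulebook, the strings, and the function name are all made up here, not taken from Searle, Hauser, or anyone else):

# Toy "Chinese Room" in Python: symbols in, symbols out, by fixed rules.
# Nothing below refers to meanings; it only matches shapes against a table.

RULEBOOK = {
    "你好吗": "我很好",              # if these squiggles come in, hand back those squoggles
    "今天天气怎么样": "天气很好",
}

def room(squiggles: str) -> str:
    # Pure lookup: no state, no semantics, no "understanding" anywhere in the program.
    return RULEBOOK.get(squiggles, "请再说一遍")

print(room("你好吗"))  # prints 我很好; the program never knows what any of it means

whether that kind of rule-following could ever add up to understanding is exactly the point in dispute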

>> No.10165970

>>10165930
>do you even philosophy bro

>> No.10165998

>>10165831
That's a strawman. The Chinese Room isn't about perfect imitation of consciousness, it's about
>identical output given identical input doesn't equal "it's alive mwahahahaha"

in fact, Searle states that brains are just machines, implying that any other machine with the same causal powers will also have mental states. but the types of machines being produced by current AI programmes are too ghetto (read: semantically impoverished) to have mental states

>> No.10166001

>>10165998
he doesn't think a computational process can be conscious, period, since all computers do is syntax in, syntax out according to fixed rules.

he does think consciousness, intentionality and so on are biological, and can be reproduced by a machine, but first we're going to have to learn how neurochemistry works with this kind of thing

>> No.10166030

>>10164335
try like, basic phenomenology

>> No.10166031
File: 297 KB, 600x814, 1335720482292.jpg

>>10166001

This.

Also,
>>10165855
>>10165864
>>10165035

OP is either Larry Hauser, a sycophantic student, or his research assistant. From what I've read, Hauser seems to be enunciating a psychological knee-jerk reaction without any legitimate substance. He blurts a vaguely presidential "Wrong!" without any follow-up alternative or direct argument. That's not how science or philosophy works. When you say that someone is wrong, you have to show how, and then construct a new framework that better explains the input data, phenomena, or thought experiment. Hauser fails to do this on every level available.

>> No.10166045

>>10165831
you can’t impart your experience of love to me—and that’s not only because you’ve never felt it

>> No.10166196

>>10165812
I find it heart-warmingly ironic that one of the only philosophers of mind I've read who actually seems literate about machine learning and AI argues what he does.

>>10166030
Any recommendations for contemporary phenomenologists that are aware of the current state of cognitive neuroscience?

>> No.10167250

>>10166196
phenomenology is dead anon