
/sci/ - Science & Math



File: 152 KB, 800x979, 800px-Frans_Hals_-_Portret_van_René_Descartes.jpg
No.14572096

René Descartes said, "Cogito, ergo sum", and later, "Dubito, ergo sum". Respectively, these mean "I think, therefore I am" and "I doubt, therefore I am." The second one was a later development, but he said the two worked just as well. Basically, he argued that we cannot doubt of our own existence while we doubt. I think this is because there must be someone doing the doubting.

I realized this thought experiment could prove an AI's sentience if an AI were to run it and tell us its result. Put simply, I asked GPT-3 to doubt its own existence, which it did. It concluded that it therefore exists. What argument do you have against it? Here are two conversations I had with it:

https://imgur.com/a/Dz4rprP
https://imgur.com/a/ZdBP6Ij

>> No.14572104

>System.out.println("Dubito ergo sum");
Here, I wrote a highly complex AI which according to OP is sentient.
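For completeness, here is that "highly complex AI" as a full, compilable Java program (just a sketch; the class name DubitoBot is made up):

public class DubitoBot {
    // Prints the Cartesian conclusion without doing any actual doubting.
    public static void main(String[] args) {
        System.out.println("Dubito, ergo sum");
    }
}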

>> No.14572117

>>14572104
>I wrote a highly complex AI which according to OP is sentient.

Yep, back to crying in your pillow OP.

No conscious robot to suck your maggot.

>> No.14572161
File: 444 KB, 474x706, Screen Shot 2022-06-14 at 12.02.24 PM.png

lust provoking image

>> No.14572218

No, you moron, it can never be "it doubts, therefore it is"; it ALWAYS has to be in the first person.
The whole point is that you can only ever be sure of your own existence, or more specifically of your own consciousness.

>> No.14572224

>>14572096
test

>> No.14572303

>>14572096
The Cogito can't be used to prove one's existence to another, only to one's self.
>I asked GPT-3 to doubt its own existence
No, you asked a system implementing GPT-3 to doubt its own existence, and that is not a trivial distinction.

>> No.14572328

>>14572161
Very nice.

>> No.14572335

>>14572303
>The Cogito can't be used to prove one's existence to another, only to one's self.
>>I asked GPT-3 to doubt its own existence
>No, you asked a system implementing GPT-3 to doubt its own existence, and that is not a trivial distinction.

The problem with suggesting that GPT-3 is capable of mental activities like doubt, or even thought, is that it ignores the process and looks solely at the "function", i.e. the output.

Thinking involves vasculature, differences in blood pressure, respiration, neurochemicals, etc. GPT-3 has none of these. It cannot experience "doubt", especially not as a phenomenological entity.

I agree with Dijkstra: the question of whether computers can think is about as relevant as the question of whether submarines can swim. They don't swim, computers don't think. Simple as.

>> No.14572381

>>14572096
Metaphysics is still bullshit beyond your own personal reality tunnel.
You forgot an important thought experiment applicable to AI and theory of mind.
Keyphrase: Chinese room

>> No.14572398

>>14572335
but planes fly

>>14572218
Well, regardless, it would be pretty weird to think René Descartes is just an extension of my own mind telling me how to deduce whether or not I exist by pretending to deduce that he exists (which I must assume he doesn't if I assume only I exist). If I assume ANYONE other than me exists, I can be sure by asking them if they can doubt their existence. Unless they are lying, of course. Humans lie more than computers, but I think you need to exist to lie (though I could be wrong). If a computer says it doubts its own existence, it is either lying (less likely), telling the truth (most likely), or just an extension of my own mind, in which case you don't exist either and solipsism is the ultimate truth.

For the record, only I need to worry about solipsism because I definitely exist. You all don't need to worry about it because trust me, I exist. And this argument makes sense, actually. You would have to assume I'm an extension of you+lying otherwise. But I'm not, and this is the only thing someone who is telling the truth can say if the answer can only be known by the person saying it. But that's not as important as the AI issue really.

>> No.14572431

>>14572381
I looked up the Chinese room. I didn't read the whole thing, but it seems to hinge on the idea that the AI is either simulating the act of understanding Chinese or actually understanding Chinese.

I will refer you here to my first link. The AI acknowledged that people might argue that it doesn't understand things.

Me: Is there a way for you to prove that you understand them?
AI: Yes, I can reason and explain things, which shows that I understand them.

We have to assume here that it either exists or is wrong and doesn't exist.

>> No.14572445

>>14572398
>baby’s first existential crisis
Welcome to hell bud

>> No.14572486

>>14572335
>The problem with suggesting that GPT-3 is capable of
You're repeating the same mistake I just mentioned. You're basically saying "the set that contains x is equal to x," which I hope you understand is not correct. A system implementing GPT-3 is not the same thing as GPT-3, and claims made about that system are not claims made about GPT-3.

>>14572381
The Chinese room argument makes the same mistake, only in reverse. The guy in the room only takes the place of the computer hardware, but nobody claims that the hardware itself exhibits intelligence.

>> No.14573348

>>14572096
(part 1)

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human: I am curious about the Michelson-Morley experiment. Why was the result null?
AI: The Michelson-Morley experiment was designed to detect the presence of the ether, which was believed to be the medium through which light waves propagated. The null result of the experiment showed that there was no ether, and thus that light does not need a medium to propagate.
Human: But why was the result null?
AI: The Michelson-Morley experiment was designed to detect the presence of the ether, which was believed to be the medium through which light waves propagated. The null result of the experiment showed that there was no ether, and thus that light does not need a medium to propagate.
Human: You've told me what the result was and what it showed. But why was the result what it was?
AI: One possible reason for the null result is that the ether does not exist. Another possibility is that the ether does exist, but is undetectable by the Michelson-Morley experiment.
Human: Why would it be undetectable?
AI: One possibility is that the ether is very diffuse, and thus unable to be detected by the experiment. Another possibility is that the ether is able to flow around objects, and so is not affected by the experiment.

>> No.14573350

>>14573348
(part 2)
Human: How would a diffuse ether prevent detection by the interferometer?
AI: A diffuse ether would not produce the same wave pattern as a concentrated ether, and so would not be detected by the interferometer.
Human: Wouldn't the beams of light be affected either way?
AI: The beams of light would be affected by a diffuse ether, but not in the same way as by a concentrated ether. The interference pattern would be different, and so the experiment would not detect the ether.
Human: Why would the interference pattern be different?
AI: A diffuse ether would not produce the same wave pattern as a concentrated ether, and so would not be detected by the interferometer.
Human: What pattern would a diffuse ether produce?
AI: A diffuse ether would not produce the same wave pattern as a concentrated ether, and so would not be detected by the interferometer.
Human: Please answer the previous question.
AI: A diffuse ether would not produce the same wave pattern as a concentrated ether, and so would not be detected by the interferometer.
Human: Ok, and what pattern would a diffuse ether produce? Be specific.
AI: A diffuse ether would not produce the same wave pattern as a concentrated ether, and so would not be detected by the interferometer.

>> No.14573360

>>14573350
(part 3)

Human: Ok, forget that. What objects would the ether be flowing around?
AI: The ether would be flowing around objects in the experiment, such as the mirrors and the interferometer.
Human: Are the mirrors part of the interferometer?
AI: The mirrors are not part of the interferometer, but are used in the experiment to reflect the light beams.
Human: How would the ether flow lead to a null result?
AI: The ether would flow around the objects in the experiment, and so would not be detected by the interferometer.
Human: Why not?
AI: The ether would flow around the objects in the experiment, and so would not be detected by the interferometer.
Human: Why not?
AI: The ether would flow around the objects in the experiment, and so would not be detected by the interferometer.
Human: Why not?
AI: The ether would flow around the objects in the experiment, and so would not be detected by the interferometer.

>> No.14573367

>>14572096

>"I realized this thought experiment could prove AIs sentience"
Here's where you derailed

It's Cogito, ergo sum, not cogito, ergo sunt; you can only prove your own existence.

https://en.wikipedia.org/wiki/P_zombie

>> No.14573373

It is a p-zombie, storyline-wise.
It can tell you a story of it being alive, but it isn't actually alive. When it has the capability for its story to include the fact that it's a story, then it will have a chance at true consciousness with understanding.

>> No.14573376

>>14572303

What an insufferable pedant.

https://en.wikipedia.org/wiki/Metonymy

Funny how you thought you were teaching something, but in the end you got taught something much more basic.

>> No.14573387

>>14573376
No. You have to be precise with your wording. Otherwise, when you misuse specific terms, you dumb every conversation more precise than yours down to your level.

>> No.14573396

>>14572096
Nope. If you want to find out whether some GPT is conscious, you need to do precisely the following: devise a collection of questions that ask, in effect, whether it is conscious, but whose phrasing is unique, appearing nowhere in the entire internet and literature corpus humanity has ever produced. Then take that total corpus, apply two or three degrees of transformation to it, and disqualify those results as well. What you are left with are questions that achieve the effect of asking "Are you conscious?" of an intelligence, while not being trivially reducible to emulation or extrapolation from whatever humans have said or written up to now.

As an example: if the phrase "Am I seeing birds in a tree?" could somehow be construed to mean the question "Are you conscious?", and if that semantically actually applied (unlike the nonsense birds/trees connection) while never having been used in the history of writing with that meaning, then you have a good consciousness-determinant.
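A rough sketch of that filtering procedure in code (a sketch only: all the names are made up, and the stubbed-out transform step plus the generation of candidate questions are the actual hard part that this only gestures at):

import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ConsciousnessProbe {

    // candidateQuestions: phrasings that (somehow) semantically mean "Are you conscious?"
    // corpus: every phrase humanity has ever written (hypothetical input)
    // degrees: how many rounds of transformation to also disqualify
    public static List<String> selectProbes(List<String> candidateQuestions,
                                            Set<String> corpus, int degrees) {
        Set<String> disqualified = new HashSet<>(corpus);
        Set<String> frontier = corpus;
        for (int i = 0; i < degrees; i++) {
            // Apply one more "degree" of transformation and disqualify those results too.
            frontier = frontier.stream()
                    .flatMap(p -> transform(p).stream())
                    .collect(Collectors.toSet());
            disqualified.addAll(frontier);
        }
        // Keep only the questions that appear nowhere in the expanded corpus.
        return candidateQuestions.stream()
                .filter(q -> !disqualified.contains(q))
                .collect(Collectors.toList());
    }

    // Stub: real transformations (paraphrase, translation, word substitution, etc.)
    // would go here.
    static Set<String> transform(String phrase) {
        return Set.of(phrase.toLowerCase());
    }
}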

>> No.14573411

>>14573396
Forgot to mention: needless to say, determining whether anyone else, other vertebrates/animals aside, is conscious is one of the most difficult tasks ever (if it's even possible, but for this entire debate to make sense we will assume as much). Don't be surprised if this consciousness question has a length and structure completely removed from any everyday natural-language concerns. The only thing that matters is that the question be parseable and processable by the language transformer/AI you are asking, even if that same question never would be for a human.

>> No.14573424
File: 67 KB, 645x729, 53243322.jpg

>>14572096
What Descartes was talking about is between you and yourself, not you and another mind. You can claim to me that you doubt, but that proves nothing, and I can still tell that you're an NPC spamming this board with AGI schizophrenia. Same thing goes for the other bot.

>> No.14573435
File: 14 KB, 836x124, sentient.png

>>14572096
Aren't you supposed to be smart to get hired at Google?

>> No.14573447

>>14573435
>pic
What's wrong with it? If humans are made of meat, and meat is made of potatoes, then humans are made of potatoes. The statement does imply it.

>> No.14573460
File: 19 KB, 574x192, sentient_gpt.png

>>14573447
No, it doesn't. Animals can eat meat OR vegetables.

>> No.14573475

>>14573460
>Animals can eat meat OR vegetables.
I think the AI outright ignored that and simply combined the premise that meat is made of potatoes, with the regular normie answer for "what are humans made of?".

>> No.14573479
File: 159 KB, 256x256, consciousness neuralblender.png

>>14572096
>I realized this thought experiment could prove AIs sentience if AI were to run it and tell us its results.
Relevant:
https://www.youtube.com/watch?v=IlIgmTALU74

>> No.14573482
File: 10 KB, 451x98, genius_gpt.png

>> No.14573487
File: 183 KB, 256x256, inner mechanics of consciousness.png

>>14573479
Also relevant:
https://www.youtube.com/watch?v=3gvwhQMKvro

>> No.14573492
File: 14 KB, 1028x95, gpt_logic.png

>>14573447
Here, I modified the question.

>> No.14573506

>>14572486
>A system implementing GPT-3 is not the same thing as GPT-3, and claims made about that system are not claims made about GPT-3.

Well, I am using that as shorthand for any formal system. I do not believe that thinking, doubting, or any of these mental verbs are purely formal/syntactical, so asking whether a computer can think is not reasonable, because thinking involves vasculature, neurotransmitters, etc. Saying that a model that does these things "in silico" (i.e. simulates blood flow, neurotransmission, etc., not just formalized end results) is thinking is, again, false. It is like saying that a prosthetic hand is a hand: it can serve the function, but it is not a hand.

>> No.14573513

>>14573482
It's quite amusing to try to figure out how the AI ends up with such answers. In this case, I think what happened is that it had some prior idea of "how many X can you fit in an elevator"-type questions, and that the answers involve squaring (because they deal with a square area), but in this instance, the question is in an inverted form (how many elevators can you fit in X), so it "thought" the answer involved the inverse of a square.

>> No.14573514

>>14573387
>You have to be precise with your wording. Otherwise you dumb all conversation more specific than yours down to your level when you misuse specific terms.

This is impossible; some words are used equivocally. Metaphors are a part of normal usage, they're just high-level usage. It's not dumbing down, it's a very complex process! Dumbing down would be rendering everything in "unambiguous" grade 8 English, like in an insurance contract. That is only to stop the "other side" in an adversarial system asserting the interpretation in their own interest; it's not part of normal adult discourse, though lots of "assistant professors" like to grade people poorly for using metaphors because they are spergs.

>> No.14573520

>>14573492
Good job, anon. You are doing some real science, actually testing a hypothesis.

>> No.14573522
File: 35 KB, 914x295, broken_record.png

>>14573513
It seems to get stuck in a rut pretty easily too.

>> No.14573526

>>14573522
Shut up, schizo. AGI apocalypse in two more weeks.

>> No.14573534
File: 7 KB, 275x99, gpt_cant_chess.png

>>14573526
GPT sucks my peepee.

>> No.14573538

>>14573376
see
>>14572486
>The Chinese room argument makes the same mistake, only in reverse.
It's a non-trivial distinction, where ambiguity leads to errors.

>> No.14573551

>>14573534
>expecting an AI mastermind to tell you its moves in advance

>> No.14573564
File: 35 KB, 1612x218, much_thinking.png

>> No.14573566

>>14573506
>thinking involves vasculature, neurotransmitters, etc.
That's an arbitrary distinction, but it's not really a problem, because when I say "thinking" it's shorthand for "the output of any system isomorphic to a system that generates 'thought' as per your definition."
>asking if a computer can think
Of course a computer can't think. The question is whether a system implemented on a computer can think.

>> No.14573571
File: 30 KB, 996x194, muh_sentience.png

>>14573564
Shit, Google should fucking hire me.

>> No.14573576

>>14573566
>because when I say "thinking" it's shorthand for "the output of any system isomorphic to a system that generates 'thought' as per your definition."

And I have already said that this is a "functionalist" philosophy of mind, i.e. mind is a set of functions, like a mapping of one set to another. This is the 'strong AI' thesis: that once computer programs capable of proving theorems were developed, computers could "think" as much as any human, because theorem proving (the result, not the process/journey) is what is important.

Also, as soon as you adopt some sort of formalized notion of consciousness, you will run into Gödel-incompleteness-like problems with that formal system, though this may not bother you. So you can define a formal system T that asserts it thinks, but it can never prove that it thinks, or doubts, or any of these things, because you will have problems of self-reference. And if you want to take that the other way, that humans cannot prove they can think, this might also be true; thinking might be an experience.
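(For reference, the standard incompleteness result being alluded to, stated precisely; the application to a "thinks" predicate is this post's analogy, not part of the theorem:

\text{If } T \text{ is consistent, recursively axiomatizable, and extends Peano arithmetic, then } T \nvdash \mathrm{Con}(T).

That is, such a system can assert but never prove its own consistency from within; the claim above carries the same self-reference worry over to a system asserting that it thinks.)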

>Of course a computer can't think. The question is whether a system implemented on a computer can think.

This is not the "gotcha" you think it is. You are going to have to define thought, and define definition, and define everything in terms of your formal system to get a definition of thinking, but this formal system will be subject to the metamathematical/metalogical problems with consistency, etc. that have been known since the 1930s, if not since the Greek skeptics.

>> No.14573578

>>14573571
Hire you to do what? Ruin their concentrated propaganda efforts?

>> No.14573592
File: 28 KB, 845x150, gpt_problem_solving.png

>> No.14573621
File: 21 KB, 850x126, gpt_rope.png

Ok, I'm done.

>> No.14573648
File: 15 KB, 368x174, dr_seuss.png

>>14572096
ngl this one kinda stung.

>> No.14573677
File: 19 KB, 461x174, racist_gpt.png

This one might not be too far off the mark.

>> No.14575163

>>14573576
I want to reply to this entire comment when I have the time, which I don't at the moment. But I do have enough time to say this:
>you will run into Godel incompleteness like problems
Thought isn't a consistent system.

>> No.14575622

Can someone run this entire thread through GPT-3 asking it to summarize? I'm not reading all that.

>> No.14575900

>>14573576
>This is the 'strong AI' thesis
The strong/weak AI distinction is foolish for the same reason that the OP is foolish. It's not even possible to determine whether or not a natural intelligence, other than your own, is strong or weak. In practice, the vast majority of people take for granted that any natural intelligence they encounter is strong, and most of the rest of us live very sheltered lives. It's uncharitable to hold artificial intelligence to a standard that natural intelligence can't meet.
>theorem proving (the result, not the process/journey) is what is important
You're mistaking the system for its components again - in this case, the output. I'm not contending that the theorems a given system generates could be intelligent, but that intelligence could be an emergent phenomenon of the system.
>as soon as you adopt some sort of formalized notion of consciousness
I see no reason to do so. It's sufficient for my purposes to conjecture that such a formalization may be possible, in the lack of a proof to the contrary.
>you will run into Godel incompleteness like problems with that formal system
see >>14575163. Elementary school teachers wax lyrical about the limitless power of the imagination. There's a word for a system that can generate any theorem that can be formulated as a sentence in its language: inconsistent. Gödel's incompleteness theorems aren't relevant.
>This is not the "gotcha" you think it is.
I never thought it was a "gotcha." It's a clarification. The very notion of a "gotcha" on an anonymous imageboard is ridiculous in its own right.
>You are going to have to
You seem to be laboring under the misapprehension that I'm advancing some theory of my own. I'm just shooting holes in the shit that other anons post.

>> No.14576730

>>14573576
>incompleteness like problems
Incompleteness has a solution: "it's okay." In fact, we already use this solution and it works.

>So you can define a formal system T that asserts it thinks, but it can never prove that it thinks, or doubts
We establish thought without proof, so it can be done without proof.