
/lit/ - Literature


>> No.13584903
File: 194 KB, 1033x689, dogbless.jpg

>>13584834
Apples and oranges. The point of Turing test is to check whether the entity is capable of displaying full *range* of human response, most significantly, if it can properly simulate emotions in a way normal human would depending on extremely intricate context. Does it have personality? Does its personality change over time?

Current chatbots don't last 30 seconds (at human conversation speeds), and each subsequent second gets exponentially harder in terms of complexity. And if the bot had to simulate a month, or even a lifetime... with what we currently do, the computational costs are astronomical.
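To make the scaling claim concrete, here's a toy back-of-the-envelope sketch. The constants C and B are made up purely for illustration; nothing here is measured from any real chatbot. The only point is that if staying in character costs cost(t) = C * B**t for t seconds of conversation, even a modest exponential base blows up long before a month.

# Toy model of the scaling claim above (illustrative only:
# C and B are hypothetical, not measured from any real system).
C = 1.0   # hypothetical cost of the first second
B = 1.5   # hypothetical per-second growth factor

def cost(seconds: float) -> float:
    """Hypothetical cost of simulating `seconds` of coherent persona."""
    return C * B ** seconds

for label, t in [("30 seconds", 30), ("5 minutes", 300)]:
    print(f"{label}: {cost(t):.1e}")
# 30 seconds: ~1.9e5; 5 minutes: ~6.7e52.
# A month is astronomically beyond that, which is the point above.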

Chess arguments are absurd, because chess is *easy* compared to "being yourself".

>>13584865
The point is: if something behaves like it has free will, does it have one? Why does it matter whether it *thinks* it has one (instead of being an autistic zombie seeking to fool you)? If it's an autistic zombie pretending 100% of the time to be human, it's no longer pretending, as there's no trace of the original zombie personality left.

The p-zombie is, again, a question that simply makes no sense from the standpoint of formal logic.

Consider: if it *always* looks like a circle - even if it's in fact a triangle pretending to be a circle 100% of the time - it's factually just a circle in every consequence, regardless of whether the triangle thinks "haha, I'm a triangle all along, fooled ya". It doesn't matter when it stays a circle 100% of the time. In fact, it could be argued it's just a circle that's deluded into thinking it's a triangle.
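For what it's worth, the same point can be sketched in code (all names here are hypothetical, invented for the illustration): if every observation anyone can ever make of the "pretender" returns circle answers, no possible test separates it from a circle, and the private "I'm really a triangle" monologue does no work.

import math

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2
    def corners(self): return 0

class TrianglePretendingToBeACircle:
    """Privately 'a triangle', but every public behaviour is a circle's."""
    def __init__(self, r):
        self._secret = "haha, I'm a triangle all along"  # never observable
        self.r = r
    def area(self): return math.pi * self.r ** 2
    def corners(self): return 0

def observe(shape):
    # The only access anyone ever has: behaviour, not inner monologue.
    return (round(shape.area(), 9), shape.corners())

assert observe(Circle(2)) == observe(TrianglePretendingToBeACircle(2))
# No observation distinguishes them, so in every consequence
# that matters, both are circles.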
