
/sci/ - Science & Math


>> No.11516876
File: 46 KB, 894x393, irobot.jpg

I have been reading and thinking about the Hard Problem during quarantine, and I have a question I would like to ask.

Suppose we successfully created a humanoid robot like the one in I, Robot. From the outside, this robot looks and acts just like a human.

Now suppose someone wanted to torture and kill this robot for fun.

I imagine someone like David Chalmers (or myself) would judge the ethics of this on the basis of whether or not the robot experiences qualia. Since I am not sure about the nature of consciousness, if the robot told me it experienced pain, longing, and a desire not to be killed, that would give me pause. Hopefully one day we can test for qualia, or understand consciousness well enough to navigate these potential ethical questions.

My question is: what does Dan Dennett do in this situation? How does he reason about it? If qualia don't exist, how do we have any hope of determining which 'systems' are okay to dismantle and which should be protected? I assume Dennett is against killing small children, so what about my robot? What about NPCs in Skyrim or Grand Theft Auto? I think I would better understand his position if I better understood his framework here.

Thanks.
