
/sci/ - Science & Math



File: fuckno.jpg
No.4752999

DO ROBOTS CURE CONSCIENCE?
When I first began studying people and computers, I saw programmers relating one-to-one with their machines, and it was clear that they felt intimately connected. The computer's reactivity and interactivity, which made it seem an almost-mind, gave them the feeling of having "company," even as they wrote code. Over time, that sense of connection became "democratized." Programs became opaque: when we are at our computers, most of us deal only with surfaces. We summon screen icons to act as agents. We are pleased to lose track of the mechanisms behind them and take them "at interface value." But as we summon them to life, our programs come to seem almost companions. Now, "almost" has almost left the equation. Online agents and sociable robots are explicitly designed to convince us that they are adequate companions.

>> No.4753002

>>4752999
Predictably, our emotional involvement ramps up. And we find ourselves comforted by things that mimic care and by the "emotions" of objects that have none. We put robots on a terrain of meaning, but they don't know what we mean. And they don't mean anything at all. When a robot's program cues "disgust," its face will look, in human terms, disgusted. These are "emotions" only for show. What if we start to see them as "real enough" for our purposes? And moral questions come up as robotic companions not only "cure" the loneliness of seniors but assuage the regrets of their families.

>> No.4753004

>>4753002
In the spring of 2009, I presented the case of robotic elder care to a class of Harvard undergraduates. Their professor, political theorist Michael Sandel, was surprised by how easily his students took to this new idea. Sandel asked them to think of a nursing home resident who felt comforted by Paro and then to put themselves in the place of her children, who might feel that their responsibility to their mother had been lessened, or even discharged, because a robot "had it covered." Do plans to provide companion robots to the elderly make us less likely to look for other solutions for their care?

>> No.4753007

>>4753004
As Sandel tried to get his class to see how the promise of robotic companionship could lead to moral complacency, I thought about Tim, who took comfort in how much his mother enjoyed talking to Paro. Tim said it made "walk[ing] out that door" so much easier when he visited her at the nursing home.

>> No.4753010

>>4753007
In the short term, Tim's case may look as though it charts a positive development. An older person seems content; a child feels less guilty. But in the long term, do we really want to make it easier for children to leave their parents? Does the "feel-good moment" provided by the robot deceive people into feeling less need to visit? Does it deceive the elderly into feeling less alone as they chat with robots about things they once would have talked through with their children? If you practice sharing "feelings" with robot "creatures," you become accustomed to the reduced "emotional" range that machines can offer. As we learn to get the "most" out of robots, we may lower our expectations of all relationships, including those with people. In the process, we betray ourselves.

>> No.4753013

>>4753010
All of these things came up in Sandel's class. But in the main, his students were positive as they worked through his thought experiment. In the hypothetical case of mother, child, and robot, they took three things as givens, repeated as mantras. First, the child has to leave his mother. Second, it is better to leave one's mother content. Third, children should do whatever it takes to make a mother happy.

>> No.4753026

>>4753013
I left the class sobered, thinking of the fifth graders who, surrounded by a gaggle of peers talking about robots as babysitters and caretakers for their grandparents, began to ask, "Don't we have people for these jobs?" I think of how little resistance this generation will offer to the placement of robots in nursing homes. And it was during that very spring that, fresh from his triumphant sale of a thousand Paros to the Danish government, the robot's inventor had come to MIT to announce that he was opening up shop in the United States.

>Sherry Turkle is the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, the founder and director of the MIT Initiative on Technology and Self, and a licensed clinical psychologist

Rage General I guess