Robot caregivers: Would it matter that they only outwardly cared?

#1
C C
http://blog.talkingphilosophy.com/?p=8641

EXCERPT: [...] In regard to those specific to a companion robot, there are moral concerns about the effectiveness of the care—that is, are the robots good enough that trusting the life of an elderly or sick human to them would be morally responsible? While that question is important, a rather intriguing moral concern is that the robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate (fake) human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems to be morally wrong.
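(To make the "mechanical behavior" point concrete, here is a minimal sketch, not taken from the article, of how such a response loop might look: the robot's "caring" reply is nothing more than a lookup keyed on whatever its emotion-recognition step reports. The names detect_emotion, respond, and CANNED_RESPONSES are hypothetical.)

# Hypothetical sketch: a "caring" response as a table lookup,
# with no feeling anywhere in the process.

CANNED_RESPONSES = {
    "sad":     "I'm sorry you're feeling down. Would you like to talk about it?",
    "happy":   "That's wonderful to hear!",
    "anxious": "Take a deep breath. I'm right here with you.",
}

def detect_emotion(utterance: str) -> str:
    """Stand-in for 'emotion recognition software': a crude keyword match."""
    lowered = utterance.lower()
    if any(word in lowered for word in ("sad", "lonely", "miss")):
        return "sad"
    if any(word in lowered for word in ("worried", "scared", "afraid")):
        return "anxious"
    return "happy"

def respond(utterance: str) -> str:
    """Say 'the right thing at the right time' without feeling anything."""
    return CANNED_RESPONSES[detect_emotion(utterance)]

print(respond("I miss my daughter."))   # prints the scripted "sad" reply

However sophisticated the real detection and generation components are, the structure is the same in kind: input classified, output selected. That is the sense in which the behavior is "merely mechanical."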

One obvious response is that people [...in placebo-like fashion] still gain value from its “fake” companionship.

[...] there is still an[other] important moral concern here: [...] Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. [...] In philosophical terms, humans have (or are) minds and robots [...] do not have minds. They merely create the illusion of having a mind.

Interestingly enough, philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior). While such behaviorism has been largely abandoned, it does survive in a variety of jokes and crude references to showing people some “love behavior.”

The usual “solution” to the problem is to go with the obvious: I infer that other people have minds via an argument from analogy. [...]

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in many and various forms of deceit. [...]

In contrast, a companion robot [...] does not think or feel. Or so it is believed. [...] we can go in and look at the code and the hardware to see how it all works and we will not see any emotions or thought in there. The robot [of today at least], however complicated, is just a [...] machine, incapable of thought or feeling....

- - - - - - - - - - - -

But another element here is that even some human caregivers only “care” to the extent that it is exhibited in their external behavior; they are paid, and obligated by the requirements of the job, to feign intrinsic, non-superficial feelings for the person being cared for.