A Survey of the Robotics Ethical Landscape

Pawel Lichocki, Peter Kahn, Aude Billard

questions based on pp. 1-5

1 Kahn and Friedman suggest that robots should not be designed to (artificially) mimic intentional states. The authors note that anthropomorphic robots are designed exactly to trigger such feelings, and therefore that the design trajectory in robotics collides with this possible solution to the problem of misplaced ascription of moral agency. But, from a human factors point of view, designers will design robots so as to maximize productivity, and anthropomorphic designs may have benefits in this sense. How would you propose weighing the danger of intention ascription against the benefits of usability and productivity when these design goals collide? Is there a third way forward you can propose?

 

2 The issue of responsibility, that is, of ascribing responsibility when harm is done by a robot, is discussed early in the survey article. But if a robot is a complex system with more than one human actor responsible in intricate ways for its eventual behavior, then a reasonable analogue might be the company. Who is responsible when a company commits harm, and how might you apply ethical and legal analyses of corporate responsibility to the robot?

 

3 The authors make an argument about human freedom, specifically that, in the service sphere, one approach is to insist that humans have the freedom to choose to interact only with people (i.e., and not with robots). But this argument is more complex. Considering human freedom specifically, choose an ethical framework (e.g., consequentialism) and construct a specific argument for both sides: that is, two convincing arguments, one explaining why service robots are an ethical win, and one explaining why they are ethically negative.