Wallach: Moral decision-making

Wendell Wallach. Implementing Moral Decision Making Faculties in Computers and Robots.

Consider a home care robot for an elderly person. It helps with simple chores, like cleaning the floor, but also keeps an eye on the occupant. In cases where the person may be having trouble (say, spending too long in the bathroom), it has the ability, and the option, to contact the occupant's children. It has to weigh privacy concerns against the maintenance of safety and health.

1. Consider the top-down strategy that Wallach describes on pages 466-467. In computer science and AI terms, what are the key breakthroughs we would need in order to implement such top-down moral reasoning in our home care robot? How many years away do you think this is? Explain your estimate.
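For concreteness, here is a minimal sketch of what a top-down approach might look like for this scenario: explicit, hand-coded principles (a safety rule and a privacy rule, with a stated priority between them) decide whether to contact the family. The class, function names, and the 45-minute threshold are all illustrative assumptions, not anything from Wallach's article; a real system would need the breakthroughs the question asks about to ground such rules in perception and context.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """A drastically simplified snapshot of what the robot perceives."""
    minutes_in_bathroom: float
    occupant_responded_to_checkin: bool
    occupant_requested_privacy: bool

def should_contact_family(s: Situation) -> bool:
    # Rule 1 (safety): prolonged, unresponsive time alone is a risk.
    # The 45-minute cutoff is an arbitrary illustrative parameter.
    safety_risk = (s.minutes_in_bathroom > 45
                   and not s.occupant_responded_to_checkin)
    # Rule 2 (autonomy/privacy): respect an explicit privacy request,
    # unless the safety rule overrides it.
    if s.occupant_requested_privacy and not safety_risk:
        return False
    return safety_risk

print(should_contact_family(Situation(60, False, True)))   # True: safety overrides privacy
print(should_contact_family(Situation(20, True, False)))   # False: no risk detected
```

The hard part, of course, is everything the sketch assumes away: reliably inferring "unresponsive," knowing when an exception to a rule applies, and justifying the priority ordering itself.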

2. Consider the bottom-up strategy that Wallach describes on page 467, and restate just how this robot becomes a moral reasoning system. Wallach suggests that this bottom-up approach has promise because of its embedded learning: morality is learned in a context of existence and action relevant to one's own experience. Yet at the bottom of p. 467 Wallach suggests porting the resulting learned system from one robot to another (in effect, a cloning of moral reasoning). What technical challenges do you foresee in this bottom-up approach? How many years away do you think this may be?
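As a point of contrast with the top-down rules, a bottom-up approach might look like the following sketch: no moral rules are written down; instead the robot learns from feedback (here, simulated caregiver reward) when alerting the family is acceptable, and the learned values, being plain data, can be copied into a second robot, which is Wallach's porting idea in miniature. The state encoding, reward labels, and training loop are all hypothetical simplifications for illustration.

```python
import random

random.seed(0)

Q = {}  # (state, action) -> learned value, built up purely from feedback

def update(state, action, reward, lr=0.1):
    """Nudge the stored value toward the observed reward."""
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + lr * (reward - old)

def policy(state):
    """Pick whichever action has the higher learned value."""
    return max(("alert", "wait"), key=lambda a: Q.get((state, a), 0.0))

# Simulated experience: states are coarse (time bucket, responded?),
# and caregivers reward alerts only when the occupant has been
# unresponsive for a long time. These labels are illustrative, not data.
for _ in range(500):
    state = (random.choice(("short", "long")), random.choice((True, False)))
    for action in ("alert", "wait"):
        good_alert = state == ("long", False)
        reward = 1.0 if (action == "alert") == good_alert else -1.0
        update(state, action, reward)

# "Cloning": the learned values are just a dictionary, so they can be
# copied into a second robot that never had the experiences itself.
Q_clone = dict(Q)

print(policy(("long", False)))   # "alert"
print(policy(("short", True)))   # "wait"
```

The sketch also hints at the porting problem: the copied `Q_clone` encodes judgments tied to one household's situations and feedback, and there is no obvious guarantee they transfer to a different occupant, home, or sensor suite.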