Independently minded robots, Isaac Asimov told us, need rules. With well-structured, law-abiding robots, we get terrific garbage service, expertly made French toast and great lawn care. With recklessly structured, disobedient robots, we get “He’s been sent from the future to kill you – that’s WHAT he does! That’s ALL he does!” The choice is clear, and fortunately someone has started the discussion to get us moving in a kinder, gentler robotic direction.
Authors Wendell Wallach, an ethicist at Yale University, and Colin Allen, a historian and philosopher of cognitive science at Indiana University, have provided us with Moral Machines: Teaching Robots Right from Wrong to help guide the way. The New Scientist discusses their six strategies for reducing robotic danger here. Here’s one of them:
Program robots with principles
Likelihood of success: Moderate. Recognising the limits of rules, some ethicists look for an overriding principle that can be used to evaluate all courses of action.
But the history of ethics is a long debate over the value and limits of many proposed single principles. For example, it could seem logical to sacrifice the life of one person to save the lives of five people. But a human doctor would not sacrifice a healthy person simply to supply organs to five people needing transplants. Would a robot?
Sometimes identifying the best option under a given rule can be extremely difficult. For example, determining which course of action leads to the greatest good would require a tremendous amount of knowledge, and an understanding of the effects of actions in the world. Making such calculations would require time and a great deal of computing power.
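The kind of “greatest good” calculation the passage describes can be sketched as a toy expected-utility chooser. Everything here is a hypothetical illustration, not anything from Wallach and Allen’s book: the actions, the utility numbers and the scoring rule are all stand-ins, chosen to show why a single summed-utility principle endorses the transplant scenario that a human doctor would reject.

```python
# Toy sketch of a single-principle "greatest good" evaluator:
# score each candidate action by the sum of its outcome utilities
# and pick the maximum. All actions and numbers are hypothetical.

def greatest_good(actions):
    """Return the action whose outcomes sum to the highest utility."""
    return max(actions, key=lambda a: sum(a["utilities"]))

actions = [
    # Harvesting one healthy patient's organs saves five lives (+5)
    # at the cost of one (-1), so the raw sum is 4...
    {"name": "harvest organs", "utilities": [+5, -1]},
    # ...while doing nothing sums to 0, so the naive calculus
    # endorses exactly what no human doctor would do.
    {"name": "do nothing", "utilities": [0]},
]

best = greatest_good(actions)
print(best["name"])  # the principle picks "harvest organs"
```

And this is the easy case: in any realistic setting, enumerating the actions and estimating their downstream utilities is where the “tremendous amount of knowledge” and computing power the passage mentions would actually go.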
If you think this is far-fetched, please check out the giggling robot, the violin-playing robot and, of course, Asimo. We’re not as far away from the future Wallach and Allen describe as it might first seem. And it’s imperative we move fast before this happens…