Moral Machines: Teaching Robots Right from Wrong
Computers already approve financial transactions, control electrical supplies, and drive trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. The authors argue that as robots take on ever more responsibility, they must be programmed with moral decision-making abilities for our own safety. Yet the standard ethical theories do not seem adequate to the task; robots that are more socially engaged, and more engaging, will be needed. As the authors show, the quest to build machines capable of telling right from wrong has begun.