Editor | Peter Bowen
Wired.com: When machines go bad
Posted July 22, 2009
Shane Acker’s upcoming sci-fi saga 9 imagines a world where machines have turned on their makers, joining recent films like The Matrix and Terminator in that dystopian fantasy. In "Robo-Ethicists Want to Revamp Asimov's 3 Laws," published on Wired.com, Priya Ganapati reports on a real-life “robo-ethicist” who is promoting a new robotic psychology, one that updates the “Three Laws of Robotics” proposed by Isaac Asimov in his story “Runaround.” According to Asimov:
A robot may not injure a human being or allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. Each of the laws takes precedence over the ones following it, so that under Asimov’s rules, a robot cannot be ordered to kill a human, and it must obey orders even if that would result in its own destruction.
But now Chien Hsun Chen, in a paper published in the International Journal of Social Robotics, argues, according to Wired.com, that “as robots have become more sophisticated and more integrated into human lives, Asimov’s laws are just too simplistic.” If this all sounds like geek mumbo jumbo, Wired.com cites several real-world incidents in which robots killed, or almost killed, their masters. Ganapati leads off with this note: “Two years ago, a military robot used in the South African army killed nine soldiers after a malfunction.” Later she mentions that “earlier this year, a Swedish factory was fined after a malfunctioning robot almost killed a factory worker who was attempting to repair the machine generally used to lift heavy rocks.” What is the solution?
Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates them, and even create a “legal machine language” that could help police the next generation of intelligent automated devices.