Science-- there's something for everyone

Tuesday, September 28, 2010

Deceptive Robots

Ronald Arkin and Alan Wagner of the Georgia Institute of Technology have successfully coached robots in the art of deception.

The engineers began working on robot deception to give robots an extra tool for handling their various tasks. An obvious application would be a military robot tricking enemy soldiers about troop strength or location. However, the researchers also foresee non-military uses, such as deceiving disaster victims to keep them calm while they are being rescued.

So far, the team has managed to get one robot to trick another into thinking it was hiding in one place when in reality it was somewhere else: the hiding robot deliberately knocked over markers to lay a false trail. Needless to say, the researchers have a bit of work ahead of them before they can take their deceptive robots on the road. If the picture below is indicative, even my dog could penetrate this level of duplicity. To be fair, it's still early days. (A rough sketch of the trick in code follows the photo.)

Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner with their robots. The black robot intentionally knocked down the red marker to deceive the red robot into thinking it was hiding down the left corridor. Instead, the black robot is hiding inside the box in the center pathway.

Credit: Georgia Tech. Photo: Gary Meek.
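The article doesn't include the researchers' actual control code, but the trick itself is simple enough to sketch. Here's a minimal toy version in Python, purely my own illustration (the corridor names, functions, and the "knocked-down marker" signal are all hypothetical stand-ins): the hider picks a real hiding spot, knocks down a marker on a different corridor, and a gullible seeker follows the physical evidence.

```python
import random

CORRIDORS = ["left", "center", "right"]

def choose_hiding_plan(corridors):
    """Pick a true hiding spot and a different corridor to mark falsely."""
    true_spot = random.choice(corridors)
    decoys = [c for c in corridors if c != true_spot]
    false_trail = random.choice(decoys)
    return true_spot, false_trail

def hider_act(false_trail):
    """The hider knocks down the marker on the decoy corridor only.
    Returns a map of corridor -> whether its marker is knocked down."""
    return {c: (c == false_trail) for c in CORRIDORS}

def seeker_search(knocked_markers):
    """A naive seeker trusts the physical evidence and searches the
    corridor whose marker is down; otherwise it guesses at random."""
    marked = [c for c, down in knocked_markers.items() if down]
    return marked[0] if marked else random.choice(CORRIDORS)

if __name__ == "__main__":
    true_spot, false_trail = choose_hiding_plan(CORRIDORS)
    markers = hider_act(false_trail)
    guess = seeker_search(markers)
    print(f"Hider is in the {true_spot} corridor; false trail points {false_trail}.")
    print(f"Seeker searches {guess}: {'found!' if guess == true_spot else 'deceived.'}")
```

The whole ruse rests on the seeker treating a knocked-down marker as an honest signal. A more skeptical seeker (my dog, say) would need a cleverer deception, which is presumably where the hard research lies.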

This line of research does open up a slew of ethical questions. Is it acceptable to falsely reassure victims, even to keep them calm? And what does building robots capable of deliberate deception say about our understanding of intelligence and theory of mind?

According to Arkin:

We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects. We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.
