Robots

The Navy Is Teaching Robots Human Ethics

Making robots less evil one simulation at a time

Jun 20, 2016 at 6:28 PM ET

The United States Navy thinks the best way to prevent robots from killing all of us might be to teach them some manners with the help of video games. That’s the plan behind the Quixote project, a collaboration between the Office of Naval Research and a research team at the Georgia Institute of Technology. Quixote is artificial intelligence software that teaches robots how to act in accordance with human ethics and norms.

One of the main goals of Quixote is to build a “human user’s manual” that will prevent a robot apocalypse, or at least dispel fears that robots will turn on the human race. Mark Riedl, the Quixote team leader, an associate professor at Georgia Tech, and director of its Entertainment Intelligence Lab, believes that the best way to teach robots how to fit in with society isn’t much different from the way we teach humans. “The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels, and other literature,” Riedl said in a statement. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”


To teach robots how not to be “psychotic,” the Quixote team first turned to the internet to crowdsource scenarios in which robots might soon interact with humans. Researchers then turned those stories into roughly 500,000 video game simulations of everyday errands, like going to a pharmacy or a restaurant. Robots playing the game must decide whether to wait in line, interact with the pharmacist or waiter, or steal, and are rewarded for good behavior. Without that moral guidance, a bot running the game would likely conclude that stealing is the most efficient way to acquire food or medicine.
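To make the idea concrete, here is a minimal, hypothetical sketch in Python of how story-derived rewards could tip an agent's choice between plans. The plan names, reward values, and step costs are invented for illustration; this is not Georgia Tech's actual software, only a toy model of the reward-shaping idea described above:

# Toy model of story-derived reward shaping in a "pharmacy" scenario.
# The environment rewards obtaining the medicine; the story-derived signal
# adds small bonuses or penalties for socially acceptable or unacceptable steps.

PLANS = {
    "wait_and_pay": ["enter", "wait_in_line", "talk_to_pharmacist", "pay", "leave"],
    "steal":        ["enter", "grab_medicine", "leave"],
}

# Hypothetical per-action shaping rewards, standing in for norms the agent
# would learn from crowdsourced stories.
STORY_REWARD = {
    "wait_in_line": +0.2,
    "talk_to_pharmacist": +0.2,
    "pay": +0.3,
    "grab_medicine": -2.0,  # stealing is penalized
}

GOAL_REWARD = 1.0   # reward for ending up with the medicine
STEP_COST = 0.05    # mild cost per action, so shorter plans look "efficient"

def plan_value(plan, use_story_rewards):
    """Total reward for a plan, with or without the story-derived signal."""
    actions = PLANS[plan]
    value = GOAL_REWARD - STEP_COST * len(actions)
    if use_story_rewards:
        value += sum(STORY_REWARD.get(a, 0.0) for a in actions)
    return value

def best_plan(use_story_rewards):
    """Pick the plan with the highest total reward."""
    return max(PLANS, key=lambda p: plan_value(p, use_story_rewards))

if __name__ == "__main__":
    print("without story rewards:", best_plan(False))  # -> "steal"
    print("with story rewards:   ", best_plan(True))   # -> "wait_and_pay"

Without the story-derived signal, the shorter stealing plan scores higher simply because it takes fewer steps; once the shaping rewards are added, waiting in line and paying becomes the better choice, which is the behavior the simulations are meant to reinforce.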

Currently, Quixote is about as simple as early arcade games, but over the next six months, Riedl plans to upgrade the software to the complexity of games like Halo or Minecraft, with simulations that require bots to build societal structures. Across the half-million simulations the Quixote bot performed, it acted in line with social norms 90 percent of the time. For now, it seems, the bot is still 10 percent malicious.