MILITARY

U.S. Military Wants Robots That Can Explain Themselves

The Department of Defense hopes this could be the key to helping humans trust robots

Illustration: R. A. Di Ieso
Aug 15, 2016 at 3:40 PM ET

The future of defense technology will be driven by artificial intelligence (AI), but robotic weapons, vehicles, and soldiers won’t be much use if human service members don’t trust or understand their automated counterparts. In order to build human confidence in machines, Pentagon researchers want to develop systems that explain exactly what they’re doing.

Last week, the United States Defense Advanced Research Projects Agency (DARPA) announced its Explainable AI (XAI) program, an initiative meant to ensure that people won't be confused by emerging battlefield tech. The agency will begin development in May 2017 and work on the project for four years.

The human-robot trust gap is a growing concern for the Department of Defense. The military has been funding research in the area for years, and last June the Air Force granted $7.5 million for research on ways to make humans trust AI so the two can collaborate more easily during battle.


That project is scheduled to wrap up in 2023, but in the meantime, DARPA is already developing one method to help people and bots see eye-to-eye. “New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future,” DARPA program manager David Gunning wrote in an announcement for XAI, explaining that these systems will be “capable of translating models into understandable and useful explanation dialogues for the end user.”

In other words, future military technology will be able to explain what it’s doing, how to operate it, and how to troubleshoot. Most importantly, a user could ask questions about a computer’s decision making. For instance, if a combat medic doesn’t trust an AI medical assistant’s suggested diagnosis, the medic could ask how it came to that conclusion.
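To make the idea concrete, here is a minimal toy sketch in Python of what such an "explanation dialogue" might look like. Everything in it is invented for illustration: the symptom names, weights, and threshold are hypothetical, and this is not DARPA's system or a real diagnostic model. The point is simply that the program returns not just an answer but the factors behind it, which a skeptical user can inspect.

# Toy sketch of an "explainable" decision aid, loosely inspired by the XAI
# concept described above. All symptom names, weights, and thresholds are
# invented for illustration; this is not DARPA's system or a real model.

FEATURE_WEIGHTS = {
    "fever": 2.0,
    "elevated_heart_rate": 1.5,
    "low_blood_pressure": 3.0,
    "confusion": 2.5,
}
THRESHOLD = 4.0  # arbitrary cutoff for flagging the hypothetical condition


def assess(observations: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return a yes/no flag plus a human-readable rationale for that flag."""
    score = 0.0
    rationale = []
    for feature, weight in FEATURE_WEIGHTS.items():
        if observations.get(feature, False):
            score += weight
            rationale.append(f"{feature.replace('_', ' ')} (+{weight})")
    flagged = score >= THRESHOLD
    rationale.append(f"total score {score:.1f} vs. threshold {THRESHOLD:.1f}")
    return flagged, rationale


if __name__ == "__main__":
    obs = {"fever": True, "low_blood_pressure": True, "confusion": False}
    flagged, why = assess(obs)
    print("Flag condition:", flagged)
    # The "explanation dialogue": the user asks why and gets the factors back.
    for line in why:
        print(" -", line)

A real XAI system would sit on top of far more opaque machine-learning models, but the interaction pattern is the same: the answer comes paired with the reasoning behind it.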

If fully realized, XAI would be a bit like Siri, but less frustrating and far more self-centered.