Air Force Spends Millions To Make Humans Trust Robots
Building the perfect robot-human team for the battlefield
The United States Air Force wants robots and service members to be best buds on the battlefield.
Last Friday, the Air Force announced a grant of $7.5 million for research on ways to make humans trust artificial intelligence (AI) so that people and machines can collaborate on missions.
Soon service members in every branch of the Armed Forces will be working with AI on a daily basis—be it unmanned aerial vehicles, underwater drones, or robot soldiers (the U.S. military had to shelve the Boston Dynamics LS3 “robotic mule” because it was too loud, but last week the company revealed the much stealthier SpotMini). On November 1, 2014 (one week after Elon Musk compared developing AI to “summoning the demon”), Undersecretary of Defense Frank Kendall issued a memo asking the Defense Science Board to study what issues must be solved in order to expand the use of AI “across all war-fighting domains.”
But robotic weapons and soldiers won’t be as effective if their human counterparts don’t trust them. That’s why the military has been studying human-robot interaction for years. In 2012, a team of researchers from the University of Southern California (USC) and the U.S. Army Research Laboratory published the study “Human-robot interaction: Developing trust in robots,” which concluded that humans need to trust robots before they can effectively collaborate with machines. Last year, researchers from USC and the Army Research Lab again presented a study on human-robot interaction, this time determining that people trust robots more when the machines are more reliable and explain what they’re doing throughout a task.
Now, the Air Force Research Laboratory is also investing in the field. The Air Force first solicited proposals from contractors last year, explaining that it “has a need to understand the human-machine trust process” with intelligence and surveillance agents and pilots. “[T]o achieve this ambitious vision we need research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams,” the Air Force stated at the time.
According to the statement, the Air Force wanted research to include studies on robots’ characteristics, how humans interact with robots, how to detect psychologically and physiologically whether humans trust a robot, and the consequences of humans not trusting a robot.
The Air Force considered five offers before giving the contract to SRA International, a government services company based in Chantilly, Virginia. A spokesperson for SRA told Vocativ that its proposal for the human-machine trust program included automated tools in aircraft cockpits; intelligence, surveillance, and reconnaissance analysis; software code; automated translation capabilities; and robotic systems.
The Air Force believes this line of research and development will allow it to build human-robot partnerships that can complete tasks more efficiently than either party working independently. “The Air Force is pursuing strategic agility—in our people and technology—to meet the challenges of a newly forming adversarial environment,” Mike Bennett, chief of the Human Trust and Interaction Branch of the Air Force Research Laboratory, told Vocativ. “Airman-autonomy teaming can enhance mission performance by combining the airman’s ability to deal with uncertainties and to provide qualitative judgments with an autonomous system’s ability to digest data rapidly and explore a broad range of decision options in short mission timelines.”
The research, which will take place at the Air Force Research Laboratory just outside Dayton, Ohio, is scheduled to wrap up in March 2023. So we could be seeing robots and humans sharing a cockpit within the next seven years.