Pentagon Is Developing Tech That Detects Fake Photos
The tool could soon be available to social media platforms and news outlets
The United States government has an ambitious plan to kill off fake viral photos that plague the internet.
The Defense Advanced Research Projects Agency (DARPA) is spending $4.4 million on a four-year project called MediFor (Media Forensics) that aims to develop machine-learning technology that can scan millions of images every day and detect photos that have been manipulated. The government agency announced the initiative last September, soliciting proposals for techniques that will “level the playing field, which currently favors the image manipulator, by developing technologies for the automated assessment of the integrity of an image or video.”
Race of double deck buses, 1933 pic.twitter.com/8CzfxnBNlK
— History In Pictures (@HistoryInPics) January 31, 2014
Of course, in an age when anyone can edit an image on their smartphone and people readily share media that confirms their political beliefs, the internet is rife with deceptive photos and videos that spread quickly on social media, sometimes fueled by Donald Trump’s Twitter account. The public is especially susceptible to fake photos in the wake of disasters and tragedies.
— Anonymous (@YourAnonNews) October 29, 2012
That’s an iPad, not a Quran, and the Dastar (turban) is worn by Sikhs. pic.twitter.com/xkKzJ0G65f
— Grasswire Fact Check (@GrasswireFacts) November 14, 2015
But, according to a statement from DARPA program manager David Doermann, the U.S. government is concerned with images that “are for adversarial purposes, such as propaganda or misinformation campaigns.” For instance, many news outlets were duped by a fake photo of Iran testing missiles and by what was likely a fake photo of North Korea testing underwater ballistic missiles. Many media forensics experts have also accused ISIS of doctoring its execution photos and propaganda material.
Now, one of the research teams chosen to work on MediFor has announced its involvement with the program. In a Purdue news release, the university stated its technology-development group is collaborating with researchers at New York University, University of Notre Dame, University of Southern California, and universities in Brazil and Italy.
Much of the necessary detection technology already exists, but only works at a smaller scale. “You would like to be able to have a system that will take the images, perform a series of tests to see whether they are authentic and then produce a result,” Edward Delp, director of Purdue’s Video and Image Processing Laboratory (VIPER), said in a statement. “Right now you have little pieces that perform different aspects of this task, but plugging them all together and integrating them into a single system is a real problem.”
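The integration problem Delp describes can be illustrated with a minimal sketch: several independent forensic tests are run against one image and their scores are fused into a single verdict. The detectors below (a metadata check and a double-compression check) are hypothetical stand-ins, not MediFor’s actual algorithms, and the simple averaging is just one possible fusion strategy.

```python
# Illustrative sketch only: chaining several forensic tests into one
# integrated pipeline. The detectors and the image dict are hypothetical.

def check_metadata(image):
    # Flag images whose (hypothetical) software tag names an editor.
    return 0.9 if image.get("software") in {"Photoshop", "GIMP"} else 0.1

def check_compression(image):
    # Flag images saved more than once (a stand-in for double-JPEG tests).
    return 0.8 if image.get("save_count", 1) > 1 else 0.2

DETECTORS = [check_metadata, check_compression]

def assess_integrity(image, threshold=0.5):
    """Run every test and combine the scores into a single verdict."""
    scores = [detector(image) for detector in DETECTORS]
    combined = sum(scores) / len(scores)  # naive average; real systems fuse more carefully
    return {"score": combined, "authentic": combined < threshold}

print(assess_integrity({"software": "Photoshop", "save_count": 3}))
```

The hard part, per Delp, is not any single detector but making dozens of such “little pieces” interoperate at the scale of millions of images per day.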
The research teams from the University of Campinas in Brazil and Politecnico di Milano in Italy are focusing on building a platform that can also detect the “multimedia phylogeny,” or evolution, of images, which could show how and when a photo was manipulated.
The developers believe that the MediFor system will be available to anyone involved in media forensics — not just the intelligence community — so that news outlets could test images before publishing them and social media platforms could keep fake images from going viral.
Fortunately for Trump and his supporters, the technology won’t be available for at least another four years.