INTERNET

Scientists Can Make Otherwise Normal Images Unrecognizable To AI

French researchers have found a way to modify images so that a machine vision system can't tell what they are

Mar 30, 2017 at 5:40 PM ET

Computers might not “see” the world the way we humans do, but advanced machine learning systems like deep neural networks have become surprisingly good at recognizing everything from individual human faces to handguns. In a time when stalkers can dox porn stars with an app and cops scan social media to identify protesters with facial recognition, having ways to hide your selfies from the machine gaze might not be such a bad idea.

A group of French scientists has created a method for doing just that, using an algorithm that subtly modifies images so that they’re unrecognizable to an AI but still look perfectly normal to a human.

“We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye,” the researchers write in the paper, which was submitted earlier this month to the arXiv server, though it has yet to be published in a peer-reviewed journal.

The researchers’ method doesn’t technically make images “invisible” to machines — it’s more like the algorithmic equivalent of blowing smoke in someone’s face so that they constantly misidentify what’s in front of them. To do that, the algorithm introduces a tiny, virtually imperceptible amount of noise into the image, which dramatically reduces a computer vision system’s ability to correctly classify its contents.
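To make the idea concrete, here is a minimal sketch, not the researchers’ code, of what applying such a perturbation looks like in practice: a small noise tensor, bounded to a few intensity levels per pixel, is added to a photo, and a stock classifier labels both versions. The pretrained model, the file name, and the random noise standing in for a real, optimized universal perturbation are all illustrative assumptions.

```python
# Illustrative sketch only: add a small, bounded perturbation to a photo and
# compare a pretrained classifier's label before and after. Random noise is a
# stand-in for a real universal perturbation, which the paper computes with a
# dedicated algorithm.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()

# Standard ImageNet-style preprocessing, kept in [0, 1] so the perturbation
# budget below is expressed in raw pixel intensities.
to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def predict(x):
    """Return the classifier's top label index for a batch of images in [0, 1]."""
    with torch.no_grad():
        return model((x - mean) / std).argmax(dim=1).item()

image = to_tensor(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

# A perturbation capped at roughly 10/255 per pixel: barely visible to a
# person, but (when properly optimized) enough to change the predicted label.
perturbation = torch.empty_like(image).uniform_(-10 / 255, 10 / 255)
perturbed = torch.clamp(image + perturbation, 0.0, 1.0)

print("clean prediction:    ", predict(image))
print("perturbed prediction:", predict(perturbed))
```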

The algorithm is also universal, meaning it works against virtually any image you can throw at it. In their testing, the researchers found that various neural networks consistently misidentified the images their algorithm had modified. They were able to fool the systems as often as 93.7 percent of the time, with the fooling rate dropping only to 76.2 percent after the networks were fine-tuned. Among other examples, the neural networks mistook a hanging red Christmas sock for an Indian elephant, and a man in a black balaclava for an African Grey parrot.

“We show that universal perturbations have a remarkable generalization property, as perturbations computed for a rather small set of training points fool new images with high probability,” the researchers wrote.
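The paper’s procedure builds such a universal perturbation by looping over a small set of training images, nudging the perturbation whenever it fails to fool the network on one of them, and projecting it back into a small norm ball so it stays nearly invisible. The sketch below is a loose, hypothetical approximation of that accumulate-and-project loop: it substitutes a single signed-gradient (FGSM-style) step for the DeepFool step the authors use, and assumes `model` maps already-preprocessed image tensors to class logits.

```python
# Loose sketch of the accumulate-and-project idea behind a universal
# perturbation. NOT the authors' algorithm: the per-image update here is a
# single signed-gradient (FGSM-style) step rather than DeepFool.
import torch
import torch.nn.functional as F

def adversarial_step(model, x, label, step=2 / 255):
    """One signed-gradient step that pushes x away from the given label."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    grad = torch.autograd.grad(loss, x)[0]
    return step * grad.sign()

def universal_perturbation(model, images, radius=10 / 255, epochs=5):
    """Accumulate one perturbation v that fools the model on many images."""
    v = torch.zeros_like(images[:1])
    for _ in range(epochs):
        for i in range(images.shape[0]):
            x = images[i:i + 1]
            with torch.no_grad():
                clean = model(x).argmax(dim=1)
                fooled = model(x + v).argmax(dim=1)
            if fooled.item() == clean.item():      # this image isn't fooled yet
                v = v + adversarial_step(model, x + v, clean)
                v = v.clamp(-radius, radius)       # project back into the budget
    return v
```

The projection step is what keeps the noise “quasi-imperceptible”: the radius caps how much any single pixel is allowed to change, no matter how many images contribute to the perturbation.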

This is far from the first attempt to trick computer vision systems. In 2012, artist and privacy researcher Adam Harvey made headlines with CV Dazzle, a series of dramatic makeup patterns that defeat face recognition algorithms. Since then, he’s been prototyping HyperFace, a material that hides the wearer from surveillance by covering them in an abstract pattern that reads more strongly as a face than the wearer’s actual face does.