Driverless Cars

Scientists Can Blind A Self-Driving Car To Pedestrians

Researchers can use noise to stop computer vision algorithms from detecting specific types of objects, including people in the path of an autonomous car

Apr 24, 2017 at 12:31 PM ET

Whether they’re detecting human faces in Snapchat or helping self-driving cars avoid road hazards, artificial intelligence systems depend on computer vision algorithms to distinguish between different types of objects. But researchers have developed tricks to confuse those algorithms, stopping AI from recognizing the contents of images.

A new method developed by German scientists from the University of Freiburg and the Bosch Center for Artificial Intelligence goes further, showing it’s possible to effectively blind machine vision systems to specific categories of objects in a scene, like pedestrians in the road.

As in other recent studies, the trick works by strategically flooding an image with noise that degrades the AI’s ability to recognize objects but keeps the image looking perfectly normal to humans. These “universal perturbations” are generated by an algorithm and work regardless of the image, scene, or computer vision system they’re applied to.
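What does generating such a perturbation look like in code? Broadly, the attacker optimizes a single shared noise tensor against a whole set of training images. The PyTorch sketch below is illustrative rather than the authors’ exact procedure: `model`, `images`, and `targets` (the fake segmentation labels the attacker wants the network to produce) are placeholder names, and the signed-gradient step and noise bound are borrowed from standard adversarial-attack practice.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, images, targets, eps=10/255, lr=0.25/255, steps=60):
    """Optimize one noise tensor that pushes a segmentation model toward
    an adversarial target on every image in the set (generic sketch)."""
    xi = torch.zeros_like(images[0], requires_grad=True)  # shared perturbation
    for _ in range(steps):
        loss = 0.0
        for x, t in zip(images, targets):
            # model outputs per-pixel class logits of shape (1, C, H, W)
            logits = model((x + xi).clamp(0.0, 1.0).unsqueeze(0))
            loss = loss + F.cross_entropy(logits, t.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            xi -= lr * xi.grad.sign()  # step toward the adversarial target
            xi.clamp_(-eps, eps)       # bound the noise so images still look normal
        xi.grad.zero_()
    return xi.detach()
```

Because the same noise tensor is optimized against every training image, it ends up “universal”: the attacker can compute it once, offline, and apply it to scenes the system has never encountered.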

Rather than stopping the algorithm from recognizing the entire image, however, the attack targets a process called “semantic segmentation,” which divides the image into groups of pixels in order to identify the different types of objects in a scene. This lets an attacker block certain things out of the scene as the AI perceives it while leaving the rest intact, making the disturbance much harder for the system to detect.
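Erasing just one class requires a target segmentation that looks like the model’s normal output with that class filled in by its surroundings. Below is a rough sketch of one way to build such a target, assuming a Cityscapes-style label map where `pred` is a 2-D NumPy array of per-pixel class IDs; the `PERSON` ID and the nearest-neighbor fill are assumptions for illustration, and the paper’s construction may differ in detail.

```python
from scipy.ndimage import distance_transform_edt

PERSON = 11  # "person" train ID in a Cityscapes-style label map (assumed)

def removal_target(pred):
    """Turn the model's own prediction into an adversarial target:
    every pixel labeled 'person' is refilled with the label of the
    nearest non-person pixel, so people vanish while roads, cars,
    and buildings in the prediction stay untouched."""
    mask = pred == PERSON
    # for each pixel, the coordinates of the nearest non-person pixel
    idx = distance_transform_edt(mask, return_distances=False, return_indices=True)
    return pred[tuple(idx)]
```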

“The main motivation for this experiment is to show how fragile current approaches for semantic segmentation are when confronted with an adversary,” the researchers write in their paper, which currently awaits peer review. “In practice, such attacks could be used in scenarios in which a static camera monitors a scene (for instance in surveillance scenarios) as it would allow an attacker to always output the segmentation of the background scene and blend out all activity” — like burglars caught on camera robbing a shop.

The researchers successfully tested the method on a dataset called Cityscapes, which contains 3,475 images of scenes from 44 different cities. By targeting only the groups of pixels associated with certain object types in the scene, they were able to remove pedestrians from the picture entirely, effectively making the computer vision system blind to anyone who might, hypothetically, be standing in the path of an autonomous vehicle.
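One way to put a number on “removing pedestrians entirely” is to check what fraction of ground-truth pedestrian pixels the attacked model still labels as a person. The helper below is a hypothetical evaluation snippet operating on NumPy arrays of class IDs, not a metric taken from the paper.

```python
def pedestrian_recall(pred, gt, person_id=11):
    """Share of ground-truth 'person' pixels the model still labels as
    person: near 1.0 on clean images, close to 0.0 once the universal
    noise has made the pedestrians effectively disappear."""
    person_pixels = gt == person_id
    if not person_pixels.any():
        return float("nan")  # image contains no pedestrians to measure
    return float((pred[person_pixels] == person_id).mean())
```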

While the experiment shows that this kind of targeted obfuscation is possible, the researchers note that it isn’t practical just yet. To mask objects from the system, a real-life attacker would first need a way to inject the noise into the digital images captured by a camera or sensor before the computer vision algorithm examines them.

“[T]he presented method does not directly allow an adversarial attack in the physical world since it requires that the adversary is able to precisely control the digital representation of the scene,” the researchers write. And even though “adversarial attacks might be extended to the physical world and deceive face recognition systems,” no one has yet demonstrated an actual, real-world attack against an automated driving system.