AI Privacy Assistants Could Stop You From Exposing Sensitive Info

Researchers have built an AI that can warn people before they accidentally post private information on social media

Apr 06, 2017 at 3:01 PM ET

As the hundreds of people who have publicly posted pictures of their debit cards on Twitter can attest, it’s often easy to unwittingly expose private information in the age of social media.

But what if a friendly automated assistant, similar to Siri or Alexa, warned you before you shared sensitive images, potentially mitigating threats like online stalking and identity theft?

That’s the idea behind a recent study from researchers at the Max Planck Institute for Informatics in Germany, who say they’ve built an AI-powered privacy watchdog that can learn a person’s privacy preferences and caution them whenever private information might be exposed in the pictures they post to social media.

“Our model is trained to predict the user specific privacy risk and even outperforms the judgment of the users, who often fail to follow their own privacy preferences,” the researchers write in a recent paper, which awaits peer review. “In fact — as our study shows — people frequently misjudge the privacy relevant information content in an image — which leads to failure of enforcing their own privacy preferences.”

The system, which the researchers call a Visual Privacy Advisor, is meant to live on your computer or smartphone and can recognize when a photo you’re about to post contains intimate details like a medical prescription or bank statement.

The privacy advisor was trained on a custom set of 75,000 images and fine-tuned according to every test subject’s privacy preferences. Those preferences were determined by asking each subject a series of questions and mapping their responses to 67 different “privacy attributes,” which represent information that the subject might want to conceal. These attributes were then used to test images for sensitive visual information, and include markers for things like occupation, passport details, credit and debit cards, sexual orientation, and recognizable landmarks in the background of photos or video.
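The core idea of that setup can be sketched in a few lines: detected attributes in an image are checked against the user's stated sensitivities, and a warning fires when any attribute crosses the user's comfort threshold. This is a minimal illustrative sketch, not the researchers' implementation; the attribute names, sensitivity scale, and threshold are all hypothetical.

```python
# Hypothetical sketch of an attribute-based privacy check.
# Attribute names and the 0-4 sensitivity scale are illustrative,
# not taken from the paper.

# A user's privacy profile, as might be built from the questionnaire
# described in the study: attribute -> how private the user considers it.
user_profile = {
    "credit_card": 4,          # extremely private
    "medical_prescription": 4,
    "landmark": 1,             # mildly private
    "occupation": 0,           # doesn't care
}

def flag_sensitive(detected_attributes, profile, threshold=2):
    """Return the detected attributes whose user-assigned sensitivity
    meets or exceeds the warning threshold."""
    return [a for a in detected_attributes if profile.get(a, 0) >= threshold]

# Suppose a (hypothetical) image classifier found these attributes in a photo:
detected = ["credit_card", "landmark"]

print(flag_sensitive(detected, user_profile))  # only "credit_card" crosses this user's threshold
```

In the actual system, the detected attributes would come from a classifier trained on the 75,000-image set, and the thresholds from each subject's questionnaire answers; the lookup logic above just shows how per-user preferences turn raw detections into personalized warnings.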

In the end, the researchers found the privacy advisor to be a preliminary success. When they compared users’ recorded privacy preferences against those users’ own judgments of individual images, the system’s two predictive models were better judges of whether an image was privacy-threatening in nearly all cases.

“The significance of this research direction is highlighted by our user study which shows users often fail to enforce their own privacy preferences when judging image content,” the researchers conclude. “In particular, a final comparison of human vs. machine prediction of privacy risks in images, shows an improvement by our model over human judgment which highlights the feasibility and future opportunities of the overarching goal — a Visual Privacy Advisor.”