JUSTICE

How AI Can Aid Authoritarians—And How Humans Fight Back

Hidden algorithms reflect and amplify racism and other human biases, but researchers hope to fix them

Mar 23, 2017 at 12:36 PM ET

In 1796, German physiologist Franz Joseph Gall thought he had made a world-altering discovery. By carefully measuring the contours of the human skull, he hypothesized, one could infer information about an individual — including their mental capabilities, personality, skills, and social proclivities.

The result was phrenology, a strain of nineteenth-century pseudoscience that went on to inspire generations of “scientific” justifications for racism. Almost a century later, Italian anthropologist Cesare Lombroso founded a school of criminology that claimed criminality was an inherited trait, theorizing that these “criminaloids” could be detected by measuring the distances between certain facial features. Such theories were later used, alongside eugenics and other pseudosciences, to justify slavery, the Nazis’ pursuit of a white Aryan master race, and other historical atrocities.

While these theories have long since been debunked, a chilling aspect of their legacy lives on in some of today’s advances in artificial intelligence and machine learning. Researchers say machine learning algorithms, much like phrenology before them, are now being given far too much power, invisibly influencing decisions on everything from whether a school teacher gets fired to whether a criminal suspect is released on bail. And perhaps most worryingly, their decisions are frequently painted as “objective” and “unbiased” when in reality they’re anything but.

Consider a recent paper, in which two Chinese researchers describe an artificial neural network they say can predict whether someone will commit a crime based solely on their facial features. The researchers claim the system’s results are “objective,” reasoning that a computer algorithm has “no biases whatsoever.”

Private companies are already pitching these capabilities to law enforcement agencies. An Israeli face recognition company called Faception has controversially claimed its algorithms can predict whether someone is a “terrorist” or a “pedophile” with 80 percent accuracy. The company is now actively seeking to sell its software to police and governments, telling the Washington Post that it has already signed contracts with at least one unnamed government’s “homeland security agency.”

But a brief look at the researchers’ paper shows the system is trained by analyzing photos of people who have already been convicted by the criminal justice system. In other words, the system simply identifies the facial features common to people who have been labeled “criminal” in the past and reapplies that label to people with similar features. The result evokes a computer-aided rehash of phrenology, with computer vision and learning algorithms standing in for cranial measurement tools.

Thus, those who are already disproportionately targeted by the criminal justice system — African Americans are far more likely to be arrested for drugs, for instance, despite using them at around the same rate as whites — are again disproportionately branded by the algorithm, whose stewards then defend the results by pointing to the system’s supposed “objectivity.”

Rather than removing human biases, the algorithm creates a feedback loop that reflects and amplifies them.
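
To see how that loop can form, here is a minimal, entirely hypothetical sketch (the numbers, the feature, and the use of scikit-learn are invented for illustration, not drawn from the paper): two groups offend at the same rate, but one is policed twice as heavily, so its members are convicted, and labeled “criminal” in the training data, more often. A classifier trained on those labels, using a feature that tracks group membership rather than behavior, learns the enforcement pattern and reports it back as “risk.”

```python
# Toy simulation of a bias feedback loop: both groups behave identically,
# but unequal enforcement produces unequal labels, and the model learns
# the enforcement pattern. All numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 20_000

group = rng.integers(0, 2, size=n)     # two groups, 0 and 1
offends = rng.random(n) < 0.10         # identical 10% base rate for everyone

# Group 1 is policed twice as heavily, so its offenses end in conviction
# (and thus a "criminal" training label) twice as often.
caught = rng.random(n) < np.where(group == 1, 0.6, 0.3)
convicted = offends & caught           # these become the training labels

# A feature that tracks group membership, not behavior; a stand-in for
# facial measurements that happen to correlate with demographics.
feature = (group + rng.normal(0, 0.5, size=n)).reshape(-1, 1)

model = LogisticRegression().fit(feature, convicted)
risk = model.predict_proba(feature)[:, 1]

for g in (0, 1):
    print(f"group {g}: true offense rate {offends[group == g].mean():.1%}, "
          f"conviction rate {convicted[group == g].mean():.1%}, "
          f"mean predicted risk {risk[group == g].mean():.1%}")
```

In this sketch both groups behave the same way, yet the model hands the over-policed group a higher average “risk” score, because the only pattern in the labels is the pattern of enforcement.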

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data. Our biases are built into that training data,” Kate Crawford, a principal researcher at Microsoft, said during a talk on AI last week at SXSW in Austin, Texas.

Crawford warned that biased and opaque machine learning algorithms can become especially dangerous in the wrong hands. She mentioned a system built by Palantir, a data-mining company co-founded by President Trump’s tech advisor Peter Thiel, that could help power Trump’s crackdown on immigrants and Muslims. And she noted how primitive computer-aided systems helped authoritarians of the past commit atrocities, like the Hollerith tabulating machines built by IBM, which helped the Nazis track and identify Jews and other groups during World War II.

Today, an algorithm can be the perfect weapon for authoritarian leaders because it lets them efficiently and opaquely enforce systems that are already biased against oppositional and marginalized groups. For example, in a major exposé last year, ProPublica discovered that systems used by courts in Broward County, Florida, to assign “risk scores” to criminal defendants consistently rated black defendants as higher risk than white defendants facing the same charges.

Even worse, authorities who use such systems can easily make claims to their “neutrality” and distance themselves from the consequences — while hiding the system’s inner workings from public view.

“The reason a lot of these algorithms are put into place is so people can deny responsibility for the process as a whole,” Cathy O’Neil, a mathematician and author who frequently writes about the human impacts of big data, told Vocativ. “Sometimes the standard for whether it works or not is whether someone gets to abscond from responsibility, or better yet, whether they get to impose a punitive, inscrutable process.”

In her recent book “Weapons of Math Destruction,” O’Neil outlines several examples of how machine learning systems can mirror and amplify human biases to destructive ends. A recurring theme, she said, is that many of those systems are built by third parties and haven’t been independently assessed for bias and fairness. So far, the algorithms’ creators have lacked either the means, the desire, or the incentive to conduct those tests.

“It’s a very, very dumb thing,” O’Neil said. “Can you imagine buying a car not knowing whether it’s gonna drive, or not knowing whether it’s safe? That’s just not a reasonable way of going about it. It’s like a car industry where we haven’t developed standards yet.”

AI researchers say that creating those standards is one of the most crucial steps toward making machine-learning systems accountable to the humans they pass judgment on. Last September, AI Now, a report commissioned by the Obama White House that has since spun off into a research organization led by Crawford, highlighted the need to create tools capable of bringing accountability to “black box” algorithms. That includes mechanisms that allow people affected by these systems to contest their decisions, seek redress, and opt out of automated decision-making processes altogether.

“AI systems are being integrated into existing social and economic domains, and deployed within new products and contexts, without an ability to measure or calibrate their impact,” the report warns. “The situation can be likened to conducting an experiment without bothering to note the results.”

AI auditing tools wouldn’t necessarily need to inspect the system’s proprietary source code, said O’Neil. They would only need to analyze its input data and the resulting decisions to help humans determine whether the system is functioning correctly and fairly, or whether a skewed dataset is contaminating the output by introducing harmful human bias.
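
As a rough sketch of what such an audit could look like (the column names, figures, and pandas code below are invented for illustration, not drawn from any real auditing tool), an auditor who can see only the system’s decisions, plus the group membership and eventual outcome of each person judged, can compare error rates across groups. That is essentially the disparity ProPublica measured in Broward County.

```python
# Hypothetical black-box audit: no access to the model's source code, only a
# log of group membership, the system's decision, and the observed outcome.
# All names and numbers below are invented.
import pandas as pd

log = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged":    [1, 0, 1, 0, 1, 1, 0, 1],   # 1 = system labeled "high risk"
    "reoffended": [0, 0, 1, 1, 0, 0, 0, 1],   # what actually happened later
})

# False positive rate per group: of the people who did NOT reoffend,
# how many did the system still flag as high risk?
did_not_reoffend = log[log["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["flagged"].mean()

print(fpr)
print("FPR disparity (worst group vs. best):", round(fpr.max() / fpr.min(), 2))
```

None of this requires the vendor’s proprietary code; a logged sample of inputs, decisions, and outcomes is enough to surface a skewed false positive rate.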

On a more fundamental level, fighting discriminatory AI is about making sure the systems are being ethically designed in the first place. That means teaching ethics alongside regular STEM education and building new standards for accountability into every step of the process, so that the injustices of the past don’t get hard-coded into the robots and neural networks of the future.

“A lot of these algorithms are trotted out in the name of fairness,” said O’Neil. “We should do better than pay lip service to fairness.”