JUSTICE

The Terrible History Of Using Biased Technology To Lock People Up

Courts are relying on racist algorithms in judicial decisions — apparently we've learned nothing from the rise and fall of the polygraph

May 03, 2017 at 12:18 PM ET

When Brisha Borden was arrested and charged with stealing a bike in 2014, she became part of an experiment in forensic technology. Standing before a judge at a bail hearing, Borden was evaluated by a cutting-edge computer algorithm called COMPAS, which crunched the facts it knew about her — her background, her education level, her family history — and gave her a score predicting her likelihood of committing future crimes. Borden was deemed a high risk. The judge set her bond at $1,000.

COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, was supposed to take the guesswork and human bias out of judicial decisions such as setting bail, but the technology has come under scrutiny ever since ProPublica covered Borden’s case, among others, last year. Critics say the supposedly unbiased algorithm preserves the biased assumptions of its creators, assigning higher risk values to people of color. COMPAS and other sentencing algorithms have been painted as a futuristic dystopia come true.

But COMPAS has a predecessor.

Cecil Loniello was born 85 years before Brisha Borden, but he too was subjected to a cutting-edge piece of technology that was supposed to eliminate human guesswork and deliver a pure form of scientific justice. Loniello was the first person to be convicted with the help of a polygraph machine.

A polygraph machine is very different from a computer program like COMPAS. But as we grapple with the influence algorithms are having on our criminal justice system, it’s instructive to look at the history of the lie detector. Just like COMPAS, the polygraph debuted with the promise that it would supersede human uncertainty and bias to determine, with quantifiable accuracy, something that might otherwise seem unknowable. But in fact, it’s hard not to build our prejudices into machines. And when we rely on that technology, problems arise.

The idea that lying causes a minute, but telltale, physiological reaction is an ancient one. In his 1981 history of the polygraph, David Lykken traced this idea back at least to a 1730 treatise on criminality by Daniel Defoe. “Guilt carries fear always about with it, there is a tremor in the blood of a thief, that, if attended to, would effectually discover him,” Defoe wrote.

In the early 20th century, various physiologists, psychiatrists and police officers tinkered with the idea of a device that could detect such tremors (one early developer was William Moulton Marston, who also invented Wonder Woman and her magical lasso of truth). The veritable “truth machine” that emerged measured several physiological responses: pulse rate, respiration rate, blood pressure, perspiration. These were the factors that the machine’s developers believed would indicate a lie — just as COMPAS was designed around factors believed to predict recidivism, like the arrest records of a person’s parents and whether their friends take drugs.

In the case of the polygraph, the obvious problem was that human beings are more complicated than a handful of physiological changes. In fact, all kinds of emotions can trigger these responses.

“Anger at being asked accusatory questions; anxiety about not being believed; embarrassment at being asked a personal question — there’s an almost limitless number of potential causes for these physiological responses that have almost nothing to do with deception,” said George Maschke, a former US Army intelligence officer and polygraph opponent. “There’s no ‘Pinocchio response’ that someone exhibits when lying.”

A few years before being used to convict Cecil Loniello, a prototype polygraph machine had been exhibited at the Chicago World’s Fair, and its introduction coincided “with the wave of technological innovation that had brought Americans electricity, radios, telephones, and cars,” as Margaret Talbot wrote in the New Yorker. As early as 1911, Talbot noted, the New York Times had envisioned a future in which science would do away with such things as policemen, judges, and juries: “These impediments of our courts will be unnecessary. The State will merely submit all suspects in a case to the tests of scientific instruments.” In the polygraph, that prophecy seemed to come true.

Beyond the criminal justice system, the polygraph was embraced by the Department of Defense — the Army opened its own school for polygraph examiners — as well as by what we now call the “intelligence community,” where the polygraph became a key tool in recruitment as well as investigations. (That’s where Maschke first encountered it, when he “failed” a polygraph while applying to work for the FBI as a translator.)

The spook world generally believed that the polygraph got results: “In the operational arena, numerous double agents have been uncovered, phantom operations and fabricators exposed, and information affecting national policy decisions verified,” wrote John Sullivan, who worked for 35 years as a polygraph examiner at the CIA.

But there were also high-profile fiascos. In 1984, the man who was later convicted as the Green River Killer passed a polygraph test. In 1998, two different examiners, reviewing the same test results from Los Alamos scientist Wen Ho Lee, reached opposite conclusions: one determined that he had passed, the other that he had failed. And Cold War spy Aldrich Ames famously explained that he was able to dupe the device by getting a good night’s sleep and relaxing during the test.

“The polygraph,” Ames later wrote, “is a scientific godsend: the bureaucrat accounting for a bad decision, or sometimes for a missed opportunity … can point to what is considered an unassailably objective, though occasionally and unavoidably fallible, polygraph judgment.” He called the device an example of “junk science that just won’t die.”

Indeed, “Research on the validity of the polygraph has yielded widely divergent rates of accuracy in detecting deception, some as low as chance and others as high as 95%,” according to a 2014 report by a team of researchers at Massachusetts General Hospital’s Center for Law, Brain and Behavior. This is not news: there’s been pushback against the scientific validity of the test for decades, and the polygraph’s sphere of influence has been shrinking. In 1988, the Employee Polygraph Protection Act largely outlawed its use in private workplaces. In 1991, the military banned polygraph evidence from court-martial proceedings; the US Supreme Court upheld that ban in 1998, and today most courts won’t admit polygraph results as evidence.

But the intelligence community has clung to the device, continuing to polygraph thousands of its employees and prospective employees each year. In fact, in 2013, the Obama administration began to crack down on instructors who taught others how to “beat” a polygraph exam, sending two people to prison on charges of witness tampering in elaborate sting operations. (“Using criminal prosecution to prevent people from learning how to fool a test doesn’t suggest great confidence in that test’s diagnostic power,” Bloomberg’s Drake Bennett observed in a long feature on the case.)

Why has it been so hard to pry the polygraph out of the federal government’s hands? Bioethicist and Georgetown law professor James Giordano says it has to do with the allure of science and the promise of making the invisible concrete. “If I see those squiggly lines, it must be real,” he said. Giordano is not necessarily anti-polygraph — he says it has its place as a tool in the arsenal of truth detection. “This is something that’s not perfect. We’re not letting good get in the way of perfect, are we?” he said. “We’re saying, ‘OK, we realize it’s not perfect, but it suits the purpose.’” But he added, “I’d be the first to throw the damn thing away.”

Echoing Ames, he said that people tend to prefer the judgment of a scientific device, with its reassuring appearance of impartiality. “There’s an assumption that if we rely on a machine or technology we can get by certain human frailties or fallibilities,” he said. “The more technology we put in between the cause and effect, if you will, the more we tend to place faith in the stringency and granularity of what that technology is able to pull out.”

Maschke put it another way: The polygraph “creates an air of objectivity,” he said. “It creates an appearance of fairness — it’s the objective test and machine that are calling you a liar.”

That “air of objectivity” is replicated in today’s criminal justice algorithms. “In ways big and small, algorithms make judgments that, under the guise of cold, hard ‘data,’ directly affect people’s lives—for better, often, but sometimes for worse,” Megan Garber wrote in the Atlantic. “The appeal of a system like COMPAS is that it proposes to inject objectivity into a criminal justice system that has been compromised, too many times, by human failings.”

Like the polygraph, however, COMPAS and other algorithms are only as good as the human assumptions they’re based on and the inputs that human designers choose: pulse and sweat, education and employment history. And like the polygraph, COMPAS is hardly infallible: ProPublica’s investigation found that its general predictions of recidivism were true only 61 percent of the time, falling to a dismal 20 percent for violent crime.
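
To see how assumptions become arithmetic, consider a toy sketch. COMPAS’s actual model is proprietary, so every input, weight, and cutoff below is invented for illustration; the point is only that a human being had to choose each one:

    # A toy risk score, for illustration only: COMPAS's real model is
    # proprietary, and these inputs, weights, and cutoffs are invented.
    def risk_score(age_at_first_arrest, prior_arrests,
                   parent_arrested, friends_use_drugs):
        score = 0
        if age_at_first_arrest < 21:    # the designer decided youth matters,
            score += 2                  # and decided how much
        score += min(prior_arrests, 5)  # and that priors count, capped at five
        if parent_arrested:             # family history, the kind of input
            score += 1                  # COMPAS was designed around
        if friends_use_drugs:           # likewise peer behavior
            score += 1
        return "high" if score >= 3 else "low"

    # Arrest records reflect policing patterns as well as behavior, so a
    # score built on them inherits whatever bias produced those records.
    print(risk_score(age_at_first_arrest=18, prior_arrests=1,
                     parent_arrested=False, friends_use_drugs=True))  # "high"

Nothing in that arithmetic is discovered; all of it is decided.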

With any luck it won’t take 80 years to correct this error. But if history is a guide, we might be stuck with COMPAS for a long, long time.

The new season of Dark Net — an eight-part docuseries developed and produced by Vocativ — airs Thursdays at 10 p.m. ET/PT on SHOWTIME.