Great, Even Fact Checkers Can’t Agree On What Is True
A study auditing two major fact-checking sites finds they often come to different conclusions
Given the presidentially endorsed era of fake news we now live in, it’s abundantly apparent that the public doesn’t trust the media’s fact-checkers.
But part of that distrust, a new study suggests, could have nothing to do with President Trump. Instead, it might be because different fact-checkers rarely bother to examine the same claims, and when they do, they only manage to agree a little more than half the time.
Chloe Lim, a PhD student in Stanford University’s department of political science, compared two prominent fact-checking operations: the Washington Post’s Fact Checker and the Tampa Bay Times’ Politifact. The outlets were chosen specifically because both rate the accuracy of claims on a roughly similar 1-to-5 scale. She looked at every article that fact-checked a statement made by a presidential or vice presidential candidate from January 2014 right up to last November’s Election Day.
Politifact had checked 1,135 claims, while Fact Checker had checked 240. Lim attributes the disparity in the number of statements checked to Politifact’s larger base of reporters scattered across multiple states, but she was still surprised that the two outlets overlapped on only 70 claims. Put another way, Fact Checker examined just 6 percent of the claims Politifact did, while Politifact examined 27 percent of the claims Fact Checker did. Of those 70 shared claims, 41 received broadly consistent ratings, while 14 were rated in essentially opposite ways. And when Lim applied a common statistical measure of agreement, she found the fact-checkers fell well short of the level of consistency scientists typically require before drawing conclusions from two raters’ results.
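The article doesn’t name the exact statistical model Lim used, but a standard way to quantify this kind of inter-rater agreement is Cohen’s kappa, which discounts the agreement two raters would be expected to reach by chance. The Python sketch below, using made-up 1-to-5 ratings rather than the study’s actual data, shows how that calculation works.

    from collections import Counter

    # Hypothetical 1-5 ratings of the same claims by two fact-checkers
    # (illustrative only; not the study's data).
    checker_a = [1, 2, 5, 3, 4, 1, 2, 5, 3, 1]
    checker_b = [1, 3, 5, 2, 4, 2, 2, 4, 3, 1]
    n = len(checker_a)

    # Observed agreement: share of claims rated identically.
    p_observed = sum(a == b for a, b in zip(checker_a, checker_b)) / n

    # Expected agreement by chance, from each checker's rating distribution.
    freq_a, freq_b = Counter(checker_a), Counter(checker_b)
    ratings = set(checker_a) | set(checker_b)
    p_expected = sum((freq_a[r] / n) * (freq_b[r] / n) for r in ratings)

    # Cohen's kappa: agreement beyond what chance alone would produce
    # (1.0 means perfect agreement, 0 means no better than chance).
    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"observed agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")

A raw agreement rate of “a little more than half” can look even weaker once chance agreement is subtracted out, which is the point such a statistic is meant to capture.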
“A low agreement rate among fact-checkers may explain why fact-checking has failed to discipline politicians the way fact-checkers might have intended,” wrote Lim. “Because fact-checkers rarely fact-check the same statement and seldom agree on the factual accuracy of a given claim, fact-checking may fall short of holding elites accountable for their words.”
One disagreement Lim singled out came from the Democratic primary between Hillary Clinton and Bernie Sanders. Fact Checker gave Clinton’s statement that Sanders hadn’t had any negative ads aimed at him during the campaign “1 Pinocchio,” a rating that means mostly true, while Politifact rated the same statement false.
More broadly, Lim found that claims about fiscal policy were especially likely to be rated differently by different fact-checkers. “This is surprising given that these statements often involve numbers and figures,” she noted.
The checkers aren’t entirely to blame for their lack of consensus, though, Lim said. Politicians’ characteristic vagueness inevitably means the accuracy of some claims is left to the writer’s judgment rather than to any objective number or fact they can readily point to. But without the credibility that comes with consistency, she concluded, fact-checkers will find it hard to accomplish their stated mission “to fulfill the democratic ideal of political watchdog by disabusing readers of mistaken beliefs and preventing political lying.” And that, judging by the never-ending onslaught of Trump tweetstorms, bodes ill for us all.
It should be noted that Lim’s findings haven’t yet been published in a peer-reviewed journal, which means they haven’t been fact-checked by her peers, for whatever that’s worth.