Making Better Polls By Remembering That People Lie
A new study shows how subtle cues in phone surveys can help predict who will vote
Polls that aim to predict how people will vote are bad, and they’re getting worse. Cliff Zukin, a political scientist and polling expert, described the crisis of contemporary polling last June in the New York Times: “We are less sure how to conduct good survey research now than we were four years ago, and much less than eight years ago.”
But that doesn’t mean polling can’t improve. A new study found a way to boost the predictive capacity of phone polls asking people if they’re going to vote. The researchers, from the University of California, Berkeley, and Harvard, asked the people who make the calls how much they believed the respondents: did the people who said they were going to vote sound like they were telling the truth? It turned out that factoring in the callers’ impressions helped predict whether a respondent would actually do what they said they would.
Polling’s problems are various. Zukin wrote that the explosion of cellphone use over the last decade has made it much more difficult and expensive to conduct accurate phone surveys, since most polls are still done over landlines. On top of that, fewer people are willing to answer phone surveys. Another problem is the divergence between the results of anonymous online surveys and those conducted over the phone. Donald Trump, for instance, has garnered more support in online surveys, apparently because Americans are ashamed to express their support for him to a human over the phone.
These challenges are exacerbated by the media. As Norm Ornstein writes in the New York Times, “The polls that make the news are also the ones most likely to be wrong.” Within journalism, there is also an inflated fetish for “data journalism,” typified by Nate Silver and his outlet FiveThirtyEight. Like pretty much everyone else, Silver failed to predict the success of Donald Trump.
The study, “Unacquainted callers can predict which citizens will vote over and above citizens’ stated self-predictions,” suggests that human intuition can play a role in improving poll predictions.
Published in the Proceedings of the National Academy of Sciences, it starts from a basic problem: surveys asking people to predict their future behavior are bad. This is especially true of surveys about voting: “For example, in some U.S. elections the majority of respondents who self-predict that they will vote do not actually vote.” The paper also notes that despite the faultiness of these predictions, these data sets are still used to “inform which campaign advertisements are developed, where and when these messages are aired, and which voters are targeted in get-out-the-vote (GOTV) campaigns.”
The conclusions of the study come from two experiments. In the first, researchers followed up with callers involved in a 2009 get-out-the-vote campaign in New Jersey. For respondents who said they would vote, the researchers had the callers rate, on a five-point scale, the likelihood that the person would actually cast a vote in the election. Researchers then compared the responses of respondents who said they were going to vote with the public voter file. They found that only 47 percent of those who said they were going to vote actually did. Their model incorporating the callers’ opinions was “successful in predicting the actual voting behavior of 58.5% of self-predicted voters.”
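To see why a caller’s rating can add predictive value on top of a self-prediction, here is a minimal sketch in Python. The data, turnout probabilities, and the rating cutoff are all invented for illustration; this is not the study’s actual statistical model, just a toy version of the idea that believing only the respondents the caller found credible can beat taking every “yes” at face value.

```python
import random

random.seed(0)

# Synthetic respondents, all of whom self-predicted "I will vote".
# Assumption (for illustration only): a caller rating of 1-5 correlates
# with actual turnout, roughly matching the study's ~47% overall rate.
def make_respondent():
    rating = random.randint(1, 5)
    voted = random.random() < 0.05 + 0.14 * rating  # higher rating, more likely to vote
    return rating, voted

respondents = [make_respondent() for _ in range(10_000)]

# Baseline: take every self-prediction at face value (predict everyone votes).
baseline_acc = sum(voted for _, voted in respondents) / len(respondents)

# Caller-informed rule: only believe respondents the caller rated 3 or higher.
hits = sum((rating >= 3) == voted for rating, voted in respondents)
caller_acc = hits / len(respondents)

print(f"baseline accuracy: {baseline_acc:.1%}")
print(f"caller-informed accuracy: {caller_acc:.1%}")
```

With these invented numbers the caller-informed rule comes out ahead of the face-value baseline, mirroring the direction of the study’s 47 percent versus 58.5 percent result, though not its magnitude.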
But what cues led callers to believe someone was unlikely to vote despite saying they would? The second experiment examined this by looking at calls to Texas voters before the 2010 gubernatorial race. This time the calls were recorded and coded by researchers for the presence or absence of nonverbal cues, like uncertainty and nervousness. The authors found that cues like “sounding uncertain, sounding insecure, and having longer latencies before responding” were used by callers to accurately judge whether someone would vote. Interestingly, other signs callers used to predict that someone wouldn’t vote, like sounding tense or nervous, turned out not to predict turnout at all.
The study concludes that “ordinary, untrained human judges can significantly improve predictions of who will follow through versus flake out on important commitments.” There’s no single fix for our polling problems, but maybe there is help.