Psychology Has A Reproducibility Problem
Survey of 100 psychology papers published in prominent journals finds more than half can't be replicated
Replication is the backbone of scientific research: if you're a scientist and other researchers following behind you can't repeat your experiment and get the same results, that's a serious red flag. In a new study published in Science, researchers attempted to replicate the findings of 100 prominent psychology papers and succeeded with only 39 percent. The results suggest that psychological science may suffer from a serious reproducibility problem.
Still, “there’s a lot of room to improve reproducibility, but I don’t see this story as a pessimistic one,” says Brian Nosek, a professor of psychology at the University of Virginia and a co-author of the study. “The project is a demonstration of science demonstrating its essential quality—self-correction.”
For the study, Nosek recruited 270 psychologists via the Open Science Collaboration, and asked them to replicate 100 studies that had appeared in three prominent journals: Psychological Science, the Journal of Personality and Social Psychology and the Journal of Experimental Psychology. Nosek also invited the authors of each original study to participate in the replication effort—and, surprisingly, many agreed to help test their published results.
“How else will we converge on the truth?” Joshua Correll, a psychologist at the University of Colorado, Boulder, whose own study, incidentally, could not be replicated, said in an interview with Science Magazine. “Really, the surprising thing is that this kind of systematic attempt at replication is not more common.”
In the paper, Nosek notes that the most surprising studies tended to be the least reproducible, implying that journal editors may accept papers with more shocking results even when the methodology isn’t entirely solid. It is also possible, Nosek writes, that some replications failed by chance.
“I don’t want to defend chance findings…[but] the only finding that will replicate 100 percent of the time is one that’s likely to be trite and boring and probably already known,” Alan Kraut, the executive director of the Association for Psychological Science, who was not involved in the study, said at a press conference. “I know this may sound odd to the non-scientists that you reporters are writing for, but it’s mathematically true.”
And Kraut is right—sometimes. Reproducibility is still the best way to gauge scientific accuracy, and when a study cannot be replicated, it’s likely that the results are incorrect. But there are famous exceptions. Kraut recalls that the theory of cognitive dissonance (which deals with conflicting thoughts and behaviors) was initially rejected by psychologists because early studies of the phenomenon were not replicable. Cognitive dissonance is now a cornerstone of psychological research. “It turns out cognitive dissonance was always there,” Kraut said. “It just took time and multiple efforts to demonstrate it reliably.”
Other reactions to the findings reflect an unsettling acknowledgment of a systematic problem within psychological and scientific research. “This very well done study shows that psychology has nothing to be proud of when it comes to replication,” Charles Gallistel, president of the Association for Psychological Science, told Science Magazine.