What happens when someone publishes a breakthrough that other scientists can’t reproduce?
Faux-science press releases hyping the next could-be breakthrough litter the Internet. Every other day, it seems, there’s another entrepreneur blowing bubbles of venture capital out of the techno-optimist fog blanketing Silicon Valley. There’s an endless stream of claims, and so many of them turn out to be vaporware, based on experimental results that got way overblown once they entered the media echo chamber. All too often, the idea is great, but the science just isn’t there to back it up, so the product never appears.
Nature News did a survey of its readers, including more than 1,500 researchers, to find out what they thought about the problem of replicating others’ work and getting discordant results. When it comes to scientific reproducibility, the scientists agree: Houston, we have a problem.
The survey reveals sometimes-contradictory attitudes among scientists. More than half of those surveyed agree that there is a significant “crisis” of reproducibility, yet most say that they still trust the published literature, and fewer than a third think that a failure to reproduce a published result means the result is false. Behind that tension sits a wide range of problems that contribute to irreproducible research, from unclear or undisclosed methods to cherry-picked data, bad luck, and outright fraud.
And the problems vary by field. The laws of physics appear to vary the least, since respondent physicists consider the corpus in their field to be very reliable. In squishier fields like medicine, though, literally not a single respondent agreed with the idea that the whole body of published medical research is trustworthy. The upshot is that doctors don’t believe the crap you see on Dr. Oz, and neither should you.
Sorting out actual discoveries from false positives can be really hard. When an experiment can’t be reproduced, why can’t it be? How much of the difference boils down to a hypothesis actually being false, and how much to different humans in different labs carrying out slightly different interpretations of the same procedure on different equipment?
Perhaps surprisingly, the overwhelming majority of respondents to the Nature survey cited a better understanding of statistics as the number one thing that would enable better reproducibility in experiments. What this means is that even the scientists reporting the data don’t always have a very deep understanding of the math they’re using to analyze that data. It’s easy to intentionally mislead with statistics. It’s even easier to accidentally mislead with statistics when you’re trying to explain something you don’t understand all that well yourself.
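To make the statistics point concrete, here’s a minimal sketch (a textbook illustration, not anything from the survey) of one of the most common traps: run enough independent significance tests, and a “significant” result becomes nearly inevitable even when no real effect exists.

```python
# Illustrative sketch: the multiple-comparisons trap.
# If each test has a 5% false-positive rate and there is no real effect,
# the chance that at least one of 20 independent tests still comes back
# "significant" is 1 - (0.95 ** 20).
alpha = 0.05          # conventional significance threshold
num_tests = 20        # e.g., 20 subgroups or outcome measures

p_at_least_one_false_positive = 1 - (1 - alpha) ** num_tests
print(f"{p_at_least_one_false_positive:.2f}")  # prints 0.64
```

A researcher who doesn’t correct for those 20 comparisons has roughly a two-in-three chance of “discovering” something that isn’t there.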
One of the other major problems is that we just haven’t been systematically checking for reproducibility. It seems obvious in hindsight: If you do things in a rigorous, scientific manner, others will be able to reproduce your results. But between pressure to publish, financial constraints, and too few eyes on a given body of work, it’s very easy to give in to selective reporting of data. When funding is at stake, data tends to nucleate around points that confirm the desired thesis. Part of doing science is confronting the horrifying truth programmers already know: that no matter how terrible your lab is, everything else is exactly this hacked together, and the people who did it knew exactly as little as you, probably on a budget just as tight as yours. There is no huge conspiracy. Reproducibility just dies by a thousand cuts.
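Selective reporting can be simulated in a few lines. This is a hypothetical toy model, not data from the Nature survey: every experiment below measures an effect whose true size is zero, but if only the “exciting” results get written up, the published record shows a healthy positive effect anyway.

```python
import random
import statistics

random.seed(42)  # deterministic toy example

TRUE_EFFECT = 0.0   # there is genuinely nothing to find
published = []

# 1,000 small experiments, each averaging 20 noisy measurements.
for _ in range(1000):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(20)]
    effect = statistics.mean(sample)
    if effect > 0.4:  # selective reporting: only striking results "count"
        published.append(effect)

# A handful of lucky experiments clear the bar, and their average
# sits well above the true effect of zero.
print(len(published), round(statistics.mean(published), 2))
```

No single step here is fraud; each filtered-out experiment just looked “boring.” Yet the surviving literature is systematically wrong, which is exactly the death-by-a-thousand-cuts dynamic described above.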
It might seem simplistic, but the one thing scientists agreed on, nearly in chorus, was that it’s time to start building reproducibility checks into experiments during the planning phase. Trying to verify your own results is hard; if you’ve already made an error, chances are you’ll overlook it again when sanity-checking your work. But there’s a way around this. Pre-registration is a strategy in which scientists submit their hypotheses and data-analysis plans to an independent third party, getting an outsider’s eyes on the game plan before they ever run the experiments. This is intended to tighten up experimental design, and to prevent cherry-picking data later.
At the heart of the problem, though, is human nature.
Wishful thinking, combined with the pressure to perform and produce, leads us to indulge belief in what we hope is true. Do you remember cringe-laughing when The Onion joked about adding the “seek funding” step to the scientific method? The fact that scientists have to beg and compete for funding, injecting marketing into research, is how we get debacles like Theranos, a Silicon Valley medical startup whose disruptive claims attracted huge amounts of venture capital but appear to be vaporware. It’s easy to focus on what we want to see, and that’s just as true for laymen as for veteran STEM researchers. In the end, it looks like Reagan had it right: Trust, but verify.