Researchers working on the replication crisis are more divided, though, on whether the last decade of work has left us better equipped to fight these problems or simply left us where we started.

“The future is bright,” concludes Altmejd and Dreber’s 2019 paper about how to predict replications. “There will be rapid accumulation of more replication data, more outlets for publishing replications, new statistical techniques, and—most importantly—enthusiasm for improving replicability among funding agencies, scientists, and journals. An exciting replicability ‘upgrade’ in science, while perhaps overdue, is taking place.”

Menard, by contrast, argues that this optimism has not been borne out: our improved understanding of the replication crisis has not translated into more published papers that actually replicate. The project he’s a part of, an effort run by DARPA in the Defense Department to design a better model for predicting which papers replicate, has not seen papers grow any more likely to replicate over time.

“I frequently encounter the notion that after the replication crisis hit there was some sort of great improvement in the social sciences, that people wouldn’t even dream of publishing studies based on 23 undergraduates any more … In reality there has been no discernible improvement,” he writes.

Researchers who are more optimistic point to other metrics of progress. It’s true that papers that fail to replicate are still extremely common, and that the peer-review process hasn’t improved in a way that catches these errors. But other elements of the error-correction process are getting better.

“Journals now retract about 1,500 articles annually — a nearly 40-fold increase over 2000, and a dramatic change even if you account for the roughly doubling or tripling of papers published per year,” Ivan Oransky at Retraction Watch argues. “Journals have improved,” he adds: they report more details on retracted papers and have better processes for retractions.
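As a rough sanity check on that arithmetic (treating the quoted figures as approximations, and the exact counts as assumptions), the implied per-paper retraction rate still rises sharply even after adjusting for publication growth:

```python
# Back-of-the-envelope check of the retraction figures quoted above
# (approximate numbers implied by the quote, not exact counts).

retractions_now = 1500                    # retractions per year today
retractions_2000 = retractions_now / 40   # "nearly 40-fold increase over 2000" => ~38/year

# Papers published per year have roughly doubled or tripled since 2000.
for growth in (2, 3):
    rate_increase = (retractions_now / growth) / retractions_2000
    print(f"If publications grew {growth}x, the per-paper retraction rate "
          f"still rose roughly {rate_increase:.0f}-fold")
```

Even on the more generous assumption about publication growth, the normalized retraction rate is up by more than an order of magnitude, which is the point Oransky is making.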

Other changes in common scientific practices seem to be helping too. For example, preregistrations — announcing how you’ll conduct your analysis before you do the study — lead to more null results being published.

“I don’t think the influence [of public conversations about the replication crisis on scientific practice] has been zero,” statistician Andrew Gelman at Columbia University told me. “This crisis has influenced my own research practices, and I assume it’s influenced many others as well. And it’s my general impression that journals such as Psychological Science and PNAS don’t publish as much junk as they used to.”

There’s some reassurance in that. But until those improvements translate into a higher percentage of papers replicating and a real citation gap between good papers and bad ones, it’s a small victory. And it’s a victory that has been hard-won. After tons of resources spent demonstrating the scope of the problem, fighting for more retractions, teaching better statistical methods, and trying to drag fraud into the open, papers still don’t replicate as much as researchers would hope, and bad papers are still widely cited, which suggests a big part of the problem hasn’t been touched.

We need a more sophisticated understanding of the replication crisis, not as a moment of realization after which we were able to move forward with higher standards, but as an ongoing rot in the scientific process that a decade of work hasn’t quite fixed.

Our scientific institutions are valuable, as are the tools they’ve built to help us understand the world. There’s no cause for hopelessness here, even if some frustration is thoroughly justified. Science needs saving, sure — but science is very much worth saving.
