Replicating other scientists’ work appears to be more difficult than one would expect, and these efforts are often ignored by scientific journals. A new online publishing channel has been launched in an attempt to give more space and visibility to this important aspect of research. Meanwhile, some claim science is undergoing a reproducibility crisis.
How difficult is it to replicate other scientists’ findings, and how often does a successful replication actually occur? One of science’s cornerstones is the reproducibility and replicability of experimental results. Scientific findings must always arise from replicable protocols, which allow them to be established as statistically significant. Once published, these findings must then be reproducible, so that any scientist, from the UK to New Zealand, should be able to repeat – or one should say reproduce – the protocol and obtain consistent results.
But what if reproducibility fails? What should a scientist do when, after months spent trying strenuously to obtain it, that astonishing result which he or she read about in a groundbreaking paper refuses to reveal itself? Should he or she publish such an effort and let the scientific community know that the ‘landmark’ study could not be reproduced? Would it be important for other scientists to know?
The biochemist Bruce Alberts (yes, that Bruce Alberts – I actually studied from his textbook) and the biotech company Amgen Inc., based in Thousand Oaks, California, are so convinced of the importance of publishing efforts to confirm academic papers that in February they launched a new channel called “Preclinical Reproducibility and Robustness”. The channel is hosted by F1000Research, the platform of the open-access, online-only publisher Faculty of 1000, based in London.
As explained in Nature, the idea was conceived after Sasha Kamb, senior vice president for research at Amgen, revealed that in recent years the company had failed to reproduce several academic papers. Indeed, in 2012 Amgen researchers revealed that, in attempting to reproduce 53 ‘landmark’ studies in cancer research, they were able to confirm only six of them, failing to reproduce the findings of the other 47.
However, scientific journals usually show little interest in publishing replication efforts. So, in order to create a channel dedicated to reproducibility research, Bruce Alberts, who is also a former editor-in-chief of Science and now works with F1000Research, suggested the faster route of online publishing. Importantly, the new channel publishes both failed and successful confirmation efforts. Accepted studies are first published online, then sent to reviewers for peer review. Papers that successfully pass peer review can then be indexed in PubMed.
Several scientists, especially those working for companies, are supporting the channel, which could help improve the signal-to-noise ratio. It is highly important to know which scientific publications actually stand the reproducibility test. Publishing reproducibility studies lets the scientific community know how many people are following a certain lead and believe a certain study to be trustworthy. It is a practice that genuinely enriches the knowledge around new and potentially revolutionary discoveries, and goes beyond a simple acknowledgement of a study considered (or claimed to be) sensational.
However, as some authors point out, if a study fails to be replicated, it does not automatically mean that its results are false. Furthermore, biological experiments are extremely complex and often difficult (though not impossible) to reproduce successfully.
Thus, replication efforts are important, but not sufficient on their own to disprove (or confirm) a study. But to what extent is reproducibility an issue in today’s research? According to a compelling survey conducted by Nature and published earlier this year, science might indeed be undergoing a reproducibility crisis.
The British scientific journal collected opinions from 1,576 researchers on the matter of reproducibility. Some figures? More than 70% of the respondents have failed to reproduce another researcher’s work, and more than 50% have failed to reproduce their own. Despite these striking rates, fewer than one in five researchers have ever been contacted by another scientist having trouble reproducing their experiments. Interestingly, this lack of communication appears to stem more from social than from technical factors: contacting an original author for help might be interpreted as a lack of expertise or an attempt at accusation, or could perhaps reveal too much about an ongoing project.
Finally, although issues with reproducibility undoubtedly exist, they are not undermining scientists’ trust in one another. Fewer than one third of the surveyed researchers believe that results which fail to be reproduced are wrong.
I believe there’s a lot to think about here. Yes, we cannot judge a scientific work solely on its reproducibility. But what if confirmation by reproduction becomes merely an optional feature? How can we trust a scientific work to be truly groundbreaking when no one except the original author is able to reproduce it? And perhaps more importantly, if a discovery can only be admired, how useful can it be for the advancement of science, technology or medicine?