This article by Jeffrey T. Leek and Roger D. Peng, “Opinion: Reproducible research can still be wrong: Adopting a prevention approach,” addresses an important question about reproducible and replicable research and provides useful definitions, quoted below:
We define reproducibility as the ability to recompute data analytic results given an observed dataset and knowledge of the data analysis pipeline. The replicability of a study is the chance that an independent experiment targeting the same scientific question will produce a consistent result (1). Concerns among scientists about both have gained significant traction recently due in part to a statistical argument that suggested most published scientific results may be false positives (2). At the same time, there have been some very public failings of reproducibility across a range of disciplines from cancer genomics (3) to economics (4), and the data for many publications have not been made publicly available, raising doubts about the quality of data analyses. Popular press articles have raised questions about the reproducibility of all scientific research (5), and the US Congress has convened hearings focused on the transparency of scientific research (6). The result is that much of the scientific enterprise has been called into question, putting funding and hard won scientific truths at risk.
So far, so good. But the problem lies in the following sentence:
Unfortunately, the mere reproducibility of computational results is insufficient to address the replication crisis because even a reproducible analysis can suffer from many problems—confounding from omitted variables, poor study design, missing data—that threaten the validity and useful interpretation of the results.
The assumption that enforcing replication/reproduction practices in experiments will prevent or eliminate all methodological problems is not only wrong but naive, for lack of a better word.
The point of replication/reproducibility is to hold science to a higher standard, where we can ensure that: 1) the entire process follows a methodology that explains how the solution was transformed into the final result; 2) as a consequence, we have a better chance of removing biases (e.g., cognitive, publication, systematic); and 3) if the methodology is wrong, it can be verified, checked, and fixed by the entire scientific community.
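The first point, a documented, recomputable pipeline, can be illustrated with a minimal sketch (the function, data, and seed here are hypothetical examples, not taken from the article). In the Leek/Peng sense, reproducibility means the same data plus the same documented pipeline must recompute the same result:

```python
# Hypothetical sketch of a reproducible analysis pipeline: same data +
# same documented steps + fixed seed => identical recomputed result.
import random
import statistics

def analysis_pipeline(data, seed=42):
    """Documented pipeline: bootstrap-resample the data and return the
    mean of the resampled means. The fixed seed makes every step
    deterministic, so anyone can recompute the exact same number."""
    rng = random.Random(seed)
    boot_means = []
    for _ in range(1000):
        sample = [rng.choice(data) for _ in data]
        boot_means.append(statistics.mean(sample))
    return statistics.mean(boot_means)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1]
first = analysis_pipeline(data)
second = analysis_pipeline(data)  # independent rerun of the same pipeline
assert first == second  # reproducible: the result recomputes identically
```

Note that this only guarantees the result can be recomputed; it says nothing about whether the study design, the data collection, or the model itself were sound, which is exactly the distinction the article draws.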
When an important paper by prominent economists, whose voices are heeded by world leaders (who can change economic policy based on that kind of study), fails to be reproducible, and someone catches the methodological flaws and fixes them, the case for the importance of replicable/reproducible research is already made.
This is the real payoff of reproducible/replicable science.