The sunset of statistical significance

Brian Resnick hit the nail on the head in his latest Vox column, 800 scientists say it’s time to abandon “statistical significance”, where he brings up an important discussion of how the p-value is misleading science, especially for studies that have clear measurements of a particular effect but are thrown away for lack of statistical significance.

In the column, Mr. Resnick lists some alternatives for how to get “(…) better, more nuanced approaches to evaluating science (…)”:

- Concentrating on effect sizes (how big a difference does an intervention make, and is it practically meaningful?); effect sizes and confidence intervals are both illustrated in the first sketch after this list

- Confidence intervals (what’s the range of doubt built into any given answer?)

- Whether a result is a novel study or a replication (put more weight on a theory many labs have looked into)

- Whether a study’s design was preregistered (so that authors can’t manipulate their results post-test), and whether the underlying data is freely accessible (so anyone can check the math)

- There are also alternative statistical techniques — like Bayesian analysis — that in some ways more directly evaluate a study’s results. (P-values ask the question “how rare are my results?” Bayes factors ask the question “what is the probability my hypothesis is the best explanation for the results we found?” Both approaches have trade-offs.) The second sketch after this list contrasts the two questions on the same data.
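
To make the first two items concrete, here is a minimal sketch in Python (using NumPy and SciPy) that computes an effect size (Cohen’s d) and a 95% confidence interval for the difference between two groups. All the data and group names here are simulated, purely for illustration.

```python
# A minimal sketch: effect size (Cohen's d) and a 95% confidence
# interval for the difference between two groups.
# All numbers below are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=10.8, scale=2.0, size=50)

# Effect size: Cohen's d = mean difference / pooled standard deviation.
n1, n2 = len(control), len(treatment)
pooled_sd = np.sqrt(((n1 - 1) * control.var(ddof=1) +
                     (n2 - 1) * treatment.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# 95% confidence interval for the mean difference (Welch's t).
diff = treatment.mean() - control.mean()
se = np.sqrt(control.var(ddof=1) / n1 + treatment.var(ddof=1) / n2)
# Welch-Satterthwaite degrees of freedom.
df = se**4 / ((control.var(ddof=1) / n1)**2 / (n1 - 1) +
              (treatment.var(ddof=1) / n2)**2 / (n2 - 1))
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"Cohen's d: {cohens_d:.2f}")
print(f"Mean difference: {diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The effect size answers “how big is the difference?”, while the interval shows the range of doubt around it: both are informative even when a p-value falls just short of the significance cutoff.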
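And to contrast the two questions in the last item, here is a second sketch that computes a p-value and an approximate Bayes factor for the same simulated data. The Bayes factor uses the BIC approximation from Wagenmakers (2007) rather than a full Bayesian analysis (which would require explicit priors), so treat it as a rough illustration, not a recommended workflow.

```python
# A rough illustration: a p-value versus an approximate Bayes factor
# for the same simulated data. The Bayes factor uses the BIC
# approximation from Wagenmakers (2007); a full Bayesian analysis
# would specify priors explicitly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=10.8, scale=2.0, size=50)

# P-value: "how rare would a difference this large be if there
# were no real effect?"
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

def bic(groups):
    """BIC of a normal model that gives each group its own mean."""
    data = np.concatenate(groups)
    n = len(data)
    resid = np.concatenate([g - g.mean() for g in groups])
    sigma2 = (resid**2).sum() / n          # MLE of the shared variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = len(groups) + 1                    # one mean per group, plus variance
    return k * np.log(n) - 2 * log_lik

# Bayes factor (approximate): "how much better does 'the groups differ'
# explain the data than 'they do not'?"
bic_null = bic([np.concatenate([control, treatment])])  # one common mean
bic_alt = bic([control, treatment])                     # two separate means
bf_10 = np.exp((bic_null - bic_alt) / 2)

print(f"p-value: {p_value:.4f}")
print(f"Approximate Bayes factor BF10: {bf_10:.2f}")
```

Note how the two numbers answer different questions: the p-value measures surprise under the null hypothesis, while the Bayes factor weighs the evidence for one hypothesis against the other.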

PS: Frank Harrell (Founding Chair of Biostatistics, Vanderbilt University; Expert Statistical Advisor, Office of Biostatistics) gave us this delightful tweet:

Source: Twitter