Sunday, January 29, 2023

More Than Half of Statistically Significant Research Findings in the Environmental Sciences are Actually Not; the median power of p-value tests is between 6% and 12%, the lowest yet identified for any discipline

More Than Half of Statistically Significant Research Findings in the Environmental Sciences are Actually Not. Teshome Deressa, David Stern, Jaco Vangronsveld, Jan Minx, Sebastien Lizin, Robert Malina, Stephan Bruns. EcoEvoRxiv, Jan 2023. https://doi.org/10.32942/X24G6Z

Abstract: Researchers have incentives to search for and selectively report findings that appear to be statistically significant and/or conform to prior beliefs. Such selective reporting practices, including p-hacking and publication bias, can lead to a distorted set of results being published, potentially undermining the process of knowledge accumulation and evidence-based decision making. We take stock of the state of empirical research in the environmental sciences using 67,947 statistical tests obtained from 547 meta-analyses. We find that 59% of the p-values that were reported as significant are not actually expected to be statistically significant. The median power of these tests is between 6% and 12%, which is the lowest yet identified for any discipline. Only 8% of tests are adequately powered with statistical power of 80% or more. Exploratory regressions suggest that increased statistical power and the use of experimental research designs reduce the extent of selective reporting. Differences between subfields can be mostly explained by methodological differences. To improve the environmental sciences evidence base, researchers should pay more attention to statistical power, but incentives for selective reporting may remain even with adequate statistical power. Ultimately, a paradigm shift towards open science is needed to ensure the reliability of published empirical research.
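To make the abstract's power figures concrete, here is an illustrative sketch (not the authors' method) of what "statistical power" means: the probability that a test detects a true effect of a given size at a given significance level. The example below uses a standard normal approximation for a two-sided two-sample test; the effect size and sample sizes are hypothetical, chosen only to show how easily a study can fall near the 6%-12% median power the paper reports, versus the 80% conventionally considered adequate.

```python
# Illustrative sketch of statistical power for a two-sided two-sample test,
# using a normal approximation. All numbers here are hypothetical examples,
# not data from the paper.
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse of norm_cdf via bisection (accurate enough for illustration)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with
    standardized effect size d (Cohen's d) and n observations per group."""
    z_crit = norm_ppf(1.0 - alpha / 2.0)
    ncp = d * sqrt(n / 2.0)  # noncentrality of the test statistic under the alternative
    # Probability the test statistic falls beyond either critical value
    return 1.0 - norm_cdf(z_crit - ncp) + norm_cdf(-z_crit - ncp)

# A small true effect (d = 0.2) with 25 observations per group is badly
# underpowered, landing near the range the paper reports:
print(round(power_two_sample(0.2, 25), 2))   # roughly 0.11

# Reaching the conventional 80% power threshold for the same effect
# requires on the order of ~394 observations per group:
print(round(power_two_sample(0.2, 394), 2))  # roughly 0.80
```

A test with 11% power will miss a true effect almost nine times out of ten, which is why low power combined with selective reporting so badly distorts the published record.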
