Wednesday, February 5, 2020

An Excess of Positive Results in Psychology: The large gap between standard reports and registered reports suggests that authors underreport negative results to an extent that threatens cumulative science

Scheel, Anne M., Mitchell Schijen, and Daniel Lakens. 2020. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” PsyArXiv, February 5. doi:10.31234/osf.io/p6e9c.

Abstract: When studies with positive results that support the tested hypotheses have a higher probability of being published than studies with negative results, the literature will give a distorted view of the evidence for scientific claims. Psychological scientists have been concerned about the degree of distortion in their literature due to publication bias and inflated Type-1 error rates. Registered Reports were developed with the goal to minimise such biases: In this new publication format, peer review and the decision to publish take place before the study results are known. We compared the results in the full population of published Registered Reports in Psychology (N = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (N = 152) by searching 633 journals for the phrase ‘test* the hypothes*’ (replicating a method by Fanelli, 2010). Analysing the first hypothesis reported in each paper, we found 96% positive results in standard reports, but only 44% positive results in Registered Reports. The difference remained nearly as large when direct replications were excluded from the analysis (96% vs 50% positive results). This large gap suggests that psychologists underreport negative results to an extent that threatens cumulative science. Although our study did not directly test the effectiveness of Registered Reports at reducing bias, these results show that the introduction of Registered Reports has led to a much larger proportion of negative results appearing in the published literature compared to standard reports.
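The size of the gap the abstract reports can be illustrated with a quick calculation. A minimal sketch, assuming positive-result counts inferred from the rounded percentages (146/152 ≈ 96% for standard reports, 31/71 ≈ 44% for Registered Reports; the exact counts are not given in the abstract), comparing the two proportions with a pooled two-proportion z-test:

```python
from math import sqrt

# Counts are an assumption, back-calculated from the rounded percentages
# in the abstract: 96% of 152 standard reports, 44% of 71 Registered Reports.
std_pos, std_n = 146, 152  # 146/152 ≈ 96% positive
rr_pos, rr_n = 31, 71      # 31/71 ≈ 44% positive

p_std = std_pos / std_n
p_rr = rr_pos / rr_n

# Pooled two-proportion z-test: a standard way to compare two rates.
p_pool = (std_pos + rr_pos) / (std_n + rr_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / std_n + 1 / rr_n))
z = (p_std - p_rr) / se

print(f"standard: {p_std:.0%}, RR: {p_rr:.0%}, "
      f"gap: {p_std - p_rr:.0%}, z = {z:.1f}")
# → standard: 96%, RR: 44%, gap: 52%, z = 9.0
```

Under these assumed counts the roughly 52-percentage-point gap is far larger than sampling variation alone could plausibly produce, which is the arithmetic core of the authors' claim.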


Check also Researchers frequently make inappropriate requests to statisticians: Removing/altering data to support the hypothesis; interpreting the findings on the basis of expectation, not results; not reporting the presence of key missing data; & ignoring violations of assumptions
Researcher Requests for Inappropriate Analysis and Reporting: A U.S. Survey of Consulting Biostatisticians. Min Qi Wang, Alice F. Yan, Ralph V. Katz. Annals of Internal Medicine, https://www.bipartisanalliance.com/2018/11/researchers-frequently-make.html
And How to crack pre-registration: There are methods for camouflaging a registered study as successful
How to crack pre-registration: Toward transparent and open science. Yuki Yamada. Front. Psychol. Sep 2018. https://www.bipartisanalliance.com/2018/09/how-to-crack-pre-registration-there-are.html
And Questionable Research Practices prevalence in ecology: cherry-picking statistically significant results (64%), p-hacking (42%), and hypothesising after the results are known (HARKing, 51%). Such practices have been directly implicated in the low rates of reproducible results
Questionable research practices in ecology and evolution. Hannah Fraser et al. PLOS One, Jul 2018. https://www.bipartisanalliance.com/2018/07/questionable-research-practices.html
And The scientific practices of experimental psychologists have improved dramatically:
Psychology's Renaissance. Leif D. Nelson, Joseph P. Simmons, and Uri Simonsohn. Annual Review of Psychology, forthcoming. https://www.bipartisanalliance.com/2017/11/the-scientific-practices-of.html

