Monday, September 4, 2017

Correcting for Bias in Psychology: A Comparison of Meta-analytic Methods

Carter, Evan C., Felix D. Schönbrodt, Will M. Gervais, and Joseph Hilgard. 2017. "Correcting for Bias in Psychology: A Comparison of Meta-analytic Methods." PsyArXiv. September 1. https://psyarxiv.com/9h3nu

Abstract: Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correcting for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data typical of psychological research. We tried to mimic realistic scenarios by simulating several levels of questionable research practices, publication bias, and heterogeneity, using study sample sizes empirically derived from the literature. Our results indicate that one method – the three-parameter selection model (Iyengar & Greenhouse, 1988; McShane, Böckenholt, & Hansen, 2016) – generally performs better than trim-and-fill, p-curve, p-uniform, PET, PEESE, or PET-PEESE, and that some of these other methods should typically not be used at all. However, it is unknown whether the success of the three-parameter selection model is due to the match between its assumptions and our modeling strategy, so future work is needed to further test its robustness. Despite this, we generally recommend that meta-analysts of data in psychology use the three-parameter selection model. Moreover, we strongly recommend that researchers in psychology continue their efforts on improving the primary literature and conducting large-scale, pre-registered replications.
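To see the problem the paper tackles, here is a minimal simulation sketch (not from the paper; parameter values and the unweighted regression are illustrative assumptions): when only significant results are "published," the naive meta-analytic mean overestimates the true effect, and a PET-style regression of effect sizes on standard errors uses its intercept as a bias-adjusted estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
true_d = 0.2       # true standardized effect (illustrative)
n_studies = 5000   # candidate studies before selection

# Per-group sample sizes drawn from a plausible range (assumption)
n = rng.integers(20, 200, size=n_studies)
se = np.sqrt(2.0 / n)            # approximate standard error of Cohen's d
d_obs = rng.normal(true_d, se)   # observed effect in each study

# Simple publication-bias rule: only significant positive results survive
published = (d_obs / se) > 1.96
d_pub, se_pub = d_obs[published], se[published]

# Naive meta-analytic average of the published literature
naive_mean = d_pub.mean()

# PET-style correction: regress observed effects on standard errors;
# the intercept estimates the effect at SE = 0 (unweighted OLS here,
# whereas PET proper uses weighted least squares)
slope, intercept = np.polyfit(se_pub, d_pub, 1)

print(f"true effect: {true_d}, naive mean: {naive_mean:.3f}, "
      f"PET intercept: {intercept:.3f}")
```

The naive mean lands well above the true effect, and the PET intercept pulls the estimate back down; the paper's simulations explore how reliably corrections like this work under realistic heterogeneity and questionable research practices.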
