Wednesday, May 13, 2020

Ambiguous statistical language was rated as higher in quality, allowing authors to communicate causal interpretations to readers without being punished for violating the norm against straightforward causal language

Alvarez-Vargas, Daniela, David W. Braithwaite, Hugues Lortie-Forgues, Melody M. Moore, Mayan Castro, Sirui Wan, Elizabeth A. Martin, et al. 2020. "Hedges, Mottes, and Baileys: Causally Ambiguous Statistical Language Can Increase Perceived Study Quality and Policy Relevance." PsyArXiv, May 12. doi:10.31234/osf.io/nkf9.

Abstract: There is a norm in psychological research to use causally ambiguous statistical language, rather than straightforward causal language, when describing methods and results of nonexperimental studies. We hypothesized that this norm leads to higher ratings of study quality and greater acceptance of policy recommendations that rely on causal interpretations of the results. In a preregistered experiment, we presented psychology faculty, postdocs, and doctoral students (n=142) with abstracts from hypothetical studies. Abstracts described studies’ results using either straightforward causal or causally ambiguous statistical language, but all concluded with policy recommendations relying on causal interpretations of the results. As hypothesized, participants rated studies with causally ambiguous statistical language as of higher quality (by .48-.59 SD) and as similarly or more supportive (by .16-.26 SD) of policy recommendations. Thus, causally ambiguous statistical language may allow psychologists to communicate causal interpretations to readers without being punished for violating the norm against straightforward causal language.
