Wednesday, March 30, 2022

Meta-analysis: Cognitive abilities are not related to the willingness to take financial risks

Cognitive abilities affect decision errors but not risk preferences: A meta-analysis. Tehilla Mechera-Ostrovsky, Steven Heinke, Sandra Andraszewicz & Jörg Rieskamp. Psychonomic Bulletin & Review, Mar 30 2022. https://link.springer.com/article/10.3758/s13423-021-02053-1

Abstract: When making risky decisions, people should evaluate the consequences and the chances of the outcome occurring. We examine the risk-preference hypothesis, which states that people’s cognitive abilities affect their evaluation of choice options and consequently their risk-taking behavior. We compared the risk-preference hypothesis against a parsimonious error hypothesis, which states that lower cognitive abilities increase decision errors. Increased decision errors can be misinterpreted as more risk-seeking behavior because in most risk-taking tasks, random choice behavior is often misclassified as risk-seeking behavior. We tested these two competing hypotheses against each other with a systematic literature review and a Bayesian meta-analysis summarizing the empirical correlations. Results based on 30 studies and 62 effect sizes revealed no credible association between cognitive abilities and risk aversion. Apparent correlations between cognitive abilities and risk aversion can be explained by biased risk-preference-elicitation tasks, where more errors are misinterpreted as specific risk preferences. In sum, the reported associations between cognitive abilities and risk preferences are spurious and mediated by a misinterpretation of erroneous choice behavior. This result also has general implications for any research area in which treatment effects, such as decreased cognitive attention or motivation, could increase decision errors and be misinterpreted as specific preference changes.

Discussion

We conducted a Bayesian meta-analysis covering a total of 30 studies and examined whether the meta-analytic effect size is better explained by the risk-preference hypothesis, which assumes a correlation between cognitive abilities and risk aversion because cognitive abilities affect the evaluation of risky options and, consequently, risk-taking behavior, or by the error hypothesis, which assumes that the mixed results are the product of a relationship between cognitive abilities and decision errors, combined with biased architectural properties of the risk-preference-elicitation task. Our results show no credible correlation between cognitive ability and risk aversion. Notably, we find that when studies applied unbalanced choice sets, they reported a stronger negative (or positive) correlation between cognitive abilities and risk aversion, depending on the direction of this imbalance. The effect of the RCRT bias (i.e., the risk-taking behavior implied by purely random choice in a given task) was robust across all meta-analytical model specifications and thus provides strong evidence for the error hypothesis. That is, our findings support the claim that previous mixed evidence of a correlation between cognitive abilities and risk aversion is mainly driven by the interaction between the architecture of the risk-preference-elicitation task and errors in decision making. In addition, we found an effect of task framing: including losses in risk-preference-elicitation tasks weakly moderates the relation between cognitive abilities and risk aversion. Note that this effect was not robust across all meta-analytical model specifications and appears to be highly correlated with the RCRT bias of the choice set, where the latter has the higher explanatory power. We found no moderating effects of the type of cognitive ability test applied or of the number of decisions. We conclude that a potential correlation between cognitive abilities and risk aversion is moderated by the link between cognitive abilities and the probability of making unsystematic decision errors.
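
To make the mechanism behind the error hypothesis concrete, the following minimal simulation is an illustrative sketch in Python/NumPy (not the authors' code, data, or task set; the choice set, error rates, and ability scores are invented for this example). In an unbalanced choice set where purely random responding implies more risk taking than deliberate choice, agents with identical risk preferences but different error rates end up with systematically different measured risk preferences, producing a spurious correlation with any variable, such as cognitive ability, that tracks the error rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unbalanced ("RCRT-biased") choice set: the safe amount matches or
# beats the lottery in expected value in 7 of 8 pairs, so purely random responding
# implies more risky choices than deliberate, risk-neutral responding does.
safe_amount = 50.0
high_payoff = 100.0
p_win = np.linspace(0.20, 0.55, 8)   # lottery pays high_payoff with prob p, else 0

def observed_risky_share(error_rate, n_trials=2000):
    """Share of risky choices for a risk-neutral agent who sometimes lapses into random choice."""
    deliberate = p_win * high_payoff > safe_amount      # EV-maximizing choice per pair
    shares = []
    for pick_risky in deliberate:
        lapse = rng.random(n_trials) < error_rate        # on a lapse, choose at random
        choice = np.where(lapse, rng.random(n_trials) < 0.5, pick_risky)
        shares.append(choice.mean())
    return float(np.mean(shares))

# Every simulated agent has the same (risk-neutral) preference; only the error
# rate varies with a hypothetical "cognitive ability" score.
ability = rng.normal(size=200)
scaled = (ability - ability.min()) / (ability.max() - ability.min())
error_rate = 0.35 - 0.30 * scaled                        # lower ability -> more errors
measured_risk_seeking = np.array([observed_risky_share(e) for e in error_rate])

print("corr(ability, measured risk seeking):",
      round(float(np.corrcoef(ability, measured_risk_seeking)[0, 1]), 2))
# Although true risk preferences are identical, lower ability looks more risk
# seeking (less risk averse) purely because errors pull behavior toward the
# random-choice benchmark of this particular choice set.
```

Flipping the imbalance of the choice set flips the sign of the spurious correlation, which matches the observation above that the direction of the reported correlations tracks the direction of the RCRT bias.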

A recent meta-analysis by Lilleholt (2019) similarly explored the link between cognitive abilities and risk preferences. However, in contrast to our work, Lilleholt's analysis did not directly test whether the mixed findings regarding this link could be explained by the error hypothesis and the bias in the architecture of most risk-preference-elicitation tasks. There are other important differences. Lilleholt used a broader literature search scope, leading to a larger set of examined studies. For instance, the author included experience-based risk-preference-elicitation tasks, which we excluded from our analysis. In such tasks, people receive no description of the gambles' outcomes or the probabilities with which they occur but must learn them from feedback. Learning therefore plays a major role in how people make their decisions, which makes the interpretation of a potential link between cognitive abilities and risk preferences more complicated. In general, it has been argued that description-based and experience-based tasks differ in both architecture and interpretation (Frey et al., 2017). Therefore, in contrast to Lilleholt, we focused on description-based tasks, which make it easier to code all relevant task-architecture information precisely.

Since Lilleholt (2019) ran the meta-analysis for each domain separately, we compared Lilleholt's results with ours by estimating our meta-analytic models on Lilleholt's merged data set (see Appendix Table 8). In line with Lilleholt's results, we find a credible meta-analytic effect of −.05, with a 95% BCI ranging from −.07 to −.03, across the loss, gain, and mixed domains. Note that our restricted data set exhibits a comparable effect size of −.03, with a 95% BCI ranging from −.08 to .02. Additionally, the inclusion of losses as outcomes of the choice options had a credible effect on the correlation between cognitive abilities and risk preferences, with a mean estimate of .12 and a 95% BCI ranging from .08 to .15 (see Appendix Table 8, Model Mf). The model comparison shows that the model including this variable is superior to a model excluding it (see Appendix Table 8, Models Mf, M2). The effect of the RCRT bias towards risk aversion on the correlation between cognitive abilities and risk preferences was credible across all model specifications (see Appendix Table 8, Models Mf, M1, M2), with a mean estimate of −.17 and a 95% BCI ranging from −.25 to −.09 (see Appendix Table 8, Model Mf). More importantly, a regression model comparison (see Appendix Table 8) shows that accounting for the RCRT bias (Mf vs. M1: BF = 5.9 × 10^6) and for the inclusion of losses (Mf vs. M2: BF = 2.1 × 10^10) substantially improves model fit for the merged data set of Lilleholt (2019), replicating our results. However, given the larger set of studies in Lilleholt compared with ours, this replication should be interpreted with caution.
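
For readers unfamiliar with this kind of model, the following is a minimal, hedged sketch of a Bayesian random-effects meta-regression on Fisher-z-transformed correlations with an RCRT-bias moderator, written with the PyMC library. It is not the authors' model specification; the priors, the coding of the moderator, and the simulated data are all assumptions made purely for illustration.

```python
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(2)

# Simulated stand-in data (NOT the meta-analytic data set): 62 correlations,
# Fisher-z transformed, each with a known sampling variance of 1/(n - 3).
k = 62
n = rng.integers(60, 400, size=k)                    # hypothetical sample sizes
rcrt = rng.choice([-1.0, 0.0, 1.0], size=k)          # coded direction of the RCRT bias
se = np.sqrt(1.0 / (n - 3))
z_obs = rng.normal(-0.15 * rcrt, se)                 # fake effect sizes for illustration

with pm.Model() as meta_regression:
    mu = pm.Normal("mu", 0.0, 0.5)                   # mean effect (Fisher-z scale)
    beta = pm.Normal("beta_rcrt", 0.0, 0.5)          # moderator: RCRT-bias direction
    tau = pm.HalfNormal("tau", 0.3)                  # between-study heterogeneity
    theta = pm.Normal("theta", mu + beta * rcrt, tau, shape=k)  # study-level true effects
    pm.Normal("z", theta, se, observed=z_obs)        # likelihood with known sampling SEs
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=3)

print(az.summary(idata, var_names=["mu", "beta_rcrt", "tau"]))
```

Model comparisons of the kind reported above (Bayes factors between Mf, M1, and M2) would require an additional step such as bridge sampling or a Savage–Dickey density ratio, which is not shown in this sketch.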

Our finding of a moderating effect of an RCRT-biased task architecture on the correlation between cognitive ability and risk aversion contributes to the discussion in the decision sciences and experimental economics literature. For instance, in line with the error hypothesis, Andersson et al. (2016) experimentally demonstrated that the link between cognitive abilities and risk aversion is spurious, as it is moderated by the link between cognitive abilities and random choice behavior. In keeping with this result, Olschewski et al. (2018) reported that in risk-taking tasks, cognitive abilities correlated negatively with decision errors. We followed this work and rigorously tested the error hypothesis with a meta-analysis. Our results show that the correlation between cognitive abilities and risk aversion can be explained by the characteristics of the choice set (i.e., the task architecture) implying an RCRT bias, a phenomenon that leads to misclassifying random choices as a specific risk preference.

Our findings support the view of the error hypothesis that cognitive abilities are linked to the probability of making unsystematic errors (Burks et al., 2008; Dean & Ortoleva, 2015; Olschewski et al., 2018; Tymula et al., 2013). Additionally, it is plausible to assume that people with lower cognitive abilities apply simpler decision strategies (i.e., heuristics) that reduce information-processing load. However, the use of heuristics does not necessarily imply more or less risk-taking behavior; only the interaction between the applied heuristic and the task architecture leads to a specific pattern of observed risk-taking behavior, as illustrated in the sketch below. As we discussed above, some heuristics lead to higher (or lower) observed risk-seeking behavior compared with more complex decision strategies, depending on the choice set. Therefore, one would not necessarily expect a specific correlation between people's cognitive abilities and observed risk-taking behavior across different tasks, but rather some heterogeneity in the results. However, the use of specific strategies cannot explain the relationship between the observed average risk preferences in a task and the RCRT bias of that task. Thus, the link between cognitive abilities and the selected decision strategies does not imply a link between cognitive abilities and latent risk preferences. Crucially, when examining the potential link between cognitive abilities, decision strategies, and risk preferences, it is necessary to first identify the specific strategies people apply in specific environments or task architectures (Olschewski & Rieskamp, 2021; Rieskamp, 2008; Rieskamp & Hoffrage, 1999, 2008; Rieskamp & Otto, 2006). Future work should examine different heuristics and decision strategies to arrive at a comprehensive understanding of whether and how they shape the correlation between cognitive abilities and risk preferences.
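
The sketch below shows this point with two probability-ignoring heuristics applied to the same hypothetical pair of options (invented for this example, not taken from the paper): the two strategies imply opposite observed risk preferences even though neither encodes a risk attitude.

```python
# Each option is (best payoff, probability of best payoff, worst payoff).
SAFE = (50.0, 1.0, 50.0)     # 50 for sure
RISKY = (100.0, 0.10, 0.0)   # 100 with probability .10, otherwise 0

def maximax(a, b):
    """Pick the option with the better best case, ignoring probabilities."""
    return a if a[0] >= b[0] else b

def minimax(a, b):
    """Pick the option with the better worst case, ignoring probabilities."""
    return a if a[2] >= b[2] else b

def expected_value(option):
    best, p, worst = option
    return p * best + (1 - p) * worst

print("expected values:", expected_value(SAFE), expected_value(RISKY))
print("maximax picks the risky option:", maximax(SAFE, RISKY) is RISKY)   # looks risk seeking
print("minimax picks the risky option:", minimax(SAFE, RISKY) is RISKY)   # looks risk averse
```

Neither heuristic uses the probabilities at all, yet on this pair one would be classified as strongly risk seeking and the other as strongly risk averse; with a different choice set the classifications can change, which is why strategy use alone does not pin down a latent risk preference.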

The results of this study also resonate with a recent empirical discourse on the validity of risk-preference-elicitation measures. For instance, Frey et al. (2017) and Pedroni et al. (2017) found that behavioral risk-elicitation tasks yield less stable measures of risk preferences than self-reported measures. Importantly, the difference between behavioral and self-reported measures could disappear once measurement errors are accounted for, for example by applying better task architectures (Andreoni & Kuhn, 2019).

Our results also have implications for interpreting experimental results in other research domains. For example, when testing for a specific treatment effect, it appears important to control for increased decision errors so that a potential increase in errors is not misinterpreted as a specific treatment effect. Whether such a misinterpretation is likely to occur depends on whether the task architecture has a bias such that random choice behavior leads to a specific psychological interpretation. For instance, a potential effect of increased time pressure on people's risk preferences could simply be due to an increase in decision errors under high time pressure (e.g., Olschewski & Rieskamp, 2021). Likewise, the potential effect of cognitive load on people's risk preferences, time preferences, or social preferences could simply be due to an increase in decision errors under cognitive load manipulations (e.g., Olschewski et al., 2018). Finally, the potential effect of increased monetary incentives on people's preferences could also be due to lower decision errors under higher monetary incentives (e.g., Holt & Laury, 2002; Smith & Walker, 1993). In general, treatment effects on preferences have been observed in intertemporal discounting (e.g., Deck & Jahedi, 2015; Ebert, 2001; Hinson et al., 2003; Joireman et al., 2008) as well as social preferences (e.g., Cappelletti et al., 2011; Halali et al., 2014; Schulz et al., 2014). Across these domains, it is important to understand how changes in decision errors affect preference measurements; failure to do so could lead to misinterpretations of observed effects.

Consequently, addressing the issue of decision errors captured by the error hypothesis is of general importance to any research in behavioral economics and psychology that aims to elicit individual preferences. There are two possible ways to address this matter. First, one can account for random errors ex ante by choosing an experimental design that controls for them. At the experimental design stage, researchers could apply a variety of measures to assess people's preferences; in this way, they could cancel out systematic errors and minimize measurement error and the associated biased classifications (Frey et al., 2017). For instance, Andersson et al. (2016) suggested choosing a symmetrical choice set when measuring risk preferences. However, this approach may not be suitable for every preference-elicitation task. This leads to the second approach: accounting for errors at the data-analysis stage. For example, accounting for potential biases with an explicit structural decision-making model that includes an error theory can be advantageous (Andersson et al., 2020), as sketched below. Recently, the behavioral economists Gillen et al. (2015) and Andreoni and Kuhn (2019) proposed an instrumental-variable approach to address this problem.
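
To make the second approach concrete, here is a minimal sketch of a structural model with an explicit error theory, fit to simulated choices: constant relative risk aversion (CRRA) utility combined with a logit choice rule, so that the risk-aversion parameter and the noise parameter are estimated jointly. This illustrates the general idea only and is not the estimator used by Andersson et al. (2020); the lotteries, parameter values, and data below are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def crra(x, r):
    """CRRA utility; larger r means more risk averse."""
    x = np.maximum(x, 1e-9)
    return np.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

# Hypothetical choice pairs: a safe amount for sure vs. a lottery paying 100 with
# probability p and 5 otherwise.
n_choices = 200
safe = rng.uniform(20, 80, n_choices)
p = rng.uniform(0.1, 0.9, n_choices)
HIGH, LOW = 100.0, 5.0

def choice_prob_risky(r, noise):
    """Logit probability of choosing the lottery under CRRA utility."""
    du = p * crra(HIGH, r) + (1 - p) * crra(LOW, r) - crra(safe, r)
    return 1.0 / (1.0 + np.exp(-du / noise))

# Simulate choices from an agent with known parameters.
choices = rng.random(n_choices) < choice_prob_risky(r=0.6, noise=0.5)

def neg_log_likelihood(params):
    r, noise = params
    if noise <= 0:
        return np.inf
    prob_risky = choice_prob_risky(r, noise)
    prob = np.where(choices, prob_risky, 1.0 - prob_risky)
    return -np.sum(np.log(np.clip(prob, 1e-12, 1.0)))

fit = minimize(neg_log_likelihood, x0=[0.2, 1.0], method="Nelder-Mead")
print("estimated (risk aversion, noise):", np.round(fit.x, 2))   # roughly recovers (0.6, 0.5)
```

The key point of such a specification is that the noise parameter absorbs unsystematic errors, so treatment- or ability-related differences in error rates are not forced into the risk-aversion parameter.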

It is important to note that the task architecture determines the context in which a choice option is presented. Consequently, various theories of context effects could also help explain why people with lower cognitive abilities are more prone to being influenced by the task architecture. For example, Andraszewicz and Rieskamp (2014) and Andraszewicz et al. (2015) demonstrated that pairs of gambles with the same differences in expected values and the same variances (i.e., risk) but different covariances (i.e., similarity) result in more unsystematic choices when the covariance between the two gambles is lower. This effect, called the covariance effect, arises because pairs of gambles with low covariance are more difficult to compare with each other; a small worked example follows below. Simonson and Tversky (1992) demonstrated that context effects can result from the available sample of choice options, such that outcomes may appear extreme only in light of that sample. Along the same lines, Ungemach et al. (2011) demonstrated that people's preferential choices depend on their exposure to hypothetical choice options.
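
To illustrate the covariance effect numerically (the numbers are invented for this example, not taken from Andraszewicz and Rieskamp), here are two pairs of gambles defined over the same four equiprobable states, with identical expected-value differences and identical variances within each pair but opposite covariances between the paired gambles:

```python
import numpy as np

# Four equiprobable states of the world.
probs = np.full(4, 0.25)

def mean(x):
    return float(np.sum(probs * x))

def var(x):
    return float(np.sum(probs * (x - mean(x)) ** 2))

def cov(x, y):
    return float(np.sum(probs * (x - mean(x)) * (y - mean(y))))

# Pair 1: outcomes rise and fall together across states -> high covariance.
a1 = np.array([10.0, 20.0, 30.0, 40.0])
b1 = np.array([12.0, 22.0, 32.0, 42.0])

# Pair 2: same marginal means and variances, but the second gamble's outcomes are
# reversed across states -> negative covariance, so a state-by-state comparison is harder.
a2 = a1.copy()
b2 = b1[::-1].copy()

print("EV differences:", mean(b1) - mean(a1), mean(b2) - mean(a2))   # both 2.0
print("variances:", var(a1), var(b1), var(a2), var(b2))              # all 125.0
print("covariances:", cov(a1, b1), cov(a2, b2))                      # +125.0 vs. -125.0
```

According to the covariance effect, choices between the second pair should be noisier, because the gambles' outcomes no longer line up across states even though the expected values and risks involved are identical.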

To summarize, this meta-analysis highlights the importance of accounting for the choice-set architecture and, in particular, its interaction with random decision errors. Our methods and results reach beyond the immediate research question and suggest that neglecting the effect of random decision errors at the experimental-design or data-analysis stage can lead to spurious correlations and to the identification of “apparently new” phenomena (Gillen et al., 2019). The findings presented in this meta-analysis offer an important contribution to the scientific communities in judgment and decision making, psychology, experimental finance, and economics, fields in which measuring risk-taking propensity is particularly important. The findings of the current meta-analysis are therefore relevant to all researchers investigating risk-taking behavior with common risk-preference-elicitation methods.
