Wednesday, March 30, 2022

Parasites: Male psychopaths are not set against fatherhood, as long as someone else takes on the effort of raising the kids

Cads in Dads’ Clothing? Psychopathic Traits and Men’s Preferences for Mating, Parental, and Somatic Investment. Kristopher J. Brazil & Anthony A. Volk. Evolutionary Psychological Science, Mar 30 2022. https://link.springer.com/article/10.1007/s40806-022-00318-z

Abstract: Psychopathic traits are sometimes viewed as an alternative reproductive strategy that prioritizes mating over parental investment, particularly in men. Two aspects of this research receiving less attention are (1) the inclusion of somatic investment, which refers to the growth and maintenance of oneself, and (2) measuring perceptions of investment domains in addition to behavior and attitude outcomes. In this study, we used a sample of 255 young adult men from MTurk (Mage = 29.55, SD = 2.97) to examine how the three domains of investment (mating, parental, and somatic) relate to individual differences in men’s psychopathic traits, relationship/parental status, and age using outcome measures of (1) behavioral attitudes and (2) perceptions of stimuli associated with each investment domain (e.g., attractive women’s faces and cute infants). Results showed that while they were associated with being a parent, psychopathic traits were associated with higher mating and lower parental and somatic behavioral attitudes. Psychopathic traits were associated with negative perceptions of indirect somatic cues (e.g., working and forming friendships), positive perceptions of mating cues, and no relationship with perceptions of direct somatic (e.g., exercising) or parental cues. Our results agree with previous research but extend them by showing that while they engage in lower somatic behavior, men higher in psychopathic traits do not appear to have aversive reactions towards infant stimuli and are more likely to be parents themselves. We argue that these patterns are consistent with a parasitic parenting strategy that focuses on mating while depending on others to invest in their children.


Meta-analysis: Cognitive abilities are not related to the willingness to take financial risks

Cognitive abilities affect decision errors but not risk preferences: A meta-analysis. Tehilla Mechera-Ostrovsky, Steven Heinke, Sandra Andraszewicz & Jörg Rieskamp. Psychonomic Bulletin & Review, Mar 30 2022. https://link.springer.com/article/10.3758/s13423-021-02053-1

Abstract: When making risky decisions, people should evaluate the consequences and the chances of the outcome occurring. We examine the risk-preference hypothesis, which states that people’s cognitive abilities affect their evaluation of choice options and consequently their risk-taking behavior. We compared the risk-preference hypothesis against a parsimonious error hypothesis, which states that lower cognitive abilities increase decision errors. Increased decision errors can be misinterpreted as more risk-seeking behavior because in most risk-taking tasks, random choice behavior is often misclassified as risk-seeking behavior. We tested these two competing hypotheses against each other with a systematic literature review and a Bayesian meta-analysis summarizing the empirical correlations. Results based on 30 studies and 62 effect sizes revealed no credible association between cognitive abilities and risk aversion. Apparent correlations between cognitive abilities and risk aversion can be explained by biased risk-preference-elicitation tasks, where more errors are misinterpreted as specific risk preferences. In sum, the reported associations between cognitive abilities and risk preferences are spurious and mediated by a misinterpretation of erroneous choice behavior. This result also has general implications for any research area in which treatment effects, such as decreased cognitive attention or motivation, could increase decision errors and be misinterpreted as specific preference changes.

Discussion

We conducted a Bayesian meta-analysis with a total of 30 studies and examined whether a potential meta-effect size is better explained by the risk-preference hypothesis, which assumes a correlation between cognitive abilities and risk aversion because cognitive abilities affect the evaluation of risky options and, consequently, risk-taking behavior, or by the error hypothesis, which assumes that the mixed results are the product of a relationship between cognitive abilities and decision errors, combined with a bias in the architectural properties of the risk-preference-elicitation task. Our results show that the correlation between cognitive ability and risk aversion is not credible. Notably, we find that when studies applied unbalanced choice sets, they reported a stronger negative (or positive) correlation between cognitive abilities and risk aversion, depending on the direction of this imbalance. The effect of the RCRT bias was robust across all meta-analytical model specifications and thus provides strong evidence for the error hypothesis. That is, our findings support the claim that the previous mixed evidence of a correlation between cognitive abilities and risk aversion is mainly driven by the interaction between the architecture of the risk-preference-elicitation task and errors in decision making. In addition, we found an effect of task framing, where including losses in risk-preference-elicitation tasks only weakly moderates the relation between cognitive abilities and risk aversion. Note that this effect was not robust across all meta-analytical model specifications and appears to be highly correlated with the RCRT bias of the choice set, where the latter has higher explanatory power. We found no mediating effects of the type of cognitive ability test applied or of the number of decisions. We conclude that a potential correlation between cognitive abilities and risk aversion is moderated by the link between cognitive abilities and the probability of making unsystematic decision errors.
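To make the hypothesized mechanism concrete, here is a minimal simulation sketch (not the authors' code; every number in it is an illustrative assumption). It shows how, in an unbalanced choice set where a deliberate chooser would mostly pick the safe option, agents who lapse into random choice more often, here tied to lower ability, end up looking risk-seeking, so a spurious ability/risk-aversion correlation emerges even though true preferences are independent of ability.

```python
# Illustrative simulation of the error hypothesis (assumed parameters throughout):
# random choices in an unbalanced choice set are misread as a risk preference.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_items = 5_000, 20

# Latent traits: cognitive ability and true risk aversion are independent.
ability = rng.normal(size=n_agents)
true_risk_aversion = rng.normal(size=n_agents)

# Error hypothesis: lower ability raises the probability of a random choice.
error_rate = 1 / (1 + np.exp(2 * ability))

# Unbalanced ("RCRT-biased") choice set: an error-free chooser with an average
# preference would pick the safe option in roughly 80% of items, so a purely
# random chooser (50% safe) looks risk-seeking by comparison.
p_safe_if_deliberate = 1 / (1 + np.exp(-true_risk_aversion - 1.5))

deliberate = rng.random((n_agents, n_items)) > error_rate[:, None]
safe_if_deliberate = rng.random((n_agents, n_items)) < p_safe_if_deliberate[:, None]
safe_if_random = rng.random((n_agents, n_items)) < 0.5
safe_choice = np.where(deliberate, safe_if_deliberate, safe_if_random)

observed_risk_aversion = safe_choice.mean(axis=1)  # share of safe choices

print("corr(ability, true risk aversion):     %+.3f"
      % np.corrcoef(ability, true_risk_aversion)[0, 1])
print("corr(ability, observed risk aversion): %+.3f"
      % np.corrcoef(ability, observed_risk_aversion)[0, 1])
# Typical output: the first correlation is near zero, the second is clearly
# positive, i.e., low ability looks like risk seeking although the underlying
# preferences are unrelated to ability.
```

Flipping the sign of the imbalance (a choice set in which a deliberate chooser would mostly pick the risky option) reverses the spurious correlation, which is exactly the pattern the unbalanced-choice-set moderator captures.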

A recent meta-analysis by Lilleholt (2019) similarly explored the link between cognitive abilities and risk preferences. However, in contrast to our work, Lilleholt’s analysis did not directly test whether the mixed findings regarding the link between cognitive abilities and risk preferences could be explained by the error hypothesis and the bias in the architecture of most risk-preference-elicitation tasks. There are other important differences. First, Lilleholt had a broader literature search scope, leading to a larger set of examined studies. For instance, the author included experience-based risk-preference-elicitation tasks, which we excluded from our analysis. In such tasks, people have no information about the outcomes of gambles and the probabilities with which the outcomes occur but must learn them from feedback. Thus, in these tasks, learning plays a major role in how people make their decisions, thereby making the interpretation of a potential link between cognitive abilities and risk preferences more complicated. In general, it has been argued that description-based and experience-based tasks differ in both architecture and interpretation (Frey et al., 2017). Therefore, in contrast to Lilleholt, we focused on description-based tasks, which make it easier to code all relevant task-architecture information precisely.

Since Lilleholt (2019) ran the meta-analysis for each domain separately, we compared Lilleholt’s results with our results by estimating our meta-analytic models on Lilleholt’s merged data set (see Appendix Table 8). In line with Lilleholt’s results, we find a credible meta-effect of −.05 with a 95% BCI ranging from −.07 to −.03 for the loss, gain, and mixed domains. Note that our restricted data set exhibits a comparable effect size of −.03, with a 95% BCI ranging from −.08 to .02. Additionally, the inclusion of losses as outcomes of the choice options had a credible effect on the correlation between cognitive abilities and risk preferences, with a mean estimate of .12 and a 95% BCI ranging from .08 to .15 (see Appendix Table 8, Model Mf). The model comparison shows that the model that includes this variable is superior to a model that excludes it (see Appendix Table 8, Models Mf, M2). The effect of the RCRT bias towards risk aversion on the correlation between cognitive abilities and risk preferences was credible across all model specifications (see Appendix Table 8, Models Mf, M1, M2) and exhibited a mean estimate of −.17 and a 95% BCI ranging from −.25 to −.09 (see Appendix Table 8, Model Mf). More importantly, a regression model comparison procedure (see Appendix Table 8) shows that accounting for the RCRT bias (Mf vs. M1: BF = 5.9 × 10^6) and the inclusion of losses (Mf vs. M2: BF = 2.1 × 10^10) improves the model fit substantially for the merged data set of Lilleholt (2019), replicating our results. However, given the larger set of studies in Lilleholt compared with ours, this replication should be interpreted with caution.
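For readers unfamiliar with how such pooled correlations are obtained, the following is a minimal sketch of a standard random-effects meta-analysis of correlations (Fisher-z transform, inverse-variance weighting, DerSimonian-Laird heterogeneity estimate). It is a frequentist approximation for illustration only; the paper itself fits Bayesian hierarchical models with moderators, and the study correlations and sample sizes below are made up.

```python
# Minimal random-effects meta-analysis of correlations (illustrative inputs only;
# this is not the paper's Bayesian model).
import numpy as np

r = np.array([-0.10, 0.02, -0.07, -0.15, 0.05, -0.03])  # hypothetical study correlations
n = np.array([120, 250, 80, 300, 150, 200])              # hypothetical sample sizes

z = np.arctanh(r)        # Fisher z-transform of each correlation
v = 1.0 / (n - 3)        # sampling variance of z
w = 1.0 / v              # fixed-effect (inverse-variance) weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)                       # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / C)                  # DerSimonian-Laird estimate

w_re = 1.0 / (v + tau2)                                  # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh([z_re - 1.96 * se_re, z_re + 1.96 * se_re])  # back to r scale

print(f"pooled r = {np.tanh(z_re):+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}], tau^2 = {tau2:.4f}")
```

A moderator such as the RCRT bias or the inclusion of losses would enter such a model as a study-level covariate (a meta-regression), which is how the comparisons between Models Mf, M1, and M2 above should be read.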

Our finding of a moderating effect of an RCRT-biased task architecture on the correlation between cognitive ability and risk aversion contributes to the discussion in the decision sciences and experimental economics literature. For instance, in line with the error hypothesis, Andersson et al. (2016) experimentally demonstrated that the link between cognitive abilities and risk aversion is spurious, as it is moderated by the link between cognitive abilities and random choice behavior. In keeping with this result, Olschewski et al. (2018) reported that in risk-taking tasks, cognitive abilities correlated negatively with decision errors. We followed this work and rigorously tested the error hypothesis with a meta-analysis. Our results show that the correlation between cognitive abilities and risk aversion can be explained by the characteristics of the choice set (i.e., the task architecture), implying an RCRT bias, a phenomenon that leads to misclassifying random choices as a specific risk preference.

Our findings support the view of the error hypothesis that cognitive abilities are linked to the probability of making unsystematic errors (Burks et al., 2008; Dean & Ortoleva, 2015; Olschewski et al., 2018; Tymula et al., 2013). Additionally, it is plausible to assume that people with lower cognitive abilities apply simpler decision strategies (i.e., heuristics) that reduce information-processing load. However, the use of heuristics does not necessarily imply more or less risk-taking behavior; only the interaction between the applied heuristic and the task architecture leads to a specific observed risk-taking behavior. As we discussed above, some heuristics lead to higher (or lower) observed risk-seeking behavior compared with more complex decision strategies, depending on the choice set. Therefore, one would not necessarily expect a specific correlation between people’s cognitive abilities and the observed risk-taking behavior across the different tasks, but instead expect some heterogeneity in the results. However, the use of specific strategies cannot explain the relationship between the observed average risk preferences in a task and the RCRT bias in the task. Thus, the link between cognitive abilities and the selected decision strategies does not imply a link between cognitive abilities and the latent risk preferences. Crucially, when examining the potential link between cognitive abilities, decision strategies, and risk preferences, it is necessary to first identify the specific strategies people apply in specific environments or task architectures (Olschewski & Rieskamp, 2021; Rieskamp, 2008; Rieskamp & Hoffrage, 1999, 2008; Rieskamp & Otto, 2006). Future work should examine the different heuristics and decision strategies to arrive at a comprehensive understanding of whether and how these strategies shape the correlation between cognitive abilities and risk preferences.

The results of this study also resonate with a recent empirical discourse on the validity of risk-preference-elicitation measures. For instance, Frey et al. (2017) and Pedroni et al. (2017) found that behavioral risk-elicitation tasks yield less stable measurements of risk preferences than self-reported measures. Importantly, the difference between behavioral and self-reported measures could disappear once measurement errors are accounted for (Andreoni & Kuhn, 2019) by applying better task architectures.

Our results also have implications for interpreting experimental results in other research domains. For example, when testing for a specific treatment effect, it appears important to control for increased decision errors, so that a potential increase in errors is not misinterpreted as a specific treatment effect. Whether such a misinterpretation is likely to occur depends on whether the task architecture has a bias, such that random choice behavior leads to a specific psychological interpretation. For instance, a potential effect of increased time pressure on people’s risk preferences could also simply be due to an increase in decision errors under high time pressure (e.g., Olschewski & Rieskamp, 2021). Likewise, the potential effect of cognitive load on people’s risk preferences, intertemporal preferences, or social preferences could also simply be due to an increase in decision errors under cognitive load manipulations (e.g., Olschewski et al., 2018). Finally, the potential effect of increased monetary incentives on people’s preferences could also be due to lower decision errors with higher monetary incentives (e.g., Holt & Laury, 2002; Smith & Walker, 1993). In general, treatment effects on preferences have been observed in intertemporal discounting (e.g., Deck & Jahedi, 2015; Ebert, 2001; Hinson et al., 2003; Joireman et al., 2008) as well as social preferences (e.g., Cappelletti et al., 2011; Halali et al., 2014; Schulz et al., 2014). Across these domains, it is important to understand how changes in decision errors affect preference measurements. Failure to do so could potentially lead to misinterpretations of observed effects.

Consequently, addressing the issue of decision errors captured by the error hypothesis is of general importance to any research in behavioral economics and psychology that aims to elicit individual preferences. There are two possible ways to address this matter. First, one can account for random errors ex ante by choosing an experimental design that controls for them. At the experimental design stage, researchers could apply a variety of measures to assess people’s preferences. In this way, they could cancel out systematic errors and minimize measurement errors and the associated biased classifications (Frey et al., 2017). For instance, Andersson et al. (2016) suggested choosing a symmetrical choice set when measuring risk preferences. However, this approach may not always be suitable for every preference-elicitation task. This leads to the second approach: accounting for errors at the data-analysis stage. For example, accounting for potential biases with an explicit structural decision-making model that includes an error theory at the data-analysis stage could be advantageous (Andersson et al., 2020). Recently, behavioral economists Gillen et al. (2015) and Andreoni and Kuhn (2019) proposed an instrumental variable approach to address this problem.
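As an illustration of the second route, here is a minimal sketch, under our own assumptions rather than the cited authors' implementations, of estimating a risk-preference parameter jointly with an explicit choice-sensitivity (error) parameter. CRRA utility and a logit (Fechner-style) choice rule are assumed, so that inconsistent choices are absorbed by the error parameter instead of distorting the preference estimate.

```python
# Sketch of a structural choice model with an explicit error theory (assumed
# functional forms and made-up data; not the implementation of the cited papers).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def crra(x, rho):
    # CRRA utility of a positive monetary outcome x; rho is risk aversion.
    return np.log(x) if abs(rho - 1.0) < 1e-8 else x ** (1.0 - rho) / (1.0 - rho)

def neg_log_lik(params, data):
    rho, temp = params[0], np.exp(params[1])      # temp > 0: choice sensitivity
    ll = 0.0
    for p, hi, lo, safe, chose_risky in data:     # binary lottery vs. a safe amount
        eu_risky = p * crra(hi, rho) + (1 - p) * crra(lo, rho)
        p_risky = expit(temp * (eu_risky - crra(safe, rho)))   # logit choice errors
        p_risky = min(max(p_risky, 1e-12), 1 - 1e-12)
        ll += np.log(p_risky if chose_risky else 1 - p_risky)
    return -ll

# Hypothetical choices: (prob. of high outcome, high, low, safe amount, chose risky?).
# The last choice reverses the first one (a safe 38 is accepted although a safe 40
# was rejected against the same lottery), so at least one of them is a decision error.
choices = [(0.5, 100, 10, 40, 1), (0.5, 100, 10, 55, 0),
           (0.8, 60, 5, 35, 1), (0.2, 200, 5, 45, 0),
           (0.5, 100, 10, 38, 0)]

fit = minimize(neg_log_lik, x0=np.array([0.5, 0.0]), args=(choices,), method="Nelder-Mead")
rho_hat, sensitivity = fit.x[0], np.exp(fit.x[1])
print(f"estimated risk aversion rho = {rho_hat:.2f}, choice sensitivity = {sensitivity:.2f}")
```

Because the sensitivity parameter carries the random-choice component, an increase in decision errors (for instance under time pressure or cognitive load) would show up as a lower sensitivity estimate rather than as a shift in rho, which is the point of modeling errors explicitly.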

It is important to note that the task architecture determines the context in which a choice option is presented. Consequently, various theories relating to context effects could also help explain why people with lower cognitive abilities are more prone to being influenced by the task architecture. For example, Andraszewicz and Rieskamp (2014) and Andraszewicz et al. (2015) demonstrated that pairs of gambles with the same differences in expected values and the same variances (i.e., risk) but various covariances (i.e., similarity) result in more unsystematic choices when the covariance between the two gambles is lower. This effect, called the covariance effect, results from the fact that pairs of gambles with low covariances are more difficult to compare with each other. Simonson and Tversky (1992) demonstrated that context effects can result from the available sample of choice options, such that outcomes may appear extreme relative to the available sample. Along the same lines, Ungemach et al. (2011) demonstrated that people’s preferential choices depend on their exposure to hypothetical choice options.

To summarize, this meta-analysis highlights the importance of accounting for the choice-set architecture, in particular its interaction with random decision errors. Our methods and results go beyond the current research scope and suggest that neglecting the effect of random decision errors at the experimental design stage or at the data-analysis stage can lead to spurious correlations and the identification of “apparently new” phenomena (Gillen et al., 2019). The findings presented in this meta-analysis offer an important contribution to the scientific communities in judgment and decision making, psychology, experimental finance, and economics. In these fields of study, measuring risk-taking propensity is particularly important. Therefore, the findings of the current meta-analysis are highly relevant to all researchers investigating risk-taking behavior using common risk-preference-elicitation methods.

A majority could well imagine undergoing psychotherapy via artificial intelligence, among other reasons because it would let them talk comfortably about embarrassing experiences

Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Mehmet Emin Aktan, Zeynep Turhan, İlknur Dolu. Computers in Human Behavior, March 29 2022, 107273. https://doi.org/10.1016/j.chb.2022.107273

Highlights

• We explored the factors behind choosing AI-based psychotherapy.

• Lower stigma and remote access were found to be key factors in preferring AI-based therapy.

• Trust in the security of personal data was lower for AI-based therapy than for human therapists.

• Participants believed that AI-based psychotherapy has a limited ability to empathize.

Abstract: The use of artificial intelligence (AI) in psychotherapy has increased in recent years. As these technologies grow, the circumstances under which artificial tools are accepted during psychotherapy need to be explored in order to build effective AI tools for the sensitive therapeutic environment. In this study, the factors surrounding preferences for AI-based psychotherapy were investigated. This cross-sectional study was conducted with a sample of 872 highly educated individuals aged 18 and above. The Attitude towards AI-based Psychotherapy scale, the Attitude towards Seeking Professional Psychological Help Scale–Short Form, and the Stigma Scale for Receiving Psychological Help were used to examine the factors behind participants’ preferences for AI-based psychotherapy. While 55% of the sample preferred AI-based psychotherapy, the majority of participants trusted human psychotherapists more than AI-based systems with the security of their personal data. However, three important benefits of AI-based psychotherapy were identified: being able to comfortably talk about embarrassing experiences, accessibility at any time, and remote communication. Importantly, preferences for AI-based psychotherapy were related to the idea that AI-based psychotherapy systems can improve themselves based on the results of previous therapeutic experiences. Gender and having a profession related to psychology or technology/engineering were also associated with choosing AI-based psychotherapy. The results suggest that raising awareness of the benefits and effectiveness of psychotherapy, as well as building trust in artificial intelligence tools, can improve the rate of preference for AI-based psychotherapy.

Keywords: Artificial intelligence; Psychotherapy; Accessibility; Help-seeking behavior; Stigma


Serial Sexual Murderers: Criminal paraphilia developed to reinforce positive emotions from sexual fantasies and helped to create a sense of intimacy to avoid being rejected

The Role of Child and Adult Sexual Fantasies and Criminal Paraphilia Involving Serial Sexual Murderers. Heather Brown. Walden University, PhD Dissertation. Mar 2022. https://www.proquest.com/openview/9bca3a21831039db7a1412f44af51536/1?pq-origsite=gscholar&cbl=18750&diss=y

Abstract: Childhood trauma may be a reason a child develops maladaptive coping mechanisms such as sexual fantasies and paraphilia. These coping mechanisms increase in intensity, leading to sexual violence to gain a sense of power and control. Even though researchers have identified that serial sexual killers suffer from child and adult sexual fantasies and criminal paraphilia, details of the sexual fantasies and paraphilia have not been examined. The purpose of this qualitative exploratory case study was to explore the role of child and adult sexual fantasies and criminal paraphilia involving serial sexual murderers. Hickey’s trauma-control model and relational paraphilic attachment theory were used as the theoretical foundations. Data were collected from 12 U.S. male participants identified as serial sexual murderers. Four themes were identified from the thematic analysis and were linked to all 12 case participants. Findings indicated child and adult sexual fantasies began as a maladaptive coping mechanism to avoid feeling abandoned, which escalated to ways of feeling control and revenge. Criminal paraphilia developed to reinforce positive emotions from sexual fantasies and helped to create a sense of intimacy to avoid being rejected. Findings may assist law enforcement, school staff, and mental health professionals to promote positive social change by preventing future risk for behaviors that lead to and are incorporated into the sexual murders committed by serial killers.


Men and women invested equally in improving their appearance if exercising and bodybuilding were included

Sex Differences in Physical Attractiveness Investments: Overlooked Side of Masculinity. Marta Kowal. Int. J. Environ. Res. Public Health 2022, 19(7), 3842; Mar 24 2022. https://doi.org/10.3390/ijerph19073842

Abstract

Background: Public opinion on who performs more beauty-enhancing behaviors (men or women) seems unanimous. Women are often depicted as primarily interested in how they look, as opposed to men, who are presumably less focused on their appearance. However, previous studies might have overlooked how masculinity relates to self-modification among men. Methods: We explored this issue in depth by conducting a qualitative Study 1 aimed to establish how men and women enhance their attractiveness (N = 121) and a quantitative Study 2 aimed to test time spent on activities that increase one’s attractiveness in a longitudinal design (with seven repeated measures from 62 participants; N(total) = 367). Results: We observed no sex differences in beauty investments. Although women spent more time on make-up and cosmetics usage, men caught up with women in exercising and bodybuilding. Conclusion: Our study provides evidence that there may not be such wide sex differences in the intensity of enhancing one’s appearance as has been previously thought. We hypothesize that this might partly stem from changes in gender roles regarding masculinity.

Keywords: gender; diary study; enhancing beauty; self-modification; sex comparison