Friday, November 29, 2019

Neuroticism (negative), extraversion, agreeableness, and to a lesser extent conscientiousness predicted wellbeing; the hypothesis that self-enhancement is beneficial for wellbeing is doubtful

An integrated model of social psychological and personality psychological perspectives on personality and wellbeing. Ulrich Schimmack, Hyunji Kim. Journal of Research in Personality, Volume 84, February 2020, 103888. https://doi.org/10.1016/j.jrp.2019.103888

Highlights
•    Largest sample size for multi-method studies of self-enhancement.
•    No support for benefits of positive illusions on wellbeing.
•    Multi-method evidence that personality influences well-being.

Abstract: This article uses multi-rater data from 458 triads (students, mother, father, total N = 1374) to examine the relationship of personality ratings with wellbeing ratings, using a multi-method approach to separate accurate perceptions (shared across raters) from biased perceptions of the self (rater-specific variance). The social-psychological perspective predicts effects of halo bias in self-ratings on wellbeing, whereas the personality-psychological perspective predicts effects of personality traits on wellbeing. Results are more consistent with the personality perspective in that neuroticism (negative), extraversion, agreeableness, and to a lesser extent conscientiousness predicted wellbeing, whereas positive illusions about the self were only weakly and not significantly related to wellbeing. These results cast doubt on the hypothesis that self-enhancement is beneficial for wellbeing.

4. Discussion

The main contribution of this article was to examine wellbeing from an integrated personality and social psychological perspective. While personality psychologists have focused on the contribution of actual traits, social psychologists have focused on biases in self-perceptions of traits. Multi-method measurement models were used to separate valid trait variance from illusory perceptions of personality in self-ratings and ratings of other family members. The results show that actual personality traits are more important for wellbeing than positive biases in self-perceptions. In fact, the most important finding was that positive illusions about the self were unrelated to wellbeing impressions that are shared across informants. This finding challenges Taylor and Brown's (1988) influential and highly controversial claim that positive illusions not only foster higher wellbeing, but are a sign of optimal and normal functioning. In the following sections, we discuss the implications of our findings for the future of wellbeing science and for individuals’ pursuit of wellbeing.

4.1. Positive illusions and public wellbeing

The social psychological perspective on wellbeing is grounded in the basic assumption that human information processing is riddled with errors. Taylor and Brown (1988) quote Fiske and Taylor's (1984) book on social cognition to support this assumption: “Instead of a naïve scientist entering the environment in search of the truth, we find the rather unflattering picture of a charlatan trying to make the data come out in a manner most advantageous to his or her already-held theories” (p. 88). Thirty years later, it has become apparent that human information processing is more accurate than Fiske and Taylor (1984) assumed (Funder, 1995; Jussim, 1991; McCrae and Costa, 1991; Schimmack and Oishi, 2005). Thus, Taylor and Brown's (1988) model of wellbeing is based on outdated evidence and needs to be revised.
The vast majority of studies have relied on self-ratings of wellbeing to measure the benefits of positive illusions. This is problematic because self-ratings of wellbeing can be inflated by the very same processes that inflate self-ratings of personality (Humberg et al., 2019). There have been only a handful of studies with valid illusion measures and informant ratings of wellbeing, and these studies have found similarly weak results (Dufner et al., 2019).
The lack of evidence for benefits of positive illusions is not for a lack of trying. Taylor, Lerner, Sherman, Sage, and McDowell (2003) claimed that effects of positive illusions are not limited to self-ratings: “We conducted a study with multiple measures of self-enhancement along with multiple measures and judges of mental health, comprehensively assessing their relationship. The results indicated that self-enhancement is positively associated with multiple indicators of mental health” (p. 165). Contrary to this claim, their Table 5 shows correlations of various self-enhancement measures with peer-rated mental health ranging from r = −0.13 to 0.09. None of these correlations were significant, in part due to the low statistical power of the study (N = 55). Thus, even Taylor and colleagues never provided positive evidence that positive illusions increase wellbeing in ways that can be measured with a method other than self-reports. The social cognitive model of wellbeing also faces other problems. One problem is causality. Even if there were a small correlation between positive illusions about the self and wellbeing, it is not clear that it is causal. It is equally plausible that happiness distorts self-perceptions. Thirty years of research have failed to address this problem (cf. Humberg et al., 2019). Another problem is that third variables may produce a spurious correlation between illusions about the self and wellbeing. For example, relationship researchers have shown that illusions about a partner predict relationship satisfaction (see Weidmann, Ledermann, & Grob, 2016, for a review), and Kim et al. (2012) showed that individuals with positive illusions about the self also tend to have positive illusions about others. Thus, it is possible that positive illusions about others, not the self, are beneficial for social relationships and wellbeing. Future research needs to include measures of positive illusions about the self and others to examine this question. Given these problems, we question broad conclusions about the benefits of positive illusions for wellbeing (Dufner et al., 2019; Humberg et al., 2019).

4.2. Positive illusions and private wellbeing

The present study replicated the finding that positive illusions predict unique variance in self-ratings of wellbeing. That is, individuals who claim to be more extraverted and more agreeable than others perceive them also claim to be happier than others perceive them to be (Dufner et al., 2019; Humberg et al., 2019; Taylor et al., 2003). As noted in the introduction, there are two possible explanations for this finding. One explanation is that positive illusions enhance wellbeing in a way that is not observable to others. The challenge for this model is to explain how positive illusions foster private wellbeing and to provide empirical evidence for this model. To explain why informants are unable to see the happiness of individuals with positive illusions, we have to assume that the illusion-based happiness is not visible to others. This requires a careful examination of the variance in self-ratings of wellbeing that is not shared with informants (Schneider & Schimmack, 2010).
The private-wellbeing illusion model also faces an interesting contradiction in assumptions about the validity of personality and wellbeing judgments. To allow for effects of positive illusions on private wellbeing, the model assumes that people have illusions about their personality, while their self-ratings of wellbeing are highly accurate and trustworthy. In contrast, social psychologists have argued that wellbeing judgments are highly sensitive to context effects and provide little valid information about individuals’ wellbeing (Schwarz & Strack, 1999). Personality psychologists, by comparison, have pointed to self-informant agreement in wellbeing judgments as evidence for the validity of self-ratings of wellbeing. If informant ratings validate self-ratings, then we would expect predictors of wellbeing also to be related to self-ratings of wellbeing and to informant ratings of wellbeing. Our main contribution is to show that this is not the case for positive illusions, or at least, that the effect size is small. No single study can resolve deep philosophical questions, but our study suggests that hundreds of studies that relied on self-ratings of wellbeing to demonstrate the benefits of positive illusions may have produced illusory evidence of these benefits.

4.3. Positive illusions as halo bias

Evidence for halo biases in personality ratings is nearly 100 years old (Thorndike, 1920). Ironically, some of the strongest evidence for the pervasiveness of halo biases stems from social psychology (Nisbett & Wilson, 1977). Given the evidence that halo biases in ratings are pervasive, halo bias provides a simple and parsimonious explanation for the finding that positive illusions are only related to the unique variance in self-ratings and not to informant ratings of wellbeing. One explanation for halo bias is that many trait concepts have a denotative and a connotative (evaluative) meaning (Osgood, Suci, & Tannenbaum, 1957). While denotative meaning and valid information produce agreement between raters, ratings are also biased by the connotative meaning of words and liking of a target. For example, lazy has a denotative meaning of not putting a lot of effort into tasks and a negative connotation. Ratings of laziness will be enhanced by dislike and attenuated by liking of an individual independent of the objective effort targets exert (Leising, Erbs, & Fritz, 2010). It seems plausible that halo bias also influences ratings of desirable attributes like happiness and having a good life. Thus, halo bias offers a plausible explanation for our results that is also consistent with heuristic and bias models in social psychology.

4.4. Personality and wellbeing

The present study provided new evidence on the relationship between personality and wellbeing from a multi-rater perspective. Results confirmed that neuroticism is the strongest predictor of wellbeing and that its influence on wellbeing is mediated by hedonic balance. This finding is consistent with the hypothesis that neuroticism is a broad disposition to experience more unpleasant mood states (Costa and McCrae, 1980; Schimmack, Radhakrishnan, Oishi et al., 2002; Watson and Tellegen, 1985). As experiencing unpleasant mood is undesirable, it lowers wellbeing independent of actual life circumstances. Twin studies suggest that individual differences in neuroticism are partially heritable and that the genetic variance in neuroticism accounts for a considerable portion of the shared variance between neuroticism and wellbeing (Nes et al., 2013).
In comparison, the other personality traits explain relatively small amounts of variance in wellbeing. While the effects of extraversion and agreeableness were also mediated by hedonic balance, the results for conscientiousness suggested a unique influence on life evaluations. Future research needs to go beyond demonstrating effects of the Big Five on wellbeing and start to investigate the causal processes that link personality to wellbeing. McCrae and Costa (1991) proposed that agreeableness is beneficial for more harmonious social relationships, while conscientiousness is beneficial for work, but there have been few attempts to test these predictions. One way to test potential mediators is to use integrated top-down bottom-up models with domain satisfactions as mediators (Brief et al., 1993; Schimmack, Diener, Oishi, 2002). It is important to use multi-method measurement models to separate top-down effects from halo bias (Schneider & Schimmack, 2010). It is also important to examine the relationship of personality and wellbeing with a more detailed assessment of personality traits. While the Big Five have the advantage of covering a broad range of personality traits with a few, largely orthogonal dimensions, the disadvantage is that they cannot represent all of the variation in personality. Some studies showed that the depression facet of neuroticism and the cheerfulness facet of extraversion explain additional variance in wellbeing (Allik et al., 2018; Schimmack et al., 2004). More research with narrow personality traits is needed to specify the precise personality traits that are related to wellbeing.

Pseudo-profound bullshit titles make the art grow profounder

Bullshit makes the art grow profounder. Martin Harry Turpin et al. Judgment and Decision Making, Vol. 14, No. 6, November 2019, pp. 658-670. http://journal.sjdm.org/19/190712/jdm190712.html

Abstract: Across four studies participants (N = 818) rated the profoundness of abstract art images accompanied with varying categories of titles, including: pseudo-profound bullshit titles (e.g., The Deaf Echo), mundane titles (e.g., Canvas 8), and no titles. Randomly generated pseudo-profound bullshit titles increased the perceived profoundness of computer-generated abstract art, compared to when no titles were present (Study 1). Mundane titles did not enhance the perception of profoundness, indicating that pseudo-profound bullshit titles specifically (as opposed to titles in general) enhance the perceived profoundness of abstract art (Study 2). Furthermore, these effects generalize to artist-created abstract art (Study 3). Finally, we report a large correlation between profoundness ratings for pseudo-profound bullshit and “International Art English” statements (Study 4), a mode and style of communication commonly employed by artists to discuss their work. This correlation suggests that these two independently developed communicative modes share underlying cognitive mechanisms in their interpretations. We discuss the potential for these results to be integrated into a larger, new theoretical framework of bullshit as a low-cost strategy for gaining advantages in prestige awarding domains.

Keywords: pseudo-profound bullshit, impression management, abstract art, meaning, social navigation

The complex relation between receptivity to pseudo-profound bullshit and political ideology. Nilsson, Artur; Erlandsson, Arvid; and Västfjäll, Daniel (2018). Personality and Social Psychology Bulletin, Jan 2019. https://www.bipartisanalliance.com/2019/01/bullshit-receptivity-robustly.html

Check also the first author's MA Thesis... Bullshit Makes the Art Grow Profounder: Evidence for False Meaning Transfer Across Domains. Martin Harry Turpin. MA Thesis, Waterloo Univ., Ontario. https://www.bipartisanalliance.com/2018/10/pairing-abstract-art-pieces-with.html

And Bullshit-sensitivity predicts prosocial behavior. Arvid Erlandsson et al. PLOS, https://www.bipartisanalliance.com/2018/08/bullshit-receptivity-perceived.html

Non-believers: Reflection increases belief in God through self-questioning

Reflection increases belief in God through self-questioning among non-believers. Onurcan Yilmaz, Ozan Isler. Judgment and Decision Making, Vol. 14, No. 6, November 2019, pp. 649-657. http://journal.sjdm.org/19/190605/jdm190605.html

The dual-process model of the mind predicts that religious belief will be stronger for intuitive decisions, whereas reflective thinking will lead to religious disbelief (i.e., the intuitive religious belief hypothesis). While early research found intuition to promote and reflection to weaken belief in God, more recent attempts found no evidence for the intuitive religious belief hypothesis. Many of the previous studies are underpowered to detect small effects, and it is not clear whether the cognitive process manipulations used in these failed attempts worked as intended. We investigated the influence of intuitive and reflective thought on belief in God in two large-scale preregistered experiments (N = 1,602), using well-established cognitive manipulations (i.e., time-pressure with incentives for compliance) and alternative elicitation methods (between and within-subject designs). Against our initial hypothesis based on the literature, the experiments provide first suggestive then confirmatory evidence for the reflective religious belief hypothesis. Exploratory examination of the data suggests that reflection increases doubts about beliefs held regarding God’s existence. Reflective doubt exists primarily among non-believers, resulting in an overall increase in belief in God when deciding reflectively.

Keywords: reflection, intuition, analytic cognitive style, belief, belief in God or gods


4  Discussion

In both experiments, we found that reflection increases belief in God and that the effect is stronger among non-believers. Exploratory analysis suggested that the overall increase in religious belief is likely due to the religious self-questioning (i.e., reflective doubt) of non-believers, who tended to revise their responses on the scale towards the middle point (i.e., “not sure”). The results also showed that those who make greater use of their reflective capacities (as measured by CRT-2) are less likely to endorse belief in God or gods. These results provide evidence against the hypothesis that intuition fosters and reflection dampens religious belief (Gervais & Norenzayan, 2012; Shenhav et al., 2012; Yilmaz et al., 2016), but they converge with longstanding correlational results demonstrating that a tendency for reflective thinking is negatively associated with religious belief (e.g., Bahçekapili & Yilmaz, 2017; Gervais et al., 2018; Pennycook et al., 2016; Stagnaro et al., 2018; Stagnaro, Ross, Pennycook & Rand, 2019).
Why does reflection increase belief in God in the current research? Our exploratory analysis strongly suggests that reflection, rather than directly increasing belief in God, increases doubt about one’s initial and intuitively held belief regarding God’s existence. It is likely that reflection increased religious belief in our overall sample because religious self-questioning is stronger among non-believers than among believers. On the other hand, we show that endorsement of agnosticism, deism, and polytheism is associated with both increases and decreases in belief in God, which may drive reflective doubt. Future research should try to experimentally distinguish this reflective religious doubt hypothesis, implicated by our exploratory analysis, from the reflective religious belief hypothesis. Nevertheless, we expect the effect of reflection on religious belief to be small because the belief in God question, as regularly used in the literature, will tend to probe stable opinions. Having answered the same question numerous times over the course of one’s life, participants are likely to know, as a defining characteristic of their personal identity, whether and to what extent they believe in God.
We also hypothesized, but found no strong evidence, that Pascal’s Wager may motivate religious belief. On this account, reflective evaluation of the possibility of God’s existence could highlight the potentially infinite benefits of belief and costs of disbelief, thereby calling religious disbelief into question through a rational utility calculus. Although plausible, the tendency in our sample to agree with Pascal’s Wager did not clearly explain the reflective change in religious belief. However, our test was limited by the fact that religious believers (i.e., those with already high levels of belief) agreed with the Wager more than non-believers, as well as by the fact that there were fewer atheists and agnostics in our sample.
An alternative explanation of the positive effect of reflection on religious belief may be that reflection makes people less extreme in their beliefs in general (i.e., religious and non-religious) but that openness to such self-criticism may be stronger among non-believers, since they also tend to be reflective thinkers (Pennycook et al., 2016). Comparing religious and secular belief change among non-believers could therefore help explain our main finding. Likewise, Pascal’s Wager can be tested using improved methods, for example, by studying the effect of Pascal’s argument as an experimental manipulation. Finally, the two-stage procedure used in Experiment 2 was more informative for studying religious belief change than the standard between-subject design of Experiment 1. The two-stage technique can be used in future studies of cooperation and morality in order to dissociate dual cognitive processes.
We also suggest that these experimental manipulations might have more influence on less stable beliefs or on those who are less confident about the existence of God. A similar distinction has been made in the field of political psychology (Talhelm, 2018; Talhelm et al., 2015; Yilmaz & Saribay, 2016, 2017). Activating reflective thinking did not have an impact on political opinions when they were measured by standard scale items based on identity labels (e.g., liberal or conservative), but it led to a significant change in less stable contextualized opinions (e.g., forming opinions about a newspaper article; Yilmaz & Saribay, 2017). A similar distinction can be made in the field of cognitive science of religion. For example, while belief in God, reflecting relatively stable opinions, may be more resistant to cognitive process manipulations, the relative reliance on natural vs. supernatural explanations for an uncertain event (e.g., the disappearance of airplanes in the Bermuda Triangle) may be more open to the influence of intuitive and reflective thinking. This possibility should be examined in future research.
A surprising contrast emerges from our data: the positive causal effect of reflection on belief in God vs. the negative correlation between individual tendency for reflective thinking and religious belief. While it is not clear why experimental and correlational tests lead to different conclusions, one may conjecture that the two approaches capture separate psychological mechanisms occurring across distinct time-frames. In particular, correlational measures may reflect self-selection of intuitively inclined people into religious belief (a long-term process of identity formation), while promoting reflection may isolate the possibly short-term effects of questioning one’s own, already established beliefs. While correlational findings are prevalent in the literature, there is a need for more experimental research on this topic. In particular, the generalizability of our results across cultures (e.g., using multi-lab experiments) is an open question.
In sum, recent failures to support the intuitive religious belief hypothesis suggested that the early evidence supporting the hypothesis is not easily reproducible. Using stronger manipulations and two large-scale experiments, we found that the effect of reflection and intuition on belief in God is in fact the opposite of what the intuitive belief hypothesis predicts. Our results suggest that reflection on God’s existence may promote religious self-questioning, especially among non-believers.

Wronging past rights: The sunk cost bias distorts moral judgment

Wronging past rights: The sunk cost bias distorts moral judgment. Ethan A. Meyers et al. Judgment and Decision Making, Vol. 14, No. 6, November 2019, pp. 721-727. http://journal.sjdm.org/19/190909b/jdm190909b.html

When people have invested resources into an endeavor, they typically persist in it, even when it becomes obvious that it will fail. Here we show this bias extends to people’s moral decision-making. Across two preregistered experiments (N = 1592) we show that people are more willing to proceed with a futile, immoral action when costs have been sunk (Experiment 1A and 1B). Moreover, we show that sunk costs distort people’s perception of morality by increasing how acceptable they find actions that have received past investment (Experiment 2). We find these results in contexts where continuing would lead to no obvious benefit and only further harm. We also find initial evidence that the bias has a larger impact on judgment in immoral compared to non-moral contexts. Our findings illustrate a novel way that the past can affect moral judgment. Implications for rational moral judgment and models of moral cognition are discussed.

Keywords: sunk costs, morality, decision-making, judgment, open data, open materials, preregistered

4  General Discussion

We found that the sunk cost bias extends to moral judgments. When costs were sunk, participants were more willing to proceed with a futile, immoral action compared to when costs were not sunk. For example, they were more willing to sacrifice monkeys to develop a medical cure when some monkeys had already been sacrificed than when none had been. Moreover, people judged these actions as more acceptable when costs were sunk. Importantly, these effects occurred even though the benefit of the proposed immoral action was eliminated.
Our findings illustrate a novel way that the past can impact moral judgment. Moral research conducted to date has focused extensively on future consequences (e.g., Baez et al., 2017; Miller & Cushman, 2013). Although this makes normative sense, as only the future should be relevant to decisions, it is well known that choice is affected by irrelevant factors like past investment (Kahneman, 2011; Kahneman, Slovic & Tversky, 1982; Szaszi, Palinkas, Palfi, Szollosi & Aczel, 2018; Tversky & Kahneman, 1974). As such, our findings show that, as is true with other (non-moral) judgments, people’s moral judgments are affected by factors that rational agents “should” ignore when making them.
Further, our findings show that a major decision bias (i.e., the sunk cost effect) extends to moral judgment. This finding is broadly consistent with research showing that moral judgments are affected by such biases. This earlier work shows that when making moral judgments, people are sensitive to how options are framed (e.g., Shenhav & Greene, 2010) and prefer acts of omission over commission (e.g., Bostyn & Roets, 2016). For example, people make different moral judgments when the decision is presented in a gain frame than when it is presented in a loss frame, even though these two decisions are logically identical (Kern & Chugh, 2009). Likewise, people judge lying to the police about who is at fault in a car accident (a harmful commission), to be more immoral than not informing the police precisely who is at fault (a harmful omission) (Spranca, Minsk & Baron, 1991). However, unlike most of these previous demonstrations, our findings directly compare the presence of decision-making biases across moral and non-moral contexts (also see Cushman & Young, 2011).
In our first experiment, we also found that the sunk cost bias may be stronger in moral decision-making than in other situations. This is surprising. In non-moral cases proceeding with a futile course of action is wasteful. But in our moral version of the scenarios, proceeding is wasteful, harmful to others, and morally wrong. Yet, there was a greater discrepancy between willingness to act in response to sunk costs in the immoral condition. Increasing the reasons to not proceed with the action amplified the sunk cost bias. One potential explanation for this is that people are unwilling to admit their prior investments were in vain (Brockner, 1992). People succumb to the sunk cost bias in part because they feel a need to justify their past decisions as correct (Ku, 2008; also see Staw, 1976). Likewise, moral judgments seem to generate a much greater need to provide reasons to justify past decisions (Haidt, 2012). Thus, those making decisions in an immoral context might have additional pressures to justify their previous choice that stem from the nature of moral judgment itself.
Another explanation is that the initial investment was of a larger magnitude in the immoral than in the non-moral condition. In both cases, participants incurred an economic cost, but only in one did participants incur an additional moral cost. People are more likely to succumb to the sunk cost bias when initial investments are large (Arkes & Ayton, 1999; Arkes & Blumer, 1985; Sweis et al., 2018). Perhaps sunk costs exerted a greater effect in the immoral condition because the past investments were greater (i.e., of two kinds: economic and moral, rather than just one: economic). However, as we do not know whether the economic resources (e.g., pine trees and lab monkeys) were of comparable value, the discrepancy between moral conditions may stem entirely from the lab monkeys being valued more highly and thus constituting a larger investment. Thus, we are hesitant to draw any strong conclusion from this finding: the difference in sunk cost magnitude could simply stem from differences in financial costs between the immoral and non-moral contexts.
Our finding that moral violations led to increased willingness to act is reminiscent of the “what the hell” effect, in which people who violate their diet then give up on it and continue to overindulge (Cochran & Tesser, 1996; Polivy, Herman & Deo, 2010). We see this as similar to persisting in an immoral course of action after costs have been sunk. After engaging in a morally equivocal act, people may feel disinhibited and willing to continue the act even when its immorality becomes clear. Likewise, people may persist in an attempt to maintain the status quo (Kahneman, Knetsch & Thaler, 1991; Samuelson & Zeckhauser, 1988). These accounts, though, may not explain why sunk costs changed people’s moral perceptions. One possibility is that this resulted from cognitive dissonance between people’s actions and their moral code (Aronson, 1969; Festinger, 1957; Harmon-Jones & Mills, 1999). For example, sacrificing monkeys to develop a cure may cause dissonance between not wanting to harm but having done so. To resolve this, people might change their moral perceptions, molding their moral code to fit their behavior.
We close by considering a broader implication of this work. The extension of decision biases to moral judgment has been previously construed as supporting domain-general accounts of morality that suggest moral judgment operates similarly to ordinary judgment (Osman & Wiegmann, 2017; Greene, 2015). This is because if morality is not unique, one could reasonably expect that a factor that affects ordinary judgment would likewise affect moral judgment. Thus, if information irrelevant to the decision at hand (e.g., past investments) influences whether we continue to bulldoze land to build a highway, so too should it influence the same bulldoze decision that requires confiscating the land. This is not conclusive however, and our findings could be interpreted to support domain-specific accounts instead (e.g., Mikhail, 2011). For instance, the sunk cost bias was demonstrably larger in moral judgments. Nevertheless, an interpretation of our results as evidence for a domain-general account of morality must explain how the varying effect of past investment on judgment is a difference in degree but not kind.

Thursday, November 28, 2019

Switzerland: Those exposed to civil conflict/mass killing during childhood are 35 pct more prone to violent crime; effect is mostly confined to co-nationals, consistent with inter-group hostility persisting over time

Couttenier, Mathieu, Veronica Petrencu, Dominic Rohner, and Mathias Thoenig. 2019. "The Violent Legacy of Conflict: Evidence on Asylum Seekers, Crime, and Public Policy in Switzerland." American Economic Review, 109 (12): 4378-4425. DOI: 10.1257/aer.20170263

Abstract: We study empirically how past exposure to conflict in origin countries makes migrants more violence-prone in their host country, focusing on asylum seekers in Switzerland. We exploit a novel and unique dataset on all crimes reported in Switzerland by the nationalities of perpetrators and of victims over 2009–2016. Our baseline result is that cohorts exposed to civil conflict/mass killing during childhood are 35 percent more prone to violent crime than the average cohort. This effect is particularly strong for early childhood exposure and is mostly confined to co-nationals, consistent with inter-group hostility persisting over time. We exploit cross-region heterogeneity in public policies within Switzerland to document which integration policies are best able to mitigate the detrimental effect of past conflict exposure on violent criminality. We find that offering labor market access to asylum seekers eliminates two-thirds of the effect.

Could they be lying?: Vegetarian women reported that they are more prosocially motivated to follow their diet & adhere to their diet more strictly (i.e., are less likely to cheat & eat meat)

Gender Differences in Vegetarian Identity: How Men and Women Construe Meatless Dieting. Daniel L. Rosenfeld. Food Quality and Preference, November 28 2019, 103859. https://doi.org/10.1016/j.foodqual.2019.103859

Highlights
• This research evaluated psychological differences between vegetarian men and women.
• Women are more prosocially motivated to follow a vegetarian diet than men are.
• Women adhere to their vegetarian diet more strictly than men do.

Abstract: Meat is deeply associated with masculine identity. As such, it is unsurprising that women are more likely than men are to become vegetarian. Given the gendered nature of vegetarianism, might men and women who become vegetarian express distinct identities around their diets? Through two highly powered preregistered studies (Ns = 890 and 1,775) of self-identified vegetarians, combining both frequentist and Bayesian approaches, I found that men and women differ along two dimensions of vegetarian identity: (1) dietary motivation and (2) dietary adherence. Compared to vegetarian men, vegetarian women reported that they are more prosocially motivated to follow their diet and adhere to their diet more strictly (i.e., are less likely to cheat and eat meat). By considering differences in how men and women construe vegetarian dieting, investigators can generate deeper insights into the gendered nature of eating behavior.

Keywords: vegetarianism, food choice, dieting, gender, identity


About lies and prosociality in women, nonreligion is socially risky, atheism is more socially risky than other forms of nonreligion, & women and members of other marginalized groups avoid the most socially risky forms of nonreligion: From Existential to Social Understandings of Risk: Examining Gender Differences in Nonreligion. Penny Edgell, Jacqui Frost, Evan Stewart. Social Currents, Dec 2018. https://www.bipartisanalliance.com/2018/12/nonreligion-is-socially-risky-atheism.html

Check also Taste and health concerns trump anticipated stigma as barriers to vegetarianism. Daniel L. Rosenfeld, A. Janet Tomiyama. Appetite, Volume 144, January 1 2020, 104469. https://www.bipartisanalliance.com/2019/09/vegetarian-diets-may-be-perceived-as.html

And Relationships between Vegetarian Dietary Habits and Daily Well-Being. John B. Nezlek, Catherine A. Forestell & David B. Newman. Ecology of Food and Nutrition, https://www.bipartisanalliance.com/2018/10/vegetarians-reported-lower-self-esteem.html

And Psychology of Men & Masculinity: Eating meat makes you sexy / Conformity to dietary gender norms and attractiveness. Timeo, S., & Suitner, C. (2018). Eating meat makes you sexy: Conformity to dietary gender norms and attractiveness. Psychology of Men & Masculinity, 19(3), 418-429. https://www.bipartisanalliance.com/2018/06/psychology-of-men-masculinity-eating.html


Great interest exists in identifying methods to predict neuropsychiatric disease states and treatment outcomes from high-dimensional data, including neuroimaging and genomics data; best practices are discussed

Establishment of Best Practices for Evidence for Prediction: A Review. Russell A. Poldrack, Grace Huckins, Gael Varoquaux. JAMA Psychiatry, November 27, 2019. doi:https://doi.org/10.1001/jamapsychiatry.2019.3671

Abstract
Importance  Great interest exists in identifying methods to predict neuropsychiatric disease states and treatment outcomes from high-dimensional data, including neuroimaging and genomics data. The goal of this review is to highlight several potential problems that can arise in studies that aim to establish prediction.

Observations  A number of neuroimaging studies have claimed to establish prediction while establishing only correlation, which is an inappropriate use of the statistical meaning of prediction. Statistical associations do not necessarily imply the ability to make predictions in a generalized manner; establishing evidence for prediction thus requires testing of the model on data separate from those used to estimate the model’s parameters. This article discusses various measures of predictive performance and the limitations of some commonly used measures, with a focus on the importance of using multiple measures when assessing performance. For classification, the area under the receiver operating characteristic curve is an appropriate measure; for regression analysis, correlation should be avoided, and median absolute error is preferred.

Conclusions and Relevance  To ensure accurate estimates of predictive validity, the recommended best practices for predictive modeling include the following: (1) in-sample model fit indices should not be reported as evidence for predictive accuracy, (2) the cross-validation procedure should encompass all operations applied to the data, (3) prediction analyses should not be performed with samples smaller than several hundred observations, (4) multiple measures of prediction accuracy should be examined and reported, (5) the coefficient of determination should be computed using the sums of squares formulation and not the correlation coefficient, and (6) k-fold cross-validation rather than leave-one-out cross-validation should be used.

---
Excerpts (full paper, references, etc., at the DOI above):

Introduction

The development of biomarkers for disease is attracting increasing interest in many domains of biomedicine. Interest is particularly high in neuropsychiatry owing to the current lack of biologically validated diagnostic or therapeutic measures.1 An essential aspect of biomarker development is demonstration that a putative marker is predictive of relevant behavioral outcomes,2 disease prognosis,3 or therapeutic outcomes.4 As the size and complexity of data sets have increased (as in neuroimaging and genomics studies), it has become increasingly common that predictive analyses have been performed using methods from the field of machine learning, with techniques that are purpose-built for generating accurate predictions on new data sets.

Despite the potential utility of prediction-based research, its successful application in neuropsychiatry—and medicine more generally—remains challenging. In this article, we review a number of challenges in establishing evidence for prediction, with the goal of providing simple recommendations to avoid common errors. Although most of these challenges are well known within the machine learning and statistics communities, awareness is less widespread among research practitioners.

We begin by outlining the meaning of the concept of prediction from the standpoint of machine learning. We highlight the fact that predictive accuracy cannot be established by using the same data both to fit and test the model, which our literature review found to be a common error in published claims of prediction. We then turn to the question of how accuracy should be quantified for categorical and continuous outcome measures. We outline the ways in which naive use of particular predictive accuracy measures and cross-validation methods can lead to biased estimates of predictive accuracy. We conclude with a set of best practices to establish valid claims of successful prediction.

Code to reproduce all simulations and figures is available at https://github.com/poldrack/PredictionCV.

Association vs Prediction

A claim of prediction is ultimately judged by its ability to generalize to new situations; the term implies that it is possible to successfully predict outcomes in data sets other than the one used to generate the claim. When a statistical model is applied to data, the goodness of fit of that model to those data will in part reflect the underlying data-generating mechanism, which should generalize to new data sets sampled from the same population, but it will also include a contribution from noise (ie, unexplained variation or randomness) that is specific to the particular sample.5 For this reason, a model will usually fit better to the sample used to estimate it than it will to a new sample, a phenomenon known in machine learning as overfitting and in statistics as shrinkage.

Because of overfitting, it is not possible to draw useful estimates of predictive accuracy simply from a model’s goodness of fit to a data set; such estimates will necessarily be inflated, and their degree of optimism will depend on many factors, including the complexity of the statistical model and the size of the data set. The fit of a model to a specific data set can be improved by increasing the number of parameters in the model; any data set can be fit with 0 error if the model has as many parameters as data points. However, as the model becomes more complex than the process that generates the data, the fit of the model starts to reflect the specific noise values in the data set. A sign of overfitting is that the model fits well to the specific data set used to estimate the model but fits poorly to new data sets sampled from the same population. Figure 1 presents a simulated example, in which increasing model complexity results in decreased error for the data used to fit the model, but the fit to new data becomes increasingly poor as the model grows more complex than the true data-generating process.
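The pattern in Figure 1 is easy to reproduce. The following minimal sketch in Python with numpy (an illustration, not the authors' own simulation code, which is linked above) fits polynomials of increasing degree to a small noisy sample drawn from a quadratic process and evaluates each fit on fresh data from the same population; in-sample error keeps falling as the model grows more complex, while error on new data eventually rises:

import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=1.0):
    # True data-generating process: a quadratic trend plus noise.
    x = rng.uniform(-2, 2, n)
    y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, noise, n)
    return x, y

x_train, y_train = make_data(20)        # small sample used to fit the models
x_test, y_test = make_data(1000)        # "new data" from the same population

for degree in (1, 2, 5, 9, 12):
    coefs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: in-sample MSE {mse_train:5.2f}, new-sample MSE {mse_test:7.2f}")

Here the in-sample error always improves with complexity, but the new-sample error is best near the true degree (2) and degrades for the overly flexible fits.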

Because we do not generally have a separate test data set to assess generalization performance, the standard approach in machine learning to address overfitting is to assess model fit via cross-validation, a process that uses subsets of the data to iteratively train and test the predictive performance of the model. The simplest form of cross-validation is known as leave-one-out, in which the model is successively fit on every data point but 1 and is then tested on that left-out point. A more general cross-validation approach is known as k-fold cross-validation, in which the data are split into k different subsets, or folds. The model is successively trained on every subset but 1 and is then tested on the held-out subset. Cross-validation can also help discover the model that will provide the best predictive performance on a new sample (Figure 1).
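As a minimal sketch of the two schemes just described, assuming scikit-learn is available and using an arbitrary synthetic data set and classifier:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score

X, y = make_classification(n_samples=100, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# Leave-one-out: n folds, each holding out a single observation.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())

# 5-fold: the data are split into 5 subsets, each held out once for testing.
kfold_scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

print("leave-one-out accuracy:", loo_scores.mean())
print("5-fold accuracy:       ", kfold_scores.mean())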

One might ask how badly inflated the in-sample association is as an estimate of out-of-sample prediction; if the inflation is small, or occurs only with complex models, then perhaps it can be ignored for practical purposes. Figure 2 shows an example of how the optimism of in-sample fits depends on the complexity of the statistical model; in this case, we use a simple linear model but vary the number of irrelevant independent variables in the model. As the number of variables increases, the fit of the model to the sample increases owing to overfitting. However, even for a single predictor in the model, the fit of the model is inflated compared with new data or cross-validation. The optimism of in-sample fits is also a function of sample size (Figure 2). This example demonstrates the utility of using cross-validation to estimate predictive accuracy on a new sample.
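This inflation can be illustrated with a small simulation (a sketch in the spirit of Figure 2, not the authors' code): a linear model is fit to an outcome that is unrelated to any of its predictors, so the true predictive accuracy is zero, yet the in-sample R2 grows with the number of irrelevant predictors while the cross-validated estimate stays near or below zero.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 100

for n_predictors in (1, 5, 20, 50):
    X = rng.normal(size=(n, n_predictors))   # irrelevant predictors
    y = rng.normal(size=n)                    # outcome unrelated to X
    in_sample_r2 = LinearRegression().fit(X, y).score(X, y)      # optimistic
    cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
    print(f"{n_predictors:2d} predictors: in-sample R2 = {in_sample_r2:4.2f}, "
          f"cross-validated R2 = {cv_r2:5.2f}")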


Statistical Significance vs Useful Prediction

A second reason that significant statistical association does not imply practically useful prediction is exemplified by the psychiatric genetic literature. Large genome-wide association studies have now identified significant associations between genetic variants and mental illness diagnoses. For example, Ripke et al6 compared more than 21 000 patients with schizophrenia with more than 38 000 patients without schizophrenia and found 22 genetic variants significant at a genome-wide level (P = 5 × 10−8), the strongest of which (rs9268895) had a combined P value of 9.14 × 10−14. However, this strongest association would be useless on its own as a predictor of schizophrenia. The combined odds ratio for this risk variant was 1.167; assuming a population prevalence of schizophrenia of 1 in 196 individuals as the baseline risk,7 possessing the risk allele for this strongest variant would raise an individual’s risk to 1 in 167. Such an effect is far from clinically actionable. In fact, the increased availability of large samples has made clear the point that Meehl8 raised more than 50 years ago, which stated that in the context of null hypothesis testing, as samples become larger, even trivial associations become statistically significant.
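The risk arithmetic can be checked directly. The short calculation below converts the baseline risk to odds, applies the reported odds ratio, and converts back; it yields roughly 1 in 168, in line with the approximately 1 in 167 quoted above (the small discrepancy reflects rounding of the baseline risk).

# Baseline risk of schizophrenia: about 1 in 196.
baseline_risk = 1 / 196
baseline_odds = baseline_risk / (1 - baseline_risk)

# Apply the combined odds ratio reported for the strongest risk variant.
odds_ratio = 1.167
carrier_odds = baseline_odds * odds_ratio
carrier_risk = carrier_odds / (1 + carrier_odds)

print(f"baseline risk:        1 in {1 / baseline_risk:.0f}")
print(f"risk with the variant: 1 in {1 / carrier_risk:.0f}")   # roughly 1 in 168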

A more general challenge exists regarding the prediction of uncommon outcomes, such as a diagnosis of schizophrenia. Consider the case in which a researcher has developed a test for schizophrenia that has 99% sensitivity (ie, a 99% likelihood that the test will return a positive result for someone with the disease) and 99% specificity (ie, a 99% likelihood that the test will return a negative result for someone without the disease). These are performance levels that any test developer would be thrilled to obtain; in comparison, mammography has a sensitivity of 87.8% and a specificity of 90.5% for the detection of breast cancer.9 If this test for schizophrenia were used to screen 1 million people, it would detect 99% of those with schizophrenia (5049 individuals) but would also incorrectly detect 9949 individuals without schizophrenia; thus, even with exceedingly high sensitivity and specificity, the predictive value of a positive test result remains well below 50%. As we can straightforwardly deduce from the Bayes theorem, false alarm rates will usually be high when testing for events with low baseline rates of occurrence.
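The same numbers follow from a direct application of the Bayes theorem. The sketch below reproduces the worked example using a prevalence of 0.51% (roughly 1 in 196); the positive predictive value comes out to about 34%, well below 50% despite 99% sensitivity and specificity.

population = 1_000_000
prevalence = 0.0051          # about 1 in 196
sensitivity = 0.99
specificity = 0.99

cases = population * prevalence
non_cases = population - cases

true_positives = sensitivity * cases                # about 5,049 correctly detected
false_positives = (1 - specificity) * non_cases     # about 9,949 false alarms

ppv = true_positives / (true_positives + false_positives)
print(f"true positives:  {true_positives:.0f}")
print(f"false positives: {false_positives:.0f}")
print(f"positive predictive value: {ppv:.1%}")       # roughly 34%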


Misinterpretation of Association as Prediction

A significant statistical association is insufficient to establish a claim of prediction. However, in our experience, it is common for investigators in the functional neuroimaging literature to use the term prediction when describing a significant in-sample statistical association. To quantify the prevalence of this practice, we identified 100 published studies between December 24, 2017, and October 30, 2018, in PubMed by using the search terms fMRI prediction and fMRI predict. For each study, we identified whether the purported prediction was based on a statistical association, such as a significant correlation or regression effect, or whether the researchers used a statistical procedure specifically designed to measure prediction, such as cross-validation or out-of-sample validation. We only included studies that purported to predict an individual-level outcome based on fMRI data and excluded other uses of the term prediction, such as studies examining reward prediction error. A detailed description of these studies is presented in the eTable in the Supplement.

Of the 100 studies assessed, 45 reported an in-sample statistical association as the sole support for the claims of prediction, suggesting that the conflation of statistical association and predictive accuracy is common.10 The remaining studies used a mixture of cross-validation strategies, as shown in Figure 3.


Factors That Can Bias Assessment of Prediction

Although performing some type of assessment of an out-of-sample prediction is essential, it is also clear that cross-validation still leaves room for errors when establishing predictive validity. We now turn to issues that can affect the estimation of predictive accuracy even when using appropriate predictive modeling methods.

- Small Samples

The use of cross-validation with small samples can lead to highly variable estimates of predictive accuracy. Varoquaux11 noted that a general decrease in the level of reported prediction accuracy can be observed as sample sizes increase. Given the flexibility of analysis methods12 and publication bias for positive results, such that only the top tail of accuracy measures is reported, the high variability of estimates with small samples can lead to a body of literature with inflated estimates of predictive accuracy.

Our literature review found a high prevalence of small samples, with more than half of the samples comprising fewer than 50 people and 15% of the studies with samples comprising fewer than 20 people (Figure 3). Most studies that use small samples are likely to exhibit highly variable estimates. This finding suggests that many of the claims of predictive accuracy in the neuroimaging literature may be exaggerated and/or not valid.
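A small simulation makes the variability point concrete (an illustrative sketch, not the review's own analysis): with purely uninformative features and balanced labels, the true accuracy is 50%, yet repeated cross-validation runs on small samples scatter widely around that value, so the upper tail of estimates, the part most likely to be published, sits well above chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def chance_level_cv_accuracy(n_samples):
    # Features carry no information about the balanced labels,
    # so the true predictive accuracy is exactly 50%.
    X = rng.normal(size=(n_samples, 10))
    y = rng.permutation(np.repeat([0, 1], n_samples // 2))
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

for n in (20, 50, 500):
    estimates = [chance_level_cv_accuracy(n) for _ in range(100)]
    print(f"n = {n:3d}: mean = {np.mean(estimates):.2f}, "
          f"5th-95th percentile = {np.percentile(estimates, 5):.2f}-"
          f"{np.percentile(estimates, 95):.2f}")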

- Leakage of Test Data

To give a valid measure of predictive accuracy, cross-validation needs to build on a clean isolation of the test data during the fitting of models to the training data. If information leaks from the testing set into the model-fitting procedure, then estimates of predictive accuracy will be inflated, sometimes wildly. For example, any variable selection that is applied to the data before application of cross-validation will bias the results if the selection involves knowledge of the variable being predicted. Of the 57 studies in our review that used cross-validation procedures, 10 may have applied dimensionality reduction methods that involved the outcome measure (eg, thresholding based on correlation) to the entire data set. This lack of clarity raises concerns regarding the level of methodological reporting in these studies.13

In addition, any search across analytic methods, such as selecting the best model or the model parameters, must be performed using nested cross-validation, in which a second cross-validation loop is used within the training data to determine the optimal method or parameters. The best practice is to include all processing operations within the cross-validation loop to prevent any potential for leakage. This practice is increasingly possible using cross-validation pipeline tools, such as those available within the scikit-learn software package (scikit-learn Developers).14
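A minimal sketch of the difference, using scikit-learn's Pipeline and a made-up high-dimensional noise data set: selecting the variables most correlated with the outcome on the full data set before cross-validation typically yields a spuriously high cross-validated R2, whereas performing the same selection inside the pipeline, and therefore inside each training fold, does not.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2000))   # high-dimensional pure noise (e.g., voxel-wise features)
y = rng.normal(size=100)           # outcome unrelated to the features

# Leaky analysis: pick the 50 features most correlated with the outcome
# using ALL observations, then cross-validate on those features only.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corrs = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
leaky_idx = np.argsort(np.abs(corrs))[-50:]
leaky_r2 = cross_val_score(Ridge(), X[:, leaky_idx], y, cv=5, scoring="r2").mean()

# Clean analysis: the selection step lives inside the pipeline, so it is refit
# on the training portion of every fold and never sees the held-out data.
pipeline = make_pipeline(SelectKBest(f_regression, k=50), Ridge())
clean_r2 = cross_val_score(pipeline, X, y, cv=5, scoring="r2").mean()

print(f"selection outside cross-validation, R2: {leaky_r2: .2f}")  # typically spuriously positive
print(f"selection inside the pipeline,      R2: {clean_r2: .2f}")  # near or below zero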

- Model Selection Outside of Cross-validation

Selecting a predictive method based on the data creates an opportunity for bias that could involve the potential use of a number of different classifiers, hyperparameters for those classifiers, or various preprocessing methods. As in standard data analysis, there is a potential garden of forking paths,15 such that data-driven modeling decisions can bias the resulting outcomes even if there is no explicit search for methods providing the best results. The outcomes are substantially more biased if an explicit search for the best methods is performed without a held-out validation set.

As reported in studies by Skocik et al16 using simulations and Varoquaux11 using fMRI data, it is possible to obtain substantial apparent predictive accuracy from data without any true association if a researcher capitalizes on random fluctuations in classifier performance and searches across a large parameter space. A true held-out validation sample is a good solution to this problem. A more general solution to the problem of analytic flexibility is the preregistration of analysis plans before any analysis, as is increasingly common in other areas of science.17
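A minimal sketch of nested cross-validation with scikit-learn (the classifier and the hyperparameter grid are arbitrary placeholders): the inner GridSearchCV loop chooses the hyperparameter using the training data only, and the outer loop estimates the accuracy of the entire procedure, search included.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Inner loop: search over a hypothetical hyperparameter grid within each training fold.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)

# Outer loop: estimate the predictive accuracy of the whole procedure.
nested_scores = cross_val_score(inner, X, y, cv=5)
print(f"nested cross-validated accuracy: {nested_scores.mean():.2f}")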

- Nonindependence Between Training and Testing Sets

Like any statistical technique, the use of cross-validation to estimate predictive accuracy involves assumptions, the failure of which can undermine the validity of the results. An important assumption of cross-validation is that observations in the training and testing sets are independent. While this assumption is often valid, it can break down when there are systematic relationships between observations. For example, the Human Connectome Project data set includes data from families, and it is reasonable to expect that family members will be closer to each other in brain structure and function than will individuals who are not biologically related.

Similarly, data collected as a time series will often exhibit autocorrelation, such that observations closer in time are more similar. In these cases, there are special cross-validation strategies that must be used to address this structure. For example, in the presence of family structure, such as the sample used in the Human Connectome Project, a researcher might cross-validate across families (ie, leave-k-families-out) rather than individuals to address the nonindependence potentially induced by family structure.18
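One way to implement such a leave-families-out scheme is scikit-learn's GroupKFold; the sketch below uses hypothetical family identifiers and synthetic data (it is not taken from the cited studies).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

X, y = make_classification(n_samples=120, n_features=30, random_state=0)

# Hypothetical family identifiers: 40 families of 3 members each.
families = np.repeat(np.arange(40), 3)

# GroupKFold keeps all members of a family in the same fold, so related
# individuals never appear in both the training and the testing set.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=GroupKFold(n_splits=5), groups=families)
print(f"leave-families-out accuracy: {scores.mean():.2f}")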

- Quantification of Predictive Accuracy

Two main categories of problems occur in predictive modeling. The first, classification accuracy, involves the prediction of discrete class membership, such as the presence or absence of a disease diagnosis; the second, regression accuracy, involves the prediction of a continuous outcome variable, such as a test score or disease severity measure. In our literature review, we found that 37 studies performed classification while 64 performed regression to determine predictive accuracy. These strategies generally involve different methods for quantification of accuracy, but in each case, potential problems can arise through the naive use of common methods.

- Quantifying Classification Accuracy

In a classification problem, we aim to quantify our ability to accurately predict class membership, such as the presence of a disease or a cognitive state. When the number of members in each class is equal, then average accuracy (ie, the proportion of correct classifications, as used in the examples in Figure 2) is a reasonable measure of predictive accuracy. However, if any imbalance exists between the frequencies of the different classes, then average accuracy is a misleading measure. Consider the example of a predictive model for schizophrenia, which has a prevalence of 0.5% in the population; the classifier can achieve average accuracy of 99.5% across all cases by predicting that no one has the disease, simply owing to the low frequency of the disease.

A standard method to address the class imbalance problem is to use the receiver operating characteristic curve from signal detection theory.19 A receiver operating characteristic curve can be constructed given any continuous measure of evidence, as provided by most classification models. A threshold is then applied to this measure of evidence, systematically ranging from low (in which most cases will be assigned to the positive class, and the number of false positives will be high) to high (in which most cases will be assigned to the negative class, and the number of false positives will be low). The area under the curve can then be used as an integrated measure of classification accuracy. A perfect prediction leads to an area under the curve of 1.0, while a fully random prediction leads to an area under the curve of 0.5. Importantly, the area under the curve value of 0.5 expected by chance is not biased by imbalanced frequencies of positive and negative cases in the way that simple measures of accuracy would be. It is also useful to separately present the sensitivity (ie, the proportion of positive cases correctly identified as positive) and specificity (ie, the proportion of negative cases correctly identified as negative) of the classifier, to allow assessment of the relative balance of false positives and false negatives.
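A short illustration with simulated data: for a rare outcome, a degenerate rule that predicts the negative class for everyone achieves high average accuracy, whereas the area under the receiver operating characteristic curve scores that rule at chance and rewards a classifier whose continuous scores actually carry signal (the effect size used here is arbitrary).

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(4)

# Imbalanced outcome: about 1% positive cases, as for a rare diagnosis.
y_true = (rng.uniform(size=10_000) < 0.01).astype(int)

# A useless rule that always predicts the negative class.
always_negative = np.zeros_like(y_true)
print("accuracy of predicting 'no disease' for everyone:",
      accuracy_score(y_true, always_negative))              # ~0.99, yet useless
print("ROC AUC of the same rule:",
      roc_auc_score(y_true, always_negative))                # 0.5, i.e., chance

# A classifier whose continuous scores carry some real signal.
scores = rng.normal(size=y_true.size) + 1.5 * y_true
print("ROC AUC of the scored classifier:",
      round(roc_auc_score(y_true, scores), 2))               # roughly 0.85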

- Quantifying Regression Accuracy

It is increasingly common to apply predictive modeling in cases in which the outcome variable is continuous rather than discrete—that is, in regression rather than classification problems. For example, a number of studies in cognitive neuroscience have attempted to predict phenotypic measures, such as age,20 personality,21 or behavioral outcomes.22 For continuous predictions, accuracy can be quantified either by the relation between the predicted and actual values, relative to perfect prediction, or by a measure of the absolute difference between predicted and actual values (ie, the error). A relative measure is useful because its value can easily be related to the success of the prediction. For this purpose, a useful measure is the fraction of explained variance, often called the coefficient of determination or R2. If a model makes perfect predictions, its associated R2 value will be 1.0, whereas a model making random predictions should have an R2 value of approximately 0. If a model is particularly poor, to the point that its predictions are less accurate than they would be if the model simply returned the mean value for the data set, the R2 value can be negative, despite the fact that it is called R2. The disadvantage of this measure is that it does not support comparisons of the quality of predictions across different data sets because the variance of the outcome variable may differ between one data set and another. For this purpose, absolute error measurements, such as the mean absolute error, which has the benefit of quantifying error in the units of the original measure (such as IQ points), are useful.

It is common in the literature to use the correlation between predicted and actual values as a measure of predictive performance; of the 64 studies in our literature review that performed prediction analyses on continuous outcomes, 30 reported such correlations as a measure of predictive performance. This reporting is problematic for several reasons. First, correlation is not sensitive to scaling of the data; thus, a high correlation can exist even when predicted values are discrepant from actual values. Second, correlation can sometimes be biased, particularly in the case of leave-one-out cross-validation. As demonstrated in Figure 4, the correlation between predicted and actual values can be strongly negative when no predictive information is present in the model. A further problem arises when the variance explained (R2) is incorrectly computed by squaring the correlation coefficient. Although this computation is appropriate when the model is obtained using the same data, it is not appropriate for out-of-sample testing23; instead, the amount of variance explained should be computed using the sum-of-squares formulation (as implemented in software packages such as scikit-learn).
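
The contrast can be demonstrated in a few lines (a sketch of ours, not the article's analysis): predictions that track the outcome but are shifted and rescaled yield a high squared correlation yet a strongly negative sum-of-squares R2, as computed by scikit-learn's r2_score.

```python
# Illustrative sketch only (not code from the article): a high squared correlation
# can coexist with a strongly negative sum-of-squares R2 when predictions are
# shifted and rescaled relative to the observed values.
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
y_true = rng.normal(0, 1, 300)

# Predictions that track y_true but are miscalibrated (wrong scale and offset).
y_pred = 3.0 * y_true + 5.0 + rng.normal(0, 0.5, 300)

r = np.corrcoef(y_true, y_pred)[0, 1]
print("squared correlation:", r ** 2)                   # high
print("sum-of-squares R2:  ", r2_score(y_true, y_pred)) # strongly negative
```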

As discussed previously in this section, leave-one-out cross-validation is problematic because the single held-out observation is systematically anticorrelated with the training set, which biases estimates downward and can produce negative R2 values. For classification settings, the effect is the same: in a perfectly balanced data set, leave-one-out cross-validation creates a testing set comprising a single observation that belongs to the minority class of the training set, so a simple prediction rule, such as majority vote, will always be incorrect.24 The preferred method of performing cross-validation is instead to leave out 10% to 20% of the data, using k-fold or shuffle-split techniques that repeatedly split the data randomly. Larger testing sets enable stable computation of measures such as the coefficient of determination or the area under the receiver operating characteristic curve.
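
A small sketch (ours, not the article's) reproduces this leave-one-out pathology with scikit-learn: on a perfectly balanced data set, a majority-vote rule scores 0% accuracy under leave-one-out cross-validation, whereas shuffle-split or k-fold with 10% to 20% test sets stays near the 50% chance level. The data set and estimator are illustrative assumptions.

```python
# Illustrative sketch only (not code from the article): leave-one-out
# cross-validation of a majority-vote rule on a perfectly balanced data set
# scores 0% accuracy, whereas shuffle-split and k-fold stay near chance (50%).
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score, LeaveOneOut, ShuffleSplit, KFold

X = np.zeros((100, 1))                 # features are irrelevant for this demonstration
y = np.array([0, 1] * 50)              # perfectly balanced labels

clf = DummyClassifier(strategy="most_frequent")   # a simple majority-vote rule

loo = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", loo.mean())      # 0.0: the held-out case is always the training minority

ss = cross_val_score(clf, X, y, cv=ShuffleSplit(n_splits=20, test_size=0.2, random_state=0))
kf = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("shuffle-split accuracy:", ss.mean())       # close to the 0.5 chance level
print("5-fold accuracy:", kf.mean())              # close to the 0.5 chance level
```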


Best Practices for Predictive Modeling

We have several suggestions for researchers engaged in predictive modeling to ensure accurate estimates of predictive validity:

•    In-sample model fit indices should not be reported as evidence for predictive accuracy because they can greatly overstate evidence for prediction and take on positive values even in the absence of true generalizable predictive ability.

•    The cross-validation procedure should encompass all operations applied to the data. In particular, predictive analyses should not be performed on data after variable selection if the variable selection was informed to any degree by the data themselves (ie, post hoc cross-validation). Otherwise, estimated predictive accuracy will be inflated owing to circularity.25 A sketch of such a pipeline appears after this list.

•    Prediction analyses should not be performed with samples smaller than several hundred observations, based on the finding that predictive accuracy estimates with small samples are inflated and highly variable.26

•    Multiple measures of prediction accuracy should be examined and reported. For regression analyses, measures of variance explained, such as R2, should be accompanied by measures of unsigned error, such as mean squared error or mean absolute error. For classification analyses, accuracy should be reported separately for each class, and a measure of accuracy that is insensitive to relative class frequencies, such as the area under the receiver operating characteristic curve, should be reported.

•    The coefficient of determination should be computed by using the sums-of-squares formulation rather than by squaring the correlation coefficient.

•    k-fold cross-validation, with k in the range of 5 to 10,27 should be used rather than leave-one-out cross-validation because the testing set in leave-one-out cross-validation is not representative of the whole data set and is often anticorrelated with the training set.
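
The following minimal sketch (ours, not the article's) illustrates several of these recommendations with scikit-learn: variable selection is placed inside a Pipeline so that 5-fold cross-validation encompasses every operation applied to the data, and both the sum-of-squares R2 and the mean absolute error are reported. The simulated data, the SelectKBest/Ridge choices, and all parameter values are illustrative assumptions, not prescriptions from the article.

```python
# Illustrative sketch only (not code from the article): variable selection and
# scaling are placed inside a Pipeline so that 5-fold cross-validation
# encompasses every operation applied to the data, avoiding circularity.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_validate, KFold

rng = np.random.default_rng(0)
n, p = 400, 200                                    # hypothetical sample and feature counts
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=n)   # only 5 informative features

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_regression, k=10)),   # selection is refit within each training fold
    ("ridge", Ridge(alpha=1.0)),
])

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(model, X, y, cv=cv,
                        scoring=("r2", "neg_mean_absolute_error"))
print("sum-of-squares R2 per fold:", scores["test_r2"])
print("mean absolute error per fold:", -scores["test_neg_mean_absolute_error"])
```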


Author Contributions: Dr Poldrack and Ms Huckins had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Poldrack, Varoquaux.

Acquisition, analysis, or interpretation of data: Poldrack, Huckins.

Drafting of the manuscript: Poldrack.

Critical revision of the manuscript for important intellectual content: Huckins, Varoquaux.

Statistical analysis: All authors.

Administrative, technical, or material support: Poldrack.

Among well-nourished populations of Westerners, men's high testosterone levels represent an outlier of cross‐cultural variation; probably due to intrasexual competition in reproductive contexts; it increases prostate cancer risk


From 2012... Do evolutionary life‐history trade‐offs influence prostate cancer risk? A review of population variation in testosterone levels and prostate cancer disparities. Louis Calistro Alvarado. Evolutionary Applications, December 11 2012. https://doi.org/10.1111/eva.12036

Abstract: An accumulation of evidence suggests that increased exposure to androgens is associated with prostate cancer risk. The unrestricted energy budget that is typical of Western diets represents a novel departure from the conditions in which men's steroid physiology evolved and is capable of supporting distinctly elevated testosterone levels. Although nutritional constraints likely underlie divergent patterns of testosterone secretion between Westernized and non‐Western men, considerable variability exists in men's testosterone levels and prostate cancer rates within Westernized populations. Here, I use evolutionary life history theory as a framework to examine prostate cancer risk. Life history theory posits trade-offs between investment in early reproduction and long-term survival. One corollary of life history theory is the ‘challenge hypothesis’, which predicts that males augment testosterone levels in response to intrasexual competition occurring within reproductive contexts. Understanding men's evolved steroid physiology may contribute toward understanding susceptibility to prostate cancer. Among well-nourished populations of Westerners, men's testosterone levels already represent an outlier of cross‐cultural variation. I hypothesize that Westernized men in aggressive social environments, characterized by intense male–male competition, will further augment testosterone production aggravating prostate cancer risk.



Discussion

Modern Westernized environments represent a clear deviation from the environment in which male reproductive physiology evolved. Largely removed from energetic constraint and pathogen burden, Westernized men are capable of supporting distinctly elevated testosterone at the upper limit of human variability and amplifying the incidence of hormone‐sensitive cancer. Variation in nutritional status can largely account for observed disparities in men's testosterone levels and prostate cancer between Westernized and non‐Western populations, but not within Westernized populations—the populations at highest risk of prostate cancer. By incorporating a challenge hypothesis framework, another source of lifetime variation in testosterone exposure was proposed: Aggressive social environments affect prostate cancer incidence through the responsiveness of male androgen physiology to challenges, specifically among Westerners who are able to support the energetic costs of high testosterone levels. I reviewed literature which showed that ancestry, a widely recognized risk factor for prostate cancer, is in and of itself biologically unimportant when accounting for lifestyle factors. For instance, population disparities in testosterone levels of black‐ and white‐American men become attenuated and nonsignificant when comparing among college‐educated men from similar backgrounds (Mazur 1995, 2006). And in a nationally representative sample, there was no significant difference in testosterone levels of black‐ and white‐American men after accounting for differences in anthropometry (age and body fat percentage) and lifestyle factors (drug use and physical activity) (Rohrmann et al. 2007). To reiterate, there is surprisingly little evidence to suggest that testosterone levels are a direct consequence of ancestry. And as discussed earlier, men of lower SES, regardless of ethnicity, demonstrate higher rates of male–male violence, higher testosterone levels, and higher prostate cancer incidence. Using ancestry as a putative biomarker of prostate cancer risk is effective only to the extent to which it tracks environmental circumstances and living conditions that influence cancer risk.

Additionally, I argued that poverty and compromised male investment lead to prioritized mating effort and increased male–male competition, culminating in chronically elevated testosterone and higher rates of prostate cancer. This general trend would be expected only if inequity in wealth distribution translated into more agonistic interactions between males at the population level. In other words, if the relationship between poverty and aggressive social environments is moderated, then there would be little expectation for lower SES to contribute to prostate cancer risk. Norwegian men, for example, deviate from the normally observed correlation between low SES and increased prostate cancer risk. This is particularly interesting because of the sizeable welfare program that is characteristic of Nordic social policy (Sachs 2006), which is associated with some of the lowest crime rates, violent or otherwise (Barclay et al. 2001). As such, Norway invests heavily in poverty reduction, boasts the lowest homicide rate within the developed world, and does not exhibit a concentration of prostate cancer among men of lower SES. Taken together, it would appear that comprehensive social programs might decouple socioeconomic differentials from male–male violence and prostate cancer risk, and may provide a surprising example of how improved social policies and poverty alleviation strategies are fundamental to the interest of public health.

And finally, the challenge hypothesis framework developed in this review may have occupational health implications, considering that men's testosterone levels vary according to occupational status (Dabbs 1992), and that some professions carry a disproportionate risk of prostate cancer (Demers et al. 1994; Zeegers et al. 2004). Dabbs (1992) and colleagues (1998) found that blue‐collar workers have higher salivary and serum testosterone than white‐collar workers. However, distinct social contexts within a profession can also give rise to differences in testosterone levels. Although lawyers as a group are white‐collar workers, trial lawyers have significantly higher salivary testosterone than nontrial lawyers, which has been attributed to the polemical nature of face‐to‐face litigation (Dabbs et al. 1998). If this pattern of elevated testosterone from agonistic interactions persists across occupations, it seems reasonable to expect that men in professions with a higher intensity of competitive interaction would exhibit a greater incidence of prostate cancer. Findings from an extensive cohort study of 58,279 Western European men (ages 55–69 years) from 20 separate occupations are consistent with this reasoning (Zeegers et al. 2004). After accounting for individual characteristics and lifestyle factors (age, diet, drug and alcohol use, education, family disease history, and physical activity), it was police officers who showed the highest relative risk for prostate cancer. Indeed, prostate cancer risk increased 67% for each 10 years of occupational duty as a policeman. The framework proposed here can explain these seemingly peculiar associations between career choice and prostate cancer risk.

Liberals & conservatives are similarly obedient to their own authorities & condemn perceived abuses of their ideology’s sacralized objects & heroes; liberals & conservatives seem made up of the same psychological stuff

Do liberals and conservatives use different moral languages? Two replications and six extensions of Graham, Haidt, and Nosek’s (2009) moral text analysis. Jeremy A. Frimer. Journal of Research in Personality, November 28 2019, 103906.  https://doi.org/10.1016/j.jrp.2019.103906

Abstract: Do liberals and conservatives tend to use different moral languages? The Moral Foundations Hypothesis states that liberals rely more on foundations of care/harm and fairness/cheating whereas conservatives rely more on loyalty/betrayal, authority/subversion, and purity/degradation in their moral functioning. In support, Graham, Haidt, and Nosek (2009; Study 4) showed that sermons delivered by liberal and conservative pastors differed as predicted in their moral word usage, except for the loyalty foundation. I present two high-powered replication studies in religious contexts and six extension studies in politics, the media, and organizations to test ideological differences in moral language usage. On average, replication success rate was 30% and effect sizes were 38 times smaller than those in the original study. A meta-analysis (N=303,680) found that compared to liberals, conservatives used more authority r=.05, 95% confidence interval=[.02,.09] and purity words, r=.14 [.09,.19], fewer loyalty words, r=-.08 [-.10,-.05], and no more or less harm, r=.00 [-.02,.02], or fairness words, r=-.03 [-.06,.01].

Keywords: morality, language, ideology, conservatism, replication, moral foundations theory

General Discussion

Two replications and six extensions found limited support for the MFH in terms of language usage. Whereas a close replication of sermons from the same two U.S. Christian denominations as those in the original was successful (Study 1), a conceptual replication with 12 other U.S. Christian denominations was largely unsuccessful (Study 2), meaning that the two denominations studied in Graham et al. (2009) may not be representative of Christian denominations in general. This suggests that even within the context of religious sermons by U.S. Christian pastors, liberals and conservatives may not use different moral languages as much as previously thought. Although Graham et al. (2009) suggested that political speeches may not be the ideal context for detecting the different moral languages of liberals and conservatives, conceptual replications with four political samples were successful in aggregate for four of the five foundations (Study 3). A moderation analysis found that the differences in the moral languages of liberals and conservatives changed when moving from a religious to a political context for two of the five foundations only, meaning that the distinction between religion and politics may not be as important as Graham et al. (2009) suggested.

Samples drawn from the media and organizations, contexts not ruled out by Graham et al. (2009), allowed for a novel assessment of whether liberal and conservative commoners (broadly defined) use different moral languages (Studies 4-5). Tests of the MFH in these contexts were predominantly unsuccessful. Across all samples, metrics, foundations, and dictionaries, replication success rate was just 30%, meaning that 70% of replications failed. A meta-analysis (Study 6) of all the available data found support for the MFH for the authority and purity foundations, no evidence to support the MFH for harm and fairness, and evidence that is counter to the MFH for the loyalty foundation. Effect sizes were 38 times smaller on average. The most generous viable conclusion is that these results offer limited support for the MFH in the language of liberals and conservatives.

Analytical Considerations

The present analyses revealed that most distributions generated by the moral foundations dictionaries have a large number of identically-zero entries and are skewed. Correcting for this skew had relatively little effect on replication success and the resulting effect sizes. Thus, this analytic issue ended up being relatively inconsequential vis-à-vis replication considerations. Another analytical question concerned the dictionaries themselves. I used both the original MFD1 and the more recent and more valid MFD2. While results were not always the same, they tended to be largely similar. Analyses of non-skewed distributions stemming from the MFD2 are probably the most valid due to the enhanced normality and predictive validity of this analytical setup.

Both GHN and the present studies relied on a simple word counting program to operationalize the usage of moral languages (GHN also coded the speakers’ attitudes towards those words). For more than a century, psychologists have drawn inferences about topics of conversation and speakers’ internal states and traits through methods like these. And word counting procedures have generally been shown to be valid. However, topics are not fully reducible to the presence of certain words. Future work might use other linguistic techniques to assess whether liberals and conservatives have similar or different attitudes toward moral languages and use them in similar or different ways.

Theoretical Considerations

Graham et al. (2009) found that liberals used more loyalty words than conservatives, a finding that is at variance with the MFH. The present analyses suggested that although this effect is weak, it is robust. Why liberals talk more about a topic upon which their morality is not based remains an important and pressing question for MFT.

The present and recent empirical findings motivate the revisiting of a fundamental question: what is a moral foundation, psychologically speaking? Proponents of the theory have advocated for construct pluralism in the sense that foundations are general mental modules that manifest in multiple psychological forms, including values, perceptions, behavioral orientations, language, and so on. The present findings, along with other work, raise questions about this tenet of Moral Foundations Theory. Results from the present studies suggest that differences in the moral language usage of liberals and conservatives are generally small. Moreover, for three foundations, the MFH was unsupported. It would probably be more accurate to conclude that liberals and conservatives use similar moral languages than that they use different languages.

Along with their similar languages, liberals and conservatives may not be as different as previously thought in terms of their general action orientations: liberals and conservatives are similarly obedient to their own authorities (Frimer, Gaucher, & Schaefer, 2014) and condemn perceived abuses of their ideology’s sacralized objects (Frimer et al. 2015, 2016) and heroes (Frimer, Biesanz, Walker, & MacKinley, 2013). This growing body of evidence is in line with the idea that liberals and conservatives are made up of the same psychological stuff, but each ideology has its own set of cherished values and symbols. Whereas conservatism tends to cherish religion and the military, liberalism champions social justice and the environment (Frimer et al. 2015, 2016). Psychologically speaking, liberals and conservatives may be cut from the same cloth.

They meta-analyze whether race or ethnicity moderate the heritability of intelligence in the US; find moderate to high heritabilities that do not substantially differ by race or ethnicity

Racial and ethnic group differences in the heritability of intelligence: A systematic review and meta-analysis. Bryan J.Pesta et al. Intelligence, Volume 78, January–February 2020, 101408. https://doi.org/10.1016/j.intell.2019.101408

Highlights
•    We meta-analyze whether race or ethnicity moderate the heritability of intelligence.
•    The main sample (k = 16) comprised Whites, Blacks, and Hispanics from the USA.
•    We found moderate to high heritabilities for all three groups.
•    Heritabilities, however, did not substantially differ by race or ethnicity.
•    Results are largely inconsistent with predictions from the Scarr-Rowe hypothesis.

Abstract: Via meta-analysis, we examined whether the heritability of intelligence varies across racial or ethnic groups. Specifically, we tested a hypothesis predicting an interaction whereby those racial and ethnic groups living in relatively disadvantaged environments display lower heritability and higher environmentality. The reasoning behind this prediction is that people (or groups of people) raised in poor environments may not be able to realize their full genetic potentials. Our sample (k = 16) comprised 84,897 Whites, 37,160 Blacks, and 17,678 Hispanics residing in the United States. We found that White, Black, and Hispanic heritabilities were consistently moderate to high, and that these heritabilities did not differ across groups. At least in the United States, Race/Ethnicity × Heritability interactions likely do not exist.


1. Introduction

In behavioral genetic research, individual variance in cognitive ability is commonly partitioned into three components. The first is the additive genetic component (a2, also known as h2), which refers to genetic effects on a trait that act additively. This component is called (narrow) “heritability.” The second component is the common or shared environment (c2), which denotes environmental effects that make family members more similar. The third component is the unshared environment (e2), which consists of non-genetic effects (plus measurement error) that are not shared between family members, but which instead differentiate them from each other. Collectively, the last two components are known as “environmentality” (Plomin, DeFries, Knopik, & Neiderhiser, 2014).
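
As a compact summary (standard behavioral-genetic notation, not a formula quoted from the paper), the decomposition described above can be written as:

```latex
% Standard additive decomposition of phenotypic variance under the ACE model;
% a^2, c^2, and e^2 are the standardized components defined in the text.
\[
  \operatorname{Var}(P) = \sigma^{2}_{A} + \sigma^{2}_{C} + \sigma^{2}_{E},
  \qquad
  a^{2} = \frac{\sigma^{2}_{A}}{\operatorname{Var}(P)},\quad
  c^{2} = \frac{\sigma^{2}_{C}}{\operatorname{Var}(P)},\quad
  e^{2} = \frac{\sigma^{2}_{E}}{\operatorname{Var}(P)},
  \qquad
  a^{2} + c^{2} + e^{2} = 1.
\]
```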

These three components together comprise the “ACE” model of behavioral genetics. The model represents one basic, biometric framework behavioral geneticists may use when studying the heritability of human traits, including intelligence. The ACE model assumes that environmental and genetic influences are additive, but allows that interactions (e.g., A × E) may also exist between components; these can be estimated as well (Plomin et al., 2014; Vinkhuyzen, van der Sluis, Maes, & Posthuma, 2012). Moreover, the model is useful in intelligence research because the behavioral genetic architecture of the trait is “surprisingly simple” (Plomin et al., 2014, p. 200). Finally, the ACE model nicely fits IQ data, and ACE estimates do not require the use of cumbersome kinship designs.

The relative importance of genetic and environmental sources of individual differences in cognitive ability has been extensively studied. Results for the general population show that the proportion of variance in IQ explained by genes increases with age (Plomin et al., 2014). Specifically, in early childhood, genetic effects explain less than 50% of IQ variance, and the effect of the shared environment is relatively strong. As children age, though, genetic effects become increasingly prominent, and the environmental variance due to factors common to siblings decreases. In adults, the heritability of intelligence is 60–80%, while the effect of common environment is small, if not zero (Plomin et al., 2014). The unshared environment explains the rest.

The degree to which one can generalize heritability estimates to other populations has been debated (see, e.g., Sesardic, 2005). It is clear, though, that some variables (e.g., age; Plomin et al., 2014) moderate the heritability of cognitive ability. One putative moderator is the quality of one’s environment. Poorer (richer) environments supposedly correspond to lower (higher) heritability, to a presumably measurable degree. Said differently, “natural potentials for adaptive functioning are more fully expressed in the context of more nourishing environmental experiences” (Tucker-Drob & Bates, 2016, p. 1). This prediction is known as the Scarr-Rowe hypothesis (Scarr-Salapatek, 1971; Turkheimer, Harden, D’onofrio, & Gottesman, 2011).

The Scarr-Rowe hypothesis predicts lower heritabilities for lower performing social classes and racial/ethnic groups (Scarr-Salapatek, 1971, p. 1286). Scarr-Salapatek’s (1971) original hypothesis and related ones – examples include the “Threshold Hypothesis” (Jensen, 1968), the “Bio-ecological Model” (Bronfenbrenner & Ceci, 1994), and the “Gene–Gini Hypothesis” (Selita & Kovas, 2019) – predict that Scarr-Rowe interactions will result when there are environmental differences. Assuming that social class and racial/ethnic differences are largely environmental in origin, Scarr-Salapatek (1971) and others have predicted lower heritabilities for the lower scoring groups.

Does the heritability of human intelligence differ by either social class or race/ethnicity? The answer is complicated because variables like age and the country sampled can moderate the effects. For example, a meta-analysis by Tucker-Drob and Bates (2016) found greater heritability with higher socioeconomic status, but these effects existed only with participants from the United States. Regarding age, recent data from Germany suggest the existence of a Scarr-Rowe interaction, but one which declines with increasing age (Gottschling et al., 2019).

While Scarr-Rowe interactions for social class are relatively well-studied, interactions for race or ethnicity are less so. Hence, whether Scarr-Rowe interactions for race or ethnicity exist is unclear. Some reviews suggest that the heritability of intelligence is similar across cultures (Plomin et al., 2014) and ethnic groups (Jensen, 1998; Rushton & Jensen, 2005). Others suggest differently (Turkheimer, Harden, & Nisbett, 2017).

The issue is relevant for several reasons, including evaluating the trans-ethnic validity of polygenic scores. Recently, Lee et al. (2018) developed polygenic scores for both intelligence and educational levels. These scores were derived from European samples and they showed lower predictive accuracy in non-European groups such as African Americans. The typical explanation offered for attenuated predictive accuracy is decay of linkage disequilibrium (LD) which results in differences in the correlations between SNPs across different ancestry groups (Zanetti & Weale, 2018). Another hypothesis appeals to lower within-group heritability in non-White groups (see, e.g., Rabinowitz et al., 2019). Both explanations are plausible since the predictive accuracy of polygenic scores is a joint function of (1) the validity of the scores as predictors of the traits, and (2) the within-group heritability of the traits in question (i.e., the association between the genotype and the phenotype; Daetwyler, Villanueva, & Woolliams, 2008). While LD decay might be a theoretically adequate explanation for attenuated predictive accuracy of PGS (Zanetti & Weale, 2018), whether it is the actual explanation can only be properly evaluated when the heritabilities of the trait within the different subgroups are known.

Our aim is to shed light on these matters by conducting a systematic review and meta-analysis. The goal is to test for the presence of Scarr-Rowe interactions with respect to race/ethnicity. Our specific research question is whether the heritability of intelligence differs across racial/ethnic groups residing in the United States (we searched for studies worldwide but found only samples from this country).

Wednesday, November 27, 2019

From 2017... Opiate of the Masses? Inequality, Religion, and Political Ideology in the U.S.

Schnabel, Landon. 2017. “Opiate of the Masses? Inequality, Religion, and Political Ideology in the United States.” SocArXiv. July 18. doi:10.31235/osf.io/dnz2w

Abstract: This study considers the assertion that religion is the opiate of the masses. Using a special module of the General Social Survey, I first demonstrate that religion functions as a compensatory resource for structurally-disadvantaged groups—women, racial minorities, those with lower incomes, and, to a lesser extent, sexual minorities. I then demonstrate that religion—operating as both palliative resource and values-shaping schema—suppresses what would otherwise be larger group differences in political ideology. This study provides empirical support for the general “opiate” claim that religion is the “sigh of the oppressed creature” and suppressor of emancipatory political values. I expand and refine the theory, however, showing religion provides (1) compensatory resources for lack of social, and not just economic, status, and (2) traditional-values-oriented schemas that impact social attitudes more than economic attitudes.


Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.
                     -Karl Marx (1970 [1843])


Whenever a candidate or policy that advantages the few while disadvantaging the many wins an election, pundits assume people voted against their own self-interests and then wonder why. For example, after the 2016 U.S. presidential election many wondered why women did not vote more consistently for the first woman nominated by a major party. Status and positionality theories of politics excel at predicting why structurally-disadvantaged groups often support and vote for progressive candidates and policies, but these theories break down in the not infrequent cases when disadvantaged groups are not liberal. For example, as I will show, men are more supportive of a woman’s right to choose abortion than are women. Are disadvantaged groups simply irrational, or is there a missing piece or overlapping identity that, when added to positionality theories of politics, explains otherwise unexpected attitudes and voting behavior?

Marx, Du Bois, Weber, and other classical social theorists said religion appeals to the disenfranchised and helps them through suffering. But, according to these theorists, negatives accompany the positives, with religion legitimating subordination and/or distracting people from the root causes of their suffering. Marx’s “opiate of the masses” argument would predict that religion constrains revolution by suppressing political engagement. Yet, in the contemporary United States and many other countries, the most intensely religious people are often the most politically engaged, having an outsized impact on politics (Bolzendahl, Schnabel, and Sagi 2019). Although religion does not seem to make people apolitical, it is still possible that religion legitimates the status quo. Applying and synthesizing several theoretical traditions—including structuration (Giddens 1984; Sewell 1992), system justification (Jost and Hunyady 2002), compensatory control (Kay et al. 2009), and related cultural and social psychological approaches to the study of religion (Edgell 2012; Hoffmann and Bartkowski 2008; Willer 2009)—I explore, expand upon, and refine the classic “opiate” argument.

In the process of exploring the “opiate” argument, this study answers, at least in part, two broader social scientific questions: (1) Why are some groups consistently more religious than others? (2) Why do attitudes toward certain social issues, such as abortion and same-sex relationships, seem to contradict the positionality principle of disadvantage promoting progressive values? I conclude that, as Marx and others have argued, religion can legitimate inequality. But I propose a new mechanism: Rather than suggesting that religions make people less political, less agentic, or more irrational, I argue that religions shape political ideology in accordance with the deeply-held identities, interests, and values of agentic people with multiple overlapping identities seeking meaning and wellbeing in the face of uncertainty and injustice. By acting as a compensatory resource that disproportionately provides comfort and strength to the disadvantaged and a schema that disproportionately shapes their political ideology according to traditional religious values, contemporary American religion—and Christianity in particular—suppresses what would otherwise be larger group differences in political ideology.


Religious affiliation and marital satisfaction: commonalities among Christians, Muslims, and atheists

Religious affiliation and marital satisfaction: commonalities among Christians, Muslims, and atheists. Piotr Sorokowski, Marta Kowal and Agnieszka Sorokowska. Front. Psychol. | doi: 10.3389/fpsyg.2019.02798.


Abstract: Scientists have long been interested in the relationship between religion and numerous aspects of people’s lives, such as marriage. This is because religion may differently influence one’s level of happiness. Some studies have suggested that Christians have greater marital satisfaction, while others have found evidence that Muslims are more satisfied. Additionally, less-religious people have shown the least marital satisfaction. In the present study, we examined marital satisfaction among both sexes, and among Muslims, Christians, and atheists, using a large, cross-cultural sample from the dataset in Sorokowski et al. (2017). Our results show that men have higher marital satisfaction ratings than women, and that levels of satisfaction do not differ notably among Muslims, Christians, and atheists. We discuss our findings in the context of previous research on the association between marriage and religion.

Keywords: Religious affiliation, marital satisfaction, Christians, Muslims, Atheists

Discussion

The present study’s primary goal was to examine the association between religious affiliation and marital satisfaction, and the results showed that there was no relationship between the former and level of the latter—Christians and Muslims were found to be similarly satisfied with their marriages, as were atheists. Nevertheless, the present analysis provided support for a link between marital satisfaction and age (younger people showed higher marital happiness), material status (higher material status, higher marital satisfaction), or sex (men were happier in their marriages than women).
Previous findings have indicated Abrahamic religions (e.g., Christianity, Islam) share many similarities (Agius and Chircop, 1998; Zarean and Barzegar, 2016) and promote formation of traditional family ties, such as marriage rather than cohabitation, and marital rather than non-marital births (Dollahite and Lambert, 2007; Zarean and Barzegar, 2016). However, these religions have some substantive differences in beliefs and practices. For example, polygyny is not accepted in Christianity, whereas it is widely accepted in Islam, and such a family model may negatively influence marital life (Al-Krenawi and Graham, 2006). Despite the discrepancies between those two religions, the present study found no differences between them as far as marital satisfaction, and this included people from different parts of the world.
Moreover, since the New York City terrorist attacks on September 11, 2001, Islam has been central in many debates, discussions, and publications (Alghafli et al., 2014). Discussion on Islam frequently concerns familial issues, perceived by the Western media mostly in a negative light. Problematic issues include, for instance, gender roles and the treatment of women (McDonald, 2006; Ridouani, 2011; Ennaji, 2016). Studies, however, do not support this unfavorable view of females’ situations: religious Muslims show increased marital satisfaction (Abdel-Khalek, 2006, 2010; Asamarai et al., 2008; Ahmadi and Hossein-Abadi, 2009; Zaheri et al., 2016, but see also Abu-Rayya, 2007).
The present study’s results provide evidence that Christians and Muslims do not differ in their level of marital satisfaction. People from various countries identifying themselves as belonging to one of these two religions had similar levels of marital happiness, which is consistent with previous findings. For instance, Dabone (2012) compared marital satisfaction among Muslim and Christian spouses and found relative dissatisfaction regardless of religious affiliation.
As scarce data exist on marital satisfaction among atheists, the present study’s second aim was to investigate whether atheists are as satisfied with their marriages as religious adherents. Considering the positive correlations found between religiosity and marital satisfaction (Marks, 2005), atheists might be expected to report significantly lower marital satisfaction. A major drawback of previous related research is its predominant focus on comparisons between more-religious and less-religious people (Fincham et al., 2011), excluding the relatively large group that atheists represent. Additionally, most studies have been conducted in the United States, where atheists are often negatively stereotyped (Zuckerman, 2009). The present study’s results provide evidence that atheists are neither more nor less satisfied with their marriages than religious adherents, which suggests religion may not influence marital satisfaction.
There are a few possible explanations for observed similar marital satisfaction ratings across people of different religions. Overall, married couples constitute a lower percentage of people in a relationship (Nock, 1995). Those who decide to get married may be particularly committed or well-suited to partnership, regardless of their religious affiliation. Entering a serious relationship, such as marriage, requires strong enthusiasm toward the partner (Wang and Chang, 2002) and, thus, results in higher ratings of subjectively perceived relationship satisfaction. Another possible explanation may be that people generally consider marriage a long-lasting relationship (Silliman and Schumm, 2004; Willoughby and Dworkin, 2009), and when they decide to get married, they rationalize and “cognitively close” their choice (Webster and Kruglanski, 1994). Participants in the study population may have felt they had to be satisfied with their relationship, as they had invested so much energy into its development. Had they reported being unsatisfied, feeling an internal conflict may have surfaced (e.g., “Why am I even with him/her if it makes me unhappy?”). The need to explain the dissonance of staying in an unsuccessful relationship would be negatively perceived, and could yield unpleasant emotions, especially in Western, individualistic cultures, which value the pursuit of personal happiness at all costs (Gilovich et al., 2015). Such emotion could also occur in Eastern, collectivistic cultures, which emphasize the importance of being unselfish, grateful, and appreciative of one’s partner (Kagawa-Fox, 2010).
In general, participants were relatively satisfied with their marriages. Nonetheless, men’s marital satisfaction differed from women’s (independent of religious affiliation). Over 40 years ago, Bernard (1975) presented a provocative and controversial thesis asserting that marriage is better for men than for women, and this statement has raised heated discussion. Most of the research has provided evidence to support Bernard’s (1975) thesis (Fowers, 1991; Schumm et al., 1998), and this is also true in non-Western cultures (Shek and Tsang, 1993; Asamarai et al., 2008). However, one study yielded unclear findings (McNulty et al., 2008). The results of the present study, which is based on the analysis of a large, cross-cultural sample, confirm the difference between men’s and women’s marital satisfaction: husbands did indeed report higher marital satisfaction than wives. Nevertheless, the effect size of these sex differences was extremely small (Eta < 0.01).
In conclusion, despite a large body of research on marital satisfaction (Bradbury et al., 2000; Twenge et al., 2003; Hilpert et al., 2016), studies have rarely controlled for participants’ religion. Even when they have done so, they have not explored the differences between people of various religious affiliations (Sullivan, 2001; Williams and Lawler, 2003; Olson et al., 2016). Future research should therefore focus on people of different (1) religions (especially less-prevalent ones) and (2) cultures, as most studies to date have been conducted on Western, educated, industrialized, rich, and democratic populations (Henrich et al., 2010). Such research should also take into consideration other factors that may influence marital satisfaction among people of different religious affiliations (e.g., number of children, education, country’s development), as this would provide a fuller understanding of the interaction among religion, culture, and marital happiness.