Monday, March 1, 2021

Communicating extreme forecasts... Scientists mention uncertainty far more frequently than non-scientists; thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'

Apocalypse now? Communicating extreme forecasts. David C. Rode; Paul S. Fischbeck. International Journal of Global Warming, 2021, Vol. 23, No. 2, pp. 191-211. DOI: 10.1504/IJGW.2021.112896

Abstract: Apocalyptic forecasts are unique. They have, by definition, no prior history and are observed only in their failure. As a result, they fit poorly with our mental models for evaluating and using them. However, they are made with some frequency in the context of climate change. We review a set of forecasts involving catastrophic climate change-related scenarios and make several observations about the characteristics of those forecasts. We find that mentioning uncertainty results in a smaller online presence for apocalyptic forecasts. However, scientists mention uncertainty far more frequently than non-scientists. Thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'. Also, the desire among many climate change scientists to portray unanimity may enhance the perceived seriousness of the potential consequences of climate catastrophes, but paradoxically undermine their credibility in doing so. We explore strategies for communicating extreme forecasts that are mindful of these results.

Keywords: apocalypse; climate change; communication; extreme event; forecast; forecasting; global warming; media; policy; prediction; risk; risk communication; uncertainty.


5 Implications for policy and risk communication

Uncertainty is a core challenge for climate change science. It can undermine public engagement (Budescu et al., 2012) and form a barrier to public mobilisation (Pidgeon and Fischhoff, 2011). Our findings in this paper support these results and suggest that the exclusion of uncertainty from communication of apocalyptic climate-related forecasts can increase the visibility of the forecasts. However, the increased visibility comes at the cost of emphasising the voices of speakers without a scientific background. Moreover, focusing only on the quantity of communications, and not the ‘weight’ attached to them, neglects the important role that their credibility plays in establishing trust.

Trust (in subject-matter authorities and in climate research) influences perceived risk (Visschers, 2018; Siegrist and Cvetkovich, 2000). The impact of that trust is significant. Although belief in the existence of climate change remains strong, belief that its risks have been exaggerated has grown (Wang and Kim, 2018; Poortinga et al., 2011; Whitmarsh, 2011). Gaps have also emerged between belief in climate change and estimates of the seriousness of its impact (Markowitz and Guckian, 2018). To the extent that failed predictions damage that trust, the public’s perception of climate-related risk is altered. If the underlying purpose of making apocalyptic predictions is to recommend action, and if the predictions fail to materialise, the wisdom of the recommendations based on those predictions may be called into question. Climate science’s perceived value is thereby diminished. If the perceived value (or the certainty of that value) is diminished, policy action is harder to achieve.

It is not simply the presence of uncertainty that is an impediment; communications characterised by ‘hype and alarmism’ also undermine trust (Howe et al., 2019; O’Neill and Nicholson-Cole, 2009). The continual failure of the predictions to materialise may be seen to validate the public’s belief that such claims are in fact exaggerated. Although such beliefs may be the result of outcome bias (Baron and Hershey, 1988), recent evidence has also suggested that certain commonly accepted scientific predictions may indeed be exaggerated (Lewis and Curry, 2018). The model of belief we presented in Subsection 2.4 demonstrates that observing only failures will inevitably result in a reduction in subjective beliefs about apocalyptic risks. To build trust, any forecasts made must be ‘scientific’ – that is, capable of being observed to be either correct or incorrect (Green and Armstrong, 2007). Under such circumstances, they should also incorporate clear statements acknowledging uncertainty, as doing so may work to increase trust (Joslyn and LeClerc, 2016). It is important to provide settings where the audience can ‘calibrate’ its beliefs. “A climate forecast can only be evaluated and potentially falsified if it provides a quantitative range of uncertainty” [Allen et al., (2013), p.243]. The acknowledgement of the uncertainty should include both worst-case and best-case outcomes (Howe et al., 2019).
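To make that failure-updating logic concrete, here is a minimal Bayesian sketch of how a run of failed deadline forecasts depresses subjective belief. This illustrates the general mechanism only, not the specific model of the paper's Subsection 2.4, and every probability in it is an arbitrary assumption:

```python
# Illustrative Bayesian updating: how repeated failed apocalyptic
# forecasts erode subjective belief in the underlying risk.
# All probabilities are arbitrary assumptions for illustration.

def update_belief(prior, p_fail_if_real, p_fail_if_not):
    """Posterior P(risk is real | another forecast failed), by Bayes' rule."""
    evidence = p_fail_if_real * prior + p_fail_if_not * (1 - prior)
    return p_fail_if_real * prior / evidence

belief = 0.50            # initial subjective probability that the risk is real
P_FAIL_IF_REAL = 0.30    # even a real risk can yield failed dated forecasts
P_FAIL_IF_NOT = 0.95     # an unreal risk almost always yields failures

for n in range(1, 6):
    belief = update_belief(belief, P_FAIL_IF_REAL, P_FAIL_IF_NOT)
    print(f"after failed forecast {n}: belief = {belief:.3f}")
```

The direction of the result does not depend on the particular numbers: as long as a failure is more likely when the risk is not real, each observed failure pushes belief further down.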

One key to increasing credibility is to build up a series of shorter, simpler (non-apocalyptic) predictions (Nemet, 2009). Instead of solely predicting an apocalyptic event 50 years out, offer a series of contingent forecasts of shorter characteristic time (Byerly, 2000) that lead toward the ultimate event. Communications about climate change – and especially climate change-related predictions – should emphasise areas of the science that are less extreme in outcome, but more tangibly certain in likelihood (Howe et al., 2019). This implies, inter alia, that compound forecasts of events and the consequences of events should be separated. The goal may even be to exploit an outcome bias in decision making by moving from small- to large-scale predictions. By establishing a successful track record of smaller-scale predictions, validated with ex post evaluations of forecast accuracy, the public may be more inclined to increase its trust of the larger-scale predictions – even when such predictions are inherently less certain. This approach has been advocated directly by Pielke (2008) and Fildes and Kourentzes (2011) and supports the climate prediction efforts of Meehl et al. (2014), Smith et al. (2019), and others. To that end, we propose four concrete steps that can be taken to improve the usefulness of extreme climate forecasts. First, the authors of the forthcoming Sixth Assessment Report of the IPCC should be encouraged to tone down ‘deadline-ism’ (Asayama et al., 2019). Forecasters should make an effort to influence the interpretation of their forecasts, for example by correcting media reporting of them. Sequential releases of the IPCC’s Assessment Reports could call out particularly erroneous or incomplete interpretations of statements from previous Assessment Reports.
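One concrete form for the ex post evaluations of forecast accuracy mentioned above is a proper scoring rule such as the Brier score. The sketch below uses invented placeholder forecasts and outcomes, purely to show the bookkeeping:

```python
# Minimal ex post forecast evaluation using the Brier score
# (mean squared error of probabilistic forecasts; lower is better).
# The forecasts and outcomes below are invented placeholders.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probability and outcome."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A track record of short-horizon probabilistic forecasts:
forecasts = [0.8, 0.6, 0.9, 0.3, 0.7]   # stated probability of each event
outcomes  = [1,   1,   1,   0,   1]     # what actually happened (1 = occurred)

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")   # 0.078
```

Publishing such scores over successive forecast rounds would give the public a concrete basis for calibrating its trust in larger-scale claims.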

Second, given the extensive evidence about the limited forecasting abilities of individual experts (Tetlock, 2005), forecasters should give more weight to the unique ability of markets to serve as efficient aggregators of belief in lieu of negotiated univocality. So-called prediction markets have a strong track record (Wolfers and Zitzewitz, 2004). Although they have been suggested multiple times for climate change-related subjects (Lucas and Mormann, 2019; Vandenbergh et al., 2014), they have almost never been used. An exception is the finding that pricing in weather financial derivatives is consistent with the output of climate models of temperature (Schlenker and Taylor, 2019).
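As a concrete illustration of how such markets aggregate dispersed beliefs, here is a sketch of Hanson's logarithmic market scoring rule (LMSR), one common prediction-market mechanism. The liquidity parameter and the trades are arbitrary, and nothing here is specific to the climate applications cited above:

```python
# Sketch of a logarithmic market scoring rule (LMSR) prediction market,
# one common mechanism by which markets aggregate dispersed beliefs.
# The liquidity parameter and trades are arbitrary illustrations.
import math

B = 100.0  # liquidity parameter: higher means prices move less per trade

def price(q_yes, q_no):
    """Instantaneous price of a YES share = market-implied probability."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

def cost(q_yes, q_no):
    """LMSR cost function; a trade costs cost(after) - cost(before)."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

q_yes = q_no = 0.0                 # market opens at probability 0.5
for shares in [40, 25, -10]:       # traders buy or sell YES shares
    fee = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    q_yes += shares
    print(f"trade {shares:+4.0f} YES cost {fee:+7.2f}; "
          f"implied probability now {price(q_yes, q_no):.3f}")
```

Each trade moves the implied probability toward the trader's belief, so the standing price summarizes the money-weighted beliefs of all participants.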

Third, efforts to provide reliable mid-term predictions should be encouraged. The multi-year and decadal prediction work of Smith et al. (2019) and Meehl et al. (2014) is in this direction. But what should (also) be developed are repeated and sequential forecasts in order to facilitate learning about the forecasting process itself. That is, not just how current climate forecasting models perform in hindcasts, but how previous climate forecasts have performed (and hopefully improved) over time. Efforts to determine the limits of predictability are also important (Meehl et al., 2014) and should be studied in conjunction with the evaluation of forecast performance over time.

Fourth, extreme caution should be used in extrapolating from forecasts of climate events (e.g., temperature or CO2 levels) to their social and physical consequences (famine, flooding, etc.) without the careful modelling of mitigation and adaptation efforts and other feedback mechanisms. While there have been notable successes in predicting certain climate characteristics, such as surface temperature (Smith et al., 2019), the ability to tie such predictions to quantitative forecasts of consequences is more limited. The efforts to model damages as part of determining the social cost of carbon (such as with the DICE, PAGE, and FUND integrated assessment models) are a start but are subject to extreme levels of parameter sensitivity (Wang et al., 2019); uncertainty should be reflected in any apocalyptic forecasts of climate change consequences.

Scientists are often encouraged to ‘think big’, especially in policy applications. What we are suggesting here is that climate policy analysis could benefit from thinking ‘small’. That is, from focusing on the lower-level building blocks that go into making larger-scale predictions. One means by which to build public support for a complex idea like climate change is to demonstrate to the public that our understanding of the building blocks of that science is solid, that we are calibrated as to the accuracy of the building-block forecasts, and that we understand how lower-level uncertainty propagates through to probabilistic uncertainty in the higher-level forecasts of events and consequences.
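That last point, propagating lower-level uncertainty into higher-level forecasts, can be illustrated with a small Monte Carlo sketch. The distributions and the damage calculation below are invented placeholders, not calibrated climate or integrated-assessment parameters:

```python
# Sketch of propagating lower-level uncertainty into a higher-level
# consequence forecast by Monte Carlo. The distributions and the damage
# calculation are invented placeholders, not calibrated parameters.
import random

random.seed(1)
N = 100_000

damages = []
for _ in range(N):
    # Building-block uncertainties (assumed, for illustration):
    warming = random.gauss(2.5, 0.6)               # deg C at some horizon
    sensitivity = random.lognormvariate(0.0, 0.4)  # damage index per deg C
    damages.append(max(warming, 0.0) * sensitivity)

damages.sort()
print(f"median damage index:  {damages[N // 2]:.2f}")
print(f"5th-95th percentiles: {damages[int(0.05 * N)]:.2f} "
      f"to {damages[int(0.95 * N)]:.2f}")
```

The spread of the output distribution, not a single point estimate, is what a probabilistic forecast of consequences should communicate.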

Rolf Degen summarizing... Recent genome-wide association studies have shown that genetic influences on psychological traits are driven by thousands of DNA variants, each with very small effect sizes; effects of "the environment" appear to be as fragmented and unspecific

From Genome-Wide to Environment-Wide: Capturing the Environome. Sophie von Stumm, Katrina d’Apice. Perspectives on Psychological Science, March 1, 2021. https://doi.org/10.1177/1745691620979803

Rolf Degen's take: Recent genome-wide association studies have shown that genetic influences on psychological traits are driven by thousands of DNA variants, each with very small effect sizes. Effects of "the environment" appear to be as fragmented and unspecific

Abstract: Genome-wide association (GWA) studies have shown that genetic influences on individual differences in affect, behavior, and cognition are driven by thousands of DNA variants, each with very small effect sizes. Here, we propose taking inspiration from GWA studies for understanding and modeling the influence of the environment on complex phenotypes. We argue that the availability of DNA microarrays in genetic research is comparable with the advent of digital technologies in psychological science that enable collecting rich, naturalistic observations in real time of the environome, akin to the genome. These data can capture many thousand environmental elements, which we speculate each influence individual differences in affect, behavior, and cognition with very small effect sizes, akin to findings from GWA studies about DNA variants. We outline how the principles and mechanisms of genetic influences on psychological traits can be applied to improve the understanding and models of the environome.

Keywords: genomics, genetics, environment, large data, effect sizes

Throughout this article, we have highlighted ways in which psychological science may take inspiration from genomic research to advance the understanding and models of environmental influences. Our aim is now to outline the steps that we believe are essential to bring about an effective research agenda for the environome.

A first challenge—having the technical tools available to capture the environome—is under way, although it is far from being complete. The environome comprises an infinite number of dynamic processes, whose assessment requires robust technologies that enable collecting precise, in-depth observations at multiple time points with little measurement error (Wild, 2012). Although assessment technologies have rapidly improved in recent years, capturing even one individual’s environome in its totality remains impossible to date (Roy et al., 2009).

The second challenge is to develop the computational methods required for modeling these rich data, for example using machine-learning approaches such as data mining and cluster analysis. This challenge is not specific to studies of the environome but shared with analyses of the genome. Although current GWA studies already incorporate a vast number of SNPs, they typically include only a fraction of the potentially available genomic information (Wainschtein et al., 2019). Another parallel between genome and environome suggests itself here: GWA studies currently consider only additive effects of SNPs, although interactions are plausible. Likewise, environmental factors are likely to involve interactive effects between each other in addition to additivity and collinearity. We predict that statistical advances in genomics will prevail at a fast pace and that they will be applicable not only to the genome but also to studies of the environome.
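The additive, many-small-effects model that the authors carry over from GWA studies can be made concrete with a toy simulation: thousands of elements (DNA variants or, by analogy, environmental exposures) each nudge a trait slightly, and only their aggregate is visible. All counts and effect sizes below are arbitrary assumptions:

```python
# Toy simulation of the many-small-effects model: thousands of elements
# (DNA variants or, by analogy, environmental exposures), each with a
# tiny additive effect, jointly produce substantial individual
# differences. All counts and effect sizes are arbitrary assumptions.
import random

random.seed(0)
N_ELEMENTS, N_PEOPLE = 2_000, 500

betas = [random.gauss(0, 0.01) for _ in range(N_ELEMENTS)]  # tiny effects

scores = []
for _ in range(N_PEOPLE):
    exposures = [random.randint(0, 2) for _ in range(N_ELEMENTS)]
    additive = sum(b * x for b, x in zip(betas, exposures))
    scores.append(additive + random.gauss(0, 0.3))          # residual noise

mean = sum(scores) / N_PEOPLE
sd = (sum((s - mean) ** 2 for s in scores) / N_PEOPLE) ** 0.5
print(f"trait SD across people: {sd:.2f}")
# No single element is detectable without enormous samples, yet in
# aggregate they generate stable, measurable individual differences.
```

This also shows why interaction effects are so hard to detect: with thousands of elements, the number of candidate pairwise interactions grows quadratically while each individual effect stays tiny.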

The third challenge is to develop a theoretical framework for organizing and modeling the environome and its influence on complex traits. We anticipate that this challenge can be met only through large-scale collaborations, akin to the consortia that dominate contemporary genetic research, such as the Psychiatric Genomics Consortium (https://www.med.unc.edu/pgc/), which focuses on mental health issues, or the Social Science Genetic Association Consortium (https://www.thessgac.org/), which targets social science outcomes, as its name suggests. These and other consortia like them typically involve hundreds of researchers and organizations that engage in interdisciplinary collaborations and pool data across biobanks, population cohort studies, and independent samples. They offer extraordinary opportunities for scientific breakthroughs: The majority of the recent discoveries about the role of genetic influences on people’s differences in psychological traits emerged on the back of the work completed in consortia. For modeling the environome, longitudinal population cohort studies, which are typically defined by the year or decade of the cohort members’ birth and by the geographical scope from which they were recruited, will be of particular value (Cave & von Stumm, 2020). For one, longitudinal cohort studies can elucidate at least some of the environome’s dynamic changes that occur across people’s life span because cohort members are repeatedly assessed over time, including observations of the prenatal environment in some cases. For another, population cohort studies are key to exploring the environome’s socio-historical development across generations—in other words, how the environmental experiences of today’s children differ from their parents’ and grandparents’ environmental experiences.

Rather than creating new consortia or shifting attention away from existing ones, we suggest broadening their scope to also pool data and expertise on the environome. Akin to the HapMap Project, a first step for a systematic research program into the environome would call for charting the breadth of environments that humans experience. A bottom-up approach, for example by creating comprehensive archives of environmental measures that are available across biobanks, population cohort studies, and independent samples, has some appeal. The alternative top-down approach would involve developing a theoretical taxonomy that could be applied to categorize observations of environments, including those already collected in previous studies, and then be subjected to empirical validation. An encouraging example is the DIAMONDS taxonomy that proposes eight dimensions to classify psychological situations by the extent to which they pertain to duty (i.e., something has to be done), intellect (i.e., learning opportunity), adversity (i.e., threat), mating (i.e., sexually charged), positivity (i.e., playfulness), negativity (i.e., stress), deception (i.e., sabotage), and sociality (i.e., social interaction; Rauthmann et al., 2014). Although the DIAMONDS taxonomy has to date been applied to only a select number of contexts and is fairly abstract, its theoretical framework may inspire analogous models for describing the environome.

GWA studies serve to identify genetic predictors of developmental differences in psychological traits, but they currently offer little value for elucidating the causality that underlies this prediction (Belsky & Harden, 2019). Likewise, the framework we proposed here for modeling the environome focuses on prediction. It is not suited to identifying the functional or causal mechanisms that explain why certain environmental conditions benefit phenotypic development more than others. Although not always appreciated, accurate prediction of psychological traits is immensely precious in itself because it enables identifying risk and resilience before problems manifest. In addition, a better understanding of the environome will help generate hypotheses that in the future can facilitate direct tests of causality, akin to current endeavors in functional genomics that try to make sense of gene and protein functions and interactions.

Participants made less prosocial decisions (i.e., became more selfish) in different-gender avatars, regardless of their own sex; women embodying a male avatar were more sensitive to temptations of immediate rewards

Bolt, Elena, Jasmine Ho, Marte Roel, Alexander Soutschek, Philippe N. Tobler, and Bigna Lenggenhager. 2021. “How the Virtually Embodied Gender Influences Social and Temporal Decision Making.” PsyArXiv. March 1. doi:10.31234/osf.io/84v9n

Abstract: Mounting evidence has demonstrated that embodied virtual reality, during which physical bodies are replaced with virtual surrogates, can strongly alter cognition and behavior even when the virtual body radically differs from one’s own. One particular emergent area of interest is the investigation of how virtual gender swaps can influence choice behaviors. Economic decision making paradigms have repeatedly shown that women tend to display more prosocial sharing choices than men. To examine whether a virtual gender swap can alter gender-specific differences in prosociality, 48 men and 51 women embodied either a same- or different-gender avatar in immersive virtual reality. In a between-subjects design, we differentiated between specifically social and non-social decision making by means of an interpersonal and intertemporal discounting task, respectively. We hypothesized that a virtual gender swap would elicit social behaviors that stereotypically align with the gender of the avatar. To relate potential effects to changes in self-perception, we measured implicit and explicit gender identification, and used questionnaires that assessed the strength of the illusion. Contrary to our hypothesis, our results show that participants made less prosocial decisions (i.e., became more selfish) in different-gender avatars, independent of their own biological sex. Moreover, women embodying a male avatar in particular were more sensitive to temptations of immediate rewards. Lastly, the manipulation had no effects on implicit and explicit gender identification. To conclude, while we showed that a virtual gender swap indeed alters decision making, gender-based expectancies cannot account for all the task-specific interpersonal and intertemporal changes following the virtual gender swap.


US: Religiosity decreased over time at a similar rate for the heterosexual and sexual minority groups; spirituality significantly increased over time for the sexual minority group but not for the heterosexual youth

Religious and Spiritual Development from Adolescence to Early Adulthood in the U.S.: Changes over Time and Sexual Orientation Differences. Kalina M. Lamb, Robert S. Stawski & Sarah S. Dermody. Archives of Sexual Behavior, Feb 22 2021. https://link.springer.com/article/10.1007%2Fs10508-021-01915-y

Abstract: Adolescence is a critical time in the U.S. for religious development in that many young people eschew their religious identity as they enter adulthood. In general, religion is associated with a number of positive health outcomes including decreased substance use and depression. The current study compared the developmental patterns of religiosity and spirituality in heterosexual and sexual minority youth. The design was a secondary data analysis of the first five waves of the Longitudinal Study of Adolescent Health and Wellness (N = 337, 71.8% female). Using multilevel linear (for spirituality) and quadratic (for religiosity) growth models, the initial level and change over time in religiosity and spirituality, as well as the correlations between growth processes, were compared between heterosexual and sexual minority individuals. The heterosexual group had significantly higher initial religiosity levels than the sexual minority group. Religiosity decreased over time at a similar rate for the heterosexual and sexual minority groups. Spirituality significantly increased over time for the sexual minority group but not for the heterosexual youth. The change over time in religiosity and spirituality were significantly and positively correlated for heterosexual individuals but were uncorrelated for sexual minority individuals. Results indicate there are differences in religious development based on sexual minority status. Future research should take into account how these differential religious and spiritual developmental patterns seen in heterosexual and sexual minority youth might predict various health outcomes.
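For readers unfamiliar with growth modeling, a generic quadratic multilevel growth specification of the kind the abstract describes can be written as below. This is one standard parameterization, offered only as orientation; the authors' exact model and covariates may differ:

```latex
% Generic quadratic growth model for person i at wave t (one standard
% parameterization; the authors' exact specification may differ):
\begin{aligned}
\text{Religiosity}_{ti} &= \beta_{0i} + \beta_{1i}\,\text{Time}_{ti}
    + \beta_{2i}\,\text{Time}_{ti}^{2} + \varepsilon_{ti} \\
\beta_{0i} &= \gamma_{00} + \gamma_{01}\,\text{SexualMinority}_{i} + u_{0i} \\
\beta_{1i} &= \gamma_{10} + \gamma_{11}\,\text{SexualMinority}_{i} + u_{1i}
\end{aligned}
```

Here \gamma_{01} captures the group difference in initial levels and \gamma_{11} the group difference in rate of change, corresponding to the comparisons reported in the abstract.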


Consensual non-monogamous relationships: Not necessarily less satisfying or less stable, some individuals can experience psychological need fulfillment and satisfying relationships with concurrent partners

Wood J, Quinn-Nilas C, Milhausen R, Desmarais S, Muise A, Sakaluk J (2021) A dyadic examination of self-determined sexual motives, need fulfillment, and relational outcomes among consensually non-monogamous partners. PLoS ONE 16(2): e0247001. https://doi.org/10.1371/journal.pone.0247001

Abstract: Intimate and sexual relationships provide opportunity for emotional and sexual fulfillment. In consensually non-monogamous (CNM) relationships, needs are dispersed among multiple partners. Using Self-Determination Theory (SDT) and dyadic data from 56 CNM partnerships (112 individuals), we tested how sexual motives and need fulfillment were linked to relational outcomes. We drew from models of need fulfillment to explore how sexual motives with a second partner were associated with satisfaction in the primary relationship. In a cross-sectional and daily experience study we demonstrated that self-determined reasons for sex were positively associated with sexual satisfaction and indirectly linked through sexual need fulfillment. Self-determined reasons for sex predicted need fulfillment for both partners at a three-month follow up. The association between sexual motives and need fulfillment was stronger on days when participants engaged in sex with an additional partner, though this was not related to satisfaction in the primary relationship. Implications for need fulfillment are discussed.


Some Implications

Our findings have implications both for intimate and sexual partners wishing to enhance their relationship(s) and clinicians working with CNM and monogamous individuals/couples. Promoting self-determined reasons for engaging in sex could encourage partners to engage in sexual interactions that are more likely to fulfill their needs (e.g., having sex when they are excited about the activity, rather than to avoid conflict). Encouraging partners to explore why they may be having sex for less self-determined reasons, and how they may shift to having sex for more self-determined reasons, is one strategy clinicians can use to promote relational well-being. Clinicians working with CNM partners can also encourage individuals to communicate and express continued affection and desire for established partners when new relationships occur in order to maintain sexual and relationship satisfaction in the primary dyad.

The current research also has implications for individuals in CNM communities. Popular assumptions of romantic relationships position CNM partnerships as less satisfying or less stable compared to monogamous relationships [6,20]. CNM partners in the current research noted high levels of satisfaction and sexual need fulfilment with both their first and second partners. Moreover, a concurrent sexual partnership did not appear to have significant detrimental effects on the first relationship. These findings verify what CNM researchers and advocates have previously emphasized: that for some, CNM relationships are a viable and fulfilling alternative to monogamy, and one of many approaches to encouraging personal growth and fulfillment [49]. These results may help to destigmatize CNM partnerships as they confirm that individuals can experience psychological need fulfillment and satisfying relationships with concurrent partners.

Finally, research on sexual behaviour, and on CNM generally, has been criticized for lacking theoretical frameworks [4,21,42,78]. The current studies contribute to a growing body of research that utilizes social psychological approaches to the study of sexual behaviour and emphasizes the importance of sexuality to relational well-being [36,42,45,52,71]. The research provides a theoretical context within which to understand the associations between sexual motives, need fulfilment, and relational outcomes in relationships where sexual and emotional needs are met by multiple partners, thus expanding the experiences represented in the social psychological literature.

The Origins and Design of Witches and Sorcerers

The Origins and Design of Witches and Sorcerers. Manvir Singh. Current Anthropology, Feb 2021. https://www.journals.uchicago.edu/doi/abs/10.1086/713111

Abstract: In nearly every documented society, people believe that some misfortunes are caused by malicious group mates using magic or supernatural powers. Here I report cross-cultural patterns in these beliefs and propose a theory to explain them. Using the newly created Mystical Harm Survey, I show that several conceptions of malicious mystical practitioners, including sorcerers (who use learned spells), possessors of the evil eye (who transmit injury through their stares and words), and witches (who possess superpowers, pose existential threats, and engage in morally abhorrent acts), recur around the world. I argue that these beliefs develop from three cultural selective processes: a selection for intuitive magic, a selection for plausible explanations of impactful misfortune, and a selection for demonizing myths that justify mistreatment. Separately, these selective schemes produce traditions as diverse as shamanism, conspiracy theories, and campaigns against heretics—but around the world, they jointly give rise to the odious and feared witch. I use the tripartite theory to explain the forms of beliefs in mystical harm and outline 10 predictions for how shifting conditions should affect those conceptions. Societally corrosive beliefs can persist when they are intuitively appealing or they serve some believers’ agendas.

8. Discussion


8.1. The origins of sorcerers, lycanthropes, the evil eye, and witches


Table 5 displays the three cultural selective processes hypothesized to be responsible for shaping beliefs in practitioners of mystical harm. Figure 3 shows how those processes interact to produce some of the malicious practitioners identified in Figure 1 (sorcerers, the evil eye, lycanthropes, and witches).


[Table 5. The three cultural selective schemes responsible for beliefs in practitioners of mystical harm.]


According to the theory outlined here, sorcerers are the result of both a selection for intuitive magic and a selection for plausible explanations. The selection for intuitive magic produces compelling techniques for controlling uncertain outcomes, including rain magic, gambling superstitions, and magic aimed at harming others, or sorcery. Once people accept that this magic is effective and that other people practice it, it becomes a plausible explanation for misfortune. A person who feels threatened and who confronts unexplainable tragedy will easily suspect that a rival has ensorcelled them. As people regularly consider how others harm them, they build plausible portrayals of sorcerers.

Beliefs about werewolves, werebears, weresnakes, and other lycanthropes also develop from a selection for plausible explanations. Baffled as to why an animal attacked them, a person suspects a rival of becoming or possessing an animal and stalking them at night. This explanation becomes more conceivable as the lycanthrope explains other strange events and as conceptions of the lycanthrope become more plausible. Many societies ascribe transformative powers to other malicious practitioners (see Table 3), showing that people also suspect existing practitioners after attacks by wild animals.

Beliefs in the malignant power of stares and words likewise develop to explain misfortune. As reviewed earlier, people around the world connect jealousy and envy to a desire to induce harm. Thus, people who stare with envy or express a compliment are suspected of harboring malice and an intention to harm. A person who suffers a misfortune remembers these stares and suspects those people of somehow injuring them. In regularly inferring how envious individuals attacked them, people craft a compelling notion of the evil eye.

Why suspect the evil eye rather than sorcery? There are at least two possibilities. First, an accused individual may ardently deny knowing sorcery or having attacked the target (see these claims among the Azande, both described in text: Evans-Pritchard 1937:119-125; and shown in film: Singer 1981, minute 21). Alternatively, given beliefs that effective sorcery requires powers that develop with age, special knowledge, or certain experiences, it may seem unreasonable that a young or inexperienced group mate effectively ensorcelled the target. In these instances, the idea that the stare itself harmed the target may provide a more plausible mechanism.

The famous odious, powerful witch, I propose, arises when blamed malicious practitioners become demonized. People who fear an invisible threat or who have an interest in mistreating competitors benefit from demonizing the target, transforming them into a heinous, threatening menace. Thus, witches represent a confluence of two and sometimes all three cultural selective processes.

In Figure 1, I showed that beliefs about malicious practitioners exist along two dimensions. The tripartite theory accounts for this structure. All of the practitioners displayed are plausible explanations of how group mates inflict harm. One dimension (SORCERY-EVIL EYE) distinguishes those explanations of misfortune that include magic (sorcerers) from those that do not (evil eye, lycanthrope). The other dimension shows the extent to which different practitioners have been demonized. In short, all beliefs about harmful practitioners are explanations; sometimes they use magic, sometimes they’re made evil.


8.2. Ten predictions

The proposed theory generates many predictions for how shifting conditions should drive changes in beliefs about malicious practitioners. I referred to several of these throughout the paper. Here are ten (the relevant section is noted where a prediction is discussed in the paper):

1. People are more likely to believe in sorcerers as sorcery techniques become more effective-seeming.
2. People are more likely to ascribe injury to mystical harm when they are distrustful of others, persecuted, or otherwise convinced of harmful intent. (sect. 6.2.1)
3. The emotions attributed to malicious practitioners will be those that most intensely and frequently motivate aggression. (sect. 6.2.1)
4. People are more likely to attribute injury to mystical harm when they lack alternative explanations. (sect. 6.2.2)
5. The greater the impact of the misfortune, the more likely people are to attribute it to mystical harm. (sect. 6.2.2)
6. Practitioners of mystical harm are more likely to become demonized during times of stressful uncertainty.
7. The traits ascribed to malicious practitioners will become more heinous or sensational as Condoners become more trustful or reliant on information from Campaigners.
8. Malicious practitioners will become less demonized when there is less disagreement or resistance about their removal.
9. The traits that constitute demonization will be those that elicit the most punitive outrage, controlling for believability. (sect. 7.2.1)
10. Malicious practitioners whose actions can more easily explain catastrophe, such as those who employ killing magic compared to love magic, will be easier to demonize.

8.3. The cultural evolution of harmful beliefs

Social scientists, and especially those who study the origins of religion and belief, debate over whether cultural traditions evolve to provide group-level benefits (Baumard and Boyer 2013; Norenzayan et al. 2016). Reviving the analogy of society as an organism, some scholars maintain that cultural traits develop to ensure the survival and reproduction of the group (Wilson 2002). These writers argue that traditions that undermine societal success should normally be culled away, while traditions that enhance group-level success should spread (Boyd and Richerson 2010). In this paper, I have examined cultural traits with clear social costs: mystical harm beliefs. As sources of paranoia, distrust, and bloodshed, these beliefs divide societies, breeding contempt even among close family members. But I have explained them without invoking group-level benefits. Focusing on people’s (usually automatic) decisions to adopt cultural traditions, I have shown that beliefs in witches and sorcerers are maximally appealing, providing the most plausible explanations and justifying hostile aims. Corrosive customs recur as long as they are useful and cognitively appealing.


Unpopularity of atheists: National studies (US, Sweden) consistently show that disbelievers (vs. believers) are less inclined to endorse moral values that serve group cohesion (the binding moral foundations)

The amoral atheist? A cross-national examination of cultural, motivational, and cognitive antecedents of disbelief, and their implications for morality. Tomas Ståhl. PLoS One. 2021 Feb 24;16(2):e0246593, doi: 10.1371/journal.pone.0246593

Abstract: There is a widespread cross-cultural stereotype suggesting that atheists are untrustworthy and lack a moral compass. Is there any truth to this notion? Building on theory about the cultural, (de)motivational, and cognitive antecedents of disbelief, the present research investigated whether there are reliable similarities as well as differences between believers and disbelievers in the moral values and principles they endorse. Four studies examined how religious disbelief (vs. belief) relates to endorsement of various moral values and principles in a predominately religious (vs. irreligious) country (the U.S. vs. Sweden). Two U.S. M-Turk studies (Studies 1A and 1B, N = 429) and two large cross-national studies (Studies 2-3, N = 4,193), consistently show that disbelievers (vs. believers) are less inclined to endorse moral values that serve group cohesion (the binding moral foundations). By contrast, only minor differences between believers and disbelievers were found in endorsement of other moral values (individualizing moral foundations, epistemic rationality). It is also demonstrated that presumed cultural and demotivational antecedents of disbelief (limited exposure to credibility-enhancing displays, low existential threat) are associated with disbelief. Furthermore, these factors are associated with weaker endorsement of the binding moral foundations in both countries (Study 2). Most of these findings were replicated in Study 3, and results also show that disbelievers (vs. believers) have a more consequentialist view of morality in both countries. A consequentialist view of morality was also associated with another presumed antecedent of disbelief: analytic cognitive style.

Discussion

The results of this study closely replicated the findings from Study 2. Disbelievers (vs. believers) were once again less inclined to endorse the binding moral foundations in both countries, whereas disbelief explained miniscule amounts of variance in endorsement of the individualizing moral foundations, Liberty/oppression, and moralization of epistemic rationality. Also consistent with results of Study 2, exposure to CREDs was a strong predictor of endorsement of the binding moral foundations in both countries. Going beyond Study 2, higher levels of ACS were associated with lower levels of endorsement of the binding moral foundations. Moreover, once differences in exposure to CREDs and ACS had been controlled for, disbelief (vs. belief) explained only a small amount of variance in endorsement of the binding moral foundations. Thus, results were once again consistent with the notion that exposure to CREDs help explain why believers endorse the binding moral foundations more than disbelievers. This study also provided initial evidence consistent with the idea that differences in cognitive style help explain the negative association between disbelief and endorsement of the binding moral foundations. Results also showed that mentalizing abilities were associated with higher levels of endorsement of the binding moral foundations. Mentalizing abilities were also positively associated with stronger endorsement of the individualizing moral foundations. However, these relationships do not help account for differences in moral values between believers and disbelievers, as mentalizing abilities were unrelated to disbelief.
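The covariate-adjustment logic in this paragraph (the disbelief coefficient on binding-foundations endorsement shrinking once CRED exposure is controlled for) can be reproduced on simulated data. The sketch below is synthetic; none of the numbers come from the paper:

```python
# Synthetic illustration of the covariate-adjustment logic: when CRED
# exposure drives both belief and binding-foundations endorsement, the
# raw disbelief coefficient shrinks once CREDs are controlled for.
# The data are simulated; no numbers come from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2_000
creds = rng.normal(size=n)                              # CRED exposure
disbelief = (rng.normal(size=n) - 0.8 * creds > 0).astype(int)
binding = 0.6 * creds + rng.normal(size=n)              # binding foundations

df = pd.DataFrame({"creds": creds, "disbelief": disbelief, "binding": binding})

raw = smf.ols("binding ~ disbelief", df).fit()
adj = smf.ols("binding ~ disbelief + creds", df).fit()
print(f"disbelief coefficient, unadjusted:    {raw.params['disbelief']:+.2f}")
print(f"disbelief coefficient, CRED-adjusted: {adj.params['disbelief']:+.2f}")
```

Because the simulated CRED exposure causes both outcomes, adjustment absorbs most of the raw group difference, mirroring the pattern the authors report.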

Disbelievers (vs. believers) were also more inclined to endorse consequentialist thinking, an association that was slightly stronger among Swedes (vs. Americans). Finally, there was some evidence consistent with the notion that differences in cognitive style help account for why disbelievers (vs. believers) are more inclined to endorse a consequentialist view of morality, as ACS was positively associated with consequentialist thinking, and also accounted for a small part of the difference in consequentialist thinking between disbelievers and believers. However, disbelief remained a stronger predictor of consequentialist thinking than ACS, mentalizing and CREDs. Thus, there appear to be additional unknown factors that contribute to higher levels of consequentialist thinking among disbelievers (vs. believers).

General discussion

Although cross-cultural stereotypes suggest that atheists lack a moral compass [8], no studies to date have systematically examined to what extent disbelievers’ and believers’ conceptualizations of morality are distinct. The purpose of the present research was to fill that gap. Based on current theorizing in moral psychology, I identified a highly plausible difference in moral values between believers and disbelievers. Specifically, it has been proposed that a core function of religion is to create highly cohesive communities that serve to promote cooperation and to prevent free-riding as societies increase in size [21, 76, 77]. Based on Moral Foundations Theory, it has further been proposed that religious communities generally promote moral values that serve group cohesion [21]. Based on this analysis it was hypothesized that disbelievers (vs. believers) should be less inclined to endorse the binding moral foundations. Across four studies, strong support was found for this prediction. In two U.S. M-Turk samples (Studies 1A and 1B), as well as two large cross-national samples (Studies 2–3), disbelievers were considerably less inclined than believers to endorse the binding moral foundations. Notably, this was the case in a country where religious belief is the norm (the U.S.) as well as in one of the most secular countries in the world (Sweden). Thus, the negative association between disbelief and endorsement of the binding moral foundations was independent of the normative status of religious belief in society. Moreover, this association was only slightly weakened when controlling for socially desirable responding (Study 1B), and remained robust (albeit weaker) when controlling for political orientation (all four studies). By contrast, and again consistent across all studies, disbelief (vs. belief) only explained miniscule amounts of variance in endorsement of moral values that serve to protect vulnerable individuals (the individualizing moral foundations) and Liberty/oppression. Furthermore, disbelief (vs. belief) was unrelated to amorality (Studies 1A, 1B), and explained only a miniscule amount of variance in moralization of epistemic rationality (Studies 2–3).

The moral psychology literature also suggests that disbelievers are more inclined than believers to form moral judgments on the basis of the consequences of specific actions compared to inaction [57, 59]. This finding was replicated in Study 3. American as well as Swedish disbelievers (vs. believers) were indeed more inclined to rely on consequentialist thinking, an association that was slightly stronger among Swedes. In short, the present four studies suggest that believers and disbelievers share similar moral concerns about protecting vulnerable individuals, Liberty/oppression, and about being epistemically rational. They also score equally low on amorality. However, disbelievers (vs. believers) are (1) considerably less inclined to endorse moral values that promote group cohesion, but (2) more inclined to form moral judgments about harm based on the specific consequences of actions.

Explaining moral disagreements between believers and disbelievers

In addition to mapping the similarities and differences in the moral values and principles endorsed by disbelievers and believers, a second aim of the present research was to explore where moral psychological differences between these groups may stem from. A set of hypotheses were derived based on an integration of insights regarding cultural, (de)motivational and cognitive antecedents of disbelief [27] and current theorizing in moral psychology. First, many CREDs not only signal the credibility and importance of specific religious beliefs, but also of the broader religious community. Based on this observation it was proposed that lower exposure to CREDs may contribute to weaker endorsement of the binding moral foundations among disbelievers (vs. believers). Second, based on a substantial literature linking various existential threats to increased group cohesion [34, 51, 54], it was proposed that lower existential threat may also contribute to weaker endorsement of the binding moral foundations among disbelievers (vs. believers). Third, based on some recent evidence linking an intuitive cognitive style (low ACS) to disgust sensitivity and endorsement of the binding moral foundations [55], it was proposed that high ACS may also contribute to lower endorsement of the binding moral foundations among disbelievers (vs. believers).

Previous research on the moral psychological consequences of analytic thinking [57, 60, 63] also led to the prediction that ACS should account for disbelievers’ (vs. believers’) higher inclination to rely on consequentialist moral reasoning. Finally, and more speculatively, it was proposed that ACS may also help account for disbelievers’ stronger inclination to moralize epistemic rationality.

The present research provided support for all of these predictions but one. First, results supported the predictions that CREDs, existential threat, and cognitive style should help explain differences between believers and disbelievers in their level of endorsement of the binding moral foundations. Low (vs. high) exposure to CREDs (Studies 2 & 3), low (vs. high) existential threat (Study 2), and high (vs. low) ACS (Study 3) in large part accounted for the weaker endorsement of the binding moral foundations observed among disbelievers (vs. believers) in both countries. Results also provided some support for the prediction that differences in cognitive style help explain why disbelievers (vs. believers) are more inclined to rely on consequentialist moral reasoning (Study 3). Specifically, high (vs. low) ACS contributed slightly to disbelievers’ (vs. believers’) stronger reliance on consequentialist moral reasoning. However, no support was found for the prediction that differences in cognitive style contribute to differences between disbelievers and believers in their inclination to moralize epistemic rationality, as disbelief (Studies 2–3) and ACS (Study 3) explained only a miniscule amount of variance in moralized rationality.

Theoretical and practical implications

The present findings paint a clear and consistent picture of how disbelievers in the U.S. and in Sweden conceptualize morality, and provide some initial evidence of the processes that may explain why they view morality in this way. First and foremost, this data suggests that the cross-cultural stereotype of atheists as lacking a moral compass is inaccurate. In fact, the data suggests that disbelievers share many of the moral values endorsed by believers, and they score equally low on amoral tendencies. Notably, these findings are consistent with, and complement, experience sampling research showing that disbelievers and believers engage in (self-defined) moral and immoral behaviors at the same rate [13]. At the same time, the present research also suggests that disbelievers have a more constrained view of morality than believers do, in that they are less inclined to endorse the binding moral foundations, and more inclined to judge the morality of actions that inflict harm on a consequentialist, case-by-case basis. It is worth noting that moral character evaluations of people who make decisions based on consequentialist (vs. rule-based) principles are more negative, because consequentialists are perceived as less empathic [78]. In the light of such findings it seems plausible that atheists’ inclination to rely on consequentialist principles, along with their weak endorsement of the binding moral foundations, may to some degree have contributed to their reputation as lacking in moral character.

The present findings also have implications for our understanding of where disbelievers’ moral values and principles stem from. Atheism merely implies the absence of religious belief, and says nothing about what positive beliefs the disbeliever holds. I therefore argue that disbelief itself should contribute little to the endorsement of moral values and principles. Disbelief may contribute to the rejection of specific moral rules that stem from, or are closely associated with, religion (e.g., “You should not eat pork”, “you should not work on the Sabbath”). However, it seems implausible that disbelief itself causes people to adopt certain moral principles (e.g., consequentialism), or to discard broad classes of values as irrelevant for morality (e.g., the binding moral foundations). Instead, it has been argued in the present article that unique features of disbelievers’ morality may stem from the cultural, (de)motivational, and cognitive antecedents of their disbelief. The present studies provided support for this line of reasoning, as three of the four presumed antecedents of disbelief examined were associated with the moral profile of disbelievers. Low exposure to CREDs, low levels of existential threat, and high ACS were associated with the weaker endorsement of the binding moral foundations observed among disbelievers (vs. believers). Furthermore, high ACS, and low exposure to CREDs were (slightly) associated with the higher levels of consequentialist thinking observed among disbelievers (vs. believers). Having said that, however, it is important to note that the present data was correlational in nature. As a consequence, conclusions cannot be drawn about the causal role of these presumed antecedents of disbelief in the process of moral value acquisition and rejection. I will return to this point below.

Limitations and suggestions for future research

The present research constitutes the first systematic examination of the moral values and principles endorsed by disbelievers, and the processes through which they may be acquired. As such it represents an important first step in the process of explaining the relationship between disbelief and morality. However, the studies reported in this article have a number of limitations that should be addressed in future studies. First and foremost, the present studies relied exclusively on self-reported moral values and principles. Although this approach provided important insights regarding how disbelievers (vs. believers) think about morality, future studies should examine what the behavioral consequences are of these similarities and differences in moral values and principles. Because moral stances are often particularly strong predictors of behavior, there are reasons to believe that the differences in moral values and principles between disbelievers and believers observed in the present studies could have important behavioral implications in various domains. However, it should also be noted that religious belief is positively associated with reputational concerns [23], which can also promote prosocial behavior [24, 25]. It therefore seems plausible that moral values and principles may serve as somewhat weaker predictors of behavior among disbelievers (vs. believers), at least when other people are present [11, see also 79].

Another limitation of the present research is the exclusive reliance on cross-sectional data. Although the results are consistent with the notion that presumed antecedents of disbelief help explain why disbelievers’ (vs. believers’) views of morality differ, the cross-sectional designs employed in these studies do not allow for conclusions about causal relationships. Future studies should further examine these relationships using methods that enable causal conclusions. In particular, confidence in the current interpretation of these results would increase considerably with corroborating evidence from longitudinal and experimental studies.

The main purpose of the present research was to examine how people who do not believe in God think about morality, and to compare their views about morality with those of people who do believe in God. Participants in the two large cross-national surveys were therefore explicitly asked whether they believed in God or not, and those who reported not knowing whether they were believers or not were dropped. Although this classification may seem simplistic, it served to ensure that comparisons were made between people who did (vs. did not) believe in God. These studies also included a continuous measure of belief (vs. disbelief) strength. However, because scores on this measure deviated drastically from normality, and because analyses using it did not substantially alter any of the results reported in this article, those analyses were reported elsewhere (see S1 Text). It is also worth noting that results were strikingly similar when using a continuous measure of level of religiosity (Studies 1A-1B). The consistent results across different measures indicate that the present findings are not an artefact of a particular method of assessing religious disbelief (vs. belief). That said, however, future studies should expand on the present studies and examine the extent to which differences between disbelievers’ and believers’ views about morality vary as a function of the kind of religious beliefs that people endorse.

Related to the point above, although it is a notable strength of the present research that similar results were obtained across multiple studies, using large samples from two countries in which the populations differ dramatically in disbelief (vs. belief), the U.S. and Sweden are nonetheless relatively similar countries in many respects. For example, religious believers are predominantly Christian in both countries, and both cultures are WEIRD (i.e., Western, Educated, Industrialized, Rich, Democratic). Future research is needed to determine whether the moral psychological similarities and differences observed between disbelievers and believers hold up in non-WEIRD cultures, and in cultures where other religious beliefs are the norm.

Sunday, February 28, 2021

Rolf Degen summarizing... Atheism is as much a part of the original state of the human mind as is belief in god

New Cognitive and Cultural Evolutionary Perspectives on Atheism. Thomas J. Coleman III, Kyle Messick, Valerie van Mulukom. In The Routledge Handbook of Evolutionary Approaches to Religion, January 2021. https://psyarxiv.com/ze5mv

Rolf Degen's take: Atheism is as much a part of the original state of the human mind as is belief in god

1. Introduction 

Atheism is a topic that has only recently attracted the attention of evolutionarily minded scholars. In this chapter, we will present the current issues with the study of atheism from an evolutionary perspective. 

Attempts to place atheism into an evolutionary framework have followed a methodological direction that, we argue, may have stymied inquiry thus far: the idea that the best starting place to develop an explanation of atheism is by building on explanations of theism (e.g., Barrett, 2004, 2010; Bering, 2002, 2010; Johnson, 2012; Kalkman, 2013; Norenzayan & Gervais, 2013; Mercier, Krammer, & Shariff, 2018). Under this view, atheism is situated at the low end of a psychological continuum of religiosity and/or is a result of malfunctioning cognitive capacities that, if working normally, would produce religious belief (cf. Caldwell-Harris, 2012; Weekes-Shackelford & Shackelford, 2012). Thus, this stance assumes a priori that humans evolved to become homo religiosus (the idea that humans are inherently god believing creatures) and implies that atheists are either psychological deviants or closet believers (Coleman & Messick, 2019; Shook, 2017). Moreover, this view entails the idea that atheism is an empty signifier and individual atheists are therefore defined by the beliefs or psychological processes that they lack, rather than the ones they have. The problem for this perspective is: How can the absence of something(s) be linked to our evolved psychological endowment? Under this view, the possibility that atheism might be produced, in part, by its own set of mechanisms (and not just a reversal of “theistic cognition”), or be evolutionarily adaptive, would remain unexplored.

In this chapter, we explore atheism—in its broadest sense—as a product of our evolved species-typical psychology. We build on past scholarship and research, whilst also taking this in several new directions. First, we argue that atheism can be defined in “positive” terms, and then we link this definition to evolved psychological mechanisms. This allows us to explore the phylogeny of atheism, including the possibility that our ancestors exhibited atheistic beliefs. Second, informed by evolutionary psychology, we review the ontogeny of atheism, as well as discussing the development of theistic cognition. Third, we review several adaptive and nonadaptive evolutionary hypotheses for atheism developed by Johnson (2012) and use new evidence to argue in favor of atheism as an adaptive worldview. Fourth, we reflect on the limited ability of existing biophysiological studies to inform current understandings of atheism. In closing, we further extrapolate advantages of this approach, as well as some potential limitations, and discuss future directions for research. Our overall aim is to spark renewed discussion for possible evolutionary perspectives on atheism.


6. The functionally adaptive explanation for atheism

In traditional evolutionary arguments, functional and adaptive traits are carried down to future generations through natural selection (or analogous processes operating at the cultural level; Laland & Brown, 2011). Adaptive traits help with the survival and success of a species. Citing numerous studies suggesting religiosity confers multiple beneficial outcomes, including coping with stress, increasing social relatedness, facilitating social coordination, reducing death anxiety, and increasing psychological well-being and meaning in life, a group of researchers has consistently argued that religion should be considered an adaptive trait (Johnson, 2012, 2016; Laurin, 2017; Norenzayan et al., 2016; Sosis & Alcorta, 2003; Wilson, 2002; Wood & Shaver, 2018)4. Far less attention, however, has been given to the possibility that atheism might be similarly adaptive (although see Szocik & Messick, 2020; Messick, Szocik, & Langston, 2019; Shults, 2018), and it is only recently that evidence has accumulated in support of this position, through a broader set of mechanisms than those found with religion.

Dominic Johnson (2012) has proposed ten evolutionary hypotheses for the emergence of atheism. Three are non-adaptive hypotheses (no variation, natural variation, unnatural variation), positing either that there are no real atheists (because everyone has some level of implicit or explicit belief in supernatural agency), that atheists result from a natural distribution of belief, or that a variety of life circumstances can give rise to atheism. The latter two hypotheses essentially cast atheism as a byproduct, and thus not as an adaptation. The remaining seven explanations, Johnson (2012) suggests, are adaptive at either the individual or the group level.

Note 4: There are several reasons to be skeptical of religion as an adaptation, ranging from religion's incoherence as a trait that could be selected (Richerson & Newson, 2008) to an overestimation of "the degree to which ostensive benefits would be sufficient to permit natural selection to systematically favor religious variants over nonreligious ones" (Kirkpatrick, 2006, p. 167).

One of the adaptive hypotheses is the exploitation hypothesis, which claims that atheism is adaptive for individuals only when they hold positions of power. It builds on Karl Marx's claim that religion functions as a tool for the elite to control "the masses": the figureheads of a society can exploit religious belief among their subjects to increase their own power, wealth, and status. In its strongest form, the hypothesis assumes that most atheists were, or are, socio-political elites who have made a "Machiavellian calculation" (p. 59) that their own level of belief matters only to the extent that they can exercise control over their lower-status religious adherents. The ecological contingency hypothesis likewise posits that atheism, like theism, can be adaptive at the individual level, but only in certain settings, since some traits are environment- and context-dependent. For example, this hypothesis assumes that atheists are disposed to a type of rationalist thinking that is more likely to flourish in times of abundance and peace, whereas the adaptive components of religious belief are costly and more likely to flourish in times of scarcity and warfare. The atheism-is-a-religion hypothesis views atheism as functionally equivalent to religion: atheism, as a shared belief and collection of values, can confer the functional benefits associated with religion. This hypothesis is expanded upon in the next section. The final individually adaptive hypothesis is the frequency dependency hypothesis, which builds on evolutionary game theory: coexisting traits, such as belief and nonbelief, can each be maintained when the payoff to each depends on how common it is. In other words, this hypothesis assumes that atheists can receive the benefits of religion without believing, as long as atheism is not overly common.
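Stated compactly in the language of evolutionary game theory (a textbook formulation of frequency-dependent selection, not Johnson's own model): if $w_A(p)$ and $w_B(p)$ denote the fitness of atheists and believers when atheists are at frequency $p$, the two types coexist stably at an interior frequency $p^*$ where

\[ w_A(p^*) = w_B(p^*), \qquad w_A'(p^*) < w_B'(p^*), \]

that is, atheists out-compete believers only while rare ($w_A(p) > w_B(p)$ for $p < p^*$) and lose the advantage once common. The frequency dependency hypothesis amounts to the claim that nonbelief sits on such a curve.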

Finally, Johnson proposes three theories that he claims explain how atheism can be adaptive at the group level: 1) as a catalyst that facilitates the adaptive advantages of belief, 2) by bolstering religious belief as a reaction to skepticism, and 3) through atheists' skepticism of religious doctrine, which leads the religious to 'tone down' their doctrine to make it seem more credible. As Johnson (see 2012, p. 65) himself notes, these explanations cast atheism as beneficial for believers, but without clear benefits to the atheists themselves. In other words, the existence of individual atheists is a non-adaptive (but not maladaptive) byproduct of religion having been selected at the group level. It is not clear why Johnson labels these as adaptive hypotheses for the group-level selection of atheists, as the position seems to confuse what he argues is selected at the group level (i.e., religion) with what he argues are the adaptive benefits of atheism at the individual level (i.e., rationality).

Of the ten theories offered by Johnson (2012), we argue that explanations of atheism as a fluke, a byproduct of religious adaptations, or a bolster for them do not sufficiently account for why atheism persists and how it functions. The next section outlines two perspectives in support of this idea: we argue, first, that atheism can be adaptive in ways similar to religious belief, and second, that atheism becomes more prominent when the adaptiveness of religious belief is rendered obsolete or redundant by secular societal mechanisms. Both explanations give credence to the functional/adaptive explanation for why atheism exists while recognizing atheism as a phenomenon comparable to religion, rather than a side-effect of it.

The average American user is increasingly integrating politics into their social identity, adding political words to describe themselves; they are more likely to describe themselves by their political affiliation than their religious one

Using Twitter Bios to Measure Changes in Self-Identity: Are Americans Defining Themselves More Politically Over Time? Nick Rogers; Jason J. Jones. Journal of Social Computing, Volume 2, Issue 1, March 2021. DOI: 10.23919/JSC.2021.0002

Abstract: Are Americans weaving their political views more tightly into the fabric of their self-identity over time? If so, then we might expect partisan disagreements to continue becoming more emotional, tribal, and intractable. Much recent scholarship has speculated that this politicization of Americans' identity is occurring, but there has been little compelling attempt to quantify the phenomenon, largely because the concept of identity is notoriously difficult to measure. We introduce here a methodology, Longitudinal Online Profile Sampling (LOPS), which affords quantifiable insights into the way individuals amend their identity over time. Using this method, we analyze millions of “bios” on the microblogging site Twitter over a 4-year span, and conclude that the average American user is increasingly integrating politics into their social identity. Americans on the site are adding political words to their bios at a higher rate than any other category of words we measured, and are now more likely to describe themselves by their political affiliation than their religious affiliation. The data suggest that this is due to both cohort and individual-level effects.
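To make the LOPS measurement concrete before the authors' discussion below, here is a minimal sketch of the kind of keyword-prevalence computation the paper describes: the fraction of sampled bios containing at least one keyword from a category, compared across years. The keyword sets, sample bios, and the `category_rates` helper are hypothetical stand-ins of our own, not the authors' code; the paper's actual lists are far longer.

```python
import re
from collections import Counter

# Hypothetical, abbreviated keyword sets; the paper's actual lists are much longer.
POLITICAL = {"democrat", "republican", "liberal", "conservative", "maga", "resist"}
RELIGIOUS = {"christian", "catholic", "muslim", "jewish", "atheist", "buddhist"}

def category_rates(bios):
    """Return the fraction of bios mentioning at least one keyword per category."""
    hits = Counter()
    for bio in bios:
        tokens = set(re.findall(r"[a-z']+", bio.lower()))
        if tokens & POLITICAL:
            hits["political"] += 1
        if tokens & RELIGIOUS:
            hits["religious"] += 1
    return {cat: hits[cat] / len(bios) for cat in ("political", "religious")}

# Toy cross-sections. Note that "Yes We Can" is political but escapes the list,
# an instance of the "trendy keyword" limitation the authors discuss below.
bios_2015 = ["Dad. Coffee addict. Christian.", "Yes We Can!", "Runner and dreamer."]
bios_2019 = ["Proud Democrat. Dad.", "MAGA all the way", "Runner and dreamer."]
print(category_rates(bios_2015))  # -> political 0.00, religious ~0.33
print(category_rates(bios_2019))  # -> political ~0.67, religious 0.00
```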

6 Discussion and Conclusion

To the extent that a person's Twitter bio is a valid measure of their sense of identity, Americans are defining themselves more saliently by their politics. This is important, because the formation of a group identity tends to change individual behavior in powerful ways. Through the phenomenon of "group polarization", people who begin with vague, weakly-held opinions tend to become more radical and dogmatic when put into like-minded groups. They also quickly develop hostile feelings towards outgroup members. Rational, evidence-based dissent tends to lose effectiveness within such groups, and in fact makes group members even more invested in their original opinion.

To what may this increase in the prevalence of political group identity be attributed? Is a more politically engaged set of people joining Twitter for the first time, making the aggregate site more political than it was in prior years? Or are existing Twitter users amending their profiles to add a political keyword where formerly there was none? In other words, is this a generational/cohort effect, or is change occurring within individual identities? As our data reveal: both. Comparison between the cross-sectional and the longitudinal data suggests that (1) new entrants are more politically oriented than the older participants they are joining or replacing, and (2) individual people are amending their identities to be more political.

This dual nature of the phenomenon, as well as the effects it is likely to produce, portends a national polarization that is more likely to deepen than subside in the short term. As Americans define themselves increasingly by their political allegiances, their feelings towards political "others" can be expected to become more negative, and debate on matters of policy will become more emotional and intractable.

Traditionally, a solution to the problem of tribalism has been found in the concept of "superordinate goals". Rival groups can put aside their perceived zero-sum differences when presented with a shared obstacle that requires cooperation to surmount. In the Robbers Cave experiment, the Rattlers and the Eagles were able to work together, and even form intergroup friendships, once they were presented with obstacles that required cooperation for shared benefit[37]. Particular to our political context, some experimental research has suggested that priming a national identity (American) can mitigate partisan bias[38]. The attacks of September 11, 2001, for example, led to a period of bipartisan focus on international terrorism. Yet in the current political climate, such agreed-upon goals seem rare. Democrats and Republicans seem to diagnose distinct social maladies, unable to agree even on shared definitions of problems.

Limitations and future inquiry.

Although we believe our method provides a useful, digital-age measure of individual identity, similar to the seminal Twenty Statements Test, there are imperfections worth noting. First is the potential influence of "bots". It is well-established that Russian intelligence sought in 2016, and continues to seek, to influence American political discourse through the creation of social media accounts that pose as American users and spread divisive (and often fabricated) political content[39]. It is conceivable that our documented increase in the prevalence of political keywords in bios is partially attributable to a growing number of these bots. However, our best evidence suggests any such influence is minimal. To investigate this possibility, we tested random subsamples of our data using "Botometer", an automated tool that detects "bot" accounts. Almost all accounts received low scores. The mean for accounts in the longitudinal sample was 0.6 on a scale of 0 (probably not a bot) to 5 (probably a bot). The growth rate of bot-like accounts fluctuated across our study period and could not account for the increases in political identity reported here. A full account of this analysis is included as the Appendix.

A second concern: Are our findings generalizable to the American general public, or is the politicization specific to Twitter users? To be sure, a sample of Twitter users is not the same as a random sample of Americans. In a recent study by Pew Research Center[40], Twitter users were found to be younger, wealthier, and more educated than the United States at large. They are also modestly more liberal and more likely to say they voted in the last election. So it is conceivable that Twitter users are also more likely than the general population to adopt political identities. More data would be necessary to resolve this ambiguity. But we think that a general politicization of social identity is consistent with the other measures of politicization that we referenced in Section 1—voter turnout, affective polarization, cultural sorting, and so on. Further, our sampling method samples tweets rather than users. Users who do not tweet—who may have an account only to receive information or send direct messages—are thus not observed. These users may be systematically different from our sample of users who do tweet, and the present method cannot speak to whether their self-identification is changing.

A third issue is the construction of our lists of keywords. We were sensitive to the possibility that certain "trendy" keywords could increase in prevalence not because individuals are defining themselves more politically, but because the keywords themselves are becoming more popular, supplanting "outdated" keywords that are not in our lists. For example, a hypothetical Twitter user might have had an Obama-supportive "Yes We Can" phrase in their bio in 2015, but swapped it out in 2016 for a "Nasty Woman" reference to Hillary Clinton. Because the former phrase is not in our list, and the latter is, our method would give the misleading impression that the user had "politicized" their bio, when in fact it was political all along. We considered a number of methods that might limit the subjectivity of that process. We searched for an adequate pre-existing keyword set, to no compelling avail. We analyzed the Twitter bios of several dozen popular political figures to see what descriptors they commonly employed. To our surprise, these individuals rarely used words in their bios that were even implicitly partisan. We contemplated various natural language processing techniques, such as harvesting frequently used words from political hotbeds like Reddit's r/politics subreddit. But ultimately we concluded that the utility of such methods would be outweighed by their drawbacks and complications. Future research may build upon these results by constructing more comprehensive (or more selective) banks of keywords.

It would also be fruitful to expand upon these descriptive data and incorporate more layered analyses. With demographic information on our Twitter users, for example, we could model which characteristics are most correlated with changes in political identity. We could also analyze the users' tweets over time (rather than merely their bios) to see what sorts of rhetoric tend to portend or reflect a recent change of identity. Continued inquiry on the matter is important: it is crucial to understand the dynamics underlying American political polarization. The stability of a people depends on some sense of unifying solidarity. Without it, order is imperiled and chaos invited.
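For readers who want to replicate the bot-score audit, the sketch below shows how such a check might look using the `botometer` Python package (the public client for the Botometer service). It is a minimal illustration under stated assumptions, not the authors' pipeline: the credentials are placeholders, the account list is hypothetical, and the `display_scores` field layout follows the Botometer v4 response format.

```python
# Minimal sketch of a bot-score audit with the `botometer` PyPI package.
# Placeholders throughout; response fields assume the Botometer v4 schema.
import botometer

twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="...",  # key for the Botometer Pro endpoint on RapidAPI
    **twitter_app_auth,
)

accounts = ["@example_user1", "@example_user2"]  # hypothetical subsample

scores = []
for screen_name, result in bom.check_accounts_in(accounts):
    if "error" in result:  # suspended/protected accounts return an error entry
        continue
    # Display scores run 0 (probably human) to 5 (probably bot);
    # 'universal' is the language-independent model.
    scores.append(result["display_scores"]["universal"]["overall"])

if scores:
    print(f"mean bot score (0-5 scale): {sum(scores) / len(scores):.2f}")
```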

 

Individual Valuing of Social Equality in Political and Personal Relationships: We will sometimes prioritize equality over competing values, but the weight of social equality diminishes when moving from personal to political cases

Individual Valuing of Social Equality in Political and Personal Relationships. Ryan W. Davis & Jessica Preece. Review of Philosophy and Psychology, Feb 28 2021. https://rd.springer.com/article/10.1007/s13164-021-00527-8

Rolf Degen's take: Social equality matters more to people in their personal relationships than in the realm of politics

Abstract: Social egalitarianism holds that individuals ought to have equal power over outcomes within relationships. Egalitarian philosophers have argued for this ideal by appealing to (sometimes implicit) features of political society. This way of grounding the social egalitarian principle renders it dependent on empirical facts about political culture. In particular, egalitarians have argued that social equality matters to citizens in political relationships in a way analogous to the value of equality in a marriage. In this paper, we show how egalitarian philosophers are committed to psychological premises, and then illustrate how to test the social egalitarian’s empirical claims. Using a nationally representative survey experiment, we find that citizens will sometimes prioritize equality over competing values, but that the weight of social equality diminishes when moving from personal to political cases. These findings raise questions for thinking about how to explain the normative significance of social equality.


Recent research on sexual orientation and sexual fluidity illustrates distinctions among subtypes of same-gender sexuality (such as mostly-heterosexuality, bisexuality, and exclusive same-gender experience)

The New Genetic Evidence on Same-Gender Sexuality: Implications for Sexual Fluidity and Multiple Forms of Sexual Diversity

The New Genetic Evidence on Same-Gender Sexuality: Implications for Sexual Fluidity and Multiple Forms of Sexual Diversity. Lisa M. Diamond. The Journal of Sex Research, Feb 23 2021, https://doi.org/10.1080/00224499.2021.1879721

h/t David Schmitt: genes associated with ever engaging in same-gender sexual behavior differed from the genes associated with one’s relative proportion of same-gender to other-gender behavior...findings speak to distinctions among subtypes of same-gender sexuality

Abstract: In September of 2019, the largest-ever (N = 477,522) genome-wide association study of same-gender sexuality was published in Science. The primary finding was that multiple genes are significantly associated with ever engaging in same-gender sexual behavior, accounting for between 8% and 25% of variance in this outcome. Yet an additional finding of this study, which received less attention, has more potential to transform our current understanding of same-gender sexuality: Specifically, the genes associated with ever engaging in same-gender sexual behavior differed from the genes associated with one’s relative proportion of same-gender to other-gender behavior. I review recent research on sexual orientation and sexual fluidity to illustrate how these findings speak to longstanding questions regarding distinctions among subtypes of same-gender sexuality (such as mostly-heterosexuality, bisexuality, and exclusive same-gender experience). I conclude by outlining directions for future research on the multiple causes and correlates of same-gender expression.


From 2016... Facial expressions and other behavioral responses to pleasant and unpleasant tastes in cats (Felis silvestris catus)

From 2016... Facial expressions and other behavioral responses to pleasant and unpleasant tastes in cats (Felis silvestris catus). Michaela Hanson et al. Applied Animal Behaviour Science, Volume 181, August 2016, Pages 129-136. https://doi.org/10.1016/j.applanim.2016.05.031

Rolf Degen's take: Science has also documented the “pleasure” face in cats, a relaxed expression with the eyes half closed

Highlights

• Cats display distinct facial expressions to pleasant and unpleasant tastes.

• No masking effect of a pleasant taste on an unpleasant taste was observed.

• Behavioral responses may be more informative than consumption data concerning taste.

Abstract: The goal of the present study was to assess how cats react to tastes previously reported to be preferred or avoided relative to water. To this end, the facial and behavioral reactions of 13 cats to different concentrations of l-Proline and quinine monohydrochloride (QHCl) as well as mixtures with different concentrations of the two substances were assessed using a two-bottle preference test of short duration. The cats were videotaped and the frequency and duration of different behaviors were analyzed. Significant differences in the cats’ behavior in response to the taste quality of the different solutions included, but were not limited to, Tongue Protrusions (p < 0.039), Mouth smacks (p = 0.008) and Nose Licks (p = 0.011) with four different stimulus concentrations. The cats responded to preferred taste by keeping their Eyes half-closed (p = 0.017) for significantly longer periods of time with four different stimulus concentrations compared to a water control. When encountering mixtures containing l-Proline and QHCl the cats performed Tongue protrusion gapes (p < 0.038) significantly more frequently with three different stimulus concentrations compared to an l-Proline control. A stepwise increase in the concentration of l-Proline from 5 mM to 500 mM in mixtures with 50 μM QHCl did not overcome the negative impact of the bitter taste on intake. The results of the present study suggest that behavioral responses provide an additional dimension and may be more informative than consumption data alone to assess whether cats perceive tastes as pleasant or unpleasant. Thus, the analysis of behavioral responses to different taste qualities may be a useful tool to assess and improve the acceptance of commercial food by cats.

Keywords: Behavior; Cat; Felis silvestris catus; Taste reactivity; l-Proline; Quinine monohydrochloride


Shoplifting is common, we eat a few of the customer's fries when delivering food, we overcharge buyers in computer repair shops or fish markets... New study: Field experiments on dishonesty and stealing

Field experiments on dishonesty and stealing: what have we learned in the last 40 years? Hugo S. Gomes, David P. Farrington, Ivy N. Defoe & Ângela Maia. Journal of Experimental Criminology, Feb 27 2021. https://rd.springer.com/article/10.1007/s11292-021-09459-w

Rolf Degen's take: The Price is not Right: In the five field experiments conducted so far, buyers were overcharged for the goods

Abstract

Objectives: Field experiments combine the benefits of the experimental method and the study of human behavior in real-life settings, providing high internal and external validity. This article aims to review the field experimental evidence on the causes of offending.

Methods: We carried out a systematic search for field experiments studying stealing or monetary dishonesty reported since 1979.

Results: The search process resulted in 60 field experiments conducted within multiple fields of study, mainly in economics and management, which were grouped into four categories: Fraudulent/dishonest behavior, Stealing, Keeping money, and Shoplifting.

Conclusions: The reviewed studies provide a wide variety of methods and techniques that allow the real-world study of influences on offending and dishonest behavior. We hope that this summary will inspire criminologists to design and carry out realistic field experiments to test theories of offending, so that criminology can become an experimental science.


Study suggests that women may have greater cognitive reserve but faster cognitive decline than men, which could contribute to sex differences in late-life dementia

Sex Differences in Cognitive Decline Among US Adults. Deborah A. Levine et al. JAMA Netw Open. 2021;4(2):e210169. February 25, 2021, doi:10.1001/jamanetworkopen.2021.0169

h/t David Schmitt: women have greater cognitive reserve but faster later-life cognitive decline than men. Evidence suggests that dementia incidence in Europe and the US has declined over the past 25 years, but the declines were smaller in women than in men


Key Points

Question  Does the risk of cognitive decline among US adults vary by sex?

Findings  In this cohort study using pooled data from 26 088 participants, women, compared with men, had higher baseline performance in global cognition, executive function, and memory. Women, compared with men, had significantly faster declines in global cognition and executive function, but not memory.

Meaning  These findings suggest that women may have greater cognitive reserve but faster cognitive decline than men.

Abstract: Importance  Sex differences in dementia risk are unclear, but some studies have found greater risk for women.

Objective  To determine associations between sex and cognitive decline in order to better understand sex differences in dementia risk.

Design, Setting, and Participants  This cohort study used pooled analysis of individual participant data from 5 cohort studies for years 1971 to 2017: Atherosclerosis Risk in Communities Study, Coronary Artery Risk Development in Young Adults Study, Cardiovascular Health Study, Framingham Offspring Study, and Northern Manhattan Study. Linear mixed-effects models were used to estimate changes in each continuous cognitive outcome over time by sex. Data analysis was completed from March 2019 to October 2020.

Exposure  Sex.

Main Outcomes and Measures  The primary outcome was change in global cognition. Secondary outcomes were change in memory and executive function. Outcomes were standardized as t scores (mean [SD], 50 [10]); a 1-point difference represents a 0.1-SD difference in cognition.
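As a reference for the scale (a standard definition, not taken from the paper, which only states the mean and SD): a t score maps a raw score $x$ with reference mean $\mu$ and standard deviation $\sigma$ to

\[ T = 50 + 10 \cdot \frac{x - \mu}{\sigma}, \]

so a 1-point difference in $T$ is exactly a $0.1\sigma$ difference, as noted above.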

Results  Among 34 349 participants, 26 088 who self-reported Black or White race, were free of stroke and dementia, and had covariate data at or before the first cognitive assessment were included for analysis. Median (interquartile range) follow-up was 7.9 (5.3-20.5) years. There were 11 775 (44.7%) men (median [interquartile range] age, 58 [51-66] years at first cognitive assessment; 2229 [18.9%] Black) and 14 313 women (median [interquartile range] age, 58 [51-67] years at first cognitive assessment; 3636 [25.4%] Black). Women had significantly higher baseline performance than men in global cognition (2.20 points higher; 95% CI, 2.04 to 2.35 points; P < .001), executive function (2.13 points higher; 95% CI, 1.98 to 2.29 points; P < .001), and memory (1.89 points higher; 95% CI, 1.72 to 2.06 points; P < .001). Compared with men, women had significantly faster declines in global cognition (−0.07 points/y faster; 95% CI, −0.08 to −0.05 points/y; P < .001) and executive function (−0.06 points/y faster; 95% CI, −0.07 to −0.05 points/y; P < .001). Men and women had similar declines in memory (−0.004 points/y faster; 95% CI, −0.023 to 0.014; P = .61).

Conclusions and Relevance  The results of this cohort study suggest that women may have greater cognitive reserve but faster cognitive decline than men, which could contribute to sex differences in late-life dementia.

Discussion

Among 26 088 individuals pooled from 5 prospective cohort studies, women had higher baseline performance than men in global cognition, executive function, and memory. Women, compared with men, had significantly faster declines in global cognition and executive function but not memory. These sex differences persisted after accounting for the influence of age, race, education, and cumulative mean BP.

Our results provide evidence suggesting that women have greater cognitive reserve but faster cognitive decline than men, independent of sex differences in cardiovascular risk factors and educational years. Previous studies31 have shown that women have higher initial scores on most types of cognitive tests except those measuring visuospatial ability. Few studies have examined sex differences in cognitive trajectories in population-based cohorts of cognitively normal Black and White individuals. A 2016 study31 of older adults in Baltimore (mean ages 64-70 years) found that men had steeper rates of decline on 4 of 12 cognitive tests (mental status [Mini Mental State Examination], perceptuomotor speed and integration, visual memory, and visuospatial ability) but no sex differences in declines on 8 of 12 cognitive tests (verbal learning and memory, object recognition and semantic retrieval, fluent language production, attention, working memory and set-shifting, perceptuomotor speed, and executive function). Similarly, we found no sex differences in verbal learning and memory; but, in contrast, we found that women had faster cognitive decline in global cognitive performance and executive function than men. These latter results might differ because we included young and middle-aged adults (mean age 58 years). Our findings are consistent with studies showing that women with mild cognitive impairment or AD have faster decline in global cognition than men.32,33

Our results on sex differences in cognitive decline were consistent across most cohorts. The potential reasons for the finding of slower cognitive decline in women in the Framingham Offspring Study are unclear and might be due to socioeconomic, life stress, geographic, and environmental factors as well as cohort differences in sampling strategies, eligibility criteria, and cognitive tests. Although our finding that declines in memory do not differ by sex is consistent with other studies,31 it is surprising because memory decline is the clinical hallmark of AD, a common cause of dementia,1 and some studies suggest that women have higher incidence of AD.4-6 One explanation is that women manifest verbal memory declines at more advanced stages of neurodegenerative disease than men, owing to women having greater initial verbal memory scores and cognitive reserve.34,35 However, evidence against this explanation is that women in our study had faster declines in global cognition and executive function despite having higher initial levels of these measures. Another explanation is that the memory measure was less sensitive than the global cognition and executive function measures in detecting sex differences in cognitive decline.

If the observed sex differences in declines in global cognition and executive function are causal, then they would be clinically significant, equivalent to 5 to 6 years of cognitive aging. The faster declines in mean cognitive scores associated with female sex can be translated into approximately equivalent years of brain or cognitive aging by calculating the ratio of the slope coefficient for female sex to the slope coefficient for baseline age on cognition. Experts have defined clinically meaningful cognitive decline as a decline in cognitive function of 0.5 or more SDs from baseline cognitive scores.36-38 Women will reach the threshold of a 0.5-SD decrease from the baseline score 4.72 years faster than men for global cognition, 1.97 years faster for executive function, and 0.24 years faster for memory (eTable 7 in the Supplement). Based on this approach, sex differences in cognitive declines are clinically meaningful. Declines in global cognition and executive function markedly raise the risk of death, dementia, and functional disability.39-41 Diagnosis of the clinical syndrome of dementia/neurocognitive disorder requires cognitive decline by history and objective measurement.42 Our findings that women have faster declines in global cognition and executive function mean that women would have greater risk than men of being diagnosed with dementia based on objectively measured cognitive decline. Our findings that women had higher initial cognitive scores suggest informants and clinicians might not observe significant cognitive decline in women until substantial loss and impairment have occurred.
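To make the threshold arithmetic explicit (with an assumed baseline rate, since the fitted slopes live in the paper's supplement): on this t-score scale a 0.5-SD decline is 5 points, and the time to cross it at a decline rate of $r$ points per year is $t = 5/r$. The sex gap in time-to-threshold for global cognition is then

\[ \Delta t = \frac{5}{r_m} - \frac{5}{r_m + 0.07}, \]

where $r_m$ is men's decline rate and $0.07$ points/y is the reported sex difference. If, purely for illustration, $r_m = 0.30$ points/y, then $\Delta t \approx 16.7 - 13.5 = 3.2$ years; the paper's 4.72-year figure comes from its actual fitted slopes (eTable 7 in the Supplement).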

Studies have consistently found evidence of sex differences in baseline cognitive functioning with women demonstrating stronger verbal cognitive skills than men, but men demonstrating stronger visuospatial skills than women (eg, mental rotations).31,43 Reasons for these sex differences are complex and likely influenced by biological (eg, sex hormones), genetic (eg, APOE), and social and cultural factors.43 While sex differences in cognitive reserve might also be associated with differences in life course risk factors such as vascular risk,44 education, and health behaviors such as smoking and exercise,45 our findings of sex differences in baseline cognitive performance independent of these factors suggest that additional contributors and biological pathways play a role.

Women might have faster cognitive decline than men because of differences in sex hormones, structural brain development, genetics, psychosocial factors, lifestyle factors, functional connectivity, and tau pathology.45-47 Women might have greater burden of small vessel disease, including white matter hyperintensity volume, and less axonal structural integrity that in turn leads to faster cognitive decline particularly in executive function and processing speed.48,49 Women also appear to have lower gray matter volume,50 so they might be more vulnerable to both the accelerated gray volume loss that occurs with aging and the differential volume loss in specific brain regions that occurs with neurodegenerative diseases.51 Recent studies suggest that women develop greater neurofibrillary degeneration, brain parenchymal loss, and cognitive decline.52-54 Our results suggest that women’s greater cognitive reserve might enable them to withstand greater AD-pathology than men.

Strengths and Limitations

Our study has several strengths. By pooling 5 large, high-quality cohorts, we had longitudinal cognitive assessments and vascular risk factor measurements in a large number of Black and White individuals who were young, middle-aged, and older-aged to estimate cognitive trajectories in men and women. We had repeated cognitive measures during up to 21 years of follow-up. The cohort studies included in our study systematically measured major cognitive domains important for daily, occupational, and social functioning: global cognition, executive function, and memory. Our findings were consistent across cohorts.

This study also has several limitations. While we adjusted for educational years, we could not adjust for educational quality, literacy, other socioeconomic factors,10 or depressive symptoms, because not all cohorts had these data at or before the first cognitive assessment. However, studies suggest that socioeconomic factors tend to influence initial cognitive scores (ie, intercepts) rather than the change in cognitive scores over time (slopes).55,56 Selective attrition of cognitively impaired participants could lead to underestimation of the rate of cognitive decline,57 although some evidence suggests it does not.58 Estimating the potential clinical impact of sex differences in cognitive decline by relating it to decline due to aging is a common approach, but it does not directly measure clinical impact, and a clinically meaningful change might vary by an individual's age, educational quality, race, and baseline cognition.59 There were no sex differences in participants excluded because of stroke or dementia before the first cognitive assessment, so these exclusions would not influence sex differences in cognitive decline (eTable 8 in the Supplement).

We did not study incident dementia because some cohort studies lacked this information. By design, we did not adjust for baseline cognition. We also did not study whether any particular age interval is associated with the greatest risk of sex-related cognitive decline. Heterogeneity between cohorts in the association of sex with cognitive decline might have affected the statistical validity of the summary estimate of the effect in the pooled cohort. Smaller sample sizes and fewer cognitive assessments might have reduced the precision of estimates of cognitive decline in executive function and memory (ie, the secondary outcomes). We did not have information on participants' instrumental activities of daily living, family history of dementia, or hormone replacement therapy use. While the assumption that participants' postmortem cognitive data are missing at random might lead to immortal cohort bias and underestimate memory declines,60 it remains valid for answering the research question of quantifying sex differences in cognitive trajectories through study follow-up. Women might have had a greater likelihood than men of regressing to a lower value at follow-up because they had higher baseline cognitive function. Using a fixed effect for cohorts might have produced conservative estimates of sex effects on cognitive slopes.