Tuesday, March 2, 2021

From 2010... A central feature of patriarchy has been the construction of ‘moral’ ideologies that inhibit women from exploiting their erotic capital to achieve benefits; women have more erotic capital because they work harder at it

From 2010... Erotic Capital. Catherine Hakim. European Sociological Review, Volume 26, Issue 5, October 2010, Pages 499–518, https://doi.org/10.1093/esr/jcq014

Abstract: We present a new theory of erotic capital as a fourth personal asset, an important addition to economic, cultural, and social capital. Erotic capital has six, or possibly seven, distinct elements, one of which has been characterized as ‘emotional labour’. Erotic capital is increasingly important in the sexualized culture of affluent modern societies. Erotic capital is not only a major asset in mating and marriage markets, but can also be important in labour markets, the media, politics, advertising, sports, the arts, and in everyday social interaction. Women generally have more erotic capital than men because they work harder at it. Given the large imbalance between men and women in sexual interest over the life course, women are well placed to exploit their erotic capital. A central feature of patriarchy has been the construction of ‘moral’ ideologies that inhibit women from exploiting their erotic capital to achieve economic and social benefits. Feminist theory has been unable to extricate itself from this patriarchal perspective and reinforces ‘moral’ prohibitions on women's sexual, social, and economic activities and women’s exploitation of their erotic capital.

Denial of Erotic Capital

The Male Bias in Perspectives

Why has erotic capital been overlooked by social scientists? This failure of Bourdieu and other researchers is testimony to the continuing dominance of male perspectives in sociology and economics, even in the 21st century. Bourdieu's failure is all the more remarkable because he analysed relationships between men and women, and was sensitive to the competition for control and power in relationships (Bourdieu, 1998). However, like many others, Bourdieu was only interested in the three class-related and inheritable assets that are convertible into each other. Erotic capital is distinctive in not being controlled by social class and status, and has a subversive character.

Erotic capital has been overlooked because it is held mostly by women, and the social sciences have generally overlooked or disregarded women in their focus on male activities, values and interests. The patriarchal bias in the social sciences reflects the male hegemony in society as a whole. Men have taken steps to prevent women exploiting their one major advantage over men, starting with the idea that erotic capital is worthless. Women who parade their beauty or sexuality are belittled as stupid, lacking in intellect and other 'meaningful' social attributes. The Christian religion has been particularly vigorous in deprecating and disdaining everything to do with sex and sexuality as base and impure, shameful, belonging to a lower aspect of humanity. Laws are devised to prevent women from exploiting their erotic capital. For example, female dancers in Britain are debased by classifying lapdancing clubs as ‘sexual encounter’ venues, later amended to the marginally less stigmatizing ‘sexual entertainment’ venues in the new Crime and Policing law debated in Parliament in 2009. Women are prohibited from charging commercial fees for surrogate pregnancies, a job that is exclusively and peculiarly female. If men could produce babies, it seems likely that it would be one of the highest paid occupations, but men use ‘moral’ arguments to ensure that women are not allowed to exploit any advantage.

The most powerful and effective weapon deployed by men to curtail women's use of erotic capital is the disdain and contempt heaped on female sex workers. Sex surveys in Europe show that few people regard commercial sex jobs as an occupation just like any other. Women working in the commercial sex industry are regarded as victims, drug addicts, losers, incompetents, or as people you would not wish to meet socially (Shrage, 1994, pp. 88). The patriarchal nature of these stereotypes is exposed by quite different attitudes to male prostitutes: attitudes here are ambivalent, conflicted, and unsure (Malo de Molina, 1992, pp. 203). Commercial sex is often classified as a criminal activity so that it is forced underground, as in the USA, and women working in the industry are harassed by the police and criminal justice system. Even in countries where selling sex is legal, such as Britain, Finland, or Kenya, everything connected with the work is stigmatized and criminalized, with the same effect.

Male control of female erotic capital is primarily ideological. The ‘moral’ opprobrium that enfolds the commercial sale of sexual performance and sexual services extends to all contexts where there is any exchange of erotic capital for money or status. Occupations, such as stripper or lapdancer, are stigmatized as lewd, salacious, sleazy, meretricious, and prurient (Frank, 2002). An attractive young woman who seeks to marry a wealthy man is branded a ‘gold-digger’, criticized for ‘taking advantage of’ men unfairly and immorally. The underlying logic is that men should get what they want from women for free, especially sex. Surprisingly, feminists have supported this ideology instead of seeking to challenge and overturn it. Even the participants in beauty contests are criticized by women.

The patriarchal ‘morality’ that denies the economic value of erotic capital operates in a similar way to downplay the economic value of other personal services and care work. England and Folbre (1999: pp. 46) point out that the principle that money cannot buy love has the unintended and perverse consequence of justifying low pay for personal service and care work, a conclusion reiterated by Zelizer (2005, pp. 302).

The Failure of Feminist Theory

Why have women, and feminists more particularly, failed to identify and valorize erotic capital? In essence, because feminist theory has proven unable to shed the patriarchal perspective, reinforcing it while ostensibly challenging it. Strictly speaking, this position is a feature of radical Anglo-Saxon feminism more specifically, but the international prominence of the English language (and of the USA) makes this the dominant feminist perspective today.

Feminist theory erects a false dichotomy: either a woman is valued for her human capital (her brains, education, work experience, and dedication to her career) or she is valued for her erotic capital (her beauty, elegant figure, dress style, sexuality, grace, and charm). Women with brains and beauty are not allowed to use both—to ‘walk on two legs’ as Chairman Mao put it.

Any scholar who argues that women have unique skills or special assets of any kind is instantly outlawed by being branded an ‘essentialist’. In principle, biological essentialism refers to an outdated theory that there are important and unalterable biological differences between men and women, which assign them to separate life courses. At present, it is often used to refer to the evolutionary psychology thesis that men focus on the sexual selection of the best women with whom to breed, while women invest heavily in their offspring. Put crudely, ‘sexuality for men and reproduction for women' are treated as the root cause of all social and economic differences between men and women. In practice, the ‘essentialist’ label has become an easy term of abuse among feminists, being applied to any theory or idea regarded as unacceptable or unwelcome (Campbell, 2002). This has the advantage of avoiding the need to address the research evidence for inconvenient ideas and theories. This approach is displayed in books that seek to summarize current feminist debates on sex/gender, in the process demonstrating that these discussions are so ideological, and so divorced from empirical research, that they have become theological debates (Browne, 2007).

A key failure of feminist scholarship is the way it has maintained the male hegemony in theory, although it has been more innovative and fruitful in empirical research. Feminists insist that women's position in society should depend exclusively on their economic and social capital. Cultural capital (where women can have the edge over men) is rarely pulled into the picture. It follows that women should invest in educational qualifications and employment careers in preference to developing their erotic capital and investing in marriage careers. The European Commission has adopted feminist ideology wholesale, and insists that ‘gender equality’ is to be measured exclusively by employment rates, occupational segregation, access to the top jobs, personal incomes, and the pay gap, treating women without paid jobs as ‘unequal’ to men.

Female social scientists repeatedly dismiss the idea that physical attractiveness and sexuality are power assets for women vis-à-vis men. For example, Lipman-Blumen (1984, pp. 89–90) lists this as just one in a series of 'control myths' adopted by men to justify the status quo; she claims that male vested interests necessarily bias any argument offered by men, even if they are social scientists. Feminist theory has so far failed to explain why men with high incomes and status regularly choose trophy (second) wives and arm-candy mistresses, while women who have achieved career success and high incomes generally prefer to marry alpha males rather than seeking toyboys and impecunious men who would make good househusbands and fathers (Hakim, 2000, pp. 153, 201).

Sylvia Walby (1990, pp. 79) admits in passing that the power to create children is one of women's few power bases, but she never states what the others might be. Mary Evans admits that Anglo-Saxon feminism is profoundly uncomfortable with sexuality, and frames it in a relentlessly negative perspective (Evans, 2003, pp. 99; see also Walby, 1990, pp. 110). Feminists argue that there is no real distinction between marriage and prostitution; that (hetero)sexuality is central to women's subordination by men; that patriarchal men seek to establish what Carole Pateman (1988, pp. 194, 205) calls ‘male sex-right’—male control of men's sexual access to women. Marriage and prostitution are portrayed as forms of slavery (Pateman, 1988, pp. 230; Wittig, 1992). Sexuality is the setting for every kind of male violence against women, overlooking women's use of sexuality to control men—a very one-sided perspective. Feminist theory and debate also display ambivalence about the idea of sex and gender, which is presented as a patriarchal cultural imposition, with no link to the human body and motherhood (Wittig, 1992; Browne, 2007). With heterosexuality (and motherhood) presented as the root cause of women’s oppression, feminist solutions include celibacy, autoeroticism, lesbianism, and androgyny (Coppock et al., 1995). Paradoxically, these solutions reduce the supply of female sexuality to men, and thus raise the value of erotic capital among heterosexual women.

Feminist discourse comes in many colours and flavours, and is constantly changing, but the common theme is that women are the victims of male oppression and patriarchy, so that heterosexuality becomes suspect, a case of sleeping with the enemy, and the deployment of erotic capital becomes an act of treason. Post-feminism seems at first to avoid this perspective. Post-feminism is a mixed bag of literature, by novelists and journalists as well as social scientists (Coppock et al., 1995; Whelehan, 1995). There is no single theme or thesis, although men are less likely to be treated as the source of all women’s problems. However, post-feminism is unable to escape from the puritan Anglo-Saxon asceticism and its unwavering antipathy towards beauty and sexuality. Naomi Wolf’s The Beauty Myth, a diatribe against the rising value and importance of beauty and sexual attractiveness, is reinforced by feminists (Jeffreys, 2005). Lookism encapsulates the puritan Anglo-Saxon antipathy to beauty and sexuality, arguing that taking any account of someone’s appearance should be outlawed, effectively making the valorization of erotic capital unlawful (Chancer, 1998, pp. 82–172).

One stream of feminist theory presents men as violent sexual predators who exploit women. Another theoretical stream dismantles the concepts of sex and gender, so there are no fixed ‘opposites’ for mutual attraction anyway. A third theme treats beauty and pleasure as dangerous traps. Between these three themes, ideas of exuberant sexuality and women’s erotic power over men are squeezed out of existence. Feminist perspectives are so infused with patriarchal ideology that they seem unable to perceive heterosexuality as a source of pleasure and entertainment, and of women’s power over men.

Far from evaluating new evidence dispassionately & infallibly, individual scientists often cling stubbornly to prior findings; loss-of-confidence sentiments are common but rarely become part of the public record

Putting the Self in Self-Correction: Findings From the Loss-of-Confidence Project. Julia M. Rohrer et al. Perspectives on Psychological Science, March 1, 2021. https://doi.org/10.1177/1745691620964106

Abstract: Science is often perceived to be a self-correcting enterprise. In principle, the assessment of scientific claims is supposed to proceed in a cumulative fashion, with the reigning theories of the day progressively approximating truth more accurately over time. In practice, however, cumulative self-correction tends to proceed less efficiently than one might naively suppose. Far from evaluating new evidence dispassionately and infallibly, individual scientists often cling stubbornly to prior findings. Here we explore the dynamics of scientific self-correction at an individual rather than collective level. In 13 written statements, researchers from diverse branches of psychology share why and how they have lost confidence in one of their own published findings. We qualitatively characterize these disclosures and explore their implications. A cross-disciplinary survey suggests that such loss-of-confidence sentiments are surprisingly common among members of the broader scientific population yet rarely become part of the public record. We argue that removing barriers to self-correction at the individual level is imperative if the scientific community as a whole is to achieve the ideal of efficient self-correction.

Keywords: self-correction, knowledge accumulation, metascience, scientific falsification, incentive structure, scientific errors

The Loss-of-Confidence Project raises a number of questions about how one should interpret individual self-corrections.

First, on a substantive level, how should one think about published empirical studies in cases in which the authors have explicitly expressed a loss of confidence in the results? One intuitive view is that authors have no privileged authority over “their” findings, and thus such statements should have no material impact on a reader’s evaluation. On the other hand, even if authors lack any privileged authority over findings they initially reported, they clearly often have privileged access to relevant information. This is particularly salient for the p-hacking disclosures reported in the loss-of-confidence statements. Absent explicit statements of this kind, readers would most likely not be able to definitively identify the stated problems in the original report. In such cases, we think it is appropriate for readers to update their evaluations of the reported results to accommodate the new information.

Even in cases in which a disclosure contributes no new methodological information, one might argue that the mere act of self-correction should be accorded a certain weight. Authors have presumably given greater thought to and are more aware of their own study’s potential problems and implications than a casual reader. The original authors may also be particularly biased to evaluate their own studies favorably—so if they have nonetheless lost confidence, this might heuristically suggest that the evidence against the original finding is particularly compelling.

Second, on a metalevel, how should one think about the reception one’s project received? On the one hand, one could argue that the response was about as positive as could reasonably be expected. Given the unconventional nature of the project and the potentially high perceived cost of public self-correction, the project organizers (J. M. Rohrer, C. F. Chabris, T. Yarkoni) were initially unsure whether the project would receive any submissions. From this perspective, even the 13 submissions we ultimately received could be considered a clear success and a testament to the current introspective and self-critical climate in psychology.

On the other hand, the survey responses we received suggest that the kinds of errors disclosed in the statements are not rare. Approximately 12% of the 316 survey respondents reported losing confidence in at least one of their articles for reasons that matched our stringent submission criteria (i.e., because of mistakes that the respondent took personal responsibility for), and nearly half acknowledged a loss of confidence more generally.

This suggests that potentially hundreds, if not thousands, of researchers could have submitted loss-of-confidence statements but did not do so. There are many plausible reasons for this, including not having heard of the project. However, we think that at least partially, the small number of submitted statements points to a gap between researchers’ ideals and their actual behavior—that is, public self-correction is desirable in the abstract but difficult in practice.

Fostering a culture of self-correction

As has been seen, researchers report a variety of reasons for both their losses of confidence and their hesitation to publicly disclose a change in thinking. However, we suggest that there is a broader underlying factor: In the current research environment, self-correction, or even just critical reconsideration of one’s past work, is often disincentivized professionally. The opportunity costs of a self-correction are high; time spent on correcting past mistakes and missteps is time that cannot be spent on new research efforts, and the resulting self-correction is less likely to be judged a genuine scientific contribution. Moreover, researchers may worry about self-correction potentially backfiring. Corrections that focus on specific elements from an earlier study might be perceived as undermining the value of the study as a whole, including parts that are in fact unaffected by the error. Researchers might also fear that a self-correction that exposes flaws in their work will damage their reputation and perhaps even undermine the credibility of their research record as a whole.

To tackle these obstacles to self-correction, changes to the research culture are necessary. Scientists make errors (and this statement is certainly not limited to psychological researchers; see, e.g., Eisenman et al., 2014; García-Berthou & Alcaraz, 2004; Salter et al., 2014; Westra et al., 2011), and rectifying these errors is a genuine scientific contribution—whether it is done by a third party or the original authors. Scientific societies could consider whether they want to more formally acknowledge efforts by authors to correct their own work. Confronted with researchers who publicly admit to errors, other researchers should keep in mind that willingness to admit error is not a reliable indicator of propensity to commit errors—after all, errors are frequent throughout the scientific record. On the contrary, given the potential (or perceived) costs of individual self-corrections, public admission of error could be taken as a credible signal that the issuer values the correctness of the scientific record. However, ultimately, given the ubiquity of mistakes, we believe that individual self-corrections should become a routine part of science rather than an extraordinary occurrence.

Different media for self-correction

Unfortunately, good intentions are not enough. Even when researchers are committed to public self-correction, it is often far from obvious how to proceed. Sometimes, self-correction is hindered by the inertia of journals and publishers. For example, a recent study suggested that many medical journals published correction letters only after a significant delay, if at all (Goldacre et al., 2019), and authors who tried to retract or correct their own articles after publication have encountered delays and reluctance from journals (e.g., Grens, 2015). Even without such obstacles, there is presently no standardized protocol describing what steps should be taken when a loss of confidence has occurred.

Among the participants of the Loss-of-Confidence Project, Fisher et al. (2015) decided to retract their article after they became aware of their misspecified model. But researchers may often be reluctant to initiate a retraction given that retractions occur most commonly as a result of scientific misconduct (Fang et al., 2012) and are therefore often associated in the public imagination with cases of deliberate fraud. To prevent this unwelcome conflation and encourage more frequent disclosure of errors, journals could introduce a new label for retractions initiated by the original authors (e.g., “Authorial Expression of Concern” or “voluntary withdrawal”; see Alberts et al., 2015). Furthermore, an option for authorial amendments beyond simple corrections (up to and including formal versioning of published articles) could be helpful.

Thus, it is not at all clear that widespread adoption of retractions would be an effective, fair, or appropriate approach. Willén (2018) argued that retraction of articles in which questionable research practices (QRPs) were employed could deter researchers from being honest about their past actions. Furthermore, retracting articles because of QRPs known to be widespread (e.g., John et al., 2012) could have the unintended side effect that some researchers might naively conclude that a lack of a retraction implies a lack of QRPs. Hence, Willén suggested that all articles should be supplemented by transparent retroactive disclosure statements. In this manner, the historical research record remains intact because information would be added rather than removed.

Preprint servers (e.g., PsyArXiv.com) and other online repositories already enable authors to easily disclose additional information to supplement their published articles or express their doubts. However, such information also needs to be discoverable. Established databases such as PubMed could add links to any relevant additional information provided by the authors. Curate Science (curatescience.org), a new online platform dedicated to increasing the transparency of science, is currently implementing retroactive statements that could allow researchers to disclose additional information (e.g., additional outcome measures or experimental manipulations not reported in the original article) in a straightforward, structured manner.

Another, more radical step would be to move scientific publication entirely online and make articles dynamic rather than static such that they can be updated on the basis of new evidence (with the previous version being archived) without any need for retraction (Nosek & Bar-Anan, 2012). For example, the Living Reviews journal series in physics by Springer Nature allows authors to update review articles to incorporate new developments.

The right course of action once one has decided to self-correct will necessarily depend on the specifics of the situation, such as the reason for the loss of confidence, publication norms that can vary between research fields and evolve over time, and the position that the finding takes within the wider research. For example, a simple but consequential computational error may warrant a full retraction, whereas a more complex confound may warrant a more extensive commentary. In research fields in which the published record is perceived as more definitive, a retraction may be more appropriate than in research fields in which published findings have a more tentative status. In addition, an error in an article that plays a rather minor role in the context of the wider research may be sufficiently addressed in a corrigendum, whereas an error in a highly cited study may require a more visible medium for the self-correction to reach all relevant actors.

That said, we think that both the scientific community and the broader public would profit if additional details about the study, or the author’s reassessment of it, were always made public and always closely linked to the original article—ideally in databases and search results as well as the publisher’s website and archival copies. A cautionary tale illustrates the need for such a system: In January 2018, a major German national weekly newspaper published an article (Kara, 2018a) that uncritically cited the findings of Silberzahn and Uhlmann (2013). Once the journalist had been alerted that these findings had been corrected in Silberzahn et al. (2014), she wrote a correction to her newspaper article that was published within less than a month of the previous article (Kara, 2018b), demonstrating swift journalistic self-correction and making a strong point that any postpublication update to a scientific article should be made clearly visible to all readers of the original article.

All of these measures could help to transform the cultural norms of the scientific community, bringing it closer to the ideal of self-correction. Naturally, it is hard to predict which ones will prove particularly fruitful, and changing the norms of any community is a nontrivial endeavor. However, it might be encouraging to recall that over the past few years, scientific practices in psychology have already changed dramatically (Nelson et al., 2018). Hence, a shift toward a culture of self-correction may not be completely unrealistic, and psychology, with its increasing focus on openness, may even serve as a role model for other fields of research to transform their practices.

Finally, it is quite possible that fears about negative reputational consequences are exaggerated. It is unclear whether and to what extent self-retractions actually damage researchers’ reputations (Bishop, 2018). Recent acts of self-correction such as those by Carney (2016), which inspired our efforts in this project, Silberzahn and Uhlmann (Silberzahn et al., 2014), Inzlicht (2016), Willén (2018), and Gervais (2017) have received positive reactions from within the psychological community. They remind us that science can advance at a faster pace than one funeral at a time.

Low-to-Moderate Alcohol Intake Associated with Lower Risk of Incidental Depressive Symptoms: A Pooled Analysis of Three Intercontinental Cohort Studies

Low-to-Moderate Alcohol Intake Associated with Lower Risk of Incidental Depressive Symptoms: A Pooled Analysis of Three Intercontinental Cohort Studies. Lirong Liang et al. Journal of Affective Disorders, February 26 2021. https://doi.org/10.1016/j.jad.2021.02.050

Abstract

Background: Existing findings on the longitudinal impact of low-to-moderate drinking on symptomatic depression are controversial, with results ranging from no association to both protective and adverse associations.

Methods: The present study examined the association between low-to-moderate alcohol consumption and incident depressive symptoms by pooled analysis of three European, American and Chinese representative samples of middle-aged and older adults.

Results: A total of 29,506 participants (55.5% female) were included. During 278,782 person-years of follow-up, we found that subjects with low-to-moderate drinking had a significantly lower incidence of depressive symptoms compared to never-drinking subjects, with pooled hazard ratios of 0.87 (95% confidence interval [CI]: 0.79–0.96) for men and 0.87 (95% CI: 0.80–0.95) for women, whereas heavy drinkers failed to show significantly higher risk of depressive symptoms. Furthermore, a J-shaped relation between alcohol consumption and incident depressive symptoms was identified in Chinese men, US men, and UK men and women.

Limitations: The classification of depressive symptoms based on the Center for Epidemiologic Studies Depression Scale may not be completely comparable to diagnosis from a clinical setting.

Conclusions: Low-to-moderate alcohol consumption was significantly associated with a lower risk of depressive symptoms on a long-term basis compared to never drinking. Our results support the threshold of moderate drinking in current US guidelines. However, caution should be exercised in engaging in guideline-concordant drinking habits, for even moderate drinkers are at risk of developing heavy drinking habits and experiencing future alcohol-related problems.

Keywords: alcohol consumption; low-to-moderate drinking; depressive symptoms; cohort studies
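The "pooled hazard ratios" in the results can be illustrated with the generic fixed-effect (inverse-variance) pooling technique, which combines log hazard ratios weighted by precision. This is a sketch of the standard method, not necessarily the authors' exact procedure, and the per-cohort numbers in the example are hypothetical:

```python
import math


def pooled_hr(cohort_estimates):
    """Fixed-effect (inverse-variance) pooling of hazard ratios.

    cohort_estimates: list of (hr, ci_low, ci_high) tuples with 95% CIs.
    Pooling is done on the log scale; each cohort's standard error is
    recovered from the width of its confidence interval.
    """
    num = den = 0.0
    for hr, lo, hi in cohort_estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(HR)
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * math.log(hr)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))


# Hypothetical per-cohort estimates, for illustration only:
hr, lo, hi = pooled_hr([(0.85, 0.74, 0.98), (0.90, 0.78, 1.04), (0.86, 0.75, 0.99)])
```

Pooling cohorts with identical estimates leaves the point estimate unchanged while narrowing the confidence interval, which is the intended behavior of precision weighting.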


Representations of violence are not intrinsically senseless & can even aspire to beauty; violent media are able to represent, by way of implication, deeper truths about the nature of the universe & our human interrelationships

Beautiful Violence: Polemos, Responsibility, and Tragic Wisdom. Jeffrey Ventola. Academia Letters, Feb 2021. https://www.academia.edu/45093421

Introduction

I will argue in this paper that representations of violence are not intrinsically senseless and can even aspire to beauty. They may evince more or less responsible portrayals of conflict. As I will justify and valuate representations of violence, I wish to define my scope to avoid misinterpretation. I believe that violent media are able to represent, by way of implication, deeper truths about the nature of the universe and our human interrelationships. This applies most relevantly to visual, fictional, narrative media such as television and films. I believe what these representations can uncover, when properly deployed, is Heraclitus’s Polemos. While it is typically translated as “war”, Claudia Baracchi warns against such a strict interpretation. She convincingly argues that Polemos and Logos itself are very closely bound. Polemos is then the conflict necessarily implied by our existence and not “the human exercise of warfare” (Baracchi 268).

In order to demonstrate an aesthetic connection between this interpretation of Polemos and violent media, I will reference Nietzsche’s Birth of Tragedy, as he argues that tragedy, and indeed all art, is produced from a kind of conflict that is at once always changing and part of a larger unity. I argue that, due to their structural similarities, art can demonstrate or imply Polemos, revealing an aspect of Logos that may typically be beyond our comprehension. Because such media are inclined to entertain as well as philosophize, many forms of media typically represent conflict through violence. I do not believe this is necessarily bad. There can even be artistic and pragmatic effects from viewing responsibly constructed violent imagery.



A Lineage of 400,000 English Individuals 1750-2020 shows Genetics Determines most Social Outcomes

For Whom the Bell Curve Tolls: A Lineage of 400,000 English Individuals 1750-2020 shows Genetics Determines most Social Outcomes. Gregory Clark, March 1, 2021. faculty.econ.ucdavis.edu/faculty/gclark/ClarkGlasgow2021.pdf

Abstract: Economics, Sociology, and Anthropology are dominated by the belief that social outcomes depend mainly on parental investment and community socialization. Using a lineage of 402,000 English people 1750-2020, we test whether such mechanisms predict outcomes better than a simple additive genetics model. The genetics model predicts better in all cases except the transmission of wealth. The high persistence of status over multiple generations, however, would require, under a genetic mechanism, strong genetic assortative mating. This has until recently been believed impossible. There is, however, also strong evidence consistent with just such sorting, all the way from 1837 to 2020. Thus the outcomes here are actually the product of an interesting genetics-culture combination.
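The abstract's link between multigenerational persistence and assortative mating can be illustrated with a small Monte Carlo under a purely additive model (an illustrative simulation, not the paper's estimation procedure). With spousal correlation m in standardized latent status, the theoretical parent-child correlation rises from 0.5 under random mating to (1 + m)/2 under sorting:

```python
import math
import random


def parent_child_corr(m, n=100_000, seed=1):
    """Monte Carlo parent-child correlation under additive transmission.

    Latent status is standardized N(0, 1); spouses correlate at m; the
    child receives the midparent value plus segregation noise scaled so
    that child variance stays at 1.
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        f = rng.gauss(0, 1)
        mo = m * f + math.sqrt(1 - m * m) * rng.gauss(0, 1)   # corr(f, mo) = m
        child = (f + mo) / 2 + math.sqrt((1 - m) / 2) * rng.gauss(0, 1)
        xs.append(f)
        ys.append(child)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)


random_mating = parent_child_corr(0.0)   # near 0.5
strong_sorting = parent_child_corr(0.8)  # near 0.9
```

This makes concrete why high observed persistence is hard to reconcile with a genetic mechanism unless spousal sorting on the latent trait is strong.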


Terror Management Theory: The predictive power of this theory just isn't holding up to scrutiny

Are We Really Terrorized By Thoughts of Death? Morbid curiosity and challenges to Terror Management Theory. Coltan Scrivner. Psychology Today, Mar 1 2021. https://www.psychologytoday.com/intl/blog/morbid-minds/202103/are-we-really-terrorized-thoughts-death

*  Terror Management Theory attempts to explain human behavior on the premise that awareness of our mortality causes paralyzing terror. 

*  However, morbid curiosity may show that we generally do not experience such terror at the thought of death.

*  Also, the future of Terror Management Theory looks bleak, as researchers are unable to replicate traditional findings.


My own assessment of the literature suggests that the future of Terror Management Theory looks bleak. As failures to replicate traditional findings continue to be published, the foundations upon which Terror Management Theory is built are beginning to crumble. The predictive power of this theory just isn't holding up to scrutiny.


"If I Could Turn Back Time": A majority of both males (66.95%) and females (54.00%) reported they would not want to change anything about their first coital experience

"If I Could Turn Back Time": Female and Male Reflections on Their Initial Experience of Coitus. Israel M. Schwartz & Edward Coffield. Sexuality & Culture, Mar 1 2021. https://rd.springer.com/article/10.1007/s12119-021-09826-9

Rolf Degen's take: Most young people would not change any aspects of their first coitus if they could travel back in time. However, many women would have liked to have been with someone else

Abstract: Using a mixed methods approach this study compared young people’s reflections on their initial experience of coitus, exploring similarities and differences from a gender perspective, with regard to a reported desire to change, or not change, any aspect of the event. The sample population was comprised of 318 university students in the northeastern region of the United States (women n = 200, men n = 118). Thematic analysis was used to evaluate the open-ended question, “If you could go back in time to your first sexual intercourse, would you want to change anything? If so, what would you change and why?” Responses were subsequently transformed into quantitative measures for additional analyses. T-tests or chi-square tests were conducted to evaluate gender differences. Notable findings included a majority of both males (66.95%) and females (54.00%) reporting they would not want to change anything about their first coital experience. Among respondents who reported a desired change the three primary desired change themes were partner (15.72%), age (8.18%), and location (5.03%), although the percentages differed by gender. Qualitative responses to the most frequently reported desired change categories are presented to contextualize the quantitative data. The novelty of understanding what young people would change about their first sexual intercourse provides important contextual information to the research on the feelings experienced at first coitus.

Monday, March 1, 2021

Communicating extreme forecasts... Scientists mention uncertainty far more frequently than non-scientists; thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'

Apocalypse now? Communicating extreme forecasts. David C. Rode; Paul S. Fischbeck. International Journal of Global Warming, 2021 Vol.23 No.2, pp.191 - 211. DOI: 10.1504/IJGW.2021.112896

Abstract: Apocalyptic forecasts are unique. They have, by definition, no prior history and are observed only in their failure. As a result, they fit poorly with our mental models for evaluating and using them. However, they are made with some frequency in the context of climate change. We review a set of forecasts involving catastrophic climate change-related scenarios and make several observations about the characteristics of those forecasts. We find that mentioning uncertainty results in a smaller online presence for apocalyptic forecasts. However, scientists mention uncertainty far more frequently than non-scientists. Thus, the bias in media toward coverage of non-scientific voices may be 'anti-uncertainty', not 'anti-science'. Also, the desire among many climate change scientists to portray unanimity may enhance the perceived seriousness of the potential consequences of climate catastrophes, but paradoxically undermine their credibility in doing so. We explore strategies for communicating extreme forecasts that are mindful of these results.

Keywords: apocalypse; climate change; communication; extreme event; forecast; forecasting; global warming; media; policy; prediction; risk; risk communication; uncertainty.


5 Implications for policy and risk communication

Uncertainty is a core challenge for climate change science. It can undermine public engagement (Budescu et al., 2012) and form a barrier to public mobilisation (Pidgeon and Fischhoff, 2011). Our findings in this paper support these results and suggest that the exclusion of uncertainty from communication of apocalyptic climate-related forecasts can increase the visibility of the forecasts. However, the increased visibility comes at the cost of emphasising the voices of speakers without a scientific background. But focusing only on the quantity of communications, and not the ‘weight’ attached to them, neglects the important role that their credibility plays in establishing trust.

Trust (in subject-matter authorities and in climate research) influences perceived risk (Visschers, 2018; Siegrist and Cvetkovich, 2000). The impact of that trust is significant. Although belief in the existence of climate change remains strong, belief that its risks have been exaggerated has grown (Wang and Kim, 2018; Poortinga et al., 2011; Whitmarsh, 2011). Gaps have also emerged between belief in climate change and estimates of the seriousness of its impact (Markowitz and Guckian, 2018). To the extent that failed predictions damage that trust, the public’s perception of climate-related risk is altered. If the underlying purpose of making apocalyptic predictions is to recommend action, and if the predictions fail to materialise, the wisdom of the recommendations based on those predictions may be called into question. Climate science’s perceived value is thereby diminished. If the perceived value (or the certainty of that value) is diminished, policy action is harder to achieve.

It is not simply the presence of uncertainty that is an impediment; communications characterised by ‘hype and alarmism’ also undermine trust (Howe et al., 2019; O’Neill and Nicholson-Cole, 2009). The continual failure of predicted events to materialise may be seen to validate the public’s belief that such claims are in fact exaggerated. Although such beliefs may be the result of outcome bias (Baron and Hershey, 1988), recent evidence has also suggested that certain commonly accepted scientific predictions may indeed be exaggerated (Lewis and Curry, 2018). The model of belief we presented in Subsection 2.4 demonstrates that observing only failures will inevitably result in a reduction in subjective beliefs about apocalyptic risks. To build trust, any forecasts made must be ‘scientific’ – that is, capable of being observed to be either correct or incorrect (Green and Armstrong, 2007). Under such circumstances, they should also incorporate clear statements acknowledging uncertainty, as doing so may work to increase trust (Joslyn and LeClerc, 2016). It is important to provide settings where the audience can ‘calibrate’ its beliefs. “A climate forecast can only be evaluated and potentially falsified if it provides a quantitative range of uncertainty” [Allen et al., (2013), p.243]. The acknowledgement of uncertainty should include both worst-case and best-case outcomes (Howe et al., 2019).
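The belief model the authors refer to (their Subsection 2.4, which is not reproduced in this excerpt) can be illustrated with a standard Bayesian update. If a reliable forecaster's predicted catastrophe should occur by its deadline with probability p, and the unreliable alternative never produces the event, then each missed deadline multiplies the observer's odds on reliability by (1 - p). A hypothetical sketch, with all numbers invented:

```python
def update_belief(prior, p_event_if_true, n_failures):
    """Posterior probability that apocalyptic forecasts are reliable
    after n_failures predicted catastrophes fail to materialise.

    prior           -- initial P(forecasts reliable)
    p_event_if_true -- P(catastrophe occurs by deadline | reliable)
    Under the rival hypothesis the event is assumed (here) never to
    occur, so each failure carries likelihood ratio (1 - p) : 1.
    """
    odds = prior / (1 - prior)
    odds *= (1 - p_event_if_true) ** n_failures
    return odds / (1 + odds)

# Starting from even odds, three missed 50%-confidence deadlines
# push belief in the forecaster's reliability down to about 0.11.
belief_after_three_misses = update_belief(0.5, 0.5, 3)
```

Since apocalyptic forecasts are, by definition, observed only in their failure, the posterior can move in only one direction, which is the paper's point about inevitable erosion of subjective belief.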

One key to increasing credibility is to build up a series of shorter, simpler (non-apocalyptic) predictions (Nemet, 2009). Instead of predicting solely an apocalyptic event 50 years out, offer a series of contingent forecasts of shorter characteristic time (Byerly, 2000) that lead toward the ultimate event. Communications about climate change – and especially climate change-related predictions – should emphasise areas of the science that are less extreme in outcome, but more tangibly certain in likelihood (Howe et al., 2019). This implies, inter alia, that compound forecasts of events and of the consequences of events should be separated. The goal may even be to exploit an outcome bias in decision making by moving from small- to large-scale predictions. By establishing a successful track record of smaller-scale predictions, validated with ex post evaluations of forecast accuracy, the public may be more inclined to trust the larger-scale predictions – even when such predictions are inherently less certain. This approach has been advocated directly by Pielke (2008) and Fildes and Kourentzes (2011) and supports the climate prediction efforts of Meehl et al. (2014), Smith et al. (2019), and others.

To that end, we propose four concrete steps that can be taken to improve the usefulness of extreme climate forecasts. First, the authors of the forthcoming Sixth Assessment Report of the IPCC should be encouraged to tone down ‘deadline-ism’ (Asayama et al., 2019). Forecasters should make an effort to influence the interpretation of their forecasts, for example by correcting media reporting of them. Sequential releases of the IPCC’s Assessment Reports could call out particularly erroneous or incomplete interpretations of statements from previous Assessment Reports.
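The recommendation to separate compound forecasts has a simple arithmetic rationale: bounds on an event probability and on a conditional consequence probability multiply, so the joint forecast inherits compounded uncertainty and is harder to evaluate than either component. A minimal sketch, with invented numbers:

```python
def compound_interval(event_lo, event_hi, conseq_lo, conseq_hi):
    """Interval for P(event AND consequence) when the forecaster can
    only bound P(event) and P(consequence | event) separately.

    Multiplying the bounds shows how uncertainty compounds: two fairly
    tight component intervals yield a much looser joint interval.
    """
    return event_lo * conseq_lo, event_hi * conseq_hi

# A 50-70% chance of the event and a 40-80% chance of the consequence
# given the event combine into a 20-56% joint interval: nearly a
# threefold spread, wider in relative terms than either component.
joint_lo, joint_hi = compound_interval(0.5, 0.7, 0.4, 0.8)
```

Stating and evaluating the two components separately lets each be checked against observations on its own timescale, which is the point of the authors' recommendation.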

Second, given the extensive evidence about the limited forecasting abilities of individual experts (Tetlock, 2005), forecasters should give more weight to the unique ability of markets to serve as efficient aggregators of belief in lieu of negotiated univocality. So-called prediction markets have a strong track record (Wolfers and Zitzewitz, 2004). Although they have been suggested multiple times for climate change-related subjects (Lucas and Mormann, 2019; Vandenbergh et al., 2014), they have almost never been used. An exception is the finding that pricing in weather financial derivatives is consistent with the output of climate models of temperature (Schlenker and Taylor, 2019).

Third, efforts to provide reliable mid-term predictions should be encouraged. The multi-year and decadal prediction work of Smith et al. (2019) and Meehl et al. (2014) is in this direction. But what should (also) be developed are repeated and sequential forecasts in order to facilitate learning about the forecasting process itself. That is, not just how current climate forecasting models perform in hindcasts, but how previous climate forecasts have performed (and hopefully improved) over time. Efforts to determine the limits of predictability are also important (Meehl et al., 2014) and should be studied in conjunction with the evaluation of forecast performance over time.

Fourth, extreme caution should be used in extrapolating from forecasts of climate events (e.g., temperature or CO2 levels) to their social and physical consequences (famine, flooding, etc.) without careful modelling of mitigation and adaptation efforts and other feedback mechanisms. While there have been notable successes in predicting certain climate characteristics, such as surface temperature (Smith et al., 2019), the ability to tie such predictions to quantitative forecasts of consequences is more limited. The efforts to model damages as part of determining the social cost of carbon (such as with the DICE, PAGE, and FUND integrated assessment models) are a start, but are subject to extreme levels of parameter sensitivity (Wang et al., 2019); this uncertainty should be reflected in any apocalyptic forecasts of climate change consequences.

Scientists are often encouraged to ‘think big’, especially in policy applications. What we are suggesting here is that climate policy analysis could benefit from thinking ‘small’ – that is, from focusing on the lower-level building blocks that go into making larger-scale predictions. One means by which to build public support for a complex idea like climate change is to demonstrate to the public that our understanding of the building blocks of that science is solid, that we are calibrated as to the accuracy of the building-block forecasts, and that we understand how lower-level uncertainty propagates through to probabilistic uncertainty in the higher-level forecasts of events and consequences.

Rolf Degen summarizing... Recent genome-wide association studies have shown that genetic influences on psychological traits are driven by thousands of DNA variants, each with very small effect sizes; effects of "the environment" appear to be as fragmented and unspecific

From Genome-Wide to Environment-Wide: Capturing the Environome. Sophie von Stumm, Katrina d’Apice. Perspectives on Psychological Science, March 1, 2021. https://doi.org/10.1177/1745691620979803

Rolf Degen's take: Recent genome-wide association studies have shown that genetic influences on psychological traits are driven by thousands of DNA variants, each with very small effect sizes. Effects of "the environment" appear to be as fragmented and unspecific

Abstract: Genome-wide association (GWA) studies have shown that genetic influences on individual differences in affect, behavior, and cognition are driven by thousands of DNA variants, each with very small effect sizes. Here, we propose taking inspiration from GWA studies for understanding and modeling the influence of the environment on complex phenotypes. We argue that the availability of DNA microarrays in genetic research is comparable with the advent of digital technologies in psychological science that enable collecting rich, naturalistic observations in real time of the environome, akin to the genome. These data can capture many thousand environmental elements, which we speculate each influence individual differences in affect, behavior, and cognition with very small effect sizes, akin to findings from GWA studies about DNA variants. We outline how the principles and mechanisms of genetic influences on psychological traits can be applied to improve the understanding and models of the environome.

Keywords: genomics, genetics, environment, large data, effect sizes

Throughout this article, we have highlighted ways in which psychological science may take inspiration from genomic research to advance the understanding and models of environmental influences. Our aim is now to outline the steps that we believe are essential to bring about an effective research agenda for the environome.

A first challenge—having the technical tools available to capture the environome—is under way, although it is far from being complete. The environome comprises an infinite number of dynamic processes, whose assessment requires robust technologies that enable collecting precise, in-depth observations at multiple time points with little measurement error (Wild, 2012). Although assessment technologies have rapidly improved in recent years, capturing even one individual’s environome in its totality remains impossible to date (Roy et al., 2009).

The second challenge is to develop the computational methods required for modeling these rich data, for example using machine-learning approaches such as data mining and cluster analysis. This challenge is not specific to studies of the environome but shared with analyses of the genome. Although current GWA studies already incorporate a vast number of SNPs, they typically include only a fraction of the potentially available genomic information (Wainschtein et al., 2019). Another parallel between genome and environome suggests itself here: GWA studies currently consider only additive effects of SNPs, although interactions are plausible. Likewise, environmental factors are likely to involve interactive effects between each other in addition to additivity and collinearity. We predict that statistical advances in genomics will prevail at a fast pace and that they will be applicable not only to the genome but also to studies of the environome.
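The quantitative intuition behind the GWA analogy, that predictors which are individually negligible can be collectively powerful, is easy to reproduce. In this sketch (counts and variance shares are invented for illustration, not taken from the article), each of 1,000 independent "environmental" variables explains a vanishing share of outcome variance on its own, while a score summing them all explains about 30%:

```python
import random

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def many_small_effects(n_people=2000, n_vars=1000, total_r2=0.3, seed=1):
    """Return (mean per-variable R^2, R^2 of the summed score).

    The outcome is a scaled sum of all variables plus noise, so each
    variable's true R^2 is total_r2 / n_vars (here 0.0003) while the
    aggregate score's true R^2 is total_r2.
    """
    rng = random.Random(seed)
    X = [[rng.gauss(0, 1) for _ in range(n_vars)] for _ in range(n_people)]
    effect = (total_r2 / n_vars) ** 0.5
    noise_sd = (1 - total_r2) ** 0.5
    y = [effect * sum(row) + noise_sd * rng.gauss(0, 1) for row in X]
    score = [sum(row) for row in X]
    sampled = range(0, n_vars, 100)  # spot-check every 100th variable
    per_var = sum(corr([row[j] for row in X], y) ** 2
                  for j in sampled) / len(sampled)
    return per_var, corr(score, y) ** 2
```

Running it yields a per-variable R² on the order of one part in a thousand against an aggregate near 0.3, the same pattern the authors speculate holds for the environome's many small influences.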

The third challenge is to develop a theoretical framework for organizing and modeling the environome and its influence on complex traits. We anticipate that this challenge can be met only through large-scale collaborations, akin to the consortia that dominate contemporary genetic research, such as the Psychiatric Genomics Consortium (https://www.med.unc.edu/pgc/), which focuses on mental health issues, or the Social Science Genetic Association Consortium (https://www.thessgac.org/), which targets social science outcomes, as its name suggests. These and other consortia like them typically involve hundreds of researchers and organizations that engage in interdisciplinary collaborations and pool data across biobanks, population cohort studies, and independent samples. They offer extraordinary opportunities for scientific breakthroughs: The majority of the recent discoveries about the role of genetic influences on people’s differences in psychological traits emerged on the back of the work completed in consortia. For modeling the environome, longitudinal population cohort studies, which are typically defined by the year or decade of the cohort members’ birth and by the geographical scope from which they were recruited, will be of particular value (Cave & von Stumm, 2020). For one, longitudinal cohort studies can elucidate at least some of the environome’s dynamic changes that occur across people’s life span because cohort members are repeatedly assessed over time, including observations of the prenatal environment in some cases. For another, population cohort studies are key to exploring the environome’s socio-historical development across generations—in other words, how the environmental experiences of today’s children differ from their parents’ and grandparents’ environmental experiences.

Rather than creating new consortia or shifting attention away from existing ones, we suggest broadening their scope to also pool data and expertise on the environome. Akin to the HapMap Project, a first step for a systematic research program into the environome would call for charting the breadth of environments that humans experience. A bottom-up approach, for example by creating comprehensive archives of environmental measures that are available across biobanks, population cohort studies, and independent samples, has some appeal. The alternative top-down approach would involve developing a theoretical taxonomy that could be applied to categorize observations of environments, including those already collected in previous studies, and then be subjected to empirical validation. An encouraging example is the DIAMONDS taxonomy that proposes eight dimensions to classify psychological situations by the extent to which they pertain to duty (i.e., something has to be done), intellect (i.e., learning opportunity), adversity (i.e., threat), mating (i.e., sexually charged), positivity (i.e., playfulness), negativity (i.e., stress), deception (i.e., sabotage), and sociality (i.e., social interaction; Rauthmann et al., 2014). Although the DIAMONDS taxonomy has to date been applied to only a select number of contexts and is fairly abstract, its theoretical framework may inspire analogous models for describing the environome.

GWA studies serve to identify genetic predictors of developmental differences in psychological traits, but they currently offer little value for elucidating the causality that underlies this prediction (Belsky & Harden, 2019). Likewise, the framework we proposed here for modeling the environome focuses on prediction. It does not qualify for finding the functional or causal mechanisms that explain why certain environmental conditions benefit phenotypic development more than others. Although not always appreciated, accurate prediction of psychological traits is immensely precious in itself because it enables identifying risk and resilience before problems manifest. In addition, a better understanding of the environome will help generate hypotheses that in the future can facilitate direct tests of causality, akin to current endeavors in functional genomics that try to make sense of gene and protein functions and interactions.

Participants made less prosocial decisions (i.e., became more selfish) in different-gender avatars, regardless of their own sex; women embodying a male avatar were more sensitive to temptations of immediate rewards

Bolt, Elena, Jasmine Ho, Marte Roel, Alexander Soutschek, Philippe N. Tobler, and Bigna Lenggenhager. 2021. “How the Virtually Embodied Gender Influences Social and Temporal Decision Making.” PsyArXiv. March 1. doi:10.31234/osf.io/84v9n

Abstract: Mounting evidence has demonstrated that embodied virtual reality, during which physical bodies are replaced with virtual surrogates, can strongly alter cognition and behavior even when the virtual body radically differs from one’s own. One particular emergent area of interest is the investigation of how virtual gender swaps can influence choice behaviors. Economic decision making paradigms have repeatedly shown that women tend to display more prosocial sharing choices than men. To examine whether a virtual gender swap can alter gender-specific differences in prosociality, 48 men and 51 women embodied either a same- or different-gender avatar in immersive virtual reality. In a between-subjects design, we differentiated between specifically social and non-social decision making by means of an interpersonal and intertemporal discounting task, respectively. We hypothesized that a virtual gender swap would elicit social behaviors that stereotypically align with the gender of the avatar. To relate potential effects to changes in self-perception, we measured implicit and explicit gender identification, and used questionnaires that assessed the strength of the illusion. Contrary to our hypothesis, our results show that participants made less prosocial decisions (i.e., became more selfish) in different-gender avatars, independent of their own biological sex. Moreover, women embodying a male avatar in particular were more sensitive to temptations of immediate rewards. Lastly, the manipulation had no effects on implicit and explicit gender identification. To conclude, while we showed that a virtual gender swap indeed alters decision making, gender-based expectancies cannot account for all the task-specific interpersonal and intertemporal changes following the virtual gender swap.


US: Religiosity decreased over time at a similar rate for the heterosexual and sexual minority groups; spirituality significantly increased over time for the sexual minority group but not for the heterosexual youth

Religious and Spiritual Development from Adolescence to Early Adulthood in the U.S.: Changes over Time and Sexual Orientation Differences. Kalina M. Lamb, Robert S. Stawski & Sarah S. Dermody. Archives of Sexual Behavior, Feb 22 2021. https://link.springer.com/article/10.1007%2Fs10508-021-01915-y

Abstract: Adolescence is a critical time in the U.S. for religious development in that many young people eschew their religious identity as they enter adulthood. In general, religion is associated with a number of positive health outcomes including decreased substance use and depression. The current study compared the developmental patterns of religiosity and spirituality in heterosexual and sexual minority youth. The design was a secondary data analysis of the first five waves of the Longitudinal Study of Adolescent Health and Wellness (N = 337, 71.8% female). Using multilevel linear (for spirituality) and quadratic (for religiosity) growth models, the initial level and change over time in religiosity and spirituality, as well as the correlations between growth processes, were compared between heterosexual and sexual minority individuals. The heterosexual group had significantly higher initial religiosity levels than the sexual minority group. Religiosity decreased over time at a similar rate for the heterosexual and sexual minority groups. Spirituality significantly increased over time for the sexual minority group but not for the heterosexual youth. The change over time in religiosity and spirituality were significantly and positively correlated for heterosexual individuals but were uncorrelated for sexual minority individuals. Results indicate there are differences in religious development based on sexual minority status. Future research should take into account how these differential religious and spiritual developmental patterns seen in heterosexual and sexual minority youth might predict various health outcomes.


Consensual non-monogamous relationships: Not necessarily less satisfying or less stable, some individuals can experience psychological need fulfillment and satisfying relationships with concurrent partners

Wood J, Quinn-Nilas C, Milhausen R, Desmarais S, Muise A, Sakaluk J (2021) A dyadic examination of self-determined sexual motives, need fulfillment, and relational outcomes among consensually non-monogamous partners. PLoS ONE 16(2): e0247001. https://doi.org/10.1371/journal.pone.0247001

Abstract: Intimate and sexual relationships provide opportunity for emotional and sexual fulfillment. In consensually non-monogamous (CNM) relationships, needs are dispersed among multiple partners. Using Self-Determination Theory (SDT) and dyadic data from 56 CNM partnerships (112 individuals), we tested how sexual motives and need fulfillment were linked to relational outcomes. We drew from models of need fulfillment to explore how sexual motives with a second partner were associated with satisfaction in the primary relationship. In a cross-sectional and daily experience study we demonstrated that self-determined reasons for sex were positively associated with sexual satisfaction and indirectly linked through sexual need fulfillment. Self-determined reasons for sex predicted need fulfillment for both partners at a three-month follow up. The association between sexual motives and need fulfillment was stronger on days when participants engaged in sex with an additional partner, though this was not related to satisfaction in the primary relationship. Implications for need fulfillment are discussed.


Some Implications

Our findings have implications both for intimate and sexual partners wishing to enhance their relationship(s) and clinicians working with CNM and monogamous individuals/couples. Promoting self-determined reasons for engaging in sex could encourage partners to engage in sexual interactions that are more likely to fulfill their needs (e.g., having sex when they are excited about the activity, rather than to avoid conflict). Encouraging partners to explore why they may be having sex for less self-determined reasons, and how they may shift to having sex for more self-determined reasons, is one strategy clinicians can use to promote relational well-being. Clinicians working with CNM partners can also encourage individuals to communicate and express continued affection and desire for established partners when new relationships occur in order to maintain sexual and relationship satisfaction in the primary dyad.

The current research also has implications for individuals in CNM communities. Popular assumptions of romantic relationships position CNM partnerships as less satisfying or less stable compared to monogamous relationships [6,20]. CNM partners in the current research noted high levels of satisfaction and sexual need fulfilment with both their first and second partners. Moreover, a concurrent sexual partnership did not appear to have significant detrimental effects on the first relationship. These findings verify what CNM researchers and advocates have previously emphasized: that for some, CNM relationships are a viable and fulfilling alternative to monogamy, and one of many approaches to encouraging personal growth and fulfillment [49]. These results may help to destigmatize CNM partnerships as they confirm that individuals can experience psychological need fulfillment and satisfying relationships with concurrent partners.

Finally, research on sexual behaviour, and on CNM generally, has been criticized for lacking theoretical frameworks [4,21,42,78]. The current studies contribute to a growing body of research that utilizes social psychological approaches to the study of sexual behaviour and emphasizes the importance of sexuality to relational well-being [36,42,45,52,71]. The research provides a theoretical context within which to understand the associations between sexual motives, need fulfilment, and relational outcomes in relationships where sexual and emotional needs are met by multiple partners, thus expanding the experiences represented in the social psychological literature.

The Origins and Design of Witches and Sorcerers

The Origins and Design of Witches and Sorcerers. Manvir Singh. Current Anthropology, Feb 2021. https://www.journals.uchicago.edu/doi/abs/10.1086/713111

Abstract: In nearly every documented society, people believe that some misfortunes are caused by malicious group mates using magic or supernatural powers. Here I report cross-cultural patterns in these beliefs and propose a theory to explain them. Using the newly created Mystical Harm Survey, I show that several conceptions of malicious mystical practitioners, including sorcerers (who use learned spells), possessors of the evil eye (who transmit injury through their stares and words), and witches (who possess superpowers, pose existential threats, and engage in morally abhorrent acts), recur around the world. I argue that these beliefs develop from three cultural selective processes: a selection for intuitive magic, a selection for plausible explanations of impactful misfortune, and a selection for demonizing myths that justify mistreatment. Separately, these selective schemes produce traditions as diverse as shamanism, conspiracy theories, and campaigns against heretics—but around the world, they jointly give rise to the odious and feared witch. I use the tripartite theory to explain the forms of beliefs in mystical harm and outline 10 predictions for how shifting conditions should affect those conceptions. Societally corrosive beliefs can persist when they are intuitively appealing or they serve some believers’ agendas.

8. Discussion


8.1. The origins of sorcerers, lycanthropes, the evil eye, and witches


Table 5 displays the three cultural selective processes hypothesized to be responsible for shaping beliefs in practitioners of mystical harm. Figure 3 shows how those processes interact to produce some of the malicious practitioners identified in Figure 1 (sorcerers, the evil eye, lycanthropes, and witches).


[Table 5. The three cultural selective schemes responsible for beliefs in practitioners of mystical harm.]


According to the theory outlined here, sorcerers are the result of both a selection for intuitive magic and a selection for plausible explanations. The selection for intuitive magic produces compelling techniques for controlling uncertain outcomes, including rain magic, gambling superstitions, and magic aimed at harming others, or sorcery. Once people accept that this magic is effective and that other people practice it, it becomes a plausible explanation for misfortune. A person who feels threatened and who confronts unexplainable tragedy will easily suspect that a rival has ensorcelled them. As people regularly consider how others harm them, they build plausible portrayals of sorcerers.

Beliefs about werewolves, werebears, weresnakes, and other lycanthropes also develop from a selection for plausible explanations. Baffled as to why an animal attacked them, a person suspects a rival of becoming or possessing an animal and stalking them at night. This explanation becomes more conceivable as the lycanthrope explains other strange events and as conceptions of the lycanthrope become more plausible. Many societies ascribe transformative powers to other malicious practitioners (see Table 3), showing that people also suspect existing practitioners after attacks by wild animals.

Beliefs in the malignant power of stares and words likewise develop to explain misfortune. As reviewed earlier, people around the world connect jealousy and envy to a desire to induce harm. Thus, people who stare with envy or express a compliment are suspected of harboring malice and an intention to harm. A person who suffers a misfortune remembers these stares and suspects those people of somehow injuring them. In regularly inferring how envious individuals attacked them, people craft a compelling notion of the evil eye.

Why suspect the evil eye rather than sorcery? There are at least two possibilities. First, an accused individual may ardently deny knowing sorcery or having attacked the target (see these claims among the Azande, both described in text: Evans-Pritchard 1937:119-125; and shown in film: Singer 1981, minute 21). Alternatively, given beliefs that effective sorcery requires powers that develop with age, special knowledge, or certain experiences, it may seem unreasonable that a young or inexperienced group mate effectively ensorcelled the target. In these instances, the idea that the stare itself harmed the target may provide a more plausible mechanism.

The famous odious, powerful witch, I propose, arises when blamed malicious practitioners become demonized. People who fear an invisible threat or who have an interest in mistreating competitors benefit from demonizing the target, transforming them into a heinous, threatening menace. Thus, witches represent a confluence of two and sometimes all three cultural selective processes.

In Figure 1, I showed that beliefs about malicious practitioners exist along two dimensions. The tripartite theory accounts for this structure. All of the practitioners displayed are plausible explanations of how group mates inflict harm. One dimension (SORCERY-EVIL EYE) distinguishes those explanations of misfortune that include magic (sorcerers) from those that do not (evil eye, lycanthrope). The other dimension shows the extent to which different practitioners have been demonized. In short, all beliefs about harmful practitioners are explanations; sometimes they use magic, sometimes they’re made evil.


8.2. Ten predictions

The proposed theory generates many predictions for how shifting conditions should drive changes in beliefs about malicious practitioners. I referred to several of these throughout the paper. Here are ten (where a prediction is discussed earlier, the relevant section is noted):

1. People are more likely to believe in sorcerers as sorcery techniques become more effective-seeming.

2. People are more likely to ascribe injury to mystical harm when they are distrustful of others, persecuted, or otherwise convinced of harmful intent. (sect. 6.2.1)

3. The emotions attributed to malicious practitioners will be those that most intensely and frequently motivate aggression. (sect. 6.2.1)

4. People are more likely to attribute injury to mystical harm when they lack alternative explanations. (sect. 6.2.2)

5. The greater the impact of the misfortune, the more likely people are to attribute it to mystical harm. (sect. 6.2.2)

6. Practitioners of mystical harm are more likely to become demonized during times of stressful uncertainty.

7. The traits ascribed to malicious practitioners will become more heinous or sensational as Condoners become more trustful or reliant on information from Campaigners.

8. Malicious practitioners will become less demonized when there is less disagreement or resistance about their removal.

9. The traits that constitute demonization will be those that elicit the most punitive outrage, controlling for believability. (sect. 7.2.1)

10. Malicious practitioners whose actions can more easily explain catastrophe, such as those who employ killing magic compared to love magic, will be easier to demonize.

8.3. The cultural evolution of harmful beliefs

Social scientists, and especially those who study the origins of religion and belief, debate whether cultural traditions evolve to provide group-level benefits (Baumard and Boyer 2013; Norenzayan et al. 2016). Reviving the analogy of society as an organism, some scholars maintain that cultural traits develop to ensure the survival and reproduction of the group (Wilson 2002). These writers argue that traditions that undermine societal success should normally be culled away, while traditions that enhance group-level success should spread (Boyd and Richerson 2010). In this paper, I have examined cultural traits with clear social costs: mystical harm beliefs. As sources of paranoia, distrust, and bloodshed, these beliefs divide societies, breeding contempt even among close family members. But I have explained them without invoking group-level benefits. Focusing on people's (usually automatic) decisions to adopt cultural traditions, I have shown that beliefs in witches and sorcerers are maximally appealing, providing the most plausible explanations of misfortune and justifying hostile aims. Corrosive customs recur as long as they are useful and cognitively appealing.