Friday, January 31, 2020

Lying to appear honest: People may lie to appear honest in cases where the truth is highly favorable to them, such that telling the truth might make them appear dishonest to others

Choshen-Hillel, S., Shaw, A., & Caruso, E. M. (2020). Lying to appear honest. Journal of Experimental Psychology: General. Jan 2020. https://doi.org/10.1037/xge0000737

Abstract: People try to avoid appearing dishonest. Although efforts to avoid appearing dishonest can often reduce lying, we argue that, at times, the desire to appear honest can actually lead people to lie. We hypothesize that people may lie to appear honest in cases where the truth is highly favorable to them, such that telling the truth might make them appear dishonest to others. A series of studies provided robust evidence for our hypothesis. Lawyers, university students, and MTurk and Prolific participants said that they would have underreported extremely favorable outcomes in real-world scenarios (Studies 1a–1d). They did so to avoid appearing dishonest. Furthermore, in a novel behavioral paradigm involving a chance game with monetary prizes, participants who received in private a very large number of wins reported fewer wins than they received; they lied and incurred a monetary cost to avoid looking like liars (Studies 2a–2c). Finally, we show that people’s concern that others would think that they have overreported is valid (Studies 3a–3b). We discuss our findings in relation to the literatures on dishonesty and on reputation.


Sexual orientation explains < 1% of the variation in consumption-favoring behaviors; the common belief of a stylish & extremely wealthy gay consumer must be questioned; differences decrease with age

Sexual orientation and consumption: Why and when do homosexuals and heterosexuals consume differently? Martin Eisend, Erik Hermann. International Journal of Research in Marketing, Jan 31 2020. https://doi.org/10.1016/j.ijresmar.2020.01.005

Highlights
• Sexual orientation explains < 1% of the variation in consumption-favoring behaviors.
• The common belief of a stylish and extremely wealthy gay consumer must be questioned.
• Consumption differences between homosexuals and heterosexuals decrease with age.
• Consumption differences increase when comparing homosexuals and heterosexuals of the same gender.

Abstract: The increasing visibility of homosexuality in society, combined with the lesbian and gay community's considerable buying power, has triggered marketers and researchers' interest in understanding homosexual consumers' consumption patterns. Prior research on whether homosexual consumers behave differently from heterosexual consumers has yielded mixed results, and researchers and practitioners still do not know whether any substantial differences exist, what these differences look like, and how they can be explained. The findings from a meta-analysis reveal that sexual orientation explains on average < 1% of the variation in consumption behavior across 45 papers, indicating only slightly different consumption behaviors. Findings from a moderator analysis contradict conventional wisdom and lay theories, while partly supporting assumptions that are rooted in evolutionary and biological theories that show consumption differences decrease with age; they increase when comparing homosexuals and heterosexuals of the same gender. These findings, which question long-held beliefs about homosexual consumers, help marketers to successfully adjust their strategies.

Keywords: Sexual orientation, Consumption, Meta-analysis
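
The "< 1% of variation" headline is just the square of a small pooled correlation (R² = r²): an average r of about .10 already corresponds to roughly 1% of variance explained. Below is a minimal Python sketch of that conversion, with a Fisher-z, inverse-variance pooling step of the kind meta-analyses use; the correlations and sample sizes are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical study-level correlations between sexual orientation and a
# consumption-related behavior, with sample sizes; invented for illustration.
r = np.array([0.05, 0.12, 0.03, 0.08, -0.02])
n = np.array([200, 150, 500, 320, 410])

# Fisher z-transform and inverse-variance (fixed-effect) pooling.
z = np.arctanh(r)
w = n - 3                                  # 1 / Var(z_i) = n_i - 3
r_pooled = np.tanh(np.sum(w * z) / np.sum(w))

# Share of variance in the behavior accounted for by sexual orientation.
print(f"pooled r = {r_pooled:.3f}, R^2 = {r_pooled**2:.4f}")  # here R^2 < .01
```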

The art of flirting: What are the traits that make it effective?

The art of flirting: What are the traits that make it effective? Menelaos Apostolou, Christoforos Christoforou. Personality and Individual Differences, Volume 158, 1 May 2020, 109866. https://doi.org/10.1016/j.paid.2020.109866

Highlights
•    Identified 47 traits that make flirting effective.
•    Classified the 47 traits into nine factors that make flirting effective.
•    Found that women rated a gentle approach, while men rated good looks, as more effective.
•    Found that older participants rated factors such as a gentle approach as more effective.

Abstract: Flirting is an essential aspect of human interaction and key for the formation of intimate relationships. In the current research, we aimed to identify the traits that make it more effective. In particular, in Study 1 we used open-ended questionnaires in a sample of 487 Greek-speaking participants, and identified 47 traits that make flirting effective. In Study 2, we asked 808 Greek-speaking participants to rate how effective each trait would be on them. Using principal components analysis, we classified these traits into nine broader factors. Having good non-verbal behavior, being intelligent, and having a gentle approach were rated as the most important factors. Sex differences were found for most of the factors. For example, women rated a gentle approach as more effective on them, while men rated good looks as more effective. Last but not least, older participants rated factors, such as the “Gentle approach,” to be more effective on them.
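
The factor-extraction step described here is a principal components analysis of the participants × traits rating matrix. A minimal sketch of that kind of analysis with scikit-learn, using randomly generated stand-in ratings (the study's data are not public); retaining nine components mirrors the paper's nine factors, though the rotation step common in this literature is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in ratings: 808 participants x 47 flirting traits on a 1-5 scale.
ratings = rng.integers(1, 6, size=(808, 47)).astype(float)

# Standardize each trait, then extract components; the paper retained nine.
ratings = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
pca = PCA(n_components=9)
scores = pca.fit_transform(ratings)   # participant scores on each factor
loadings = pca.components_.T          # 47 x 9 trait loadings

print(pca.explained_variance_ratio_.round(3))
```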

Check also A considerable proportion of people in postindustrial societies experience difficulties in intimate relationships and spend considerable time being single:
The Association Between Mating Performance, Marital Status, and the Length of Singlehood: Evidence From Greece and China. Menelaos Apostolou, Yan Wang. Evolutionary Psychology, November 13, 2019. https://www.bipartisanalliance.com/2019/11/a-considerable-proportion-of-people-in.html

Search for meaning is positively associated with presence of meaning only for those with greater maladaptive traits; & the search for meaning in adverse circumstances appears to be more effective than in benign conditions

Is the Search for Meaning Related to the Presence of Meaning? Moderators of the Longitudinal Relationship. Steven Tsun-Wai Chu & Helene Hoi-Lam Fung. Journal of Happiness Studies, January 30 2020. https://link.springer.com/article/10.1007/s10902-020-00222-y

Abstract: Meaning in life is an important element of psychological well-being. Intuitively, the search for meaning is associated with greater presence of meaning, but the literature reports mixed findings on whether this relationship exists. The present studies aim to investigate the moderators of this relationship. Two studies, a one-month longitudinal study (N = 166, retention rate = 100%) and a six-month longitudinal study (N = 181, retention rate = 83%), were carried out. Participants completed measures on meaning in life, personality variables, and psychological needs in the baseline survey, and meaning in life in the follow-up survey. Multiple regression analysis showed that optimism, BIS, and psychological needs emerged as significant moderators of the longitudinal relationship. Search for meaning at baseline was positively associated with presence of meaning at follow-up only for those with greater maladaptive traits. The search for meaning in adverse circumstances appears to be more effective than in benign conditions. Deficiency search is functional.
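
In regression terms, each moderation test here is an interaction between baseline search for meaning and the candidate moderator when predicting presence of meaning at follow-up. A hedged sketch with statsmodels on simulated data; the variable names and the built-in interaction effect are illustrative assumptions, not the authors' values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 181  # sized like the six-month sample, but entirely simulated
df = pd.DataFrame({
    "search_t1": rng.normal(size=n),   # baseline search for meaning (z-scored)
    "optimism": rng.normal(size=n),    # candidate moderator
})
# Simulate a search -> presence link that weakens as optimism rises.
df["presence_t2"] = (0.1 * df["search_t1"]
                     - 0.2 * df["search_t1"] * df["optimism"]
                     + rng.normal(size=n))

# A significant coefficient on search_t1:optimism is the moderation effect.
model = smf.ols("presence_t2 ~ search_t1 * optimism", data=df).fit()
print(model.summary().tables[1])
```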



Drug makers feel burned: By the time the vaccine was ready—after the peak of the outbreak—public fear of the new flu had subsided; many people didn’t want the vaccine, and some countries refused to take their full orders

From 2018... Who will answer the call in the next outbreak? Drug makers feel burned by string of vaccine pleas. Helen Branswell. Stat News, January 11, 2018. https://www.statnews.com/2018/01/11/vaccines-drug-makers/

Excerpts:

Every few years an alarming disease launches a furious, out-of-the-blue attack on people, triggering a high-level emergency response. SARS. The H1N1 flu pandemic. West Nile and Zika. The nightmarish West African Ebola epidemic.

In nearly each case, major vaccine producers have risen to the challenge, setting aside their day-to-day profit-making activities to try to meet a pressing societal need. With each successive crisis, they have done so despite mounting concerns that the threat will dissipate and with it the demand for the vaccine they are racing to develop.

Now, manufacturers are expressing concern about their ability to afford these costly disruptions to their profit-seeking operations. As a result, when the bat-signal next flares against the night sky, there may not be anyone to respond.

...

Drug makers “have very clearly articulated that … the current way of approaching this — to call them during an emergency and demand that they do this and that they reallocate resources, disrupt their daily operations in order to respond to these events — is completely unsustainable,” said Richard Hatchett, CEO of CEPI, an organization set up after the Ebola crisis to fund early-stage development of vaccines to protect against emerging disease threats.

...

Nearly all the major pharmaceutical companies that work on these vaccines have found themselves holding the bag after at least one of these outbreaks.

GSK stepped up during the Ebola crisis, but has since essentially shelved the experimental vaccine it once raced to try to test and license. Two other vaccines — Merck’s and one being developed by Janssen, the vaccines division of Johnson & Johnson — are still slowly wending their ways through difficult and costly development processes. Neither company harbors any hope of earning back in sales the money it spent on development.

A number of flu vaccine manufacturers were left on the hook with ordered but unpaid for vaccine during the mild 2009 H1N1 flu pandemic. By the time the vaccine was ready — after the peak of the outbreak — public fear of the new flu had subsided. Many people didn’t want the vaccine, and some countries refused to take their full orders. GSK, Sanofi Pasteur, and Novartis — which has since shed its vaccines operation — produced flu vaccine in that pandemic.

Dr. Rip Ballou, who heads the U.S. research and development center for GSK Global Vaccines, told STAT it’s not in the “company’s DNA” to say “no” to pleas to respond to appeals in an emergency. But the way it has responded in the past is no longer tenable.

“We do not want to have these activities compete with in-house programs,” said Ballou. “And our learnings from Ebola, from pandemic flu, from SARS previously, is that it’s very disruptive and that’s not the way that we want to do business going forward.”

GSK has proposed using a facility it has in Rockville, Md., as a production plant for vaccines needed in emergencies, but the funding commitments that would be needed to turn that idea into reality haven’t materialized.

...

Sanofi Pasteur has also taken several enormous hits in the successive rounds of disease emergency responses. In the early 2000s, the company worked on a West Nile virus vaccine. Though the disease still causes hundreds of cases of severe illness in the U.S. every year and is estimated to have been responsible for over 2,000 deaths from 1999 to 2016, public fear abated, taking with it the prospects for sales of a vaccine. Sanofi eventually pulled the plug.

...

At the same time, the company bore the brunt of a barrage of criticism for not publicly committing to a low-price guarantee for developing countries. Facing horrible PR and no sales prospects, Sanofi announced late last summer that it was out.

...

In an emergency, regulatory agencies may be willing to bend some rules. But once the crisis subsides, they revert to normal operating procedures — as Merck has found out as it tries to persuade regulators to accept data from an innovative ring-vaccination trial conducted on its Ebola vaccine.

“This is sort of a human nature problem. People pay attention to the burning house, and maybe not the one that’s got bad wiring, right, that’s down the street,” Shiver said.

Predictive Pattern Classification Can Distinguish Gender Identity Subtypes (the subjective perception of oneself belonging to a certain gender) from Behavior and Brain Imaging

Predictive Pattern Classification Can Distinguish Gender Identity Subtypes from Behavior and Brain Imaging. Benjamin Clemens, Birgit Derntl, Elke Smith, Jessica Junger, Josef Neulen, Gianluca Mingoia, Frank Schneider, Ted Abel, Danilo Bzdok, Ute Habel. Cerebral Cortex, bhz272, January 29 2020, https://doi.org/10.1093/cercor/bhz272

Abstract: The exact neurobiological underpinnings of gender identity (i.e., the subjective perception of oneself belonging to a certain gender) still remain unknown. Combining both resting-state functional connectivity and behavioral data, we examined gender identity in cisgender and transgender persons using a data-driven machine learning strategy. Intrinsic functional connectivity and questionnaire data were obtained from cisgender (men/women) and transgender (trans men/trans women) individuals. Machine learning algorithms reliably detected gender identity with high prediction accuracy in each of the four groups based on connectivity signatures alone. The four normative gender groups were classified with accuracies ranging from 48% to 62% (exceeding chance level at 25%). These connectivity-based classification accuracies exceeded those obtained from a widely established behavioral instrument for gender identity. Using canonical correlation analyses, functional brain measurements and questionnaire data were then integrated to delineate nine canonical vectors (i.e., brain-gender axes), providing a multilevel window into the conventional sex dichotomy. Our dimensional gender perspective captures four distinguishable brain phenotypes for gender identity, advocating a biologically grounded reconceptualization of gender dimorphism. We hope to pave the way towards objective, data-driven diagnostic markers for gender identity and transgender, taking into account neurobiological and behavioral differences in an integrative modeling approach.

Keywords: fMRI, gender identity, machine learning, resting-state functional connectivity, transgender
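
As a rough illustration of the classification logic, not the authors' actual pipeline: cross-validate a multiclass classifier on connectivity-like features for four groups, where chance is 25%. The data below are random stand-ins, and logistic regression is a placeholder for whatever algorithms the study used, so accuracy here hovers at chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Stand-in data: 120 participants x 1,000 connectivity features, four
# gender-identity groups (cis men/women, trans men/women); chance = 25%.
X = rng.normal(size=(120, 1000))
y = rng.integers(0, 4, size=120)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} (chance = 0.25)")
```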

Thursday, January 30, 2020

The youngest students in a class are less satisfied with their life, have worse general health, more frequent psychosomatic complaints, and are more likely to be overweight

Younger, Dissatisfied, and Unhealthy - Relative Age in Adolescence. L. Fumarco, S. Baert, F. Sarracino. Economics & Human Biology, January 30 2020, 100858. https://doi.org/10.1016/j.ehb.2020.100858

Highlights
•    The youngest students in a class are less satisfied with their life.
•    They have worse general health.
•    They have more frequent psychosomatic complaints and are more likely to be overweight.

Abstract: We investigate whether relative age (i.e. the age gap between classmates) affects life satisfaction and health in adolescence. We analyse data on students between 10 and 17 years of age from the international survey ‘Health Behaviour in School-Aged Children’ and find robust evidence that a twelve-month increase in relative age (i.e. the hypothetical maximum age gap between classmates) i) increases life satisfaction by 0.168 standard deviations, ii) increases self-rated general health by 0.108 standard deviations, iii) decreases psychosomatic complaints by 0.072 standard deviations, and iv) decreases the chances of being overweight by 2.4%. These effects are comparable in size to the effects of students’ household socio-economic status. Finally, gaps in life satisfaction are the only ones to shrink as absolute age increases, and only in countries where the first tracking of students occurs at 14 years of age or later.
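
Effect sizes like "0.168 standard deviations per twelve-month gap" come from regressions in which the outcome is z-scored and the predictor is scaled to the full twelve-month range. A minimal sketch of that reporting convention on simulated data; the variable names and the built-in effect are assumptions, not the HBSC data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000  # simulated pupils
df = pd.DataFrame({"relative_age_months": rng.uniform(0, 12, n)})
df["life_satisfaction"] = 0.014 * df["relative_age_months"] + rng.normal(size=n)

# z-score the outcome so the slope reads in standard deviations; rescaling the
# predictor to [0, 1] makes the coefficient the effect of a full 12-month gap.
df["ls_z"] = ((df["life_satisfaction"] - df["life_satisfaction"].mean())
              / df["life_satisfaction"].std())
df["rel_age_12m"] = df["relative_age_months"] / 12

print(smf.ols("ls_z ~ rel_age_12m", data=df).fit().params)
```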

Neanderthal genes might have helped Homo sapiens adjust to life beyond Africa, influencing skin pigmentation towards fairer skin & thereby increased life expectancy (at a cost of more skin cancer)

Women with fair phenotypes seem to confer a survival advantage in a low UV milieu. A nested matched case control study. Pelle G. Lindqvist et al. PLOS ONE, January 30, 2020. https://doi.org/10.1371/journal.pone.0228582

Abstract
Background Sun exposure in combination with skin pigmentation is the main determinant for vitamin D status. Human skin color seems to be adapted and optimized for regional sun ultraviolet (UV) intensity. However, we do not know if fair, UV-sensitive skin is a survival advantage in regions with low UV radiation.

Methods A population-based nested case–control study of 29,518 Caucasian women, ages 25 to 64 years, from Southern Sweden, who responded to a questionnaire regarding risk factors for malignant melanoma in 1990 and were followed for 25 years. For each fair woman, defined as having red hair or freckles (n = 11,993), a control was randomly selected from all non-fair women within the cohort of similar age, smoking habits, education, marital status, income, and comorbidity, i.e., 11,993 pairs. The main outcome was the difference in all-cause mortality between fair and non-fair women in a low UV milieu, defined as living in Sweden and having low-to-moderate sun exposure habits. Secondary outcomes were mortality by sun exposure and mortality among the non-overweight.

Results In a low UV milieu, fair women were at a significantly lower all-cause mortality risk as compared to non-fair women (log-rank test p = 0.04), with an 8% lower all-cause mortality rate (hazard ratio [HR] = 0.92, 95% CI 0.84‒1.0), including a 59% greater risk of dying from skin cancer among fair women (HR 1.59, 95% CI 1.26‒2.0). Thus, it seems that the beneficial health effects of low skin coloration outweigh the risk of skin cancer at high latitudes.

Conclusion In a region with low UV milieu, evolution seems to improve all-cause survival by selecting a fair skin phenotype, i.e., comprising fair women with a survival advantage.
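
The headline numbers are classic matched-cohort survival statistics: a log-rank test comparing fair with non-fair women plus a Cox model whose exponentiated coefficient is the hazard ratio (here 0.92). A sketch of that style of analysis with the lifelines library on simulated data; the hazard difference and the 25-year censoring below are illustrative assumptions, not the Swedish cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
n = 2000  # simulated matched pairs, not the study data
fair = np.repeat([1, 0], n // 2)

# Simulate slightly lower mortality for fair women (target HR below 1).
T = rng.exponential(scale=np.where(fair == 1, 27.0, 25.0))
E = (T < 25).astype(int)      # death observed within follow-up
T = np.minimum(T, 25.0)       # administrative censoring at 25 years

res = logrank_test(T[fair == 1], T[fair == 0],
                   event_observed_A=E[fair == 1],
                   event_observed_B=E[fair == 0])
print(f"log-rank p = {res.p_value:.3f}")

df = pd.DataFrame({"T": T, "E": E, "fair": fair})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()  # exp(coef) for 'fair' is the hazard ratio
```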


Discussion

Women with a fair, UV-sensitive phenotype living in a low UV milieu had a significantly increased life expectancy as compared to non-fair women. Fair women were at an eight percent lower all-cause mortality rate as compared to those with non-fair skin. There was a strong inverse dose-dependent relationship between increasing sun-exposure habits and all-cause mortality.

Strengths and limitations

Our large sample, comprising 20% of all women aged 25 to 64 in the south Swedish region, drawn by random selection from the population registry at the study inception in 1990, is a strength. It was thus a representative sample of the South Swedish population at the time of recruitment, before the large immigration of the 2000s. It comprises almost exclusively European Caucasian women. Thus, the comparison between fair and non-fair was mainly a comparison between Fitzpatrick type 1 skin vs. type 2‒3 skin. Since the questionnaire was administered at the inception of the study, there was no recall bias. Since we have earlier been criticized that our adjustments in Cox regression might not be adequate, we decided to perform a one-to-one matched design. Historically, during evolution there was no possibility to use a solarium or to travel for sunbathing. Therefore, we were predetermined to make the main outcome comparison in a low UV milieu, i.e., among those with low-to-moderate sun exposure habits. As secondary outcomes we assessed mortality by sun exposure with adjustment for exercise, or stratified for low BMI, including only the time period after the year 2000. A major limitation is that the significance level of the lower risk of all-cause mortality among fair women was close to the 5% significance level in all analyses regarding skin type, but the comparison followed the predetermined hypothesis. Another strength is that the analyses from the year 2000 onward, including exercise habits and BMI, showed similar results, but with wider CIs. The results might not generalize to regions with more intense UV radiation. The aim of the study was not to assess cause-specific mortality. However, it is impossible to publish on beneficial effects of sun exposure without including data on skin cancer mortality. Thus, our study agrees with the large number of papers showing an increased incidence of skin cancer with fair skin, and we also showed increased mortality from skin cancer. Since fair skin is selected at high latitudes, an improved all-cause survival is also expected from an evolutionary perspective [2]. Frost and coworkers reported in an open internet-based study that red-haired women were particularly prone to ovarian, uterine, cervical, and colorectal cancer; our results could not reproduce these findings, and we did not find an increased incidence of these cancers among fair women in our study [15]. There has been somewhat conflicting evidence regarding sun exposure and all-cause mortality. The Swedish Women´s Lifestyle and Health Study reported that increased sun exposure (measured as sunbathing holidays, i.e., one of our four questions) was related to reduced HRs for all-cause mortality [16]. On the other hand, a large US epidemiological study based on regional, not personal, UV radiation reported a positive relation between increasing UV radiation and all-cause mortality [17]. A possible explanation for the opposing results might be the differences in latitude and, therefore, UV intensity (Sweden latitude 55° to 59°; continental US latitude 24° to 42°). While the mean level of vitamin D, the biomarker for sun exposure, was 48.6 (± 20.5) nmol/L in Sweden, it was 77.0 (± 25.0) nmol/L in the US, indicating a greater problem with sun deficiency at high latitudes [9, 18]. Based on data from the Swedish Meteorological and Hydrological Institute (SMHI), in 2014 there was only one day with strong UV exposure, i.e., UV index ≥ 6.

Skin cancer mortality

We investigated whether the increased mortality associated with skin cancer influenced the strong inverse relationship between all-cause mortality and increasing sun exposure habits, and found that this was not the case. Women with fair skin were at a 59% increased risk of death from skin cancer. This was counterbalanced by the health benefits, as measured by all-cause mortality, of fair skin and sun exposure. There is an increased risk of skin cancer with both fair skin and increasing sun exposure, but the prognosis of skin cancer seems to improve with increasing sun exposure [19, 20]. Thus, there seems to be a tradeoff between health benefits and skin cancer, and in regions with a scarcity of solar UV radiation fair skin has been selected [2]. In our modern society it is not unusual to find a mismatch between skin coloration and geography/climate/habits that might cause increased morbidity and mortality [2].

Sun exposure and overweight

Overweight and obese women do not seem to obtain the same benefit from having fair skin or from sun exposure as non-overweight women. We have seen similar findings in prior studies, where the lower risk of type 2 diabetes mellitus and endometrial cancer after UV exposure was mainly seen in non-overweight women [21, 22]. Wortsman and coworkers have clearly demonstrated that obesity has a detrimental effect on vitamin D levels for a given amount of UV exposure [23]. Thus, lower sun exposure habits among the overweight are not the cause. It appears that vitamin D is either produced in a smaller quantity or consumed/inactivated among overweight women. Further, a study using Mendelian randomization analysis showed that increasing BMI leads to lower vitamin D levels [24]. The differential impact of BMI by sun exposure on all-cause mortality is an area that would benefit from additional research. Since BMI seems to be in the causal pathway between sun exposure and all-cause mortality, we chose not to adjust for BMI and present only stratified analyses.
It has been hypothesized that interbreeding with Neanderthals some 47,000 to 65,000 years ago in northern Canaan might have helped Homo sapiens adjust to life beyond Africa [25–27]. Studies of the ancient Neanderthal genome have shown that Westerners carry approximately 1% to 3% of Neanderthal DNA [25, 26]. People of European origin are highly likely (≈ 60% to 70%) to carry the Neanderthal DNA that affects keratin filaments, i.e., the zinc finger protein basonuclin-2 (BNC2). The latter alleles are thought to be involved in the adaptive variation of skin coloration, influencing skin pigmentation towards fairer skin [6, 28]. Given our finding of increased life expectancy with fair skin, we speculate that the preserved high carriership of the Neanderthal BNC2 allele might be an advantage at high latitudes.
We interpret our findings as supporting the view that a fair, UV-sensitive phenotype in Sweden is related to prolonged life expectancy in a low UV milieu, but at the cost of an increased risk of death due to skin cancer. Over thousands of years a fair, UV-sensitive phenotype has possibly been selected for optimal health at high latitudes.

Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly & popular literature; we find no threshold beyond which greater IQ ceases to be beneficial

Brown, Matt, Jonathan Wai, and Christopher Chabris. 2020. “Can You Ever Be Too Smart for Your Own Good? Linear and Nonlinear Effects of Cognitive Ability.” PsyArXiv. January 30. https://psyarxiv.com/rpgea/

Abstract: Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly and popular literature. This divergence of beliefs among researchers, practitioners, and the general public presents an obstacle for evidence-based policy and decision-making in a variety of settings. One commonly held idea is that greater cognitive ability does not matter or is actually harmful beyond a certain point (sometimes stated as either 100 or 120 IQ points). We empirically test these notions using data from four longitudinal, representative cohort studies comprising a total of 48,558 participants in the U.S. and U.K. from 1957 to the present. We find that cognitive ability measured in youth has a positive association with most occupational, educational, health, and social outcomes later in life. Most effects were characterized by a moderate-to-strong linear trend or a practically null effect (mean R² = .002 to .256). Although we detected several nonlinear effects, they were small in magnitude (mean incremental R² = .001). We found no support for any detrimental effects of cognitive ability and no evidence for a threshold beyond which greater scores cease to be beneficial. Thus, greater cognitive ability is generally advantageous—and virtually never detrimental.
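
The threshold question boils down to whether a nonlinear term adds incremental R² beyond the linear model, which is exactly the quantity the abstract reports. A minimal sketch of that comparison on simulated data in which the true effect is purely linear; names and magnitudes are illustrative assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000  # simulated cohort
iq = rng.normal(100, 15, n)
income = 0.02 * iq + rng.normal(size=n)   # purely linear true effect

def r2(X, y):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

z = (iq - iq.mean()) / iq.std()
r2_lin = r2(z[:, None], income)
r2_quad = r2(np.column_stack([z, z ** 2]), income)

# A 'too smart' threshold would show up as a sizable incremental R^2 for the
# quadratic term; a value near zero supports the linear account.
print(f"linear R^2 = {r2_lin:.4f}, incremental R^2 = {r2_quad - r2_lin:.5f}")
```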

The vast majority of dogs and cats were reported to remember past events; both species reportedly remembered single-occurrence events that happened years ago

Pet memoirs: The characteristics of event memories in cats and dogs, as reported by their owners. Amy Lewis, Dorthe Berntsen. Applied Animal Behaviour Science, Volume 222, January 2020, 104885. https://doi.org/10.1016/j.applanim.2019.104885

Highlights
• The vast majority of dogs and cats were reported to remember past events.
• Both species reportedly remembered single-occurrence events that happened years ago.
• The events were diverse and often involved an interaction with an animal or person.
• They were often recalled when current external stimuli overlapped with the memory.

Abstract: The case for episodic memory in non-human animals has been intensely debated. Although a variety of paradigms have shown elements of episodic memory in non-human animals, research has focused on rodents, birds and primates, using standardized experimental designs, limiting the types of events that can be investigated. Using a novel survey methodology to address memories in everyday life, we conducted two studies asking a total of 375 dog and cat owners if their pet had ever remembered an event, and if so, to report on their pet’s memory of the event. In both studies, cats and dogs were reported to remember a variety of events, with only 20% of owners reporting that their pet had never remembered an event. The reported events were often temporally specific and were remembered when commonalities (particularly location) occurred between the current environment and the remembered event, analogous to retrieval of involuntary memories in humans.

Keywords: Event memory, Episodic-like memory, Dog cognition, Cat cognition, Involuntary autobiographical memory

Domestic dogs respond correctly to verbal cues issued by an artificial agent; generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved

Domestic dogs respond correctly to verbal cues issued by an artificial agent. Nicky Shaw, Lisa M. Riley. Applied Animal Behaviour Science, January 30 2020, 104940. https://doi.org/10.1016/j.applanim.2020.104940

Highlights
• Domestic dogs can recall to an artificial agent and respond correctly to its pre-recorded, owner spoken verbal cues as reliably as to their owners in person and while alone in the test room.
• Generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved.
• No behavioural indicators of poor welfare were recorded during interaction with the agent directly.

Abstract: Human-canine communication technology for the home-alone domestic dog is in its infancy. Many criteria need to be fulfilled in order for successful communication to be achieved remotely via artificial agents. Notably, the dogs’ capacity for correct behavioural responses to unimodal verbal cues is of primary consideration. Previous studies of verbal cues given to dogs alone in the test room have revealed a deterioration in correct behavioural responses in the absence of a source of attentional focus and reward. The present study demonstrates the ability of domestic pet dogs to respond correctly to an artificial agent. Positioned at average human eye level to replicate typical human-dog interaction, the agent issues a recall sound followed by two pre-recorded, owner spoken verbal cues known to each dog, and dispenses food rewards for correct behavioural responses. The agent was used to elicit behavioural responses in three test conditions; owner and experimenter present; experimenter present; and dog alone in the test room. During the fourth (baseline) condition, the same cues were given in person by the owner of each dog. The experiments comprised a familiarisation phase followed by a test phase of the four conditions, using a counterbalanced design. Data recorded included latency to correct response, number of errors before correct response given and behavioural welfare indicators during agent interaction. In all four conditions, at least 16/20 dogs performed the correct recall, cue 1 response, and cue 2 response sequence; there were no significant differences in the number of dogs who responded correctly to the sequence between the four conditions (p = 0.972). The order of test conditions had no effect on the dogs’ performances (p = 0.675). Significantly shorter response times were observed when cues were given in person than from the agent (p = 0.001). Behavioural indicators of poor welfare recorded were in response to owners leaving the test room, rather than as a direct result of agent interaction. Dogs left alone in the test room approached and responded correctly to verbal cues issued from an artificial agent, where rapid generalisation of learned behaviours and adjustment to the condition was achieved.

Keywords: Dog, Dog-human communication, Dog training, Unimodal verbal cues, Artificial Agent, Welfare

That the confidence people have in their memory is only weakly related to its accuracy, and that false memories of fictitious childhood events can easily be implanted, are claims that rest on shaky foundations: memory is malleable but essentially reliable

Regaining Consensus on the Reliability of Memory. Chris R. Brewin, Bernice Andrews, Laura Mickes. Current Directions in Psychological Science, January 30, 2020. https://doi.org/10.1177/0963721419898122

Abstract: In the last 20 years, the consensus about memory being essentially reliable has been neglected in favor of an emphasis on the malleability and unreliability of memory and on the public’s supposed unawareness of this. Three claims in particular have underpinned this popular perspective: that the confidence people have in their memory is weakly related to its accuracy, that false memories of fictitious childhood events can be easily implanted, and that the public wrongly sees memory as being like a video camera. New research has clarified that all three claims rest on shaky foundations, suggesting there is no reason to abandon the old consensus about memory being malleable but essentially reliable.

Keywords: false memory, memory accuracy, confidence, lay beliefs


Academic dishonesty—to cheat, fabricate, falsify, and plagiarize in an academic context—is positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, & honesty-humility

Plessen, Constantin Y., Marton L. Gyimesi, Bettina M. J. Kern, Tanja M. Fritz, Marcela Victoria Catalán Lorca, Martin Voracek, and Ulrich S. Tran. 2020. “Associations Between Academic Dishonesty and Personality: A Pre-registered Multilevel Meta-analysis.” PsyArXiv. January 30. doi:10.31234/osf.io/pav2f

Abstract: Academic dishonesty—the inclination to cheat, fabricate, falsify, and plagiarize in an academic context—is a highly prevalent problem with dire consequences for society. The present meta-analysis systematically examined associations between academic dishonesty and personality traits of the Big Five, the HEXACO model, Machiavellianism, narcissism, subclinical psychopathy, and the Dark Core. We provide an update and extension of the only meta-analysis on this topic by Giluk and Postlethwaite (2015), synthesizing in total 89 effect sizes from 50 studies—containing 38,189 participants from 23 countries. Multilevel meta-analytical modelling showed that academic dishonesty was positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, and honesty-humility. The moderate-to-high effect size heterogeneity—ranging from I² = 57% to 91%—could only be partially explained by moderator analyses. The observed relationships appear robust with respect to publication bias and measurement error, and can be generalized to a surprisingly large scope (across sexes, continents, scales, and study quality). Future research needs to examine these associations with validated and more nuanced scales for academic dishonesty.
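
The I² figure reports what share of the spread in study effect sizes reflects true heterogeneity rather than sampling error: I² = max(0, (Q − df) / Q), with Q being Cochran's heterogeneity statistic. A small sketch of the computation on invented study correlations (not the meta-analysis data).

```python
import numpy as np

# Invented study correlations (e.g., dishonesty x conscientiousness) and Ns.
r = np.array([-0.25, -0.10, -0.32, -0.18, -0.05, -0.28])
n = np.array([150, 300, 220, 500, 180, 260])

# Fisher z, fixed-effect weights, Cochran's Q, Higgins' I^2.
z = np.arctanh(r)
w = n - 3
z_bar = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_bar) ** 2)
df = len(r) - 1
I2 = max(0.0, (Q - df) / Q) * 100  # % of variability due to heterogeneity

print(f"Q = {Q:.1f} (df = {df}), I^2 = {I2:.0f}%")
```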



High emotion recognition ability may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information; also, high-ERA individuals do not appear to be happier with their lives


Inter- and Intrapersonal Downsides of Accurately Perceiving Others’ Emotions. Katja Schlegel. In: Social Intelligence and Nonverbal Communication pp 359-395, Jan 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_13

Abstract: The ability to accurately perceive others’ emotions from nonverbal cues (emotion recognition ability [ERA]) is typically conceptualized as an adaptive skill. Accordingly, many studies have found positive correlations between ERA and measures of social and professional success. This chapter, in contrast, examines whether high ERA can also have downsides, both for other people and for oneself. A literature review revealed little evidence that high-ERA individuals use their skill to hurt others. However, high ERA may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information. Furthermore, high-ERA individuals do not appear to be happier with their lives than low-ERA individuals. Overall, the advantages of high ERA outweigh the downsides, but many open questions regarding negative effects remain to be studied.

Keywords: Emotion recognition, Emotional intelligence, Well-being, Dark side, Interpersonal accuracy

Summary and Conclusion

The ability to accurately recognize others’ emotions from the face, voice, and body is typically considered to be an adaptive skill contributing to social and professional success. This has been supported by various studies (see Schmid Mast & Hall, 2018; Hall et al., 2009; Elfenbein et al., 2007, for reviews). Much less research has looked into the potential downsides or disadvantages of high ERA for oneself (i.e., for one’s well-being) and for others (i.e., by manipulating other people or hampering smooth interactions with others). The present chapter reviewed this research in non-clinical adults, specifically focusing on the following questions: Is there a “dark” side to high ERA in that people use it to hurt others? Can high ERA negatively affect the quality of relationships? Why is high ERA uncorrelated with psychological well-being? Finally, is there an optimal level of ERA?

Although more research is clearly needed to answer these questions with more confidence, the current state of the literature suggests that ERA is a double-edged sword that affects one’s well-being and social outcomes both positively and negatively. One common theme that emerged as a possible explanation for both positive and negative pathways is the heightened emotional awareness of, or attunement to, others’ feelings in persons with high ERA. Because high-ERA individuals are more perceptive of others’ positive and negative emotions, their own emotions also appear to be more affected by what is happening around them, contributing to various inter- and intrapersonal outcomes.

For instance, high-ERA individuals seem to be more prosocial and cooperative, maybe in order to perceive more positive emotions in others and to preserve their own psychological well-being. Heightened emotional awareness for others’ feelings can also explain the positive associations between ERA and social and workplace effectiveness found in many studies. On the other hand, “hyperawareness” in high-ERA individuals can inadvertently contribute to lower rapport, less favorable impressions in others, and lower relationship quality due to “eavesdropping” and the failure to show “motivated inaccuracy” when it might be adaptive.

Because high emotional awareness appears to amplify the effects of perceived positive and negative emotions, in stable environments with only few stressors the adaptive advantages of high ERA may outweigh the downsides. However, as adversity or instability increases, the higher proportion of perceived and experienced negative affect may contribute to lower well-being and the development of depressive symptoms. A higher tendency to suffer with others in distress might represent one possible mechanism negatively influencing psychological well-being.

Taken together, the various positive and negative pathways between high ERA and well-being as well as interpersonal relationships may explain why ERA does not appear to be positively correlated with well-being, although this had been found for emotional intelligence more broadly (e.g., Sánchez-Álvarez et al., 2015). One may speculate that other components of emotional intelligence, such as the ability to regulate one’s own negative emotions efficiently or the ability to manage others’ emotions, have fewer potential downsides than ERA with respect to one’s own well-being, although they may be more “useful” when it comes to manipulating others (e.g., Côté et al., 2011).

An interesting question is whether the terms “emotional hyperawareness” (e.g., Davis & Nichols, 2016) or “hypersensitivity” (Fiori & Ortony, 2016) are appropriate to describe high-ERA individuals. These terms are often used to describe an exaggerated, maladaptive reactivity of neurophysiological structures related to mental disorders (e.g., Frick et al., 2012; Neuner et al., 2010). In healthy individuals with high ERA, however, the elevated attunement to emotions might represent a more realistic and holistic view of the social world rather than a bias (Scherer, 2007). If this is the case, then the absence of a correlation between ERA and well-being or life satisfaction may also reflect that those high in ERA evaluate these constructs more realistically and thus more negatively, although they might be “happier” than others if different criteria were used. It may also be that high-ERA individuals, compared to low-ERA individuals, are relatively more satisfied with some life domains (e.g., friendships) and less satisfied with others (e.g., work), which may cancel each other out when global well-being or life satisfaction is considered.

The current literature can be expanded in several ways. Most importantly, more studies are needed that examine the moderating effects of personality traits on the link between ERA and outcomes. In particular, traits related to the processing and regulation of emotions in oneself and others might moderate the effects of ERA not only on intrapersonal outcomes such as psychological well-being but also on interpersonal outcomes such as relationship quality. For example, it would be interesting to examine how ERA, empathic concern, and detachment interact in predicting stress, emotional exhaustion, or work engagement in helping professions. One can hypothesize that a high ability to detach oneself from stressful negative work experiences protects professionals who are highly perceptive of clients’ negative feelings and express empathic concern from negative effects on well-being. Other possible moderating variables include “positivity offset” (Ito & Cacioppo, 2005) and stable appraisal biases (Scherer, 2019). In addition, “dark” personality traits might moderate the effects on interpersonal behaviors such as deception, such that high ERA may, for example, amplify the effects of high Machiavellianism or trait exploitativeness (Konrath et al., 2014). Future studies should also look into curvilinear relationships to examine which levels of ERA are the most beneficial or detrimental for various outcomes and situations.

Furthermore, longitudinal studies may shed light on the causality underlying ERA and the development of psychological well-being over time as a function of a person’s environment. For example, it could be tested whether Wilson and Csikszentmihalyi’s (2007) finding that prosociality is beneficial in stable environments but detrimental in adverse ones also holds for ERA. Such studies would also allow investigating the causal pathways linking ERA and depressive symptoms, including testing the possibilities that dysphoria increases ERA (Harkness et al., 2005) and that ERA, due to a more realistic perception of the social world, makes people “wiser but sadder” (Scherer, 2007).

Many of the above conclusions rely on the assumption that high ERA relates to a higher attunement to emotions in our surroundings. However, only few studies to date have examined this association. Fiori and Ortony (2016) and Freudenthaler and Neubauer (2007) pointed out that ability tests of emotional intelligence measure maximal performance and crystallized knowledge, but do not necessarily capture typical performance and more fluid emotion processing. More research is thus needed to corroborate the idea that being good at accurately labeling emotional expressions when one is explicitly instructed to do so is related to paying more attention to emotions in everyday life, when an abundance of different types of information is available. Future research should involve the development of new standard tests tapping into typical performance regarding emotion perception. Future studies could also benefit from using methods such as portable eye tracking or experience sampling to be able to study more real-life situations. Finally, future studies may examine satisfaction in specific life domains as outcome measures of ERA in addition to general measures of well-being.

The current review also raises the question whether available trainings for increasing ERA (see Blanch-Hartigan, Andrzejewski, & Hill, 2012, for a meta-analysis) are useful if high ERA can have detrimental effects. The answer may depend on what outcomes are considered. If an ERA training improves law enforcement officers’ job performance (Hurley, Anker, Frank, Matsumoto, & Hwang, 2014) or helps doctors to better understand their patients (Blanch-Hartigan, 2012), the answer would be that trainings are useful. When psychological well-being is considered as the outcome, stand-alone ERA trainings may not always be useful, for example, if a person is experiencing chronic stress or depressive symptoms. In these cases, it may be beneficial to combine an ERA training with a training targeted at the use of adaptive emotion regulation strategies to prevent potentially detrimental effects.

To conclude, I would like to emphasize that, overall, ERA should still be considered an adaptive and valuable skill, especially when effective interpersonal interactions in the workplace or close relationships are considered (e.g., reviews by Elfenbein et al., 2007; Schmid Mast & Hall, 2018). High-ERA individuals receive better ratings from others on various positive traits (e.g., socio-emotional competence) and report being more open, more conscientious, and more tolerant (Hall et al., 2009). The interpersonal downsides and “dark” aspects of high ERA in healthy adults discussed in the present chapter seem to be limited to relatively specific situations or ERA profiles, although more research is needed. With respect to psychological well-being, however, the picture seems to be more nuanced, implying both positive and negative pathways that may be more or less influential based on a person’s life situation and personality traits. More sophisticated study designs, novel data collection methods, and more complex statistical analyses can help us better understand these mechanisms.

Although identifying deception is important, we are quite bad at it; instead, we developed the equivalent of an intelligence network that passes along information and evidence, rendering the need for an individual lie detector moot

Nonverbal Communication: Evolution and Today. Mark G. Frank, Anne Solbu. In: Social Intelligence and Nonverbal Communication pp 119-162, January 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_5

Abstract: One aspect of social intelligence is the ability to identify when others are being deceptive. It would seem that individuals who were bestowed with such an ability to recognize honest signals of emotion, particularly when attempts to suppress them are made, would have a reproductive advantage over others without it. Yet the research literature suggests that on average people are good at detecting only overt manifestations of these signals. We argue instead that our evolution as a social species living in groups permitted discovery of deceptive incidents due to the factual evidence of the deception transmitted verbally through social connections. Thus the same principles that pressed for our evolution as a cooperative social species enabled us to develop the equivalent of an intelligence network that would pass along information and evidence, thus rendering a press for an individual lie detector moot.

Keywords: Deception detection, Evolution, Emotions, Behavioral signals, Social life

Conclusion
Taken together, it is clear that there are strong signals for various emotions and intentions, and a strong rationale for why these signals would be ‘engineered’ to solve a recurrent problem. And despite being wired to detect these signals, humans are poor detectors of them once they become subtle through efforts to conceal them. Yet the ability to spot these dishonest and/or subtle versions of the signals would seem to be of great benefit to any given individual in his or her quest to survive and pass on his or her genes to the next generation. That evolution did not bestow our species with such internal event detectors seems puzzling, until we unpack some of the social structures of the ancient world. It seems the cooperative structures, and the (at least initially) limited opportunities to ‘cheat’, may often have allowed, in essence, an intelligence network to be developed where pejorative information could be passed along easily and cheaply to identify any particular cheater. Thus, the evolution of cooperative behavior was the key to lie-catching. It seems logical that there would be no strong independent press to develop internal cheater detectors when a strong social network would do the job at a greatly reduced cost (Smith, 2010).

Importantly, lie detection in the laboratory or in single case studies does not fully translate to the real world, where gossip and relationships with others matter (Haidt, 2001). People rely on gossip even when its accuracy may be limited (Sommerfeld, Krambeck, & Milinski, 2008); it may nevertheless actually improve lie detection (Klein & Epley, 2015). Moreover, it is through the influence of others that we may decide to override our tendency to cooperate (Bear & Rand, 2016) and employ conscious deliberation to make our decisions (Haidt, 2001). The alignment of emotions through empathy, and increased goal sharing (Tomasello, Carpenter, Call, Behne, & Moll, 2005), as evidenced by the #MeToo movement (Rodino-Colocino, 2018), gave rise to the same powerful group thinking and sociality as seen in the emergence of human morality (Jensen, Vaish, & Schmidt, 2014). Haidt (2001) states, “A group of judges independently seeking truth is unlikely to reach an effective consensus, but a group of people linked together in a large web of mutual influence may eventually settle into a stable configuration” (p. 826). This becomes, functionally, a long-range radar-type system that has agents reporting back actions, behaviors, and relationships to each other, which in turn sets the groundwork for recognizing inconsistencies regarding people not being where they say they are, people being with people they deny knowing, and so forth. The presence of this communication network would reduce the need to make individuals hyper-vigilant in every interaction, or to individually develop super-acute deception detection skills. Likewise, unusual interpersonal behaviors can trigger individuals to search for evidence to verify their hypotheses about someone’s veracity, and they can then activate their social networks to verify the information provided by the unusually behaving person (Novotny et al., 2018). These networks are not just passive providers of information. Thus, the socially intelligent person is the one who has the best access to the collective intelligence—and likely the most friends, as believed by the Ugandans (Wober, 1974). We believe the research literature has neglected this larger system in which our social structures exist, a system which often detects the deception for us. Even as our society expands, social media and movements like #MeToo have become like the global village, where previously unacquainted individuals can now verify the truth or falsity of each other, thus (hopefully) betraying the attempted liar.

Wednesday, January 29, 2020

Does Attractiveness Lead to or Follow From Occupational Success? Findings From German Associational Football

Does Attractiveness Lead to or Follow From Occupational Success? Findings From German Associational Football. Henk Erik Meier, Michael Mutz. SAGE Open, January 29, 2020. https://doi.org/10.1177/2158244020903413

Abstract: Prior research has provided evidence that attractiveness is associated with work-related advantages. It is less clear, however, whether attractiveness is an antecedent or a consequence of professional success. To answer this question, associational football in Germany is used as an exemplifying case. Portrait pictures of German football players were retrieved, one picture from a very early career stage and one from a very late one. Attractiveness of these portraits was assessed by the “truth of consensus” method. Panel regression models are applied to analyze changes in attractiveness and relate these changes to professional success. Findings show that success as a footballer cannot be predicted with attractiveness at early career stages. Instead, the increase of attractiveness over time is more pronounced among very successful players. It is thus concluded that successful individuals are not more attractive in the very beginning, but improve their appearance throughout their careers.

Keywords: attractiveness, beauty, appearance, professional success, football


The role of physical attractiveness for job-related interactions and outcomes is intensely debated. Previous research has pointed to the existence of a beauty premium in the labor market, but scholars have recently emphasized that the causal mechanisms behind this beauty effect are not completely understood. The objective of this study was to provide some clues on the direction of the dependency: whether attractiveness leads to or follows from success. The first notion, that attractiveness fosters professional success in associational football, was clearly rejected (H1). At the same time, it was shown that more successful football players markedly improve their physical appearance over time, lending support to the second idea, that attractiveness follows from success (H2). It can thus be concluded that attractiveness is less an antecedent and more a consequence of success: beauty is not a stable characteristic of a football player, but something modified by “beauty work.”
Large cross-sectional studies on football in Germany had shown that attractiveness and success are correlated (Rosar et al., 2010, 2013, 2017). In interpreting this association, it was claimed that coaches may give attractive footballers an advantage in fielding decisions, which may help attractive players become successful. In particular, the interpretation that coaches favor more attractive players was put forward by Rosar and colleagues (2017). However, bearing in mind that football is one of the few professional domains where attractiveness has essentially no relevance as a productivity factor, this interpretation comes as a surprise. Our results lend more support to the notion that players who are fielded more often (and are thus more often in the public spotlight) invest more in their beauty. Although this needs to be tested more explicitly in future research (including measures of grooming), the findings presented here suggest that the beauty premium in sport is probably more accurately interpreted as a by-product of beauty work and not as a form of discrimination against less attractive players.
If this line of reasoning is correct, it is still unclear what motivates this beauty work. On one hand, professional athletes are offered huge financial rewards for attractiveness and popularity, because these qualities are valued by the media and the sport industry. For an athlete, beauty work can thus be a form of strategic investment to reach a broader public beyond the narrow scope of regular football fans and, in doing so, increase his endorser qualities. David Beckham or Cristiano Ronaldo may be considered textbook examples of this strategy (Coad, 2005). In the form of sponsorship and marketing deals, beauty work may thus pay off for athletes and lead to higher revenues. However, Hamermesh et al. (2002) have contested the idea that additional earnings due to investments in physical appearance recover their costs (e.g., for clothing and cosmetics); that study, though, was not conducted in the realm of professional sport and may thus not hold true in this particular context. On the other hand, beauty work need not represent an investment strategy, but may simply be a form of “conspicuous consumption” (Veblen, 1899/2007). Conspicuous consumption refers to the acquisition of luxury goods, including expensive clothing, to publicly demonstrate wealth and a high social status. Hence, in this line of interpretation, the “returns” of beauty work are not monetary but symbolic, aiming at distinction and prestige. Moreover, it has also been claimed that showy spending increases sex appeal among men (Sundie et al., 2011). Hence, beauty work among high-class football players, who stand in the limelight of a huge TV audience each weekend, may simply represent a form of impression management to showcase oneself in a positive way and generate symbolic capital.
This finding comes with strong implications for future research on the role of physical attractiveness in professional sport: Future research has to go beyond correlational analysis and needs to employ longitudinal research designs to be able to discriminate between different mechanisms at stake. Simple correlational analysis does not suffice for making conclusive inferences on the impact of attractiveness on football players’ careers. Moreover, as the current study leaves unclear why successful football players improve their physical appearance, future research should address beauty work and its financial and symbolic returns.
One limitation of this study is that it measured beauty solely on the basis of facial attractiveness. According to Hakim (2010), beauty, sexual attractiveness, physical fitness, liveliness, charm, and style are distinctive features that can make a person attractive to others. Although some of these characteristics are hard to measure because they are not assessable from pictures (e.g., charm) or change quickly (e.g., style), it should be kept in mind that this study (like many previous studies) reduces beauty to facial attractiveness while ignoring other (body) characteristics. Moreover, as an alternative to the “truth of consensus” rating method, scholars have suggested a software-based approach that analyzes facial geometry, for instance, horizontal symmetry, the ratio of nose to ear length, or the ratio of face width to face height (Hoegele et al., 2015). This is a promising approach, and future studies would do well to integrate both rater-based and software-based methods for assessing facial attractiveness. Finally, this study focused solely on male athletes, so it remains uncertain whether these findings would also hold for female athletes. Previous studies on attractiveness and occupational success found stronger effects for women than for men (Jæger, 2011). Similar findings were reported for female professional tennis players, whose popularity is driven much more by their attractiveness than is that of male players (Konjer et al., 2019). However, given that women's football is less professionalized and commercialized in Germany (e.g., with regard to media coverage, salary levels, or endorsement deals), the incentives to invest in beauty and appearance may not be as high as in men's football. Hence, replications of this study in women's football, in other fields of professional sport, and in different domains of the entertainment industry would help assess whether the findings presented here are generalizable or an expression of the peculiarities of European men's association football.

It has been commonly believed that information in short-term memory (STM) is maintained in persistent delay-period spiking activity; recent experiments have revealed that information in STM can instead be maintained in passive, ‘hidden’ neural states

Reevaluating the Role of Persistent Neural Activity in Short-Term Memory. Nicolas Y. Masse, Matthew C. Rosen, David J. Freedman. Trends in Cognitive Sciences, January 29 2020. https://doi.org/10.1016/j.tics.2019.12.014

Highlights
• It has been commonly believed that information in short-term memory (STM) is maintained in persistent delay-period spiking activity.
• Recent experiments have begun to question this assumption, as the strength of persistent activity appears greater for tasks that require active manipulation of the memoranda, as opposed to tasks that require only passive maintenance.
• New experiments have revealed that information in STM can be maintained in neural ‘hidden’ states, such as short-term synaptic plasticity.
• Machine-learning-based recurrent neural networks have been successfully trained to solve a diversity of working memory tasks and can be leveraged to understand putative neural substrates of STM.

Abstract: A traditional view of short-term working memory (STM) is that task-relevant information is maintained ‘online’ in persistent spiking activity. However, recent experimental and modeling studies have begun to question this long-held belief. In this review, we discuss new evidence demonstrating that information can be ‘silently’ maintained via short-term synaptic plasticity (STSP) without the need for persistent activity. We discuss how the neural mechanisms underlying STM are inextricably linked with the cognitive demands of the task, such that the passive maintenance and the active manipulation of information are subserved differently in the brain. Together, these recent findings point towards a more nuanced view of STM in which multiple substrates work in concert to support our ability to temporarily maintain and manipulate information.

The Origin of Our Modern Concept of Depression—The History of Melancholia From 1780-1880: A Review

The Origin of Our Modern Concept of Depression—The History of Melancholia From 1780-1880: A Review. Kenneth S. Kendler. JAMA Psychiatry, January 29, 2020. doi:10.1001/jamapsychiatry.2019.4709

Abstract: The modern concept of depression arose from earlier diagnostic formulations of melancholia over the hundred years from the 1780s to the 1880s. In this historical sketch, this evolution is traced from the writings of 12 authors outlining the central roles played by the concepts of faculty psychology and understandability. Five of the authors, writing from 1780 through the 1830s, including Cullen, Pinel, and Esquirol, defined melancholia as a disorder of intellect or judgment, a “partial insanity” often, but not always, associated with sadness. Two texts from the 1850s by Guislain, and Bucknill and Tuke were at the transition between paradigms. Both emphasized a neglected disorder—melancholia without delusions—arguing that it reflected a primary disorder of mood—not of intellect. In the final phase in the 1860s to 1880s, 5 authors (Griesinger, Sankey, Maudsley, Krafft-Ebing, and Kraepelin) all confronted the problem of the cause of delusional melancholia. Each author concluded that melancholia was a primary mood disorder and argued that the delusions emerged understandably from the abnormal mood. In this 100-year period, the explanation of delusional melancholia in faculty psychology terms reversed itself from an intellect-to-mood to a mood-to-intellect model. The great nosologists of the 19th century are often seen as creating our psychiatric disorders using a simple inductive process, clustering the symptoms, signs, and later the course of the patients. This history suggests 2 complexities to this narrative. First, in addition to bottom-up clinical studies, these nosologists were working top-down from theories of faculty psychology proposed by 18th century philosophers. Second, for patient groups experiencing disorders of multiple faculties, the nosologists used judgments about understandability to assign primary causal roles. This historical model suggests that the pathway from patient observation to the nosologic categories—the conceptual birth of our diagnostic categories—has been more complex than is often realized.

Introduction

Before the rise of modern psychiatry in the late 18th century, the concept of melancholia differed substantially from our modern view of depression,1-6 which did not emerge until the late 19th century.1,2,7,8 By examining key texts published from 1780 to 1880, I document the nature and timing of this shift through 3 phases. Two theories play important roles in this story: faculty psychology9-13 and understandability.13-16 Faculty psychology is defined as

The theory, in vogue particularly during the second half of the eighteenth and first half of the nineteenth centuries, that the mind is divided up into separate inherent powers or “faculties.”17(p253)

I focus on 2 of these inborn faculties, one predominant at the initiation of this story (intellect, understanding, or judgment), and the other whose rising influence I track across the 19th century: mood, affect, or moral (ie, psychological) sentiment.

Given the frequency of patients apparently experiencing disorders both of intellect and mood, the theory of faculty psychology posed a problem. To give a proper diagnosis, clinicians needed to distinguish between 3 hypotheses about such patients. Did they have 2 independent disorders, a primary disorder of intellect with a secondary mood disorder, or a primary disorder of mood with a secondary disorder of intellect?13 A dominant approach to this problem, later popularized by Karl Jaspers,14,15 was that with careful observation and empathy, the clinician could discriminate between these hypotheses, for example, determining if a delusion (a disorder of intellect) could arise understandably from a disordered mood.

Phase 1: 1780-1830
In the first historical phase, all major authors emphasized that melancholia was primarily a disorder of intellect, often—but not always—accompanied by sadness.

I begin with the medical nosology of William Cullen (1710-1790), a physician and leading figure in the Scottish enlightenment. In his highly influential 1780 nosology,18,19 melancholia was placed within the class of neuroses (nervous disorders) and the order of vésanie (mental diseases/insanity), characterized as “a disorder of the functions of the judging faculty of the mind, without fever or sleepiness.”19 Melancholia was defined as “partial insanity without dyspepsia,” with the phrase “without dyspepsia” included to distinguish it from hypochondriasis. By partial insanity, Cullen meant that the delusions were limited to a single subject, leaving the affected individual with intact areas of intellectual functioning.

Philippe Pinel (1745-1826),20,21 a major reformer and one of the founders of modern psychiatry, provided the following definition of melancholia in 1801:

Delirium (ie, delusions) exclusively upon one subject … free exercise in other respects of all the faculties of the understanding: in some cases, equanimity of disposition, or a state of unruffled satisfaction: in others, habitual depression and anxiety, and frequently a moroseness of character … and sometimes to an invincible disgust with life.21(p149)

Like Cullen,18,19 Pinel’s definition emphasized intellectual dysfunction (eg, partial insanity), but he added a range of associated mood states. Some of these states reflect depression, but another describes emotional equanimity.

In his 1804 treatise on madness and suicide,22 the English physician William Rowley (1742-1806) gave a succinct definition of melancholia that agrees in essential points with those of his predecessors, with the disordered intellect here termed “alienation of the mind”:
Madness, or insanity, is an alienation of the mind, without fever. It is distinguished into two species; melancholy, or mania…. The former is known by sullenness, taciturnity, meditation, dreadful apprehensions, and despair.22(p1)

Rowley differs from his predecessors in associating melancholia only with the moods of sadness and anxiety.

In his 1817 monograph on melancholia,23 Maurice Roubaud-Luce described melancholia in terms resembling those of his French predecessor, Pinel,20,21 including its possible association with elevated mood states:

Melancholy is characterized by an exclusive and chronic delirium focused on a single object, or on a particular series of objects, with a free exercise of intellectual faculties on everything that is foreign to these objects. This condition is often accompanied by a deeply concentrated sadness, a state of dejection and stupor, and an ardent love of solitude. Sometimes also it excites, for no apparent reason, immoderate joy…. 23(p1)

Jean Esquirol (1772-1840), Pinel’s student and successor as leader of French psychiatry, coined the term lypemania as a synonym for melancholia.24,25 Like Rowley, in his 1838 textbook he removed the association with mania-like mood states:

We consider it well defined, by saying that melancholy … or lypemania, is a cerebral malady, characterized by partial, chronic delirium, without fever, and sustained by a passion of a sad, debilitating or oppressive character.25(p203)

Phase 2: 1850-1860
In phase 2, the dominant view of melancholia as a primary disorder of intellect came under challenge.

Joseph Guislain (1789-1860), a Belgian alienist and director of the psychiatric hospital at Ghent, took a first step toward the modern view of depression. He described, in his 1852 text,26 6 elementary forms of mental maladies, one of which was mélancolie, defined as “mental pain—augmentation of sentiments of sadness.”26(p94) He then described the relatively novel category of nondelusional melancholia, calling it

exclusively an exaggeration of affective feelings; it is a pathological emotion, a sadness, a grief, an anxiety, a fear, a fright, and nothing more. It is not a state which appreciably weakens conceptual faculties.26(p112)

He continues:

The description that the [prior] authors gave us of this disease [melancholy] leaves something to be desired; almost all spoke of delusional melancholy, and none, to my knowledge, describes melancholy in its state of greatest simplicity: there are melancholies without delusions … without noticeable disturbance of intelligence or ideas. Melancholy without delusion is the simplest form under which the suffering mode can occur; it is a state of sadness, dejection … without notable aberration of imagination, judgement or intelligence … a despair dominates him; he is absorbed into this painful feeling.26(p186)

In their influential 1858 textbook, John Bucknill (1817-1897) and Daniel Tuke (1827-1895) took a further step away from the view of melancholia as primarily a disorder of intellect. In the section on melancholia, written by Tuke, he begins with the quotation above from Esquirol,25 to which he adds a critical comment (italics added):

“We consider it well-defined,” he observes, “by saying that melancholia or lypemania, is a cerebral malady, characterized by partial chronic delirium, without fever, and sustained by a passion of a sad, debilitating, or oppressive character.” A definition sufficiently accurate, if we except the “chronic delirium,” disorder of the intellect not being, as we shall presently see, an essential part of the disorder.27(p152)

Tuke argues that delusions have been incorrectly understood as the primary melancholic symptom. Following Guislain,26 Tuke operationalizes this change by defining a simple form of melancholia in which “there is here no disorder of the intellect, strictly speaking; no delusion or hallucination.”27(p158) Bucknill and Tuke are then more explicit about their new conceptualization of melancholia: “it can be shown that the disorder at present under consideration, may coexist with a sound condition of the purely intellectual part of our mental constitution.”27(p159)

Tuke provides his rationale for this conceptual shift in his earlier chapter on classification. After reviewing prior nosologic systems, he writes of the importance of faculty psychology in psychiatric nosology: “The writer thinks there is much to be said in favor of the attempt to classify the various forms of insanity, according to the mental functions affected.”27(p95) He then quotes his coauthor, “Dr Bucknill observes that insanity may be either intellectual, emotional, or volitional.”27(p95) We cannot, he argues, base our nosology on the “physiology of the organ of the mind,” because we do not know it. But, he continues, “in the absence of this knowledge it would seem reasonable to adapt them to the affected function.”27(p95) We could then, he concludes, “speak of disorders of the intellect, sentiment, etc. instead of basing our classification exclusively on prominent symptoms.”27(p95) He formalizes the conclusion:

In bringing the phenomena of diseased mind into relation with such classification, we should endeavor to refer every form of disease to that class or group of the mental faculties which the disease necessarily, though not exclusively, involves in its course.27(p98)

In his ideal nosology, idiocy, dementia, and monomania, which commonly manifests delusions and hallucinations, are disorders of the intellect, while melancholia is considered a disorder of “moral sentiment,” that is, mood.

Phase 3: 1860-1883
Phase 3 continues the shift from the view that melancholia was predominantly a disorder of intellect to one of mood. But these authors also confronted the problem of delusional melancholia. If it too is primarily a disorder of mood, how can the emergence of delusions be explained? Their response to this question would incorporate the concept of understandability.

The first professor of psychiatry in Germany and a strong advocate for a brain-based psychiatry, Wilhelm Griesinger (1817-1868), early in his 1861 textbook,28,29 adopts a faculty psychological approach to psychiatric nosology in his chapter entitled “The Elementary Disorders in Mental Disease”:

In those cerebral affections which come under consideration as mental diseases, there are, as in all others, only three essentially distinct groups…. Thus, according to this threefold division, we have to consider successively each of the three leading groups of elementary disturbances—intellectual insanity, emotional insanity, and insanity of movement.29(p60)

Although, like Guislain26 before him, Griesinger viewed melancholia as typically forming the first stage of a unitary psychosis, both of their descriptions are of relevance. Griesinger begins, “The fundamental affection in all these forms of disease consists in the morbid influence of a painful depressing negative affection—in a mentally painful state.”29(p209) That is, he clearly emphasized the affective nature of the disorder. He elaborates:

In many cases, after a period of longer or shorter duration, a state of vague mental and bodily discomfort … a state of mental pain becomes always more dominant and persistent…. This is the essential mental disorder in melancholia, and, so far as the patient himself is concerned, the mental pain consists in a profound feeling of ill-being, of inability to do anything, of suppression of the physical powers, of depression and sadness…. The patient can no longer rejoice in anything, not even the most pleasing.29(p223)

Earlier in the book, Griesinger sought to explain how disordered mood can produce delusions.

As to their contents, two leading differences are particularly to be observed in insane conceptions [one of which is] … somber, sad, and painful thoughts …. [which arise] from depressed states of the disposition, and gloomy ill-boding hallucinations, as language of abuse and mockery which the patient is always hearing, diabolical grimaces which he sees, etc. The false ideas and conclusions, which are attempts at explanation and vindications of the actual disposition in its effects, are spontaneously developed in the diseased mind according to the law of causality…. At first the delirious conceptions are fleeting … gradually, by continued repetition, they gain more body and form, repel opposing ideas … then they become constituent parts [of the “I”] … and the patient cannot divest himself of them.29(p71)

Early in his 1866 text, William Sankey (1813-1889), an asylum director and lecturer at University College London, outlined morbid psychiatric conditions of the intellect, emotions, and volition. He turned to discussing the development of melancholia:

The alterations in degree are such as an increase of grief, a depression of spirits going on to melancholy…. Such description of abnormal acts of mind belong to the emotions, and occur in the earlier stages, the later or more permanent alterations of kind may be manifested in the (a) intellect, (b) the disposition, (c) the manner, (d) temper, (e) habits, and (f) character of the individual.30(p25)

Therefore, primary alterations in emotions can lead to a range of developments in melancholia, including alterations in intellectual functioning “in power of judgment, apprehension, imagination, argumentation, memory, or they may entertain distinct illusion [hallucination] or delusion.”30(p25) He captures this point in a case history of melancholia, which he summarizes:

The progress of this case was therefore—simple depression, abstraction, forgetfulness, neglect of duties… religious fears, and morbid apprehensions and delusions… You see how closely nearly all these symptoms are connected with the emotions. Fear, apprehension, and dread are among the commonest phenomena.30(p30)

Early in his section on the varieties of insanity from his 1867 textbook,31 Henry Maudsley (1835-1918) adopted a faculty psychological orientation:

On a general survey of the symptoms of these varieties it is at once apparent that they fall into two well-marked groups: one of these embracing all those cases in which the mode of feeling or the affective life is chiefly or solely perverted—in which the whole habit or manner of feeling, the mode of affection of the individual by events, is entirely changed; the other, those cases in which ideational or intellectual derangement predominates.31(p301)

He then outlines how the effects of the mood disorder spread through other faculties:

Consequently, when there is perversion of the affective life, there will be morbid feeling and morbid action; the patient's whole manner of feeling, the mode of his affection by events, is unnatural, and the springs of his action are disordered; and the intellect is unable to check or control the morbid manifestations.31(p302)

He later continues:

The different forms of affective insanity have not been properly recognised and exactly studied because they did not fall under the time-honoured divisions and the real manner of commencement of intellectual insanity in a disturbance of the affective life has frequently been overlooked.31(p321)

Maudsley then attacks the earlier views of melancholia—that the intellectual dysfunctions were primary and the mood disorder secondary (italics added):

It is necessary to guard against the mistake of supposing the delusion to be the cause of the passion, whether painful or gay …. Suddenly, it may be, an idea springs up in his mind that he is lost forever, or that he must commit suicide, or that he has committed murder and is about to be hanged; the vast and formless feeling of profound misery has taken form as a concrete idea—in other words, has become condensed into a definite delusion, this now being the expression of it. The delusion is not the cause of the feeling of misery, but is engendered of it, it is precipitated, as it were, in a mind saturated with the feeling of inexpressible woe.31(p328)

Richard von Krafft-Ebing (1840-1902), among the most important late 19th century German-speaking neuropsychiatrists,32,33 wrote in his influential 1874 monograph on melancholia, “The basic phenomenon in melancholic insanity is simply mental depression, psychic pain in its elementary manifestation.”34(p1) By analogy with a peripheral neuralgia, melancholia transforms normal psychological experiences into anguish and sorrow. Affected individuals have repeated “painful distortions” of their experiences, “all his relations to the external world are different … he is unfeeling, homeless ... with unbearable despair.”34(p5)

In his section on melancholy with delusions and hallucinations, Krafft-Ebing writes
Let us look at the sources of these [symptoms]. Initially it is the altered sense of self of the patient, the consciousness of deep abasement … the fractured strength and ability to work, which require an explanation and, with advancing disturbance of consciousness, does not find this in the subjective aspect of the illness, but in the delusional changes of relationship to the external world, from which we are after all used to receiving the impulses for our feelings, ideas and ambitions. This formation of delusions is supported significantly by the deep disturbance of the perception of the world.34(p32)

He then gives examples of how delusions of poverty, persecution, and impending punishment can emerge “in a psychological manner … from elementary disturbances of mood”34(p34):

Thus, deep depression of the sense of self, and the consciousness of mental impotence and physical inability to work, lead to the delusion of no longer being able to earn enough, of being impoverished, of starvation.34(p33)

Mental dysesthesia thus causes hostile apperception of the external world, as presumed suspicious glances, scornful gestures, abusive speeches from the environment join, leading to persecutory delusions…. Precordial anxiety and expectations of humiliation lead to the delusion that an actual danger is threatening [where] … a prior harmless action which is not even a crime … is formed into an actual crime.34(p34)

Emil Kraepelin’s views of melancholia, unencumbered by his later development of the category of manic-depressive illness, can be found in the first edition of his textbook published in 1883. He saw this syndrome as arising from “psychological anguish” when “the feelings of dissatisfaction, anxiety and general misery gains such strength that it constantly dominates the mood.”35(p190) He describes the emergence of depressive delusions:

… in milder cases … there is insight into his own illness. As a rule, however, critical ability becomes overwhelmed by powerful mood fluctuations, and the pathological change is transferred to the external world. It does not merely seem to be so dismal and bleak, but really is so. A further progression … can then give rise to formal delusions and a systematic distortion of external experiences.35(p191)

The writings of Krafft-Ebing32,33 and Kraepelin35 reflect a culmination in the development of the modern concept of depression, an illness resulting primarily from a disorder of mood, which can manifest delusions that do not reflect an independent disorder of judgment or intellect but rather arise, in an understandable manner, from the affective disturbance. We see a clear continuity from these authors to DSM-III36 in the signs and symptoms of what we now call major depression.7,8

Discussion
In this historical sketch, which could not examine all relevant authors or provide helpful background materials, I document that, during the rise of modern psychiatry in the late 18th and early 19th century, the concept of melancholia was closely wedded to earlier views that it was fundamentally a disorder of intellect—a partial insanity—often, but not always, accompanied by sadness. This concept was seen, with modest variation, in writings from 1780 through the 1830s from both England (Cullen18,19 and Rowley22) and France (Pinel,20,21 Roubaud-Luce,23 and Esquirol24).

In this narrative, the first movement away from this paradigm was by Guislain,26 writing just after the mid-19th century, who defined elementary melancholia as a disorder of mood and then focused on the neglected but illustrative category of nondelusional melancholia. Such patients demonstrated no abnormalities of intellect or judgment. This form of melancholia was, he suggested, a disorder primarily of mood.

In 1858, 2 British authors, Bucknill and Tuke,27 went further, declaring explicitly, in the language of faculty psychology, that a disorder of the intellect was not an essential part of melancholia. However, this assertion left a key problem. How could the common occurrence of melancholia with delusions be explained if melancholia was primarily a disorder of mood?

Our final 5 authors—Griesinger,28 Sankey,30 Maudsley,31 Krafft-Ebing,32,34 and Kraepelin35—each accepted the primacy of mood in the cause of melancholia and addressed the problem of the origin of melancholic delusions. Griesinger argued that “the false ideas … are attempts at explanation.”29(p71) Sankey noted “how closely nearly all these [psychotic] symptoms are connected with the emotions.”30(p30) Maudsley stated, “The vast and formless feeling of profound misery has taken form as a concrete [delusional] idea…. The delusion is not the cause of the feeling of misery but is engendered of it.”31(p328) Krafft-Ebing presented a compelling explanation of the psychological origin of melancholic delusions, including the nature of “delusional changes of relationship to the external world”34(p32) and sketched how melancholic symptoms could lead, understandably, to delusions of poverty, persecution, or punishment. Kraepelin described how “critical ability becomes overwhelmed by powerful mood fluctuations.”35(p191)

This review provides the historical context for our modern concept of mood-congruent psychotic features, which was first introduced in the research diagnostic criteria as “typical depressive delusions such as delusions of guilt, sin, poverty, nihilism, or self-deprecation,”37(p16) and then incorporated with modest changes in DSM-III36 and all subsequent DSM editions. Echoing the writings of authors reviewed herein, this list reflects delusions whose content can be understandably derived from the primary mood disturbance in major depression.

These historical observations have important implications for how we understand the nature of our psychiatric categories. A prominent narrative is that the great psychiatric nosologists of the 19th century acted as simple inductivists, seeing large numbers of patients with psychiatric disorders and, based initially on symptoms and signs and later also on course of illness, then sorting them into diagnostic categories. This inquiry suggests a more complex process.

First, as illustrated herein and described elsewhere,13,38,39 across Europe during the 19th century, systems of faculty psychology, innate functions of the human mind, were propounded by a range of philosophers, including Kant, Reid, and Stewart.10,38 These faculties provided influential a priori categories for psychiatric nosologists. As articulated explicitly by Tuke, absent a knowledge of pathophysiology, diagnostic categories should at least be based on the “affected function” (eg, “disorders of the intellect, sentiment, etc”27(p95)) rather than exclusively on symptoms.

Second, given the adoption of faculty psychology, nosologists had to confront the problem of the classification of patients apparently experiencing disorders of 2 faculties, such as individuals with delusional melancholia. Did these patients have 2 disorders or only 1 and, if so, which one? The creation of our modern concept of depression arose from an argument about the primacy of disordered intellect vs disordered mood in explaining the cause of delusional melancholia. The early model, consistent with the then dominant intellectualist view of insanity,40,41 assumed that disordered judgment was the essence of melancholia, which was first and foremost a disorder of intellect. Over the 19th century, this opinion was reversed. By the 1870s, it became widely accepted that melancholia was primarily a mood disorder. The argument that fueled that major diagnostic change appealed to understandability—that clinicians could empathically grasp how disordered mood could lead to particular kinds of delusions.

Rather than naive inductivism, a more realistic model for the development of psychiatric nosology in the 19th century would reflect a mixture of bottom-up and top-down processes. Psychiatric neuroscientists and geneticists working today are not studying the biological substrate of illnesses in patients classified from raw clinical experience. Rather, our diagnostic categories reflect clinical observations translated through mentalistic constructs from philosophers who divided the major functions of the human mind into faculties. An obvious question then is whether these faculties have a coherent biological substrate. In an 1857 essay, Henry Monro expressed concerns exactly on this point: Can we relate the metaphysical structure of mental faculties to brain structures? He wrote

Physiology points further than to the general truth that brain as a whole is the instrument of the mind as a whole, and gives us good reason to believe that the great faculties, the emotions, the sensations, and the intelligence, have distinguishable ganglia, sensoria, or spheres of action.42(p196)

The success of our efforts at understanding the biologic characteristics of major psychiatric disorders might therefore depend, in part, on how successfully the faculty psychology of 18th century philosophers reflected brain structure and function. Furthermore, our nosologic categories are influenced by empathy-based insights into the nature of psychological causation. When can a delusion be understood to derive from disordered mood rather than from a primary disorder of intellect? The degree to which these empathy-based mentalistic processes translate into a discernable neurobiology is not well known.

This history suggests that the path from patient observation to our nosologic categories and from there, hopefully, to a detectable pathophysiologic nature is more complex than is commonly realized.

Online perspective-taking experiments have demonstrated great potential in reducing prejudice towards disadvantaged groups, but in this study perspective-taking with the poor had no meaningful causal effect on social welfare attitudes

Bor, Alexander, and Gábor Simonovits. 2020. “Empathy, Deservingness, and Preferences for Welfare Assistance: A Large-scale Online Perspective-taking Experiment.” PsyArXiv. January 29. doi:10.31234/osf.io/d4sm9

Abstract: Online perspective-taking experiments have demonstrated great potential in reducing prejudice towards disadvantaged groups such as refugees or the Roma. These studies trigger the psychological process of empathy and evoke feelings of compassion. Meanwhile, a growing literature argues that compassion towards the poor is an important predictor of support for social welfare. This paper bridges these two literatures and predicts that perspective-taking with the poor could increase support for welfare assistance. This hypothesis is tested with a pre-registered experiment conducted on a large and diverse online sample of US citizens (N=3,431). Our results suggest that participants engaged with the perspective-taking exercise, wrote eloquent, often emotional essays. Nevertheless, perspective-taking had no meaningful causal effect on social welfare attitudes; we can confidently rule out effects exceeding 2 points on a 100 points scale. These results cast serious doubt on perspective-taking as a viable online tool to create compassion towards the poor.


Discussion

In this paper, we have tested whether perspective-taking is a viable tool for increasing support for welfare redistribution. Relying on an original, carefully designed, well-powered, and pre-registered survey experiment fielded to a representative sample of US citizens, we found that it is not. Similarly to successful interventions, we tested the impact of a particular stimulus, describing the experiences of a single target and emphasizing a particular set of challenges that poor people in the US face (unemployment, health problems, housing problems, single parenthood).
Thus, our conclusions about the possible effectiveness of perspective-taking interventions are necessarily limited: We have no way of knowing if large or even small changes in the stimulus used here could have led to a more effective intervention.
This leads to the question of the extent to which these null findings advance our understanding of either the class of interventions or the substantive target attitude that we study.
This issue should be understood in the broader context of how the published experimental literature characterizes the effect of a different class of interventions. There is ample evidence that published research over-represents successful interventions compared to the universe of social science experiments (Franco, Malhotra and Simonovits 2014; 2016). For a more complete understanding of how a given class of interventions – such as perspective-taking – works, one also needs to consider unsuccessful examples.
That said, it is also important to emphasize why we think that our null results are surprising. First, our experimental design relied on a heavy dose of deservingness cues, which, according to previous research, have a large and sometimes long-lasting effect on support for redistribution. We expected the perspective-taking exercise to amplify the effects of these deservingness cues but found that it nullified them.
Second, our findings are surprising considering a growing line of research employing perspective-taking to reduce prejudice against various groups, from refugees to transgender individuals.
Our results suggest that prejudice against the poor and attitudes towards government help for the poor may be more difficult to shape than attitudes towards these other marginalized groups.
Third, our experimental design likely constitutes a liberal test of our hypothesis. Besides relying heavily on deservingness cues, we measure the dependent variable with a composite index of ten items after a distractor task lasting a few minutes. For this reason, the experiment should be able to pick up even small, fleeting effects. Finally, it is noteworthy that the analysis of the essays reveals that participants have been very attentive and engaged in the exercise. We have no reason to believe that if we had conducted our experiment in the lab, we would see different results.
At the same time, the literature also offers some explanations for our failure to bring about attitude change using our perspective-taking intervention. On the one hand, our treatment might have proven too weak in the sense that even though subjects felt empathy towards the individual depicted in the vignette, these emotions did not spill over to people in need in general, perhaps because subjects viewed the vignette as an “exception” to some deeply held stereotypes about poor people.
On the other hand, intense exposure to a story about a person in need might have led to emotional reactions moving counter to our hypothesized effect. For instance, as argued by Sands (2017), exposure to poverty might have provoked anxiety in subjects about their own relative status, suppressing their support for policies helping others. Similarly, as pointed out by Simas, Clifford and Kirkland (2020), heightened empathy might exacerbate in-group bias, leading to hostile attitudes towards members of an out-group.