Tuesday, March 16, 2021

Empathic accuracy was beneficial (for well-being & ill-being) or not harmful (for marital satisfaction) at low socio-economic levels; it was not beneficial (for well- & ill-being) or was harmful (for marital satisfaction) at high levels

Empathy in context: Socioeconomic status as a moderator of the link between empathic accuracy and well-being in married couples. Emily F. Hittner, Claudia M. Haase. Journal of Social and Personal Relationships, March 11, 2021. https://doi.org/10.1177/0265407521990750

Abstract: The present laboratory-based study investigated socioeconomic status (SES) as a moderator of the association between empathic accuracy and well-being among married couples from diverse socioeconomic backgrounds. Empathic accuracy was measured using a performance-based measure of empathic accuracy for one’s spouse’s negative emotions during a marital conflict conversation. Aspects of well-being included well-being (i.e., positive affect, life satisfaction), ill-being (i.e., negative affect, anxiety symptoms, depressive symptoms), and marital satisfaction. SES was measured using a composite score of income and education. Findings showed that SES moderated associations between empathic accuracy and well-being. Empathic accuracy was beneficial (for well-being and ill-being) or not harmful (for marital satisfaction) at low levels of SES. In contrast, empathic accuracy was not beneficial (for well-being and ill-being) or harmful (for marital satisfaction) at high levels of SES. Results were robust (controlled for age, gender, and race). Findings are discussed in light of interdependence vs. independence in low- vs. high-SES contexts and highlight the importance of socioeconomic context in determining whether empathic accuracy benefits well-being or not.

Keywords: Empathic accuracy, marriage, socioeconomic status, well-being
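For readers who want to see the shape of the analysis, here is a minimal sketch of a moderation model of the kind the abstract describes: well-being regressed on empathic accuracy, SES, and their product term, with the paper's covariates included. The data file and all column names are hypothetical; this is not the authors' code.

```python
# Minimal sketch of a moderation analysis (hypothetical column names):
# the empathic_accuracy:ses interaction term tests whether SES moderates
# the link between empathic accuracy and well-being.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("couples.csv")  # hypothetical dataset

model = smf.ols(
    "well_being ~ empathic_accuracy * ses + age + gender + race",
    data=df,
).fit()
print(model.summary())  # inspect the empathic_accuracy:ses coefficient
```

A significant interaction coefficient would then be probed with simple-slopes analyses at low vs. high SES, matching the pattern of results the abstract reports.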


 

Vulnerable narcissism higher in Japan than Germany, related to self-construal; grandiose narcissism not equivalent across cultures; culturally incongruent forms of narcissism show more mental health problems

Narcissism in independent and interdependent cultures. Emanuel Jauk et al. Personality and Individual Differences, Volume 177, July 2021, 110716. https://doi.org/10.1016/j.paid.2021.110716

Highlights

• Studied narcissism in independent (Germany) and interdependent (Japan) cultures

• Vulnerable narcissism higher in Japan than Germany, related to self-construal

• Grandiose narcissism not equivalent across cultures

• Culturally incongruent forms of narcissism show more mental health problems.

Abstract: Narcissism can manifest in a grandiose form – admiration-seeking, exhibitionism, and dominance – or a vulnerable form – anxiety, withdrawal, and hypersensitivity. While grandiose narcissism is conceptually in line with an independent self-construal, as prevalent in Western countries, the vulnerable form can be assumed to relate more to an interdependent self-construal, as prevalent in Eastern countries. We studied both forms of narcissism in Germany and Japan (Ns = 258, 280), which differ fundamentally in their independent and interdependent self-construal, yet are similar regarding global developmental standards. We tested whether (1) mean differences in both narcissism forms would conform to the predominant self-construal, (2) self-construal would explain variance in narcissism beyond broad personality traits, and (3) there would be stronger mental health tradeoffs for culturally incongruent forms of narcissism. Our results largely confirm these expectations for vulnerable narcissism, which is (1) more prevalent in Japan than Germany, (2) related to self-construal beyond broad traits, and, (3) more strongly related to mental health problems in Germany than Japan. For grandiose narcissism, data analyses indicated that construct equivalence can only be assumed for the entitlement factor, and internal structure and nomological networks differ substantially between cultural contexts.

Keywords: Grandiose narcissism, Vulnerable narcissism, Independent self-construal, Interdependent self-construal, Cross-cultural research


4. Discussion

We investigated grandiose and vulnerable narcissism across Germany and Japan, two countries differing in independent and interdependent self-construal. We tested whether (1) grandiose narcissism would be higher in Germany whereas vulnerable narcissism would be higher in Japan, and whether (2) these differences would relate to self-construal beyond broad FFM traits. Finally, (3) we tested two competing hypotheses regarding the relations between narcissism and psychological maladjustment across independent and interdependent cultures.

4.1. Vulnerable narcissism has a similar structure, yet different implications across cultures

Results largely confirmed our expectations for vulnerable narcissism, which was (1) higher in Japan than Germany, (2) related to interdependent self-construal beyond FFM traits (albeit also related to independent self-construal) and (3) related more strongly to interpersonal problems in Germany than Japan, which is in line with the cultural incongruency hypothesis on personality and mental health (Curhan et al., 2014). This latter result suggests that, while vulnerable narcissism goes along with interpersonal problems in both cultures, the burden for individuals high on vulnerable narcissism might be higher in a cultural context valuing individualism and assertiveness. The MCNS as a measure of vulnerable narcissism displayed metric invariance, which means that indicators loaded equally on a latent factor (however, intercepts differed). Nomological network structure within the FFM was similar for the central dimensions of neuroticism and disagreeableness (Miller et al., 2016) as well as introversion (Jauk et al., 2017).
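The invariance claims here rest on comparing nested multi-group CFA models (loadings free vs. constrained equal across countries). As a rough illustration of the standard chi-square difference test behind such claims, with hypothetical fit statistics rather than the paper's values:

```python
# Sketch of the chi-square difference test used to compare nested
# multi-group CFA models (configural vs. metric invariance).
# The fit statistics below are hypothetical placeholders.
from scipy.stats import chi2

chi2_configural, df_configural = 152.3, 68   # loadings free across groups
chi2_metric, df_metric = 163.1, 76           # loadings constrained equal

delta_chi2 = chi2_metric - chi2_configural
delta_df = df_metric - df_configural
p_value = chi2.sf(delta_chi2, delta_df)

# A non-significant p suggests the equality constraint on loadings holds,
# i.e., metric invariance is tenable (intercepts may still differ).
print(f"Δχ²({delta_df}) = {delta_chi2:.1f}, p = {p_value:.3f}")
```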

4.2. Grandiose narcissism has different structures across cultures, but entitlement might be similar

For grandiose narcissism, the measure used in this study (NPI-13) was not invariant at a general factor level (similar to previous research; Żemojtel-Piotrowska et al., 2019), so we conducted analyses for lower-order factors. Here, the entitlement/exploitativeness factor displayed metric invariance; the others did not. Though this result is at odds with a recent study by Żemojtel-Piotrowska and colleagues, who observed invariance for the other two factors (leadership/authority and grandiose exhibitionism; Żemojtel-Piotrowska et al., 2019), it fits conceptually with structural models of narcissism placing entitlement – an aspect of antagonism – at the core of the construct (Krizan & Herlache, 2018; Weiss et al., 2019).

Contrary to our expectations, the entitlement aspect of narcissism was (1) higher in Japan than Germany (even more so when controlling for FFM traits), and (2) controlling for self-construal did not alter this difference. While different post-hoc explanations for this finding could be conceived, when considered together with the FFM differences observed here, it most likely reflects a reference group effect (see Limitations). Grandiose exhibitionism, the more (though not exclusively) agentic-extraverted aspect of grandiose narcissism, was, in line with our expectations, lower in Japan (note, however, that this aspect likely assesses different constructs between cultures). This latter aspect, which is arguably most culturally incongruent with Japanese culture, (3) was related to intrapersonal maladjustment in Japan, but not in Germany, further confirming the cultural incongruency hypothesis (Curhan et al., 2014). This shows that, while more agentic narcissism is largely associated with good mental health (fewer symptoms) in Western samples (e.g., Kaufman et al., 2018), this allegedly “happy face” (Rose, 2002) imposes a burden on the individual in cultures which value modesty and relatedness.

4.3. Limitations

An important methodological limitation of this study is that we relied on self-reports within the investigated cultures, in which cross-cultural differences might be obscured by reference group effects (Heine et al., 2002). This was likely the case for (part of) the self-construal scale, which showed the expected difference only for interdependent but not independent self-construal (despite experts' general agreement that an independent orientation is very untypical for Japan; ibid.). Also, the scale displays limited reliability for its length. Regarding narcissism, while most of the effects observed here were in line with theoretical predictions, making reference group effects unlikely in these cases, the higher entitlement score in Japan might reflect such an effect, as might the differences in FFM traits (see Supplement S1): as in previous research, Japanese participants rated themselves lower on agreeableness and conscientiousness than Germans, which might be indicative of high within-culture comparison standards rather than actual between-culture effects (Schmitt et al., 2007).

Another potential limitation could be seen in the non-invariance of the grandiose narcissism measure and the imperfect invariance of the vulnerable narcissism measure and entitlement scale. However, we wish to emphasize that we consider the finding that the complex psychological phenomenon of grandiose narcissism – rooted in Western thinking – varies across fundamentally different cultures an important insight rather than a “lack of invariance”. Nonetheless, when interpreting the findings presented here, it must be taken into account that vulnerable narcissism and entitlement only partially reflect the same latent constructs across cultures, while leadership/authority and grandiose exhibitionism likely reflect different constructs and must be interpreted at the level of observed test scores (with varying meanings).

Facial attractiveness in women was negatively correlated with age at menopause and positively correlated with current fecundity

Żelaźniewicz A, Nowak-Kornicka J, Zbyrowska K, Pawłowski B (2021) Predicted reproductive longevity and women’s facial attractiveness. PLoS ONE 16(3): e0248344. https://doi.org/10.1371/journal.pone.0248344

Abstract: Physical attractiveness has been shown to reflect women’s current fecundity level, allowing a man to choose a potentially more fertile partner in mate choice context. However, women vary not only in terms of fecundity level at reproductive age but also in reproductive longevity, both influencing a couple’s long-term reproductive success. Thus, men should choose their potential partner not only based on cues of current fecundity but also on cues of reproductive longevity, and both may be reflected in women’s appearance. In this study, we investigated if a woman’s facial attractiveness at reproductive age reflects anti-Müllerian hormone (AMH) level, a hormone predictor of age at menopause, similarly as it reflects current fecundity level, estimated with estradiol level (E2). Face photographs of 183 healthy women (Mage = 28.49, SDage = 2.38), recruited between 2nd - 4th day of the menstrual cycle, were assessed by men in terms of attractiveness. Women’s health status was evaluated based on C-reactive protein level and biochemical blood test. Serum AMH and E2 were measured. The results showed that facial attractiveness was negatively correlated with AMH level, a hormone indicator of expected age at menopause, and positively with E2, indicator of current fecundity level, also when controlled for potential covariates (testosterone, BMI, age). This might result from biological trade-off between high fecundity and the length of reproductive lifespan in women and greater adaptive importance of high fecundity at reproductive age compared to the length of reproductive lifespan.
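The abstract's correlations "controlled for potential covariates" are essentially partial correlations. A minimal sketch of that analysis, assuming hypothetical column names and a hypothetical data file, using the pingouin library (not the authors' code):

```python
# Sketch of covariate-controlled correlations of the kind reported above:
# attractiveness vs. AMH and E2, partialling out testosterone, BMI, and age.
import pandas as pd
import pingouin as pg

df = pd.read_csv("faces_hormones.csv")  # hypothetical dataset

for hormone in ["AMH", "E2"]:
    res = pg.partial_corr(
        data=df, x="attractiveness", y=hormone,
        covar=["testosterone", "BMI", "age"], method="pearson",
    )
    print(hormone, res[["r", "p-val"]])
```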

Discussion

In contrast to the research hypothesis, the results of this study showed that facial attractiveness of women at reproductive age is negatively related to AMH level. Simultaneously, we found a positive correlation between facial attractiveness and estradiol level, a hormonal predictor of current fecundity [2], which was also shown in previous studies [6,40; but see also for negative results 41]. Facial attractiveness was also negatively related to BMI, which has also been shown in previous studies [42,43].

Our results contradict those obtained by Bovet et al. [16], who showed a positive correlation between facial attractiveness and predicted length of reproductive lifespan, estimated based on maternal age at menopause. Although the most recent data on secular trends in age at menopause in Europe are scarce and difficult to compare, there seems to be no major difference between European countries, including Poland and France [44,45], that could explain the contradictory results of the studies. This difference in the study outcomes may instead be explained by the different methods used to estimate expected age at menopause in the two studies. Although there is a positive association between mother's and daughter's age at menopause, existing estimates of the heritability of menopause age have a wide range [21,22,46]. Also, reported mother's age at menopause may not be accurate due to the potential risk of recall bias [47]. Furthermore, previous research showed that AMH level is a better predictor of a woman's time to menopause (TTM) than mother's age at menopause [48,49], for several reasons. AMH level is influenced by environmental factors that are also related to menopausal age, such as smoking or diet [50,51]. Also, a mother's age at menopause is determined by genetic factors that are shared by mother and daughter, and by environmental factors acting only on the mother, but not on the daughter [49]. A daughter's age at menopause, by contrast, is influenced by both genetic and environmental factors, with the genetic component reflecting not only maternal but also paternal genetic contributions [46,52]. Therefore, whilst mother's age at menopause only reflects the maternal half of the genetic influence, AMH level may reflect the sum total of genetic and environmental influences [50], and thus correlates more strongly with actual age at menopause [49]. Additionally, maternal age at menopause may only predict a daughter's age at menopause, whereas a woman's fertility declines earlier, reducing the chance of a successful pregnancy a few years before menopause. The age of onset of the period of subfertility and infertility that precedes menopause also differs among women [46], and this should be indicated by AMH level (a marker of diminishing ovarian reserve) but not by maternal age at menopause.

The results of the study also showed a negative correlation between AMH and E2 levels, which is in line with previous research [53,54]. In vitro experiments showed that E2 down-regulates AMH expression in primary cultures of human granulosa cells (which in vivo may facilitate reduction of the ovarian reserve), and when estradiol concentration reaches a certain threshold, it is capable of completely inhibiting AMH expression through ERβ receptors [55]. This, together with the results of our study, may suggest an existing trade-off between current fecundity, length of reproductive lifespan, and a woman's capability to invest in morphological cues of both. Life-history theory predicts that the evolution of fitness-related traits and functions is constrained by the existence of trade-offs between them. Trade-offs are ubiquitous in nature; their existence is explained in the context of resource limitations [56], and they may be observed not only between different traits and functions (e.g., immunity and fertility), but also within one function, e.g., between different components of reproductive effort. Possibly, there is also a trade-off between high fecundity at reproductive age (the likelihood of fertilization within the cycles at reproductive age) and the length of reproductive lifespan (allowing for reproductive profits in a long-term perspective).

The existence of such a trade-off may be supported by research showing that older age at menopause is related to using hormonal contraception for longer than a year [57,58; but see for contradictory results: 59,60] and to the occurrence of irregular cycles before the age of 25 [58], which are often anovulatory [61]. Also, some research shows that the number of children correlates negatively with AMH level in young women, which may suggest that more fertile women have a shorter TTM [62,63]. On the other hand, some research shows a positive correlation between AMH level and number of children [64] and that childlessness is linked with younger age at menopause [57,65,66]. However, these correlations may be caused by another variable (e.g., genetic factors or some disease) that causes both low fertility and earlier ovarian failure [66], and thus they do not exclude the possible existence of a trade-off between high fecundity at reproductive age and length of reproductive lifespan.

Furthermore, sexual selection may act more strongly on male preferences for cues of high fecundity at reproductive age than for cues of a long reproductive lifespan. This presumption might explain the observed negative relationship between attractiveness and AMH and the simultaneous positive correlation between attractiveness and E2. Firstly, although humans often live in long-term pair bonds, remarriage is common after spousal death and/or divorce, resulting in serial monogamy [67]. Thus, as adult mortality was higher and the expected lifespan shorter in our evolutionary past [68], men would profit more from mating with highly fecund women than with women with a longer reproductive lifespan. Furthermore, many women (also in traditional societies) give birth for the last time long before menopause, not fully profiting from the length of their reproductive lifespan [69]. Pregnancy at an older age is related to a higher risk of pregnancy complications, miscarriage [70], and maternal death [71], which might contribute to an earlier cessation of reproduction [69]. Also, many environmental and lifestyle factors may affect age at menopause [51,72], weakening the relationship between morphological cues of a long reproductive lifespan at a younger age and the actual age at menopause. Thus, choosing a potential partner based on cues of current fecundity may bring a greater fitness pay-off than choosing a partner with a potentially long reproductive lifespan.

Finally, some limitations of our study need to be addressed. Both AMH and E2 levels were assessed only at the between-subjects level, based on a single measurement. Although AMH level has been shown to vary across the menstrual cycle [73], the extent of variation is small, and sampling on any day of the menstrual cycle is expected to adequately reflect ovarian reserve [74]. However, E2 level predicts a woman's fecundity most reliably when based on repeated sampling across the menstrual cycle [75]. Thus, it would be worthwhile to verify the results of our study with repeated AMH and E2 measurements, using a longitudinal rather than a cross-sectional design, to assess the relationship between these hormones and a woman's facial attractiveness.

This is the first study investigating the relationship between AMH level and facial attractiveness in women. The results showed that women perceived as more attractive are characterized by lower AMH levels, a hormonal predictor of age at menopause, and higher E2 levels, a hormonal indicator of current fecundity. This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and from the greater adaptive importance of high fecundity at reproductive age compared to the length of the reproductive lifespan.

Huge inequalities persist - in terms of pay, property, and political representation, but East Asia is becoming more gender equal; the same cannot be said for South Asia. Why?

Huge inequalities persist - in terms of pay, property, and political representation, but East Asia is becoming more gender equal; the same cannot be said for South Asia. Why? Alice Evans, Mar 13 2021. https://www.draliceevans.com/post/how-did-east-asia-overtake-south-asia

Circa 1900, women in East Asia and South Asia were equally oppressed and unfree. But over the course of the 20th century, gender equality in East Asia advanced far ahead of South Asia. What accounts for this divergence?

The first-order difference between East and South Asia is economic development. East Asian women left the countryside in droves to meet the huge demand for labour in the cities and escaped the patriarchal constraints of the village. They earned their own money, supported their parents, and gained independence. By contrast, the slower pace of structural transformation has kept South Asia a more agrarian and less urban society, with fewer opportunities for women to liberate themselves.

But growth is not the whole story. Cultural and religious norms have persisted in spite of growth. Even though women in South Asia are having fewer children and are better educated than ever before, they seldom work outside the family or collectively challenge their subordination. By global standards, gender equality indicators in South Asia remain low relative to regions at similar levels of development or even compared with many poorer countries. 

Below I set out evidence for four claims:

1. East and South Asian women were once equally unfree and oppressed. Both societies were organised around tightly policing women’s sexuality.

2. But every patrilineal society also faced a trade-off between honour (achieved by restricting women’s freedoms) and income (earned by exploiting female labour). South Asia had a stronger preference for female seclusion, and East Asia a stronger preference for female exploitation. This implies South Asia ‘needed’ more income to be ‘compensated’ for the loss of honour than East Asia.

3. In patriarchal societies, industrialisation and structural transformation are necessary preconditions for the emancipation of women. By seizing economic opportunities outside the family, women can gain economic autonomy, broaden their horizons, and collectively resist discrimination.

4. But industrialisation is not sufficient. In societies with strong preferences for female seclusion, women may forfeit new economic opportunities so as to preserve family honour. Hence inequalities persist alongside growth.


Women collectively condemn other women who appear to be sexually permissive even when they are not direct sexual rivals

Ayers, Jessica D., and Aaron T. Goetz. 2021. “Coordinated Condemnation in Women's Intrasexual Competition.” PsyArXiv. March 11. doi:10.31234/osf.io/g6x5r

Abstract: Here, we identify a novel reason why women are often criticized and condemned for (allegedly) sexually permissive behavior, such as their choice of dress. Combining principles from coordinated condemnation and sexual economics theory, we developed a model of competition that accounts for women’s competition in the absence of mating-relevant advantages. We hypothesized and found that women collectively condemn other women who appear to be sexually permissive. Study 1 (N = 712) demonstrated that women perceive a rival more negatively when she is showing cleavage, and these negative perceptions are ultimately driven by the inference that “provocatively” dressed women are more likely to have one-night stands. Study 2 (N = 341) demonstrated that women criticize and condemn provocatively dressed women, even when they are not direct sexual rivals (e.g., her boyfriend’s sister). Our findings suggest that more research is needed to fully understand women’s intrasexual competition in the absence of mating-relevant cues.




Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property

Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property. Guy A. Higgins. Front. Pharmacol., February 26 2021. https://doi.org/10.3389/fphar.2021.640241

Abstract: Long term benefits following short-term administration of high psychedelic doses of serotonergic and dissociative hallucinogens, typified by psilocybin and ketamine respectively, support their potential as treatments for psychiatric conditions such as major depressive disorder. The high psychedelic doses induce perceptual experiences which are associated with therapeutic benefit. There have also been anecdotal reports of these drugs being used at what are colloquially referred to as “micro” doses to improve mood and cognitive function, although currently there are recognized limitations to their clinical and preclinical investigation. In the present studies we have defined a low dose and plasma exposure range in rats for both ketamine (0.3–3 mg/kg [10–73 ng/ml]) and psilocybin/psilocin (0.05–0.1 mg/kg [7–12 ng/ml]), based on studies which identified these as sub-threshold for the induction of behavioral stereotypies. Tests of efficacy were focused on depression-related endophenotypes of anhedonia, amotivation and cognitive dysfunction using low performing male Long Evans rats trained in two food motivated tasks: a progressive ratio (PR) and serial 5-choice (5-CSRT) task. Both acute doses of ketamine (1–3 mg/kg IP) and psilocybin (0.05–0.1 mg/kg SC) pretreatment increased break point for food (PR task), and improved attentional accuracy and a measure of impulsive action (5-CSRT task). In each case, effect size was modest and largely restricted to test subjects characterized as “low performing”. Furthermore, both drugs showed a similar pattern of effect across both tests. The present studies provide a framework for the future study of ketamine and psilocybin at low doses and plasma exposures, and help to establish the use of these lower concentrations of serotonergic and dissociative hallucinogens both as a valid scientific construct, and as having a therapeutic utility.

Discussion

The present series of experiments was designed to evaluate the behavioral properties of low doses and plasma concentrations of ketamine and psilocybin in the rat, with a view to identifying behavioral effects that might be relevant to the antidepressant and other therapeutic potential of both drugs. One of the first challenges to this line of research is defining a low dose range of ketamine and psilocybin. The approach taken in this study was to establish the doses and plasma exposures of each drug for stereotyped behaviors characteristic of each drug and its distinct pharmacological class. Since behavioral stereotypies are often considered the preclinical proxy for their psychomimetic property (Hanks and Gonzalez-Maeso, 2013; Halberstadt and Geyer, 2018), we focused on doses just below the threshold for their induction. Based on this criterion we identified ketamine and psilocybin doses (and plasma exposures) of 0.3–3 mg/kg (10–70 ng/ml) and 0.05–0.1 mg/kg (7–12 ng/ml [psilocin]) respectively for investigation.

Preclinical studies explicitly examining low (“micro”) doses of ketamine and psilocybin are beginning to appear in the literature (Horsley et al., 2018; Meinhardt et al., 2020), albeit without any demonstration of potential beneficial effects. One of the limitations of these studies is that antidepressant potential has typically been investigated using tests such as forced swim and elevated plus maze, which lack human equivalence. These tests also overlook the trend to deconstruct complex clinical disorders into endophenotypes that may be more amenable to preclinical study and translation across the preclinical-clinical spectrum (Day et al., 2008; Markou et al., 2009). A diagnosis of MDD includes symptoms of depressed mood, anhedonia, fatigue/loss of energy (anergia), cognitive deficits including diminished/slowed ability to think or concentrate, and feelings of guilt, worthlessness and suicidal ideation (van Loo et al., 2012; American Psychiatric Association, 2013). Therefore, endophenotypes related to depression include anhedonia (impaired reward function), amotivation (lack of motivation/purpose) and impaired cognitive function (Hasler et al., 2004; Atique-Ur-Rehman and Neill, 2019; Treadway and Zald, 2011), which we addressed through the progressive ratio and 5-choice tasks.

A further consideration in the design of these experiments was an expectation that any effect of ketamine and psilocybin at low plasma concentrations was likely to be subtle, and potentially variable across a sample study population (see Horsley et al., 2018; Cameron et al., 2019; Meinhardt et al., 2020). We therefore exploited the heterogeneous nature of the performance level of rat populations across tasks such as PR and 5-CSRTT. Rats may be categorized based on performance differences in progressive ratio break point, and thus serve as models of high vs. low motivation (Randall et al., 2012; Randall et al., 2015). Similarly, rats may be categorized according to attentional accuracy or impulsive action under specific challenge conditions, thus providing models of high vs. low attention or impulsivity (Blondeau and Dellu-Hagedorn, 2007; Jupp et al., 2013; Hayward et al., 2016; Higgins et al., 2020a; Higgins et al., 2020b). Consequently, rats showing low motivation and/or attention may represent models of specific depression-relevant endophenotypes (Hasler et al., 2004; Treadway and Zald, 2011; Atique-Ur-Rehman and Neill, 2019). We identified three important considerations for this approach of subgrouping: firstly, a requirement to identify an enduring nature to any performance subgroup classification; secondly, a need to establish that “poor” performance is not a consequence of factors such as ill health; and thirdly, a requirement for large sample sizes to ensure that subgroups were adequately separated and powered (Button et al., 2013). To address the first challenge, high/low performance subgroups were allotted based on 5–10 days of baseline performance. Control experiments were conducted on the PR and 5-choice study cohorts which confirmed that “low performance” was not associated with ill health or sensorimotor deficit. To address the third challenge, and to ensure at least some separation between subgroups while having due consideration for the principle of the 3Rs (replacement, refinement, reduction), we adopted the extreme tertile groups.
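To make the extreme-tertile subgrouping concrete, here is a minimal sketch of that procedure using hypothetical variable names and a hypothetical data file (not the authors' code):

```python
# Sketch of extreme-tertile subgrouping: rats are ranked on mean baseline
# break point and only the top and bottom thirds are retained.
import pandas as pd

baseline = pd.read_csv("pr_baseline.csv")           # one row per rat per day
mean_bp = baseline.groupby("rat_id")["break_point"].mean()

tertile = pd.qcut(mean_bp, q=3, labels=["low", "mid", "high"])
low_performers = mean_bp[tertile == "low"].index
high_performers = mean_bp[tertile == "high"].index  # middle tertile dropped
```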

Considered as a whole, i.e. without subgrouping, and despite group sizes of N = 24–72, we failed to identify any positive effect of ketamine or psilocybin on motivation or attention over the tested dose range. The most robust finding was a trend for a decline in performance following the 6 mg/kg dose of ketamine, which indicated the early phase of the descending limb of a biphasic dose response. This was confirmed by parallel experiments identifying even greater performance decline at 10 mg/kg (data not shown, but see Gastambide et al., 2013; Benn and Robinson, 2014; Nikiforuk and Popik, 2014).

Subgrouping rats based on break point and number of lever presses for food made available under a PR schedule of reinforcement identified rats that consistently ceased responding early (“low” responders), leading to low break points. Interestingly these rats had similar body weights, free feeding measures and open field activity compared to their high responder counterparts, suggesting any differences were unrelated to general health status, neurological function or appetite. In these low performers, both psilocybin (0.05–0.1 mg/kg) and ketamine (1–3 mg/kg) increased break point suggesting an increase in task motivation. These findings suggest that low doses of ketamine may relieve certain clinical signs related to depression (Xu et al., 2016), and further suggest that the doses and plasma concentrations of ketamine and psilocybin as described in the present study may have utility in treating subtypes of mental illnesses characterized by amotivation and anhedonia in particular.

In the 5-CSRTT, the effects of ketamine and psilocybin were evaluated in two separate task schedules. In the first, rats were tested under standard conditions of 0.75 s SD, 5 s ITI. Segregation of rats into high and low performers based on accuracy (% correct) revealed a trend for both psilocybin and ketamine to increase accuracy at doses equivalent to those effective in the PR task. In the case of psilocybin, the more robust measure of efficacy was the % hit measure, which accounts for errors of omission as well as commission (incorrect responses). Speed of responding was also marginally increased, further supporting a performance improvement.

The second 5-CSRTT experiment utilized conditions of an extended ITI (5 s vs. 10 s) and reduced stimulus duration (0.75 s vs. 0.3 s). The principal challenge is to response control: lengthening the ITI from 5 s to 10 s produces a significant increase in both PREM and PSV responses, a consistent and widely reported finding (Robbins, 2002; Jupp et al., 2013; Barlow et al., 2018; Higgins et al., 2020a,b). Subgrouping rats, based on the level of PREM responses under the 10 s ITI schedule, into “Low” and “High” impulsives (LI vs. HI) highlights the wide range of responders typically seen under this schedule (Jupp et al., 2013; Fink et al., 2015; Barlow et al., 2018; Higgins et al., 2020a). Importantly, there is reasonable consistency of performance on this measure over repeated tests, as demonstrated by the HI rats having higher PREM scores under the 5 s ITI, albeit at markedly lower levels. PSV responses are also higher in the HI cohort, consistent with the HI rats demonstrating a deficit in inhibitory response control.

Similar findings for both ketamine and psilocybin were noted in this test schedule. While neither drug affected accuracy (measured as % correct), either in all rats or in the HI/LI-classified rats, both increased PREM and PSV responses in the LI cohort, supporting an increase in impulsive action. It should be noted that the magnitude of change produced by both ketamine and psilocybin was relatively small (∼2-fold) and confined to the LI subgroup. Certainly, the magnitude of change contrasted sharply with the 4-fold increase noted in rats pretreated with dizocilpine under the same 10 s ITI schedule (see also Higgins et al., 2005; 2016; Benn and Robinson, 2014). Previous studies have also described increased PREM responses following pretreatment with the phenethylamine 5-HT2A agonist DOI (Koskinen et al., 2000; Koskinen and Sirvio, 2001; Blokland et al., 2005; Wischhof and Koch, 2012; Fink et al., 2015), typically at doses lower than those which induce signs of WDS/BMC (Fink et al., 2015; Halberstadt and Geyer, 2018).

Impulsivity is a construct that may be viewed in two forms: functional and dysfunctional (Dickman, 1990). Dysfunctional impulsivity is associated with psychiatric conditions such as substance abuse and OCD and thus carries a negative context. For example, associations between high impulsive trait and drug-seeking behaviors have been reported both preclinically and clinically (Grant and Chamberlain, 2004; Jupp et al., 2013). Functional impulsivity has been described as a tendency to make quick decisions when beneficial to do so, and may be related to traits such as enthusiasm, adventurousness, activity, extraversion and narcissism. Individuals with high functional impulsivity are also reported to have enhanced executive functioning overall (Dickman, 1990; Zadravec et al., 2005; Burnett Heyes et al., 2012). Viewed in this more positive context, the capacity of psilocybin and ketamine to promote impulsive behavior selectively in an LI cohort may be relevant to their potential to treat depression and other mental disorders.

One advantage of being able to study pharmacological effects at low doses in an experimental setting is the ability to probe for an underlying neurobiological mechanism, which would serve to establish this pattern of use within a scientific framework. Presumably these doses result in a low level of target site occupancy, which in the case of psilocybin is the serotonin 5-HT2A receptor (Vollenweider et al., 1998; Tylš et al., 2014; Nichols, 2016; Kyzar et al., 2017). At higher doses and plasma exposures, and consequently higher levels of target occupancy, psychomimetic effects begin to emerge. In this respect, the recent study of Madsen et al. (2019) is of interest. These workers reported a correlation between the psychedelic effects of psilocybin (40–100% of the Likert scale maximum) and CNS 5-HT2A receptor occupancy (43–72%) and plasma psilocin levels (2–15 ng/ml). Increases in subjective intensity were correlated with both increases in 5-HT2A receptor occupancy and psilocin exposure. Based on these data, it is estimated that at 5-HT2A receptor occupancies up to ∼15%, no perceptual effects occur (Madsen and Knudsen, 2020).
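The occupancy-exposure relationship in studies like Madsen et al. (2019) is commonly modeled with a one-site (hyperbolic) binding curve. A minimal sketch of that model follows, under an explicitly hypothetical EC50 (the published estimate is not reproduced here):

```python
# Sketch of the one-site (hyperbolic) binding model commonly used to relate
# plasma psilocin concentration to fractional 5-HT2A receptor occupancy.
# The EC50 below is a hypothetical placeholder, not the published estimate.
def occupancy(conc_ng_ml, ec50_ng_ml=5.0):
    """Fractional receptor occupancy under a single-site binding model."""
    return conc_ng_ml / (conc_ng_ml + ec50_ng_ml)

for c in (1, 2, 7, 12, 15):  # ng/ml plasma psilocin
    print(f"{c:>4} ng/ml -> {100 * occupancy(c):.0f}% occupancy")
```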

5-HT2A receptors are widely distributed within cortical zones, notably layers II–V (Santana et al., 2004; Mengod et al., 2015), and also in subcortical regions such as the DA nigrostriatal and mesocorticolimbic pathways, where they appear to positively regulate tone, at least under certain physiological conditions (Doherty and Pickel, 2000; Nocjar et al., 2002; Bortolozzi et al., 2005; Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017). One plausible hypothesis is that at low nanomolar plasma concentrations, psilocybin (or LSD, mescaline, etc.) may preferentially target a subset of 5-HT2A receptors, possibly those localized to subcortical DA systems, where activation has been reported to increase firing and tonicity of these pathways (Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017 for reviews). In turn this might be expected to promote behaviors related to motivation, attention and impulse control, as noted in the PR and 5-choice experiments. Activation of cortical 5-HT2A receptors may account for the subjective/perceptual effects once a critical (higher) plasma drug threshold has been reached (Nichols, 2016; Kyzar et al., 2017; Madsen et al., 2019; Vollenweider and Preller, 2020).

In the case of ketamine, the relevant target is most likely the NMDA subtype of glutamate receptor (Lodge and Mercier, 2015; Mathews et al., 2012; Corriger and Pickering, 2019; although note Zanos et al., 2018), a tetrameric receptor complex composed of NR1 subunits combined with NR2A-D subunits and, in some cases, NR3A-B subunits. The NR2A-D subunits exist in an anatomically distinct manner, with the NR2A and NR2B subunits predominant in the forebrain; the NR1 subunit has a broader distribution, being a constituent of all NMDA channels (Kew and Kemp, 2005; Traynelis et al., 2010). Potentially, at low ketamine doses there may be a preferential interaction between ketamine and specific NMDA channel subtypes (see Lodge and Mercier, 2015), and/or regional subpopulations, which underlies the pharmacological effects of these doses of ketamine in preclinical and clinical contexts. We and others have reported on apparently pro-cognitive effects of non-competitive NMDA antagonists, typically dizocilpine, when tested at low doses (Mondadori et al., 1989; Jackson et al., 2004; Higgins et al., 2003, 2016; Guidi et al., 2015). A better understanding of the neurobiological mechanisms that underlie these effects may provide useful insight toward understanding the clinical benefit of low doses of ketamine in humans.

An interesting feature to emerge from this work was the similar profile of ketamine and psilocybin across the PR and 5-choice experiments. Both drugs increased break point in low performers, improved attention in low performer subgroups, and increased PREM/PSV responses in LI rats. Horsley et al. (2018) also reported a similar pattern for both drugs across various elevated plus maze measures, although the effects were suggestive of a mild anxiogenic profile. Despite their differing pharmacology, there is accumulating evidence from a variety of sources that the NMDA and 5-HT2A receptors are functionally intertwined. Vollenweider has highlighted the overlapping psychotic syndromes produced by serotonergic hallucinogens and psychotomimetic anesthetics, associated with a marked activation of the prefrontal cortex and other overlapping changes in temporoparietal, striatal, and thalamic regions (Vollenweider, 2001; Vollenweider and Kometer, 2010), suggesting that both classes of drugs may act upon a common final pathway. Secondly, 5-HT2A receptor antagonists attenuate a variety of putative psychosis-related behaviors induced by NMDA channel block, including behavioral stereotypy and disrupted PPI (Varty and Higgins, 1995; Varty et al., 1999; Higgins et al., 2003), a property that likely contributes to the antipsychotic efficacy of atypical neuroleptics such as clozapine and risperidone (Meltzer, 1999; Remington, 2003). Furthermore, cellular coexpression of 5-HT2A and NMDA receptors has been described in multiple brain regions, including the VTA, striatum and cortex (Wang and Liang, 1998; Rodriguez et al., 1999; Rodriguez et al., 2000). Therefore, studying these drugs at the low dose range may also provide further insights into how these receptor systems interact.

In conclusion, the present studies have characterized, for the first time, a positive effect of ketamine (0.3–3 mg/kg; [plasma] 10–70 ng/ml) and psilocybin (0.05–0.1 mg/kg; [psilocin plasma] 7–12 ng/ml) on behaviors related to endophenotypes of amotivation and anhedonia. The overall effect sizes are modest, which might be expected at the doses and concentrations studied, where the degree of target occupancy is likely to be low and subject to individual differences in drug pharmacodynamics and pharmacokinetics. Each of these factors will impact treatment response across a study population (Levy, 1998; Dorne, 2004). Limitations of the present study include the restriction to male test subjects and to single acute doses. Future studies should extend to both male and female subjects, and to alternative dosing schedules. Nonetheless, the studies are important in that they define a potentially efficacious dose and plasma exposure range and provide a framework for early safety studies and further scientific investigation into the neurobiology of these drugs in the low dose range.

It seems that if individuals are frequently exposed to non-like-minded information, they often feel negative emotions and are, therefore, more likely to use incivility

The Effect of Exposure to (Non-)Like-Minded Information on the Use of Political Incivility on Twitter. Kohei Nishi. Advance: Social Sciences & Humanities (SAGE preprint), Mar 11 2021. https://advance.sagepub.com/articles/preprint/The_Effect_of_Exposure_to_Non-_Like-Minded_Information_on_the_Use_of_Political_Incivility_on_Twitter/14191046/1


Abstract: Does exposure to like-minded/non-like-minded information lead to the use of political incivility? Few studies have investigated this question, and the results have been mixed. There are two conflicting possibilities: (i) if individuals are frequently exposed to like-minded political information, they reinforce their pre-existing beliefs and are, thus, more likely to use uncivil language, and (ii) if individuals are frequently exposed to non-like-minded information, they often feel negative emotions and are, therefore, more likely to use incivility. To evaluate these two competing hypotheses, I analyze Japanese Twitter data using a semi-supervised learning method. The results show that individuals who are exposed to non-like-minded information are more likely to use political incivility.
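The abstract does not specify which semi-supervised method was used, but a self-training classifier over bag-of-words features illustrates the general family of approaches. A minimal sketch with scikit-learn; the corpus and labels below are hypothetical placeholders, not the paper's data or code.

```python
# Illustrative semi-supervised text classification: a small labeled set of
# civil/uncivil tweets plus unlabeled tweets (label -1, sklearn's convention
# for "unlabeled"), classified via self-training over TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

tweets = ["example uncivil tweet", "unlabeled tweet", "example civil tweet"]
labels = [1, -1, 0]  # 1 = uncivil, 0 = civil, -1 = unlabeled

clf = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(), threshold=0.9),
)
clf.fit(tweets, labels)  # pseudo-labels confident unlabeled examples
```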


There is a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation

The Psychology of Fake News. Gordon Pennycook, David G. Rand. Trends in Cognitive Sciences, March 15 2021. https://doi.org/10.1016/j.tics.2021.02.007

Highlights

Recent evidence contradicts the common narrative that partisanship and politically motivated reasoning explain why people fall for 'fake news'.

Poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, as well as to the use of familiarity and source heuristics.

There is also a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation.

Effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.


Abstract: We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.

Keywords: fake news, misinformation, social media, news media, motivated reasoning, dual process theory, crowdsourcing, attention, information sharing

What Can Be Done? Interventions To Fight Fake News

We now turn to the implications of these findings for interventions intended to decrease the spread and impact of online misinformation.

Current Approaches for Fighting Misinformation

As social media companies are, first and foremost, technology companies, a common approach is the automated detection of problematic news via machine learning, natural language processing, and network analysis [74,75,76]. Content classified as problematic is then down-ranked by the ranking algorithm such that users are less likely to see it. However, creating an effective misinformation classifier faces two fundamental challenges. First, truth is not a black-and-white, clearly defined property: even professional fact-checkers often disagree on how exactly to classify content [77,78]. Thus, it is difficult to decide what content and features should be included in training sets, and artificial intelligence approaches run the risk of false positives and, therefore, of unjustified censorship [79]. Second, there is the problem of nonstationarity: misinformation content tends to evolve rapidly, and therefore the features which are effective at identifying misinformation today may not be effective tomorrow. Consider, for example, the rise of COVID-19 misinformation in 2020 – classifiers trained to detect largely political content were likely unequipped to be effective for novel false and misleading claims relating to health.
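As an illustration of the down-ranking step described above (not any platform's actual algorithm), here is a sketch in which a hypothetical classifier probability demotes flagged items in a feed rather than removing them:

```python
# Sketch of classifier-driven down-ranking: flagged items keep only a
# fraction of their ranking score instead of being removed outright.
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    engagement_score: float  # the ranking signal the feed already uses
    p_misinfo: float         # classifier's probability the item is problematic

def ranked_feed(items, penalty=0.8, threshold=0.5):
    """Return items sorted by adjusted score; flagged items are demoted."""
    def adjusted(item):
        if item.p_misinfo >= threshold:
            return item.engagement_score * (1 - penalty)
        return item.engagement_score
    return sorted(items, key=adjusted, reverse=True)

feed = ranked_feed([
    Item("a", engagement_score=9.0, p_misinfo=0.92),
    Item("b", engagement_score=5.0, p_misinfo=0.05),
])
print([item.id for item in feed])  # "b" now outranks the flagged "a"
```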

Another commonly used approach involves attaching warnings to content that professional fact-checkers have found to be false (reviewed in [80,81]). A great deal of evidence indicates that corrections and warnings do successfully reduce misperceptions [41,81,82,83] and sharing [49,84,85]. Despite some early evidence that corrections could backfire and increase belief in false content [86], recent work has shown that these backfire effects are extremely uncommon and are not a cause for serious concern [87,88].

There are, however, other reasons to be cautious about the sufficiency of professional fact-checking. Most importantly, fact-checking is simply not scalable – it typically requires substantial time and effort to investigate whether a particular claim is false or misleading. Thus, many (if not most) false claims never get fact-checked. Even for those claims that do eventually get flagged, the process is often slow, such that warnings are likely to be absent during the claim's period of peak viral spreading. Furthermore, warnings are typically only attached to blatantly false news, and not to extremely misleading or biased coverage of events that actually occurred. In addition to straightforwardly undermining the reach of fact-checks, this sparse application of warnings could lead to an 'implied truth' effect where users may assume that (false or misleading) headlines without warnings have actually been verified [84]. Fact-checks often also fail to reach their intended audience [89], and may fade over time [90], provide incomplete protection against familiarity effects [49], and cause corrected users to subsequently share more low-quality and partisan content [91].

Another potential approach that is commonly referenced is emphasizing the publishers of news articles, seeking to leverage the reliance on source cues described earlier. This, in theory, could be effective because people (at least in the USA) are actually fairly good at distinguishing between low- and high-quality publishers [92]. However, experimental evidence on emphasizing news publishers is not very encouraging: numerous studies find that making source information more salient (or removing it entirely) has little impact on whether people judge headlines to be accurate or inaccurate [37,93,94,95,96,97] (although see [98,99]).

New Approaches for Fighting Misinformation

One potentially promising alternative class of interventions involves a more proactive 'inoculation' or 'prebunking' against misinformation [8,100]. For example, the 'Bad News Game' uses a 10–20 minute interactive tutorial to teach people how to identify fake news in an engaging way [101]. An important limitation of such approaches is that they are 'opt in' – that is, people have to actively choose to engage with the inoculation technique (often for a fairly substantial amount of time – at least in terms of the internet attention span [102]). This is particularly problematic given that those most in need of 'inoculation' against misinformation (e.g., people who are low on cognitive reflection) may be the least likely to seek out and participate in lengthy inoculations. Lighter-touch forms of inoculation that simply present people with information that helps them to identify misinformation (e.g., in the context of climate change [103]) may be more scalable. For example, presenting a simple list of 12 digital media literacy tips improved people's capacity to discern between true and false news in the USA and India [104].

Both fact-checking and inoculation approaches are fundamentally directed toward improving people's underlying knowledge or skills. However, as noted earlier, recent evidence indicates that misinformation may spread on social media not only because people are confused or lack the competency to recognize fake news, but also (or even mostly) because people fail to consider accuracy at all when they make choices about what to share online [21,44]. In addition, as mentioned, people who are more intuitive tend to be worse at distinguishing between true and false news content, both in terms of belief (Figure 1A) and sharing [35,71]. This work suggests that interventions aimed at getting people to slow down and reflect about the accuracy of what they see on social media may be effective in slowing the spread of misinformation.

Indeed, recent research shows that a simple accuracy prompt – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) before making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments [21,44]. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a politically neutral news headline were sent to thousands of accounts who had recently shared links to misinformation sites [21]. This subtle prompt significantly increased the quality of the news they subsequently shared (Figure 2B). Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment [105], and having participants rate accuracy at the time of encoding protects against familiarity effects [106]. Relatedly, metacognitive prompts – probing questions that make people reflect – increase resistance to inaccurate information [107].

A major advantage of such accuracy prompts is that they are readily scalable. There are many ways that social media companies, or other interested parties such as governments or civil society organizations, could shift people's attention to accuracy (e.g., through ads, by asking about the accuracy of content that is shared, or via public service announcements). In addition to scalability, accuracy prompts also have the normative advantage of not relying on a centralized arbiter to determine truth versus falsehood. Instead, they leverage users' own (often latent) ability to make such determinations themselves, preserving user autonomy. Naturally, this will not be effective for everyone all of the time, but it could have a positive effect in the aggregate as one of the various tools used to combat misinformation.

Finally, platforms could also harness the power of human reasoning and the 'wisdom of crowds' to improve the performance of machine-learning approaches. While professional fact-checking is not easily scalable, it is much more tractable for platforms to have large numbers of non-experts rate news content. Despite potential concerns about political bias or lack of knowledge, recent work has found high agreement between layperson crowds and fact-checkers when evaluating the trustworthiness of news publishers: the average Democrat, Republican, and fact-checker all gave fake news and hyperpartisan sites very low trust ratings [92] (Figure 3A). This remained true even when layperson raters were told that their responses would influence social media ranking algorithms, creating an incentive to 'game the system' [108]. However, these studies also revealed a weakness of publisher-based crowd ratings: familiarity with a publisher was necessary (although not sufficient) for trust, meaning that new or niche publishers are unfairly punished by such a rating scheme. One solution to this problem is to have laypeople rate the accuracy of individual articles or headlines (rather than publishers), and then to aggregate these item-level ratings into average scores for each publisher (Figure 3B). Furthermore, the layperson ratings of the articles themselves are also useful. An analysis of a set of headlines flagged for fact-checking by an internal Facebook algorithm found that the average layperson accuracy rating from fairly small crowds correlated as well with professional fact-checkers' ratings as the fact-checkers' ratings correlated with each other [77]. Thus, using crowdsourcing to add a 'human in the loop' element to misinformation detection algorithms is promising.
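A minimal sketch of the item-to-publisher aggregation idea described above (hypothetical files and column names; not the papers' analysis code):

```python
# Sketch: average item-level layperson ratings per publisher, then compare
# the resulting publisher scores against fact-checker ratings.
import pandas as pd

lay = pd.read_csv("layperson_item_ratings.csv")        # publisher, item_id, rating
fc = pd.read_csv("factchecker_publisher_ratings.csv")  # publisher, fc_rating

publisher_scores = lay.groupby("publisher")["rating"].mean()
merged = fc.set_index("publisher").join(publisher_scores.rename("crowd_rating"))

print(merged[["fc_rating", "crowd_rating"]].corr(method="pearson"))
```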

These observations about the utility of layperson ratings have a strong synergy with the aforementioned idea of prompts that shift users' attention to accuracy: periodically asking social media users to rate the accuracy of random headlines both (i) shifts attention to accuracy and thus induces the users to be more discerning in their subsequent sharing, and (ii) generates useful ratings to help inform ranking algorithms.