Tuesday, March 16, 2021

Facial attractiveness in women was negatively correlated with predicted age at menopause and positively correlated with current fecundity

Żelaźniewicz A, Nowak-Kornicka J, Zbyrowska K, Pawłowski B (2021) Predicted reproductive longevity and women’s facial attractiveness. PLoS ONE 16(3): e0248344. https://doi.org/10.1371/journal.pone.0248344

Abstract: Physical attractiveness has been shown to reflect women’s current fecundity level, allowing a man to choose a potentially more fertile partner in a mate choice context. However, women vary not only in fecundity level at reproductive age but also in reproductive longevity, both of which influence a couple’s long-term reproductive success. Thus, men should choose their potential partner based not only on cues of current fecundity but also on cues of reproductive longevity, and both may be reflected in women’s appearance. In this study, we investigated whether a woman’s facial attractiveness at reproductive age reflects anti-Müllerian hormone (AMH) level, a hormonal predictor of age at menopause, similarly to how it reflects current fecundity level, estimated with estradiol (E2) level. Face photographs of 183 healthy women (mean age = 28.49, SD = 2.38), recruited between the 2nd and 4th day of the menstrual cycle, were assessed by men in terms of attractiveness. Women’s health status was evaluated based on C-reactive protein level and a biochemical blood test. Serum AMH and E2 were measured. The results showed that facial attractiveness was negatively correlated with AMH level, a hormonal indicator of expected age at menopause, and positively with E2, an indicator of current fecundity level, also when controlling for potential covariates (testosterone, BMI, age). This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and from the greater adaptive importance of high fecundity at reproductive age compared to the length of the reproductive lifespan.

Discussion

In contrast to the research hypothesis, the results of this study showed that the facial attractiveness of women at reproductive age is negatively related to AMH level. Simultaneously, we found a positive correlation between facial attractiveness and estradiol level, a hormonal predictor of current fecundity [2], as was also shown in previous studies [6–40; but see also for negative results: 41]. Facial attractiveness was also negatively related to BMI, as has been shown in previous studies [42,43].

Our results contradict those obtained by Bovet et al. [16], who showed a positive correlation between facial attractiveness and the predicted length of the reproductive lifespan, estimated from maternal age at menopause. Although recent data on secular trends in age at menopause in Europe are scarce and difficult to compare, there seems to be no major difference between European countries, including Poland and France [44,45], that could explain the contradictory results of the two studies. The difference in outcomes may instead be explained by the different methods used to estimate expected age at menopause. Although there is a positive association between a mother’s and a daughter’s age at menopause, existing estimates of the heritability of menopause age span a wide range [21,22,46]. Also, a reported mother’s age at menopause may be inaccurate due to the risk of recall bias [47]. Furthermore, previous research showed that AMH level is a better predictor of a woman’s time to menopause (TTM) than mother’s age at menopause [48,49], for several reasons. AMH level is influenced by environmental factors that are also related to menopausal age, such as smoking or diet [50,51]. Also, a mother’s age at menopause is determined by genetic factors that are shared by mother and daughter, and by environmental factors acting only on the mother, not on the daughter [49]. A daughter’s age at menopause, in contrast, is influenced by both genetic and environmental factors, with the genetic component reflecting not only the maternal but also the paternal contribution [46,52]. Therefore, whilst mother’s age at menopause reflects only the maternal half of the genetic influence, AMH level may reflect the sum total of genetic and environmental influences [50], and thus correlates more strongly with actual age at menopause [49]. Additionally, maternal age at menopause can at best predict a daughter’s age at menopause, whereas a woman’s fertility declines earlier, reducing the chance of a successful pregnancy several years before menopause. The age of onset of the period of subfertility and infertility that precedes menopause also differs among women [46], and this should be indicated by AMH level (a marker of diminishing ovarian reserve) but not by maternal age at menopause.

The results of the study also showed a negative correlation between AMH and E2 levels, which is in line with previous research [53,54]. In vitro experiments showed that E2 down-regulates AMH expression in primary cultures of human granulosa cells (which in vivo may facilitate the reduction of ovarian reserve), and that when estradiol concentration reaches a certain threshold, it is capable of completely inhibiting AMH expression through ERβ receptors [55]. This, together with the results of our study, may suggest a trade-off between current fecundity, the length of the reproductive lifespan, and a woman’s capability to invest in morphological cues of both. Life-history theory predicts that the evolution of fitness-related traits and functions is constrained by trade-offs between them. Trade-offs are ubiquitous in nature; their existence is explained in the context of resource limitations [56], and they may be observed not only between different traits and functions (e.g., immunity and fertility) but also within one function, e.g., between different components of reproductive effort. Possibly, there is also a trade-off between high fecundity at reproductive age (the likelihood of fertilization within cycles at reproductive age) and the length of the reproductive lifespan (allowing for reproductive profits over the long term).

The existence of such a trade-off may be supported by research showing that older age at menopause is related to using hormonal contraception for longer than a year [57,58; but see for contradictory results: 59,60] and to the occurrence of irregular cycles before the age of 25 [58], which are often anovulatory [61]. Also, some research shows that the number of children correlates negatively with AMH level in young women, which may suggest that more fertile women have a shorter TTM [62,63]. On the other hand, some research shows a positive correlation between AMH level and number of children [64], and that childlessness is linked with a younger age at menopause [57,65,66]. However, these correlations may be caused by another variable (e.g., genetic factors or some disease) that causes both low fertility and earlier ovarian failure [66], and thus they do not exclude a possible trade-off between high fecundity at reproductive age and the length of the reproductive lifespan.

Furthermore, sexual selection may act more strongly on male preferences for cues of high fecundity at reproductive age than for cues of a long reproductive lifespan. This presumption might explain the observed negative relationship between attractiveness and AMH alongside the positive correlation between attractiveness and E2. Firstly, although humans often live in long-term pairbonds, remarriage is common after spousal death and/or divorce, resulting in serial monogamy [67]. Thus, as adult mortality was higher and the expected lifespan shorter in our evolutionary past [68], men would profit more from mating with highly fecund women than with women with a longer reproductive lifespan. Furthermore, many women (also in traditional societies) give birth for the last time long before menopause, not fully profiting from the length of their reproductive lifespan [69]. Pregnancy at an older age is related to a higher risk of pregnancy complications, miscarriage [70], and maternal death [71], which might contribute to an earlier cessation of reproduction [69]. Also, many environmental and lifestyle factors may affect age at menopause [51,72], weakening the link between morphological cues of a long reproductive lifespan at a younger age and the actual age at menopause. Thus, choosing a potential partner based on cues of current fecundity may bring a greater fitness pay-off than choosing a partner with a potentially long reproductive lifespan.

Finally, some limitations of our study need to be addressed. Both AMH and E2 levels were assessed only at the between-subjects level, based on a single measurement. Although AMH level has been shown to vary across the menstrual cycle [73], the extent of variation is small, and sampling on any day of the menstrual cycle is expected to adequately reflect ovarian reserve [74]. However, E2 level predicts a woman’s fecundity most reliably when based on repeated sampling across the menstrual cycle [75]. Thus, it would be worthwhile to verify our results with repeated AMH and E2 measurements, using a longitudinal rather than cross-sectional design, to assess the relationship between these hormones and a woman’s facial attractiveness.

This is the first study investigating the relationship between AMH level and facial attractiveness in women. The results showed that women perceived as more attractive are characterized by lower AMH level, a hormonal predictor of age at menopause, and higher E2 level, a hormonal indicator of current fecundity. This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and from the greater adaptive importance of high fecundity during reproductive age compared to the length of the reproductive lifespan.

Huge inequalities persist - in terms of pay, property, and political representation, but East Asia is becoming more gender equal; the same cannot be said for South Asia. Why?

Huge inequalities persist - in terms of pay, property, and political representation, but East Asia is becoming more gender equal; the same cannot be said for South Asia. Why? Alice Evans, Mar 13 2021. https://www.draliceevans.com/post/how-did-east-asia-overtake-south-asia

Circa 1900, women in East Asia and South Asia were equally oppressed and unfree. But over the course of the 20th century, gender equality in East Asia advanced far ahead of South Asia. What accounts for this divergence?

The first-order difference between East and South Asia is economic development. East Asian women left the countryside in droves to meet the huge demand for labour in the cities and escaped the patriarchal constraints of the village. They earned their own money, supported their parents, and gained independence. By contrast, the slower pace of structural transformation has kept South Asia a more agrarian and less urban society, with fewer opportunities for women to liberate themselves.

But growth is not the whole story. Cultural and religious norms have persisted in spite of growth. Even though women in South Asia are having fewer children and are better educated than ever before, they seldom work outside the family or collectively challenge their subordination. Gender equality indicators in South Asia remain low relative to regions at similar levels of development, and even compared with many poorer countries.

Below I set out evidence for four claims:

1. East and South Asian women were once equally unfree and oppressed. Both societies were organised around tightly policing women’s sexuality.

2. But every patrilineal society also faced a trade-off between honour (achieved by restricting women’s freedoms) and income (earned by exploiting female labour). South Asia had a stronger preference for female seclusion, and East Asia a stronger preference for female exploitation. This implies South Asia ‘needed’ more income to be ‘compensated’ for the loss of honour than East Asia.

3. In patriarchal societies, industrialisation and structural transformation are necessary preconditions for the emancipation of women. By seizing economic opportunities outside the family, women can gain economic autonomy, broaden their horizons, and collectively resist discrimination.

4. But industrialisation is not sufficient. In societies with strong preferences for female seclusion, women may forfeit new economic opportunities so as to preserve family honour. Hence inequalities persist alongside growth.


Women collectively condemn other women who appear to be sexually permissive even when they are not direct sexual rivals

Ayers, Jessica D., and Aaron T. Goetz. 2021. “Coordinated Condemnation in Women's Intrasexual Competition.” PsyArXiv. March 11. doi:10.31234/osf.io/g6x5r

Abstract: Here, we identify a novel reason why women are often criticized and condemned for (allegedly) sexually permissive behavior, such as their choice of dress. Combining principles from coordinated condemnation and sexual economics theory, we developed a model of competition that accounts for women’s competition in the absence of mating-relevant advantages. We hypothesized and found that women collectively condemn other women who appear to be sexually permissive. Study 1 (N = 712) demonstrated that women perceive a rival more negatively when she is showing cleavage, and these negative perceptions are ultimately driven by the inference that “provocatively” dressed women are more likely to have one-night stands. Study 2 (N = 341) demonstrated that women criticize and condemn provocatively dressed women, even when they are not direct sexual rivals (e.g., a boyfriend’s sister). Our findings suggest that more research is needed to fully understand women’s intrasexual competition in the absence of mating-relevant cues.




Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property

Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property. Guy A. Higgins. Front. Pharmacol., February 26 2021. https://doi.org/10.3389/fphar.2021.640241

Abstract: Long term benefits following short-term administration of high psychedelic doses of serotonergic and dissociative hallucinogens, typified by psilocybin and ketamine respectively, support their potential as treatments for psychiatric conditions such as major depressive disorder. The high psychedelic doses induce perceptual experiences which are associated with therapeutic benefit. There have also been anecdotal reports of these drugs being used at what are colloquially referred to as “micro” doses to improve mood and cognitive function, although currently there are recognized limitations to their clinical and preclinical investigation. In the present studies we have defined a low dose and plasma exposure range in rats for both ketamine (0.3–3 mg/kg [10–73 ng/ml]) and psilocybin/psilocin (0.05–0.1 mg/kg [7–12 ng/ml]), based on studies which identified these as sub-threshold for the induction of behavioral stereotypies. Tests of efficacy were focused on depression-related endophenotypes of anhedonia, amotivation and cognitive dysfunction using low performing male Long Evans rats trained in two food motivated tasks: a progressive ratio (PR) and serial 5-choice (5-CSRT) task. Both acute doses of ketamine (1–3 mg/kg IP) and psilocybin (0.05–0.1 mg/kg SC) pretreatment increased break point for food (PR task), and improved attentional accuracy and a measure of impulsive action (5-CSRT task). In each case, effect size was modest and largely restricted to test subjects characterized as “low performing”. Furthermore, both drugs showed a similar pattern of effect across both tests. The present studies provide a framework for the future study of ketamine and psilocybin at low doses and plasma exposures, and help to establish the use of these lower concentrations of serotonergic and dissociative hallucinogens both as a valid scientific construct, and as having a therapeutic utility.

Discussion

The present series of experiments was designed to evaluate the behavioral properties of low doses and plasma concentrations of ketamine and psilocybin in the rat, with a view to identifying behavioral effects that might be relevant to the antidepressant and other therapeutic potential of both drugs. One of the first challenges for this line of research is defining a low dose range of ketamine and psilocybin. The approach taken in this study was to establish the doses and plasma exposures of each drug that produce the stereotyped behaviors characteristic of each drug and its distinct pharmacological class. Since behavioral stereotypies are often considered the preclinical proxy for psychomimetic properties (Hanks and Gonzalez-Maeso, 2013; Halberstadt and Geyer, 2018), we focused on doses just below the threshold for their induction. Based on this criterion, we identified ketamine and psilocybin doses (and plasma exposures) of 0.3–3 mg/kg (10–70 ng/ml) and 0.05–0.1 mg/kg (7–12 ng/ml [psilocin]), respectively, for investigation.

Preclinical studies explicitly examining low (“micro”) doses of ketamine and psilocybin are beginning to appear in the literature (Horsley et al., 2018; Meinhardt et al., 2020), albeit without any demonstration of potential beneficial effects. One limitation of these studies is that antidepressant potential has typically been investigated using tests such as forced swim and elevated plus maze, which lack human equivalence. These tests also overlook the trend toward deconstructing complex clinical disorders into endophenotypes that may be more amenable to preclinical study and translation across the preclinical-clinical spectrum (Day et al., 2008; Markou et al., 2009). A diagnosis of MDD includes symptoms of depressed mood, anhedonia, fatigue/loss of energy (anergia), cognitive deficits including diminished/slowed ability to think or concentrate, and feelings of guilt, worthlessness, and suicidal ideation (van Loo et al., 2012; American Psychiatric Association, 2013). Therefore, endophenotypes related to depression include anhedonia (impaired reward function), amotivation (lack of motivation/purpose), and impaired cognitive function (Hasler et al., 2004; Atique-Ur-Rehman and Neill, 2019; Treadway and Zald, 2011), which we addressed through the progressive ratio and 5-choice tasks.

A further consideration in the design of these experiments was the expectation that any effect of ketamine and psilocybin at low plasma concentrations was likely to be subtle, and potentially variable across a sample population (see Horsley et al., 2018; Cameron et al., 2019; Meinhardt et al., 2020). We therefore exploited the heterogeneous performance levels of rat populations across tasks such as the PR and 5-CSRTT. Rats may be categorized based on performance differences in progressive ratio break point, and thus serve as models of high vs. low motivation (Randall et al., 2012; Randall et al., 2015). Similarly, rats may be categorized according to attentional accuracy or impulsive action under specific challenge conditions, providing models of high vs. low attention or impulsivity (Blondeau and Dellu-Hagedorn, 2007; Jupp et al., 2013; Hayward et al., 2016; Higgins et al., 2020a; Higgins et al., 2020b). Consequently, rats showing low motivation and/or attention may represent models of specific depression-relevant endophenotypes (Hasler et al., 2004; Treadway and Zald, 2011; Atique-Ur-Rehman and Neill, 2019). We identified three important considerations for this subgrouping approach: firstly, the need to establish that any performance subgroup classification is enduring; secondly, the need to establish that “poor” performance is not a consequence of factors such as ill health; and thirdly, the need for large sample sizes to ensure that subgroups were adequately separated and powered (Button et al., 2013). To address the first, high/low performance subgroups were allotted based on 5–10 days of baseline performance. Control experiments conducted on the PR and 5-choice study cohorts confirmed that “low performance” was not associated with ill health or sensorimotor deficit. To address the third, while ensuring at least some separation between subgroups and giving due consideration to the principle of the 3Rs (replacement, refinement, reduction), we adopted the extreme tertile groups.
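As an illustration of this extreme-tertile split, here is a minimal sketch in Python; the data are simulated and the column names are hypothetical, not taken from the study:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical data: one row per rat, break point averaged over 5-10 baseline days.
baseline = pd.DataFrame({
    "rat_id": np.arange(24),
    "mean_break_point": rng.normal(50, 12, size=24),  # simulated baseline scores
})

# Assign tertiles by baseline performance and keep only the extreme groups,
# discarding the middle tertile to ensure clear separation between subgroups.
baseline["tertile"] = pd.qcut(baseline["mean_break_point"], q=3,
                              labels=["low", "mid", "high"])
extremes = baseline[baseline["tertile"].isin(["low", "high"])]
print(extremes.groupby("tertile", observed=True)["mean_break_point"].describe())
```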

Considered as a whole, i.e., without subgrouping, and despite group sizes of N = 24–72, we failed to identify any positive effect of ketamine or psilocybin on motivation or attention over the tested dose range. The most robust finding was a trend toward a decline in performance following the 6 mg/kg dose of ketamine, indicating the early phase of the descending limb of a biphasic dose response. This was confirmed by parallel experiments identifying an even greater performance decline at 10 mg/kg (data not shown, but see Gastambide et al., 2013; Benn and Robinson, 2014; Nikiforuk and Popik, 2014).

Subgrouping rats based on break point and number of lever presses for food made available under a PR schedule of reinforcement identified rats that consistently ceased responding early (“low” responders), leading to low break points. Interestingly these rats had similar body weights, free feeding measures and open field activity compared to their high responder counterparts, suggesting any differences were unrelated to general health status, neurological function or appetite. In these low performers, both psilocybin (0.05–0.1 mg/kg) and ketamine (1–3 mg/kg) increased break point suggesting an increase in task motivation. These findings suggest that low doses of ketamine may relieve certain clinical signs related to depression (Xu et al., 2016), and further suggest that the doses and plasma concentrations of ketamine and psilocybin as described in the present study may have utility in treating subtypes of mental illnesses characterized by amotivation and anhedonia in particular.

In the 5-CSRTT, the effects of ketamine and psilocybin were evaluated in two separate task schedules. In the first, rats were tested under standard conditions (0.75 s SD, 5 s ITI). Segregation of rats into high and low performers based on accuracy (% correct) revealed a trend for both psilocybin and ketamine to increase accuracy at doses equivalent to those effective in the PR task. In the case of psilocybin, the more robust measure of efficacy was the % hit measure, which accounts for errors of omission as well as commission (incorrect responses). Speed of responding was also marginally increased, further supporting a performance improvement.

The second 5-CSRTT experiment used an extended ITI (10 s vs. 5 s) and a reduced stimulus duration (0.3 s vs. 0.75 s). The principal challenge is to response control: lengthening the ITI from 5 s to 10 s produces a significant increase in both PREM and PSV responses, a consistent and widely reported finding (Robbins, 2002; Jupp et al., 2013; Barlow et al., 2018; Higgins et al., 2020a,b). Subgrouping rats, based on the level of PREM responses under the 10 s ITI schedule, into “low” and “high” impulsives (LI vs. HI) highlights the wide range of responders typically seen under this schedule (Jupp et al., 2013; Fink et al., 2015; Barlow et al., 2018; Higgins et al., 2020a). Importantly, there is reasonable consistency of performance on this measure over repeated tests, as demonstrated by the HI rats having higher PREM scores under the 5 s ITI, albeit at markedly lower levels. PSV responses were also higher in the HI cohort, consistent with the HI rats demonstrating a deficit in inhibitory response control.

Similar findings for both ketamine and psilocybin were noted in this test schedule. While neither drug affected accuracy (measured as % correct), either in all rats or in the HI/LI subgroups, both increased PREM and PSV responses in the LI cohort, supporting an increase in impulsive action. It should be noted that the magnitude of change produced by both ketamine and psilocybin was relatively small (∼2-fold) and confined to the LI subgroup. Certainly, the magnitude of change contrasted sharply with the 4-fold increase noted in rats pretreated with dizocilpine under the same 10 s ITI schedule (see also Higgins et al., 2005, 2016; Benn and Robinson, 2014). Previous studies have also described increased PREM responses following pretreatment with the phenethylamine 5-HT2A agonist DOI (Koskinen et al., 2000; Koskinen and Sirvio, 2001; Blokland et al., 2005; Wischhof and Koch, 2012; Fink et al., 2015), typically at doses lower than those which induce signs of WDS/BMC (Fink et al., 2015; Halberstadt and Geyer, 2018).

Impulsivity is a construct that may be viewed in two forms: functional and dysfunctional (Dickman, 1990). Dysfunctional impulsivity is associated with psychiatric conditions such as substance abuse and OCD and thus carries a negative connotation. For example, associations between a high-impulsive trait and drug-seeking behaviors have been reported both preclinically and clinically (Grant and Chamberlain, 2004; Jupp et al., 2013). Functional impulsivity has been described as a tendency to make quick decisions when it is beneficial to do so, and may be related to traits such as enthusiasm, adventurousness, activity, extraversion, and narcissism. Individuals with high functional impulsivity are also reported to have enhanced executive functioning overall (Dickman, 1990; Zadravec et al., 2005; Burnett Heyes et al., 2012). Viewed in this more positive context, the capacity of psilocybin and ketamine to promote impulsive behavior selectively in an LI cohort may be relevant to their potential to treat depression and other mental disorders.

One advantage of being able to study pharmacological effects at low doses in an experimental setting is the ability to probe for an underlying neurobiological mechanism, which would serve to establish this pattern of use within a scientific framework. Presumably these doses result in a low level of target site occupancy, which in the case of psilocybin is the serotonin 5-HT2A receptor (Vollenweider et al., 1998; Tylš et al., 2014; Nichols, 2016; Kyzar et al., 2017). At higher doses and plasma exposures, and consequently higher levels of target occupancy, psychomimetic effects begin to emerge. In this respect, the recent study of Madsen et al. (2019) is of interest. These workers reported a correlation between the psychedelic effects of psilocybin (40–100% of the Likert scale maximum) and CNS 5-HT2A receptor occupancy (43–72%) and plasma psilocin levels (2–15 ng/ml). Increases in subjective intensity were correlated with both increases in 5-HT2A receptor occupancy and psilocin exposure. Based on these data, it is estimated that at 5-HT2A receptor occupancies up to ∼15%, no perceptual effects occur (Madsen and Knudsen, 2020).
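For context, the occupancy-exposure relationship is commonly summarized with a single-site Hill-Langmuir model. A minimal sketch follows; the EC50 below is an illustrative made-up value, not an estimate from Madsen et al. (2019):

```python
def occupancy(conc: float, ec50: float = 5.0, occ_max: float = 1.0) -> float:
    """Fractional receptor occupancy under a single-site binding model.

    conc and ec50 share the same units (e.g., ng/ml plasma psilocin);
    both parameter values here are illustrative assumptions, not fitted estimates.
    """
    return occ_max * conc / (ec50 + conc)

# The curve rises steeply at low concentrations and saturates at high ones,
# which is why small changes in a low plasma exposure can shift occupancy
# disproportionately compared with the same change at a high exposure.
for conc in (0.5, 2.0, 5.0, 15.0, 50.0):
    print(f"{conc:5.1f} ng/ml -> {occupancy(conc):.0%} occupancy")
```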

5-HT2A receptors are widely distributed within cortical zones, notably layers II–V (Santana et al., 2004; Mengod et al., 2015), and also in subcortical regions such as the DA nigrostriatal and mesocorticolimbic pathways, where they appear to positively regulate tone, at least under certain physiological conditions (Doherty and Pickel, 2000; Nocjar et al., 2002; Bortolozzi et al., 2005; Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017). One plausible hypothesis is that at low nanomolar plasma concentrations, psilocybin (or LSD, mescaline, etc.) may preferentially target a subset of 5-HT2A receptors, possibly those localized to subcortical DA systems, where activation has been reported to increase the firing and tonicity of these pathways (Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017 for reviews). In turn, this might be expected to promote behaviors related to motivation, attention, and impulse control, as noted in the PR and 5-choice experiments. Activation of cortical 5-HT2A receptors may account for the subjective/perceptual effects once a critical (higher) plasma drug threshold has been reached (Nichols, 2016; Kyzar et al., 2017; Madsen et al., 2019; Vollenweider and Preller, 2020).

In the case of ketamine, the relevant target is most likely the NMDA subtype of glutamate receptor (Lodge and Mercier, 2015; Mathews et al., 2012; Corriger and Pickering, 2019; although note Zanos et al., 2018), a tetrameric receptor complex composed of NR1 subunits combined with NR2A-D subunits and, in some cases, NR3A-B subunits. The NR2A-D subunits are anatomically distinct, with the NR2A and NR2B subunits predominant in the forebrain; the NR1 subunit has a broader distribution, being a constituent of all NMDA channels (Kew and Kemp, 2005; Traynelis et al., 2010). Potentially, at low ketamine doses there may be a preferential interaction between ketamine and specific NMDA channel subtypes (see Lodge and Mercier, 2015), and/or regional subpopulations, which underlies the pharmacological effects of these doses of ketamine in preclinical and clinical contexts. We and others have reported on apparently pro-cognitive effects of non-competitive NMDA antagonists, typically dizocilpine, when tested at low doses (Mondadori et al., 1989; Jackson et al., 2004; Higgins et al., 2003, 2016; Guidi et al., 2015). A better understanding of the neurobiological mechanisms that underlie these effects may provide useful insight toward understanding the clinical benefit of low doses of ketamine in humans.

An interesting feature to emerge from this work was the similar profile of ketamine and psilocybin across the PR and 5-choice experiments. Both drugs increased break point in low performers, improved attention in low-performer subgroups, and increased PREM/PSV responses in LI rats. Horsley et al. (2018) also reported a similar pattern for both drugs across various elevated plus maze measures, although the effects were suggestive of a mild anxiogenic profile. Despite their differing pharmacology, there is accumulating evidence from a variety of sources that the NMDA and 5-HT2A receptors are functionally intertwined. First, Vollenweider has highlighted the overlapping psychotic syndromes produced by serotonergic hallucinogens and psychotomimetic anesthetics, associated with a marked activation of the prefrontal cortex and other overlapping changes in temporoparietal, striatal, and thalamic regions (Vollenweider, 2001; Vollenweider and Kometer, 2010), suggesting that both classes of drugs may act upon a common final pathway. Second, 5-HT2A receptor antagonists attenuate a variety of putative psychosis-related behaviors induced by NMDA channel block, including behavioral stereotypy and disrupted PPI (Varty and Higgins, 1995; Varty et al., 1999; Higgins et al., 2003), a property that likely contributes to the antipsychotic efficacy of atypical neuroleptics such as clozapine and risperidone (Meltzer, 1999; Remington, 2003). Furthermore, a cellular coexpression of 5-HT2A and NMDA receptors has been described in multiple brain regions, including the VTA, striatum, and cortex (Wang and Liang, 1998; Rodriguez et al., 1999; Rodriguez et al., 2000). Therefore, studying these drugs at the low dose range may also provide further insights into how these receptor systems interact.

In conclusion, the present studies have characterized, for the first time, a positive effect of ketamine (0.3–3 mg/kg; plasma 10–70 ng/ml) and psilocybin (0.05–0.1 mg/kg; psilocin plasma 7–12 ng/ml) on behaviors related to endophenotypes of amotivation and anhedonia. The overall effect sizes are modest, which might be expected at the doses and concentrations studied, where the degree of target occupancy is likely to be low and subject to individual differences in drug pharmacodynamics and pharmacokinetics. Each of these factors will impact treatment response across a study population (Levy, 1998; Dorne, 2004). Limitations of the present study include the restriction to male test subjects and to single acute doses. Future studies should extend to both male and female subjects and to alternative dosing schedules. Nonetheless, the studies are important in that they define a potentially efficacious dose and plasma exposure range and provide a framework for early safety studies and further scientific investigation into the neurobiology of these drugs in the low dose range.

It seems that if individuals are frequently exposed to non-like-minded information, they often feel negative emotions and are, therefore, more likely to use incivility

The Effect of Exposure to (Non-)Like-Minded Information on the Use of Political Incivility on Twitter. Kohei Nishi. Advance: Social Sciences & Humanities, Mar 11 2021. https://advance.sagepub.com/articles/preprint/The_Effect_of_Exposure_to_Non-_Like-Minded_Information_on_the_Use_of_Political_Incivility_on_Twitter/14191046/1


Abstract: Does exposure to like-minded/non-like-minded information lead to the use of political incivility? Few studies have investigated this question, and the results have been mixed. There are two conflicting possibilities: (i) if individuals are frequently exposed to like-minded political information, they reinforce their pre-existing beliefs and are, thus, more likely to use uncivil language, and (ii) if individuals are frequently exposed to non-like-minded information, they often feel negative emotions and are, therefore, more likely to use incivility. To evaluate these two competing hypotheses, I analyze Japanese Twitter data using a semi-supervised learning method. The results show that individuals who are exposed to non-like-minded information are more likely to use political incivility.
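The abstract does not specify which semi-supervised method was used. As one common instance of that family, a self-training classifier can propagate labels from a small hand-coded set of tweets to a larger unlabeled set. A minimal scikit-learn sketch with toy English examples follows (real Japanese tweets would first need word segmentation, e.g., with MeCab):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical corpus: a few hand-labeled tweets (1 = uncivil, 0 = civil)
# and unlabeled tweets marked -1, per scikit-learn's convention.
texts = ["you absolute idiot", "I disagree with this policy",
         "total moron take", "interesting point, thanks",
         "what a clown", "worth reading the full report"]
labels = [1, 0, -1, -1, -1, -1]

# Self-training: fit on the labeled data, then iteratively add confident
# predictions on the unlabeled data back into the training set.
clf = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(), threshold=0.75),
)
clf.fit(texts, labels)
print(clf.predict(["you utter clown"]))
```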


There is a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation

The Psychology of Fake News. Gordon Pennycook, David G. Rand. Trends in Cognitive Sciences, March 15 2021. https://doi.org/10.1016/j.tics.2021.02.007

Highlights

Recent evidence contradicts the common narrative that partisanship and politically motivated reasoning explain why people fall for 'fake news'.

Poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, as well as to the use of familiarity and source heuristics.

There is also a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation.

Effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.


Abstract: We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.

Keywords: fake news, misinformation, social media, news media, motivated reasoning, dual process theory, crowdsourcing, attention, information sharing

What Can Be Done? Interventions To Fight Fake News

We now turn to the implications of these findings for interventions intended to decrease the spread and impact of online misinformation.

Current Approaches for Fighting Misinformation

As social media companies are, first and foremost, technology companies, a common approach is the automated detection of problematic news via machine learning, natural language processing, and network analysis [74,75,76]. Content classified as problematic is then down-ranked by the ranking algorithm so that users are less likely to see it. However, creating an effective misinformation classifier faces two fundamental challenges. First, truth is not a black-and-white, clearly defined property: even professional fact-checkers often disagree on how exactly to classify content [77,78]. Thus, it is difficult to decide what content and features should be included in training sets, and artificial intelligence approaches run the risk of false positives and, therefore, of unjustified censorship [79]. Second, there is the problem of nonstationarity: misinformation content tends to evolve rapidly, and therefore the features that are effective at identifying misinformation today may not be effective tomorrow. Consider, for example, the rise of COVID-19 misinformation in 2020: classifiers trained largely on political content were likely ill-equipped to handle novel false and misleading claims relating to health.
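To make the down-ranking mechanism concrete, here is a minimal sketch in Python. The misinformation probability stands in for a trained classifier's output, and the penalty weight is an illustrative assumption, not any platform's actual value:

```python
from typing import List, Tuple

def downrank(feed: List[Tuple[str, float, float]], penalty: float = 3.0) -> List[str]:
    """Re-order a feed of (item, engagement_score, p_misinfo) triples.

    Items predicted to be misinformation are demoted in proportion to the
    classifier's confidence; penalty controls how aggressively.
    """
    scored = ((item, engagement - penalty * p_misinfo)
              for item, engagement, p_misinfo in feed)
    return [item for item, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

# Hypothetical feed: high-engagement misinformation gets pushed below
# lower-engagement but reliable items.
feed = [("shocking miracle cure!", 0.9, 0.80),
        ("local council meeting recap", 0.3, 0.05),
        ("senate passes budget bill", 0.5, 0.10)]
print(downrank(feed))
```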

Another commonly used approach involves attaching warnings to content that professional fact-checkers have found to be false (reviewed in [80,81]). A great deal of evidence indicates that corrections and warnings do successfully reduce misperceptions [41,81,82,83] and sharing [49,84,85]. Despite some early evidence that corrections could backfire and increase belief in false content [86], recent work has shown that these backfire effects are extremely uncommon and are not a cause for serious concern [87,88].

There are, however, other reasons to be cautious about the sufficiency of professional fact-checking. Most importantly, fact-checking is simply not scalable – it typically requires substantial time and effort to investigate whether a particular claim is false or misleading. Thus, many (if not most) false claims never get fact-checked. Even for those claims that do eventually get flagged, the process is often slow, such that warnings are likely to be absent during the claim's period of peak viral spread. Furthermore, warnings are typically only attached to blatantly false news, and not to extremely misleading or biased coverage of events that actually occurred. In addition to straightforwardly undermining the reach of fact-checks, this sparse application of warnings could lead to an 'implied truth' effect whereby users assume that (false or misleading) headlines without warnings have actually been verified [84]. Fact-checks also often fail to reach their intended audience [89], may fade over time [90], provide incomplete protection against familiarity effects [49], and can cause corrected users to subsequently share more low-quality and partisan content [91].

Another potential approach that is commonly referenced is emphasizing the publishers of news articles, seeking to leverage the reliance on source cues described earlier. In theory, this could be effective because people (at least in the USA) are actually fairly good at distinguishing between low- and high-quality publishers [92]. However, the experimental evidence on emphasizing news publishers is not very encouraging: numerous studies find that making source information more salient (or removing it entirely) has little impact on whether people judge headlines to be accurate or inaccurate [37,93,94,95,96,97] (although see [98,99]).

New Approaches for Fighting Misinformation

One potentially promising alternative class of interventions involves more proactive 'inoculation' or 'prebunking' against misinformation [8,100]. For example, the 'Bad News Game' uses a 10–20 minute interactive tutorial to teach people how to identify fake news in an engaging way [101]. An important limitation of such approaches is that they are 'opt in': people have to actively choose to engage with the inoculation technique, often for a fairly substantial amount of time, at least by the standards of the internet attention span [102]. This is particularly problematic given that those most in need of 'inoculation' against misinformation (e.g., people who are low on cognitive reflection) may be the least likely to seek out and participate in lengthy inoculations. Lighter-touch forms of inoculation that simply present people with information that helps them to identify misinformation (e.g., in the context of climate change [103]) may be more scalable. For example, presenting a simple list of 12 digital media literacy tips improved people's capacity to discern between true and false news in the USA and India [104].

Both fact-checking and inoculation approaches are fundamentally directed toward improving people's underlying knowledge or skills. However, as noted earlier, recent evidence indicates that misinformation may spread on social media not only because people are confused or lack the competency to recognize fake news, but also (or even mostly) because people fail to consider accuracy at all when they make choices about what to share online [21,44]. In addition, as mentioned, people who are more intuitive tend to be worse at distinguishing between true and false news content, both in terms of belief (Figure 1A) and sharing [35,71]. This work suggests that interventions aimed at getting people to slow down and reflect on the accuracy of what they see on social media may be effective in slowing the spread of misinformation.

Indeed, recent research shows that a simple accuracy prompt – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) before making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments [21,44]. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a politically neutral news headline were sent to thousands of accounts that had recently shared links to misinformation sites [21]. This subtle prompt significantly increased the quality of the news they subsequently shared (Figure 2B). Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment [105], and having participants rate accuracy at the time of encoding protects against familiarity effects [106]. Relatedly, metacognitive prompts – probing questions that make people reflect – increase resistance to inaccurate information [107].

A major advantage of such accuracy prompts is that they are readily scalable. There are many ways that social media companies, or other interested parties such as governments or civil society organizations, could shift people's attention to accuracy (e.g., through ads, by asking about the accuracy of content that is shared, or via public service announcements). In addition to scalability, accuracy prompts have the normative advantage of not relying on a centralized arbiter to determine truth versus falsehood. Instead, they leverage users' own (often latent) ability to make such determinations themselves, preserving user autonomy. Naturally, this will not be effective for everyone all of the time, but it could have a positive effect in the aggregate as one of the various tools used to combat misinformation.

Finally, platforms could also harness the power of human reasoning and the 'wisdom of crowds' to improve the performance of machine-learning approaches. While professional fact-checking is not easily scalable, it is much more tractable for platforms to have large numbers of non-experts rate news content. Despite potential concerns about political bias or lack of knowledge, recent work has found high agreement between layperson crowds and fact-checkers when evaluating the trustworthiness of news publishers: the average Democrat, Republican, and fact-checker all gave fake news and hyperpartisan sites very low trust ratings [92] (Figure 3A). This remained true even when layperson raters were told that their responses would influence social media ranking algorithms, creating an incentive to 'game the system' [108]. However, these studies also revealed a weakness of publisher-based crowd ratings: familiarity with a publisher was necessary (although not sufficient) for trust, meaning that new or niche publishers are unfairly punished by such a rating scheme. One solution to this problem is to have laypeople rate the accuracy of individual articles or headlines (rather than publishers), and then aggregate these item-level ratings into average scores for each publisher (Figure 3B). Furthermore, the layperson ratings of the articles themselves are also useful. An analysis of headlines flagged for fact-checking by an internal Facebook algorithm found that the average layperson accuracy rating from fairly small crowds correlated as well with professional fact-checkers' ratings as the fact-checkers' ratings correlated with each other [77]. Thus, using crowdsourcing to add a 'human in the loop' element to misinformation detection algorithms is promising.
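A minimal sketch of the two-step aggregation described above, from item-level layperson ratings to publisher-level scores, using hypothetical data (pandas assumed):

```python
import pandas as pd

# Hypothetical layperson ratings: each row is one rater's accuracy rating
# (say, on a 1-7 scale) of one headline from one publisher.
ratings = pd.DataFrame({
    "publisher": ["siteA", "siteA", "siteA", "siteB", "siteB", "siteB"],
    "headline_id": [1, 1, 2, 3, 3, 4],
    "rating": [2, 3, 2, 6, 5, 6],
})

# Step 1: average raters within each headline (item-level score).
item_scores = ratings.groupby(["publisher", "headline_id"])["rating"].mean()

# Step 2: average a publisher's items into a publisher-level score, so new
# or niche publishers are judged by their content rather than by familiarity.
publisher_scores = item_scores.groupby(level="publisher").mean()
print(publisher_scores)
```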

These observations about the utility of layperson ratings have a strong synergy with the aforementioned idea of prompts that shift users' attention to accuracy: periodically asking social media users to rate the accuracy of random headlines both (i) shifts attention to accuracy and thus induces the users to be more discerning in their subsequent sharing, and (ii) generates useful ratings to help inform ranking algorithms.

Despite generous Danish social policies, more advantaged families are better able to access, utilize, & influence universally available programs; & purposive sorting by levels of family advantage creates neighborhood effects

Lessons from Denmark about Inequality and Social Mobility. James J. Heckman & Rasmus Landersø. NBER Working Paper 28543, March 2021. DOI 10.3386/w28543

Abstract: Many American policy analysts point to Denmark as a model welfare state with low levels of income inequality and high levels of income mobility across generations. It has in place many social policies now advocated for adoption in the U.S. Despite generous Danish social policies, family influence on important child outcomes in Denmark is about as strong as it is in the United States. More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage creates neighborhood effects. Powerful forces not easily mitigated by Danish-style welfare state programs operate in both countries.


Controlling and drinking behaviour and partners’ unemployment status were identified as important factors for married women experiencing intimate partner violence

Mapping of Intimate Partner Violence: Evidence From a National Population Survey. Muluken Dessalegn Muluneh et al. Journal of Interpersonal Violence, March 8, 2021. https://doi.org/10.1177/0886260521997954

Abstract: Evidence on the geographical distribution of intimate partner violence (IPV) and its associated factors can inform regional and national health programs on women’s health. In 2016, 4,720 married women aged 15-49 years were interviewed about IPV; these data were extracted from the Ethiopian Demographic and Health Survey (EDHS) in 2020. The sample was selected by a two-stage cluster survey of women. The analysis used logistic regression adjusted for clustering and sampling weights. Moreover, weighted proportions of IPV were exported to ArcGIS to conduct autocorrelation analyses assessing the clustering of IPV. Amongst the 4,469 married women aged 15 to 49 years included in the analysis, 34% (95% CI, 31.4%-36.3%) experienced IPV, 23.5% (95% CI, 21.5%-25.7%) experienced physical violence, 10.1% (95% CI, 8.7%-11.7%) experienced sexual violence, and 24% (95% CI, 21.7%-26.4%) experienced emotional violence. A partner’s controlling behaviour [AOR: 3.94; 95% CI, 3.03-5.12], a partner’s alcohol consumption [AOR: 2.59; 95% CI, 1.80-3.71], partner educational qualifications [AOR: 2.16; 95% CI, 1.26-3.71], a woman birthing more than five children [AOR: 1.70; 95% CI, 1.12-2.56], and a history of the woman’s father being physically violent towards her mother [AOR: 1.99; 95% CI, 1.52-2.59] were associated with an increased risk of IPV amongst married women in Ethiopia. Western and Central Oromia, Western Amhara, Gambella, Central Tigray, and Hararri were identified as hot spot areas in Ethiopia (p<0.001). In this study, there was significant geographic clustering of IPV in Ethiopia. Controlling and drinking behaviour and a partner’s unemployment status were identified as important factors for married women experiencing IPV. Hence, there is a need for context-driven, evidence-based intervention design to reduce the impact of IPV.


Keywords: domestic violence, sexual assault, cultural contexts, alcohol and drugs, assessment, female offenders
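A minimal sketch of the analysis pipeline described in the abstract above, i.e., logistic regression with sampling weights and cluster adjustment. The data are simulated and the variable names are placeholders, not the EDHS codebook's (statsmodels assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Simulated stand-in for a survey extract: outcome, two illustrative
# covariates, an integer expansion weight, and an enumeration-cluster id.
df = pd.DataFrame({
    "ipv": rng.integers(0, 2, n),
    "controlling_partner": rng.integers(0, 2, n),
    "partner_alcohol": rng.integers(0, 2, n),
    "sample_weight": rng.integers(1, 4, n),
    "cluster_id": rng.integers(0, 40, n),
})

# Weighted logistic regression with cluster-robust standard errors.
# freq_weights is a simple stand-in for survey weighting; a full
# design-based analysis would also account for stratification.
model = smf.glm(
    "ipv ~ controlling_partner + partner_alcohol",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["sample_weight"],
).fit(cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})
print(model.summary())
```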



Monday, March 15, 2021

CEO Stress, Aging, and Death

CEO Stress, Aging, and Death. Mark Borgschulte, Marius Guenzel, Canyao Liu & Ulrike Malmendier. NBER Working Paper 28550. DOI 10.3386/w28550

Abstract: We estimate the long-term effects of experiencing high levels of job demands on the mortality and aging of CEOs. The estimation exploits variation in takeover protection and industry crises. First, using hand-collected data on the dates of birth and death for 1,605 CEOs of large, publicly-listed U.S. firms, we estimate the resulting changes in mortality. The hazard estimates indicate that CEOs’ lifespan increases by two years when insulated from market discipline via anti-takeover laws, and decreases by 1.5 years in response to an industry-wide downturn. Second, we apply neural-network based machine-learning techniques to assess visible signs of aging in pictures of CEOs. We estimate that exposure to a distress shock during the Great Recession increases CEOs’ apparent age by one year over the next decade. Our findings imply significant health costs of managerial stress, also relative to known health risks.
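A minimal sketch of the hazard-regression approach summarized above, using simulated data and the lifelines package; the column names and coefficient values are illustrative, not the paper's:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200

# Hypothetical CEO-level data: exposure dummies stand in for the paper's
# sources of variation (anti-takeover protection, industry distress).
protected = rng.integers(0, 2, n)
distress = rng.integers(0, 2, n)
hazard = 0.04 * np.exp(-0.3 * protected + 0.25 * distress)  # made-up effects
time = rng.exponential(1 / hazard)
died = (time < 40).astype(int)  # censor follow-up at 40 years
df = pd.DataFrame({"duration": np.minimum(time, 40.0), "died": died,
                   "antitakeover_protected": protected,
                   "industry_distress": distress})

# Proportional-hazards regression: a negative coefficient means a lower
# mortality hazard (longer expected lifespan), a positive one the reverse.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="died")
cph.print_summary()
```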


Human energy increases continuously during the weekend, drops on Monday, follows a passageway trajectory from Monday to Thursday, and increases again on Friday; increases in sleep quality predicted energy changes

Continuity in transition – Combining recovery and day-of-week perspectives to understand changes in employee energy across the seven-day week. Oliver Weigelt, Katja Siestrup, Roman Prem. Journal of Organizational Behavior, March 14 2021. https://doi.org/10.1002/job.2514

Summary: We integrate perspectives from research on recovery from work with perspectives from day-of-week research to predict continuous as well as discontinuous changes in vitality and fatigue. We examine whether changes in recovery experiences and sleep quality predict changes in human energy over the course of the weekend. Furthermore, we consider positive anticipation of work at the start of the workweek and effort during the workweek to predict changes in energy. We collected experience sampling data from 87 employees over the course of twelve days. In total, 2,187 observations nested in 972 days were eligible for analysis. Applying discontinuous growth curve modeling, we found that human energy increases continuously during the weekend, drops on Monday, follows a passageway trajectory from Monday to Thursday, and increases again on Friday. Changes in recovery experiences did not predict changes in energy, but increases in sleep quality did. Positive anticipation of work attenuated the drop in vitality on Monday. Effort did not predict changes in energy over the course of the workweek. Our results suggest that the transition between weekends and workweeks (and vice versa) accounts for considerable changes in human energy, and that weekends are recuperative particularly because they provide the opportunity for better sleep.
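Discontinuous growth curve modeling of this kind codes a continuous time trend plus discontinuity ("jump") terms at each transition. A minimal sketch with simulated data and hypothetical variable names follows (statsmodels; the paper's model is richer, e.g., with separate slopes per segment):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
days = ["Sat", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri"] * 2  # illustrative two-week window

# Simulated long-format data: one vitality rating per person per study day.
records = []
for person in range(60):
    base = rng.normal(4.0, 0.5)  # person-specific baseline vitality
    for t, day in enumerate(days):
        records.append({"person_id": person, "study_day": t, "day_of_week": day,
                        "vitality": base + 0.02 * t + rng.normal(0, 0.3)})
df = pd.DataFrame(records)

# Discontinuity terms: dummies marking the weekend-to-workweek (Monday)
# and workweek-to-weekend (Friday) transitions.
df["monday_jump"] = (df["day_of_week"] == "Mon").astype(int)
df["friday_jump"] = (df["day_of_week"] == "Fri").astype(int)

# Random-intercept growth model with observations nested in persons.
model = smf.mixedlm("vitality ~ study_day + monday_jump + friday_jump",
                    data=df, groups=df["person_id"]).fit()
print(model.summary())
```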


Sexual violence in college men: Some predictors are Hostile Masculinity, Impersonal Sex, lower empathy, peer support, extreme pornography use, and participation in alcohol parties

Factors predictive of sexual violence: Testing the four pillars of the Confluence Model in a large diverse sample of college men. Neil M. Malamuth, Raina V. Lamade, Mary P. Koss, Elise Lopez, Christopher Seaman, Robert Prentky. Aggressive Behavior, March 14 2021. https://doi.org/10.1002/ab.21960

Abstract: This article focuses on the characteristics of sexually violent men who have not been convicted of a crime. The objective of this study was to test the four key interrelated pillars of the Confluence Model. The first pillar posits the interaction of Hostile Masculinity and Impersonal Sex as core risk predictors. The second pillar entails a "mediated structure" wherein the impact of more general risk factors is mediated via those specific to aggression against women. The third pillar comprises a single latent factor underlying various types of sexual violence. The fourth pillar expands the core model by including the secondary risk factors of lower empathy, peer support, extreme pornography use, and participation in alcohol parties. An ethnically diverse sample of 1,148 male students from 13 U.S. colleges and universities completed a comprehensive survey that assessed the hypothesized risk factors and self-reported sexual violence, which included noncontact sexual offenses, contact sexual coercion, and contact sexual aggression. A series of multiple regression analyses were conducted before testing structural equation models. The results supported the integration of the four pillars within a single expanded empirical model that accounted for 49% of the variance in sexual violence. This study yielded data supporting all four key pillars. These findings provide information about non-redundant risk factors that can be used to develop screening tools, and group-based and individually tailored psychoeducational and treatment interventions.

Check also An Evolutionary Perspective on Sexual Assault and Implications for Interventions. Mark Huppin, Neil M. Malamuth, Daniel Linz. In the Handbook of Sexual Assault and Sexual Assault Prevention pp 17-44, October 19 2019. https://www.bipartisanalliance.com/2019/11/an-evolutionary-perspective-on-sexual.html


Spending money on pets promotes happiness

Give a dog a bone: Spending money on pets promotes happiness. Michael W. White, Nazia Khan, Jennifer S. Deren, Jessica J. Sim & Elizabeth A. Majka. The Journal of Positive Psychology, Mar 13 2021, https://doi.org/10.1080/17439760.2021.1897871

Abstract: Pet owners routinely spend money on services, accessories, and gifts for their pets. The present research investigates the affective consequences of pet spending. Specifically, we propose that spending money on pets promotes happiness. As predicted, a lab study demonstrated that pet owners who were randomly assigned to recall a time they spent money on their pet reported greater happiness than those who recalled spending money on themselves. Likewise, a field study demonstrated that pet owners who were randomly assigned to spend $5 on their pet reported greater happiness than those who were assigned to spend on themselves or another person – an effect specific to feelings of happiness rather than to mood more generally. This research offers pet owners an evidence-based strategy for boosting happiness, representing an additional intentional activity that can be used to improve well-being.

KEYWORDS: Happiness, pets, money, well-being