Wednesday, March 17, 2021

Hippocampal volume has been associated with impaired & expert navigation, but large pre-registered studies found no such correlation in healthy adults; network models offer greater explanatory power for flexibility & individual differences

Hippocampal volume and navigational ability: The map(ping) is not to scale. Steven M. Weisberg, Arne D. Ekstrom. Neuroscience & Biobehavioral Reviews, March 17, 2021. https://doi.org/10.1016/j.neubiorev.2021.03.012

Rolf Degen's take: The way things are, we will probably never learn whether taxi drivers really have a larger hippocampus, as shaky science findings once seemed to show

Highlights

• Hippocampal volume has been associated with impaired and expert navigation.

• Large pre-registered studies found no such correlation in healthy adults.

• Theoretical mechanisms supporting structure-behavior associations are tenuous.

• Navigation is a complex cognitive function, involving multiple brain networks.

• Network models offer greater explanatory power for flexibility and individual differences.

Abstract: A critical question regards the neural basis of complex cognitive skill acquisition. One extensively studied skill is navigation, with evidence suggesting that humans vary widely in navigation abilities. Yet, data supporting the neural underpinning of these individual differences are mixed. Some evidence suggests robust structure-behavior relations between hippocampal volume and navigation ability, whereas other experiments show no such correlation. We focus on several possibilities for these discrepancies: 1) volumetric hippocampal changes are relevant only at the extreme ranges of navigational abilities; 2) hippocampal volume correlates across individuals but only for specific measures of navigation skill; 3) hippocampal volume itself does not correlate with navigation skill acquisition; connectivity patterns are more relevant. To explore this third possibility, we present a model emphasizing functional connectivity changes, particularly to extra-hippocampal structures. This class of models arises from the premise that navigation is dynamic and that good navigators flexibly solve spatial challenges. These models pave the way for research on other skills and provide more precise predictions for the neural basis of skill acquisition.

Keywords: Hippocampus; spatial navigation; MRI; functional connectivity; brain volume


Potential ‘super-spreaders’ (people with many social interactions, extraverts), the people most likely to widely spread COVID-19, are the most willing to take a free test, despite the increased probability of being quarantined

Testing for COVID-19: willful ignorance or selfless behavior? Linda Thunstroem et al. Behavioural Public Policy, Volume 5, Issue 2, April 2021, pp. 135-152. https://doi.org/10.1017/bpp.2020.15

Rolf Degen's take: Potential "super-spreaders," who would suffer the most from being quarantined, are nevertheless those who would most willingly submit to free COVID-19 testing

Abstract: Widespread testing is key to controlling the spread of COVID-19. But should we worry about self-selection bias in the testing? The recent literature on willful ignorance says we should – people often avoid health information. In the context of COVID-19, such willful ignorance can bias testing data. Furthermore, willful ignorance often arises when selfish wants conflict with social benefits, which might be particularly likely for potential ‘super-spreaders’ – people with many social interactions – given people who test positive are urged to self-isolate for two weeks. We design a survey in which participants (n = 897) choose whether to take a costless COVID-19 test. We find that 70% would take a test. Surprisingly, the people most likely to widely spread COVID-19 – the extraverts, others who meet more people in their daily lives and younger people – are the most willing to take a test. People's ability to financially or emotionally sustain self-isolation does not matter to their decision. We conclude that people are selfless in their decision to test for COVID-19. Our results are encouraging – they imply that COVID-19 testing may succeed in targeting those who generate the largest social benefits from self-isolation if infected, which strengthens the case for widespread testing.

Discussion

Widespread testing is one of the most important actions that US governments at any level can undertake to help slow down the spread of COVID-19. Given budget and testing supply constraints, it is likely that random, but voluntary, testing will be the most effective policy. We design a survey to examine the risks from self-selection into taking a COVID-19 test.

Overall, we observe that around 70% of people would agree to a costless COVID-19 test. We find that people who are more worried about their own health due to COVID-19 are more likely to test, as are young healthy people, relative to older healthy people. Ability to afford self-isolation for 14 days does not seem to affect the decision to test. Furthermore, people who worry more about their health, and people with health insurance or health coverage through Medicare or Medicaid, are more likely to take the test, as are people identifying as Democrats compared to Republicans.

Contrary to our expectation, we also find that potential ‘super-spreaders’ are more likely than other individuals to agree to a costless COVID-19 test. It could be that extroverts are more willing than expected to take a COVID-19 test because their private cost of doing so is unusually low due to the broadly implemented social distancing at the time of data collection for this study. If extroverts are already relatively isolated (i.e., due to a stay-at-home order and mandated closures by the state governor of public spaces, such as gyms, restaurants and bars), the personal cost of testing might be low. Furthermore, extroverts might be more likely to get infected if they socialize more, which could be a ‘selfish’ motivation to get tested. However, we control for the current level of compliance with social distancing, which should address both of these private motivations for increased probability of testing, and we find that people who comply more are less motivated to take the test. We also control for their worry about their own health due to COVID-19. Even so, the positive effect on willingness to test from being an extrovert persists. We therefore conclude that the positive effect of being an extrovert on willingness to test for COVID-19 is likely due to social health benefits weighing more heavily in their decision than their private costs from potential self-isolation for 14 days, should the test come back positive. The importance of the prosocial motive in determining COVID-19 testing is consistent with the results of the study by Jordan et al. (2020), who find that prosocial messages are more effective than self-interested messages in promoting behaviors that prevent the spread of COVID-19 (e.g., hand washing, forgoing hand shaking and hugging).
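
To make the control logic concrete, here is a minimal sketch of the kind of regression that underlies this argument. It is not the authors' published specification; the file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data, one row per respondent.
# would_test: 1 if the respondent says they would take the free test.
df = pd.read_csv("survey.csv")

# Logistic regression of willingness to test on extraversion, with the
# confounds discussed above as controls. If the extraversion coefficient
# remains positive after adding distancing compliance and health worry,
# those private motives cannot fully explain extraverts' willingness.
model = smf.logit(
    "would_test ~ extraversion + distancing_compliance"
    " + health_worry + age + can_afford_isolation",
    data=df,
).fit()
print(model.summary())
```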

Our results suggest that the risks of adverse selection (in terms of failing to target the people most likely to spread the virus) in testing for COVID-19 might be fairly low. This underscores the value of widespread testing, even if it cannot be truly random, and the importance of making such testing available nationwide in the USA as soon as possible.

An important shortcoming of our analysis is that it builds on hypothetical survey data. It is well documented that survey answers may be affected by a ‘hypothetical bias’, meaning that people answer one way in a survey and behave in a different way when faced with real, incentivized decisions. This risk pertains to our study as well, and the hypothetical bias might be particularly pronounced if the choice to test for COVID-19 is regarded as prosocial. Several studies suggest that a hypothetical bias is particularly likely when measuring prosocial behavior – people often exaggerate the extent to which they engage in such behavior (e.g., Murphy et al., 2005; Vossler et al., 2012; Jacquemet et al., 2013). Furthermore, it is possible that personal costs to the testing decision are less salient in a hypothetical context. Once testing is more widespread in the USA, it will be important to examine who actually chooses to get tested, and the extent to which they deviate from the general population. That said, hypothetical and incentivized behavior generally correlate, such that an analysis like ours can provide important insights into the potential pitfalls of voluntary testing, prior to the actual testing. This is useful information to have on hand when designing an efficient and cost-effective testing strategy.



Facet-level personality development in the transition to adolescence: Girls increase their conscientiousness, boys' stays relatively constant; agreeableness and introversion increase for both boys & girls

Brandes, C. M., Kushner, S. C., Herzhoff, K., & Tackett, J. L. (2020). Facet-level personality development in the transition to adolescence: Maturity, disruption, and gender differences. Journal of Personality and Social Psychology, Mar 2021. https://doi.org/10.1037/pspp0000367

Abstract: The transition to adolescence is marked by enormous change in social, biological, and personality development. Although accumulating evidence has offered insight into the nature of higher-order personality trait development during this period, much less is known about the development of lower-order personality traits, or “facets.” The current study used a cohort-sequential longitudinal design to examine domain- and facet-level trajectories for mother-reported personality traits during the early adolescent transition. Personality trait domains and facets were assessed with the Inventory of Child Individual Differences–Short Form (Deal, Halverson, Martin, Victor, & Baker, 2007). Participants were 440 children followed at 4 annual timepoints from middle childhood (Mage = 9.97, SD = 0.81) to early adolescence (Mage = 13.11, SD = 0.84). Results of latent growth curve models showed substantial facet-level personality stability in this period, as well as small to moderate linear change in 13 of 15 facets. Gender differences in change were evident for 9 facets. Overall patterns suggested consistent increases in agreeableness facets with null to small gender differences. Neuroticism and openness to experience facet change was heterogeneous within each domain, but patterns were similar for boys and girls. Extraversion primarily decreased, though the magnitude and direction of change differed between facets and genders. Conscientiousness increased across all facets, but only among girls. These findings overall demonstrate a high degree of developmental consistency in facets within each domain as well as some notable differences. Further, this study contributes to a small and somewhat mixed evidence base for current theories of adolescent personality development.
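
Latent growth curve models estimate each child's starting level (intercept) and rate of change (slope) in a trait. A rough Python analogue is a linear mixed model with random intercepts and slopes; this is a simplification of the authors' latent-variable approach, with hypothetical data and column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per annual wave,
# with facet_score for one personality facet and time coded 0..3.
df = pd.read_csv("facets_long.csv")

# Random-intercept, random-slope growth model. The fixed `time` effect is
# the average linear change in the facet; the time:gender interaction
# tests whether boys and girls change at different rates.
model = smf.mixedlm(
    "facet_score ~ time * gender",
    data=df,
    groups=df["child_id"],
    re_formula="~time",
).fit()
print(model.summary())
```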


Self-rated Attractiveness Predicts Preferences for Sexually Dimorphic Facial Characteristics: Evidence from a Culturally Diverse Sample

Marcinkowska, Urszula M., Benedict C. Jones, and Anthony J. Lee. 2021. “Self-rated Attractiveness Predicts Preferences for Sexually Dimorphic Facial Characteristics: Evidence from a Culturally Diverse Sample.” PsyArXiv. March 10. doi:10.31234/osf.io/r6jub

Abstract: Individuals who are more attractive are thought to show a greater preference for facial sexual dimorphism, potentially because individuals who perceive themselves as more physically attractive believe they will be better able to attract and/or retain sexually dimorphic partners. Evidence for this link is mixed, however, and recent research suggests the association between self-rated attractiveness and preferences for facial sexual dimorphism may not generalise to non-Western cultures. Here, we assess whether self-rated attractiveness and health predict facial sexual dimorphism preferences in a large and culturally diverse sample of 6907 women and 2851 men from 41 countries. We also investigated whether ecological factors, such as country health/development and inequality, might moderate this association. Our analyses found that men and women who rated themselves as more physically attractive reported stronger preferences for exaggerated sex-typical characteristics in other-sex faces. This finding suggests that associations between self-rated attractiveness and preferences for sexually dimorphic facial characteristics generalise to a culturally diverse sample and exist independently of country-level factors. We also found that country health/development moderated the effect of men’s self-rated attractiveness on femininity preferences, such that men from countries with high health/development showed a positive association between self-rated attractiveness and femininity preference, while men from countries with low health/development showed exactly the opposite trend.


Tuesday, March 16, 2021

Empathic accuracy was beneficial (for well-being & ill-being) or not harmful (for marital satisfaction) at low socioeconomic levels; at high levels it was not beneficial (well- & ill-being) or was harmful (marital satisfaction)

Empathy in context: Socioeconomic status as a moderator of the link between empathic accuracy and well-being in married couples. Emily F. Hittner, Claudia M. Haase. Journal of Social and Personal Relationships, March 11, 2021. https://doi.org/10.1177/0265407521990750

Abstract: The present laboratory-based study investigated socioeconomic status (SES) as a moderator of the association between empathic accuracy and well-being among married couples from diverse socioeconomic backgrounds. Empathic accuracy was measured using a performance-based measure of empathic accuracy for one’s spouse’s negative emotions during a marital conflict conversation. Aspects of well-being included well-being (i.e., positive affect, life satisfaction), ill-being (i.e., negative affect, anxiety symptoms, depressive symptoms), and marital satisfaction. SES was measured using a composite score of income and education. Findings showed that SES moderated associations between empathic accuracy and well-being. Empathic accuracy was beneficial (for well-being and ill-being) or not harmful (for marital satisfaction) at low levels of SES. In contrast, empathic accuracy was not beneficial (for well-being and ill-being) or harmful (for marital satisfaction) at high levels of SES. Results were robust (controlled for age, gender, and race). Findings are discussed in light of interdependence vs. independence in low- vs. high-SES contexts and highlight the importance of socioeconomic context in determining whether empathic accuracy benefits well-being or not.

Keywords: Empathic accuracy, marriage, socioeconomic status, well-being
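
A moderation claim of this kind is conventionally tested with an interaction term, and the same specification pattern applies to the country-level moderation in the face-preference study above. A minimal sketch (hypothetical variable names, not the authors' published code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical couple-level data: performance-based empathic accuracy,
# an SES composite (standardized income + education), and an outcome.
df = pd.read_csv("couples.csv")

# The empathic_accuracy:ses interaction carries the moderation claim:
# a negative coefficient means the benefit of accuracy shrinks, or
# reverses, as SES rises. Age, gender, and race are the robustness
# controls mentioned in the abstract.
model = smf.ols(
    "life_satisfaction ~ empathic_accuracy * ses + age + gender + race",
    data=df,
).fit()
print(model.summary())
```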


 

Vulnerable narcissism higher in Germany than Japan, related to self-construal; grandiose narcissism not equivalent across cultures; culturally incongruent forms of narcissism show more mental health problems

Narcissism in independent and interdependent cultures. Emanuel Jauk et al. Personality and Individual Differences, Volume 177, July 2021, 110716. https://doi.org/10.1016/j.paid.2021.110716

Highlights

• Studied narcissism in independent (Germany) and interdependent (Japan) cultures

• Vulnerable narcissism higher in Germany than Japan, related to self-construal

• Grandiose narcissism not equivalent across cultures

• Culturally incongruent forms of narcissism show more mental health problems.

Abstract: Narcissism can manifest in a grandiose form – admiration-seeking, exhibitionism, and dominance – or a vulnerable form – anxiety, withdrawal, and hypersensitivity. While grandiose narcissism is conceptually in line with an independent self-construal, as prevalent in Western countries, the vulnerable form can be assumed to relate more to an interdependent self-construal, as prevalent in Eastern countries. We studied both forms of narcissism in Germany and Japan (Ns = 258, 280), which differ fundamentally in their independent and interdependent self-construal, yet are similar regarding global developmental standards. We tested whether (1) mean differences in both narcissism forms would conform to the predominant self-construal, (2) self-construal would explain variance in narcissism beyond broad personality traits, and (3) there would be stronger mental health tradeoffs for culturally incongruent forms of narcissism. Our results largely confirm these expectations for vulnerable narcissism, which is (1) more prevalent in Japan than Germany, (2) related to self-construal beyond broad traits, and, (3) more strongly related to mental health problems in Germany than Japan. For grandiose narcissism, data analyses indicated that construct equivalence can only be assumed for the entitlement factor, and internal structure and nomological networks differ substantially between cultural contexts.

Keywords: Grandiose narcissism; Vulnerable narcissism; Independent self-construal; Interdependent self-construal; Cross-cultural research


4. Discussion

We investigated grandiose and vulnerable narcissism across Germany and Japan, two countries differing in independent and interdependent self-construal. We tested whether (1) grandiose narcissism would be higher in Germany, whereas vulnerable narcissism would be higher in Japan, and (2) whether these differences would relate to self-construal beyond broad FFM traits. Finally, (3) we tested two competing hypotheses regarding the relations between narcissism and psychological maladjustment across independent and interdependent cultures.

4.1. Vulnerable narcissism has a similar structure, yet different implications across cultures

Results largely confirmed our expectations for vulnerable narcissism, which was (1) higher in Japan than Germany, (2) related to interdependent self-construal beyond FFM traits (albeit also related to independent self-construal) and (3) related more strongly to interpersonal problems in Germany than Japan, which is in line with the cultural incongruency hypothesis on personality and mental health (Curhan et al., 2014). This latter result suggests that, while vulnerable narcissism goes along with interpersonal problems in both cultures, the burden for individuals high on vulnerable narcissism might be higher in a cultural context valuing individualism and assertiveness. The MCNS as a measure of vulnerable narcissism displayed metric invariance, which means that indicators loaded equally on a latent factor (however, intercepts differed). Nomological network structure within the FFM was similar for the central dimensions of neuroticism and disagreeableness (Miller et al., 2016) as well as introversion (Jauk et al., 2017).
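
For readers less familiar with invariance testing, the levels referenced here can be written compactly. In a two-group factor model, item j for person i in group g is a group-specific intercept plus a group-specific loading on the latent trait; each invariance level constrains one more parameter set (standard notation, not the paper's own):

```latex
x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)} \, \eta_i^{(g)} + \varepsilon_{ij}^{(g)}
% configural: same pattern of free/fixed loadings in both groups
% metric:     \lambda_j^{(DE)} = \lambda_j^{(JP)}   (equal loadings; found for the MCNS)
% scalar:     additionally \tau_j^{(DE)} = \tau_j^{(JP)}   (equal intercepts; not found)
```

Metric invariance licenses comparing the factor's relationships (e.g., with self-construal) across groups; comparing latent means cleanly would additionally require the scalar constraints that did not hold here, which is why the mean differences are interpreted with caution.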

4.2. Grandiose narcissism has different structures across cultures, but entitlement might be similar

For grandiose narcissism, the measure used in this study (NPI-13) was not invariant at a general factor level (similar to previous research; Żemojtel-Piotrowska et al., 2019), so we conducted analyses for lower-order factors. Here, the entitlement/exploitativeness factor displayed metric invariance, the others did not. Though this result is at odds with a recent study by Żemojtel-Piotrowska and colleagues, who observed invariance for the other two factors (leadership/authority and grandiose exhibitionism; Żemojtel-Piotrowska et al., 2019), it fits conceptually with structural models of narcissism placing entitlement – an aspect of antagonism – at the core of the construct (Krizan & Herlache, 2018; Weiss et al., 2019).

Contrary to our expectations, the entitlement aspect of narcissism was (1) higher in Japan than Germany (even more so when controlling for FFM traits) and (2) controlling for self-construal did not alter this difference. While different post-hoc explanations for this finding could be conceived, when considered together with the FFM differences observed here, it most likely reflects a reference group effect (see Limitations). Grandiose exhibitionism, the more (though not exclusively) agentic-extraverted aspect of grandiose narcissism, was, in line with our expectations, lower in Japan (note, however, that this aspect likely assesses different constructs between cultures). This latter aspect, which is arguably the most culturally incongruent with Japanese culture, (3) was related to intrapersonal maladjustment in Japan, but not in Germany, further confirming the cultural incongruency hypothesis (Curhan et al., 2014). This shows that, while more agentic narcissism is largely associated with good mental health (fewer symptoms) in Western samples (e.g., Kaufman et al., 2018), this allegedly “happy face” (Rose, 2002) imposes a burden on the individual in cultures that value modesty and relatedness.

4.3. Limitations

An important methodological limitation of this study is that we relied on self-reports within the investigated cultures, in which cross-cultural differences might be obscured by reference group effects (Heine et al., 2002). This was likely the case for (part of) the self-construal scale, which showed an expected difference only for interdependent but not independent self-construal (despite experts' general agreement that an independent orientation is very untypical for Japan; ibid.). Also, the scale displays limited reliability given its length. Regarding narcissism, while most of the effects observed here were in line with theoretical predictions, making reference group effects unlikely in these cases, the higher entitlement score in Japan might reflect such an effect, as might differences in FFM traits (see Supplement S1): as in previous research, Japanese participants rated themselves lower on agreeableness and conscientiousness than Germans, which might be indicative of high within-culture comparison standards rather than actual between-culture effects (Schmitt et al., 2007).

Another potential limitation could be seen in the non-invariance of the grandiose narcissism measure and the imperfect invariance of the vulnerable narcissism measure and entitlement scale. However, we wish to emphasize that we consider the finding that the complex psychological phenomenon of grandiose narcissism – rooted in Western thinking – varies across fundamentally different cultures an important insight rather than a “lack of invariance”. Nonetheless, when interpreting the findings presented here, it must be taken into account that vulnerable narcissism and entitlement only partially reflect the same latent constructs across cultures, while leadership/authority and grandiose exhibitionism likely reflect different constructs and must be interpreted at the level of observed test scores (with varying meanings).

Facial attractiveness in women was negatively correlated with age at menopause and positively correlated with current fecundity

Żelaźniewicz A, Nowak-Kornicka J, Zbyrowska K, Pawłowski B (2021) Predicted reproductive longevity and women’s facial attractiveness. PLoS ONE 16(3): e0248344. https://doi.org/10.1371/journal.pone.0248344

Abstract: Physical attractiveness has been shown to reflect women’s current fecundity level, allowing a man to choose a potentially more fertile partner in a mate choice context. However, women vary not only in fecundity level at reproductive age but also in reproductive longevity, both of which influence a couple’s long-term reproductive success. Thus, men should choose their potential partner not only based on cues of current fecundity but also on cues of reproductive longevity, and both may be reflected in women’s appearance. In this study, we investigated whether a woman’s facial attractiveness at reproductive age reflects anti-Müllerian hormone (AMH) level, a hormonal predictor of age at menopause, just as it reflects current fecundity level, estimated with estradiol level (E2). Face photographs of 183 healthy women (Mage = 28.49, SDage = 2.38), recruited between the 2nd and 4th day of the menstrual cycle, were assessed by men in terms of attractiveness. Women’s health status was evaluated based on C-reactive protein level and a biochemical blood test. Serum AMH and E2 were measured. The results showed that facial attractiveness was negatively correlated with AMH level, a hormonal indicator of expected age at menopause, and positively with E2, an indicator of current fecundity level, also when controlling for potential covariates (testosterone, BMI, age). This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and the greater adaptive importance of high fecundity at reproductive age compared to the length of the reproductive lifespan.
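
The headline analysis is a correlation between rated attractiveness and each hormone that survives adjustment for covariates. Below is a minimal sketch of such an adjusted (partial) correlation using the pingouin package; file and column names are hypothetical, and the paper's own analysis may differ in detail.

```python
import pandas as pd
import pingouin as pg

# Hypothetical data: one row per woman, with mean male-rated
# attractiveness and serum hormone levels from cycle days 2-4.
df = pd.read_csv("hormones.csv")

# Partial correlations between attractiveness and each hormone,
# holding the abstract's covariates (testosterone, BMI, age) constant.
print(pg.partial_corr(data=df, x="amh", y="attractiveness",
                      covar=["testosterone", "bmi", "age"]))
print(pg.partial_corr(data=df, x="e2", y="attractiveness",
                      covar=["testosterone", "bmi", "age"]))
```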

Discussion

In contrast to the research hypothesis, the results of this study showed that the facial attractiveness of women at reproductive age is negatively related to AMH level. Simultaneously, we found a positive correlation between facial attractiveness and estradiol level, a hormonal predictor of current fecundity [2], as was also shown in previous studies [6,40; but see also 41 for negative results]. Facial attractiveness was also negatively related to BMI, which has also been shown in previous studies [42,43].

Our results contradict those obtained by Bovet et al. [16], who showed a positive correlation between facial attractiveness and the predicted length of the reproductive lifespan, estimated from maternal age at menopause. Although the most recent data on secular trends in age at menopause in Europe are scarce and difficult to compare, there seems to be no major difference between European countries, including Poland and France [44,45], that could explain the contradictory results of the studies. The difference in outcomes may instead be explained by the different methods used to estimate expected age at menopause in the two studies. Although there is a positive association between mother's and daughter's age at menopause, existing estimates of the heritability of menopause age span a wide range [21,22,46]. Also, reported mother's age at menopause may not be accurate due to the potential risk of recall bias [47]. Furthermore, previous research showed that AMH level is a better predictor of a woman's time to menopause (TTM) than mother's age at menopause [48,49], for several reasons. AMH level is influenced by environmental factors that are also related to menopausal age, such as smoking or diet [50,51]. Also, a mother's age at menopause is determined by genetic factors, which are shared by mother and daughter, and by environmental factors acting only on the mother, not the daughter [49]. A daughter's age at menopause, by contrast, is influenced by both genetic and environmental factors, with the genetic component reflecting not only the maternal but also the paternal contribution [46,52]. Therefore, whilst a mother's age at menopause reflects only the maternal half of the genetic influence, AMH level may reflect the sum total of genetic and environmental influences [50], and thus correlates more strongly with actual age at menopause [49]. Additionally, maternal age at menopause may only predict a daughter's age at menopause, whereas women's fertility declines earlier, reducing the chance of a successful pregnancy several years before menopause. The age of onset of the period of subfertility and infertility that precedes menopause also differs among women [46], and this should be indicated by AMH level (a marker of diminishing ovarian reserve) but not by maternal age at menopause.

The results of the study also showed a negative correlation between AMH and E2 levels, which is in line with previous research [53,54]. In vitro experiments showed that E2 down-regulates AMH expression in primary cultures of human granulosa cells (which in vivo may facilitate the reduction of the ovarian reserve), and that when the estradiol concentration reaches a certain threshold, it is capable of completely inhibiting AMH expression through ERβ receptors [55]. This, together with the results of our study, may suggest a trade-off between current fecundity, the length of the reproductive lifespan, and a woman’s capability to invest in morphological cues of both. Life-history theory predicts that the evolution of fitness-related traits and functions is constrained by the existence of trade-offs between them. Trade-offs are ubiquitous in nature; their existence is explained in the context of resource limitations [56], and they may be observed not only between different traits and functions (e.g. immunity and fertility), but also within one function, e.g. between different components of reproductive effort. Possibly, there is also a trade-off between high fecundity at reproductive age (the likelihood of fertilization within cycles at reproductive age) and the length of the reproductive lifespan (allowing for reproductive profits in a long-term perspective).

The existence of such a trade-off may be supported by research showing that older age at menopause is related to using hormonal contraception for longer than a year [57,58; but see for contradictory results: 59,60] and to the occurrence of irregular cycles before the age of 25 [58], which are often anovulatory [61]. Also, some research shows that the number of children correlates negatively with AMH level in young women, which may suggest that more fertile women have a shorter TTM [62,63]. On the other hand, some research shows a positive correlation between AMH level and number of children [64] and that childlessness is linked with younger age at menopause [57,65,66]. However, these correlations may be caused by another variable (e.g. genetic factors or some disease) that causes both low fertility and earlier ovarian failure [66], and thus they do not exclude the possible existence of a trade-off between high fecundity at reproductive age and the length of the reproductive lifespan.

Furthermore, sexual selection may act more strongly on male preferences toward cues of high fecundity at reproductive age than toward cues of a long reproductive lifespan. This presumption might explain the observed finding of a negative relationship between attractiveness and AMH and a simultaneous positive correlation between attractiveness and E2. Firstly, although humans often live in long-term pair bonds, remarriage is common after spousal death and/or divorce, resulting in serial monogamy [67]. Thus, as adult mortality was higher and the expected lifespan shorter in our evolutionary past [68], men would profit more from mating with highly fecund women than from mating with women with a longer reproductive lifespan. Furthermore, many women (also in traditional societies) give their last birth long before menopause, not fully profiting from the length of their reproductive lifespan [69]. Pregnancy at an older age is related to a higher risk of pregnancy complications, miscarriage [70], and maternal death [71], which might contribute to an earlier cessation of reproduction [69]. Also, many environmental and lifestyle factors may affect age at menopause [51,72], weakening the relationship between morphological cues of a long reproductive lifespan at a younger age and the actual age at menopause. Thus, choosing a potential partner based on cues of current fecundity may bring a greater fitness pay-off than choosing a partner with a potentially long reproductive lifespan.

Finally, some limitations of our study need to be addressed. Both AMH and E2 levels were assessed only at the between-subjects level, based on a single measurement. Although AMH level has been shown to vary across the menstrual cycle [73], the extent of variation is small, and sampling on any day of the menstrual cycle is expected to adequately reflect the ovarian reserve [74]. However, E2 level predicts a woman’s fecundity most reliably when based on repeated sampling across the menstrual cycle [75]. Thus, it would be worthwhile to verify the results of our study with repeated AMH and E2 measurements, using a longitudinal rather than a cross-sectional design, to assess the relationship between these hormones and a woman’s facial attractiveness.

This is the first study investigating the relationship between AMH level and facial attractiveness in women. The results showed that women perceived as more attractive are characterized by lower AMH, a hormonal predictor of age at menopause, and higher E2 levels, a hormonal indicator of current fecundity. This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and the greater adaptive importance of high fecundity during reproductive age compared to the length of the reproductive lifespan.

Huge inequalities persist - in terms of pay, property, and political representation, but East Asia is becoming more gender equal; the same cannot be said for South Asia. Why?

How did East Asia overtake South Asia? Alice Evans, Mar 13 2021. https://www.draliceevans.com/post/how-did-east-asia-overtake-south-asia

Circa 1900, women in East Asia and South Asia were equally oppressed and unfree. But over the course of the 20th century, gender equality in East Asia advanced far ahead of South Asia. What accounts for this divergence?

The first-order difference between East and South Asia is economic development. East Asian women left the countryside in droves to meet the huge demand for labour in the cities and escaped the patriarchal constraints of the village. They earned their own money, supported their parents, and gained independence. By contrast, the slower pace of structural transformation has kept South Asia a more agrarian and less urban society, with fewer opportunities for women to liberate themselves.

But growth is not the whole story. Cultural and religious norms have persisted in spite of growth. Even though women in South Asia are having fewer children and are better educated than ever before, they seldom work outside the family or collectively challenge their subordination. By global standards, gender equality indicators in South Asia remain low relative to regions at similar levels of development or even compared with many poorer countries. 

Below I set out evidence for four claims:

1. East and South Asian women were once equally unfree and oppressed. Both societies were organised around tightly policing women’s sexuality.

2. But every patrilineal society also faced a trade-off between honour (achieved by restricting women’s freedoms) and income (earned by exploiting female labour). South Asia had a stronger preference for female seclusion, and East Asia a stronger preference for female exploitation. This implies South Asia ‘needed’ more income to be ‘compensated’ for the loss of honour than East Asia (a stylised version of this condition is sketched after the list).

3. In patriarchal societies, industrialisation and structural transformation are necessary preconditions for the emancipation of women. By seizing economic opportunities outside the family, women can gain economic autonomy, broaden their horizons, and collectively resist discrimination.

4. But industrialisation is not sufficient. In societies with strong preferences for female seclusion, women may forfeit new economic opportunities so as to preserve family honour. Hence inequalities persist alongside growth.
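
Claim 2's honour-income trade-off can be put as a stylised participation condition. This formalisation is my gloss on the post, not Evans's own notation:

```latex
\text{a household sends a woman into outside work} \iff w > \beta h,
\qquad \beta_{\text{South Asia}} > \beta_{\text{East Asia}}
```

Here w is the market wage on offer, h the perceived honour cost of female visibility, and β the cultural weight on honour. On this reading, the factory wages of early industrialisation cleared East Asia's lower bar but fall short of the higher compensation South Asian families demand, which is why growth alone (claim 3) has not been sufficient (claim 4).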


Women collectively condemn other women who appear to be sexually permissive even when they are not direct sexual rivals

Ayers, Jessica D., and Aaron T. Goetz. 2021. “Coordinated Condemnation in Women's Intrasexual Competition.” PsyArXiv. March 11. doi:10.31234/osf.io/g6x5r

Abstract: Here, we identify a novel reason why women are often criticized and condemned for (allegedly) sexually permissive behavior, such as their choice of dress. Combining principles from coordinated condemnation and sexual economics theory, we developed a model of competition that accounts for women’s competition in the absence of mating-relevant advantages. We hypothesized and found that women collectively condemn other women who appear to be sexually permissive. Study 1 (N = 712) demonstrated that women perceive a rival more negatively when she is showing cleavage, and these negative perceptions are ultimately driven by the inference that “provocatively” dressed women are more likely to have one-night stands. Study 2 (N = 341) demonstrated that women criticize and condemn provocatively dressed women, even when they are not a direct sexual rival (e.g., her boyfriend’s sister). Our findings suggest that more research is needed to fully understand women’s intrasexual competition in the absence of mating-relevant cues.




Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property

Low Doses of Psilocybin and Ketamine Enhance Motivation and Attention in Poor Performing Rats: Evidence for an Antidepressant Property. Guy A. Higgins. Front. Pharmacol., February 26 2021. https://doi.org/10.3389/fphar.2021.640241

Abstract: Long term benefits following short-term administration of high psychedelic doses of serotonergic and dissociative hallucinogens, typified by psilocybin and ketamine respectively, support their potential as treatments for psychiatric conditions such as major depressive disorder. The high psychedelic doses induce perceptual experiences which are associated with therapeutic benefit. There have also been anecdotal reports of these drugs being used at what are colloquially referred to as “micro” doses to improve mood and cognitive function, although currently there are recognized limitations to their clinical and preclinical investigation. In the present studies we have defined a low dose and plasma exposure range in rats for both ketamine (0.3–3 mg/kg [10–73 ng/ml]) and psilocybin/psilocin (0.05–0.1 mg/kg [7–12 ng/ml]), based on studies which identified these as sub-threshold for the induction of behavioral stereotypies. Tests of efficacy were focused on depression-related endophenotypes of anhedonia, amotivation and cognitive dysfunction using low performing male Long Evans rats trained in two food motivated tasks: a progressive ratio (PR) and serial 5-choice (5-CSRT) task. Both acute doses of ketamine (1–3 mg/kg IP) and psilocybin (0.05–0.1 mg/kg SC) pretreatment increased break point for food (PR task), and improved attentional accuracy and a measure of impulsive action (5-CSRT task). In each case, effect size was modest and largely restricted to test subjects characterized as “low performing”. Furthermore, both drugs showed a similar pattern of effect across both tests. The present studies provide a framework for the future study of ketamine and psilocybin at low doses and plasma exposures, and help to establish the use of these lower concentrations of serotonergic and dissociative hallucinogens both as a valid scientific construct, and as having a therapeutic utility.

Discussion

The present series of experiments was designed to evaluate the behavioral properties of low doses and plasma concentrations of ketamine and psilocybin in the rat, with a view to identifying behavioral effects that might be relevant to the antidepressant and other therapeutic potential of both drugs. One of the first challenges to this line of research is defining a low dose range of ketamine and psilocybin. The approach taken in this study was to establish doses and plasma exposures of each drug for stereotyped behaviors characteristic of each drug and its distinct pharmacological class. Since behavioral stereotypies are often considered the preclinical proxy for psychomimetic properties (Hanks and Gonzalez-Maeso, 2013; Halberstadt and Geyer, 2018), we focused on doses just below the threshold for their induction. Based on this criterion we identified ketamine and psilocybin doses (and plasma exposures) of 0.3–3 mg/kg (10–70 ng/ml) and 0.05–0.1 mg/kg (7–12 ng/ml [psilocin]) respectively for investigation.

Preclinical studies explicitly examining low (“micro”) doses of ketamine and psilocybin are beginning to appear in the literature (Horsley et al., 2018; Meinhardt et al., 2020), albeit without any demonstration of potential beneficial effects. One of the limitations of these studies is that antidepressant potential has typically been investigated using tests such as forced swim and elevated plus maze, which lack human equivalence. These tests also overlook the trend to deconstruct complex clinical disorders into endophenotypes that may be more amenable to preclinical study and translation across the preclinical-clinical spectrum (Day et al., 2008; Markou et al., 2009). A diagnosis of MDD includes symptoms of depressed mood, anhedonia, fatigue/loss of energy (anergia), cognitive deficits including diminished/slowed ability to think or concentrate, and feelings of guilt, worthlessness and suicidal ideation (van Loo et al., 2012; American Psychiatric Association, 2013). Therefore, endophenotypes related to depression include anhedonia (impaired reward function), amotivation (lack of motivation/purpose) and impaired cognitive function (Hasler et al., 2004; Atique-Ur-Rehman and Neill, 2019; Treadway and Zald, 2011), which we addressed through the progressive ratio and 5-choice tasks.

A further consideration in the design of these experiments was an expectation that any effect of ketamine and psilocybin at low plasma concentrations was likely to be subtle, and potentially variable across a sample study population (see Horsley et al., 2018; Cameron et al., 2019; Meinhardt et al., 2020). We therefore exploited the heterogeneous nature of the performance level of rat populations across tasks such as PR and 5-CSRTT. Rats may be categorized based on performance differences in progressive ratio break point, and thus serve as models of high vs. low motivation (Randall et al., 2012; Randall et al., 2015). Similarly, rats may be categorized according to attentional accuracy or impulsive action under specific challenge conditions, thus providing models of high vs. low attention or impulsivity (Blondeau and Dellu-Hagedorn, 2007; Jupp et al., 2013; Hayward et al., 2016; Higgins et al., 2020a; Higgins et al., 2020b). Consequently, rats showing low motivation and/or attention may represent models of specific depression-relevant endophenotypes (Hasler et al., 2004; Treadway and Zald, 2011; Atique-Ur-Rehman and Neill, 2019). We identified three important considerations for this approach of subgrouping. Firstly, a requirement to identify an enduring nature to any performance subgroup classification. Secondly, a requirement to establish that “poor” performance is not a consequence of factors such as ill health. Thirdly, a requirement for large sample sizes to ensure that subgroups were adequately separated and powered (Button et al., 2013). To address the first challenge, high/low performance subgroups were allotted based on 5–10 days of baseline performance. Control experiments were conducted on the PR and 5-choice study cohorts which confirmed that “low performance” was not associated with ill health or sensorimotor deficit. To address the third challenge, and to ensure at least some separation between subgroups while having due consideration for the principle of the 3Rs (replacement, refinement, reduction), we adopted the extreme tertile groups, as sketched below.
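
A minimal sketch of that extreme-tertile subgrouping in Python (column names are hypothetical; the paper does not publish analysis code):

```python
import pandas as pd

# Hypothetical data: each rat's mean break point over 5-10 baseline days.
baseline = pd.read_csv("baseline_breakpoints.csv")

# Split rats into tertiles on baseline break point and keep only the
# extreme tertiles, giving well-separated "low" and "high" performers
# without using more animals than necessary (the 3Rs point above).
baseline["tertile"] = pd.qcut(baseline["break_point"], 3,
                              labels=["low", "mid", "high"])
subgroups = baseline[baseline["tertile"].isin(["low", "high"])]
print(subgroups.groupby("tertile", observed=True)["break_point"].describe())
```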

Considered as a whole, i.e. without subgrouping, despite group sizes of N = 24–72, we failed to identify any positive effect of ketamine or psilocybin on motivation or attention over the tested dose range. The most robust finding was a trend for a decline in performance following the 6 mg/kg dose of ketamine, which indicated the early phase of the descending limb of a biphasic dose response. This was confirmed by parallel experiments identifying even greater performance decline at 10 mg/kg (data not shown, but see Gastambide et al., 2013; Benn and Robinson, 2014; Nikiforuk and Popik, 2014).

Subgrouping rats based on break point and the number of lever presses for food made available under a PR schedule of reinforcement identified rats that consistently ceased responding early (“low” responders), leading to low break points. Interestingly, these rats had similar body weights, free-feeding measures and open field activity compared to their high-responder counterparts, suggesting any differences were unrelated to general health status, neurological function or appetite. In these low performers, both psilocybin (0.05–0.1 mg/kg) and ketamine (1–3 mg/kg) increased break point, suggesting an increase in task motivation. These findings suggest that low doses of ketamine may relieve certain clinical signs related to depression (Xu et al., 2016), and further that the doses and plasma concentrations of ketamine and psilocybin as described in the present study may have utility in treating subtypes of mental illness characterized by amotivation and anhedonia in particular.
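
For context on what break point measures: under a progressive ratio schedule the response requirement escalates with each earned reward, and the break point is the last requirement the animal completes before it stops responding. A widely used exponential progression (Richardson and Roberts, 1996) is sketched below; whether this exact progression was used in the present study is not stated in the excerpt.

```python
from math import exp

# Richardson & Roberts (1996) progression: the requirement for reward n
# is round(5 * e^(0.2 * n)) - 5, giving 1, 2, 4, 6, 9, 12, 15, 20, 25, 32, ...
def pr_requirement(n: int) -> int:
    return round(5 * exp(0.2 * n)) - 5

print([pr_requirement(n) for n in range(1, 11)])
# Break point = the last requirement completed; a higher break point
# indicates greater motivation to work for the food reward.
```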

In the 5-CSRTT, the effects of ketamine and psilocybin were evaluated in two separate task schedules. In the first, rats were tested under standard conditions of 0.75 s stimulus duration (SD) and 5 s intertrial interval (ITI). Segregation of rats into high and low performers based on accuracy (% correct) revealed a trend for both psilocybin and ketamine to increase accuracy, at doses equivalent to those effective in the PR task. In the case of psilocybin, the more robust measure of efficacy was % hit, which accounts for errors of omission as well as commission (incorrect responses). Speed of responding was also marginally increased, further supporting a performance improvement.
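
The difference between the two accuracy measures is easiest to see in their conventional definitions, sketched here (the paper's exact formulas are not reproduced in the excerpt):

```python
# Conventional 5-CSRTT accuracy measures: % correct counts errors of
# commission only, while % hit also counts omitted trials against the rat.
def pct_correct(correct: int, incorrect: int) -> float:
    return 100 * correct / (correct + incorrect)

def pct_hit(correct: int, incorrect: int, omissions: int) -> float:
    return 100 * correct / (correct + incorrect + omissions)

# A rat that omits many trials keeps % correct high while % hit falls:
print(pct_correct(45, 5))   # 90.0
print(pct_hit(45, 5, 25))   # 60.0
```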

The second 5-CSRTT experiment utilized conditions of an extended ITI (5 s vs. 10 s) and reduced stimulus duration (0.75 s vs. 0.3 s). The principal challenge is to response control: lengthening the ITI from 5 s to 10 s produces a significant increase in both premature (PREM) and perseverative (PSV) responses, a consistent and widely reported finding (Robbins, 2002; Jupp et al., 2013; Barlow et al., 2018; Higgins et al., 2020a,b). Subgrouping rats, based on the level of PREM responses under the 10 s ITI schedule, into “Low” and “High” impulsives (LI vs. HI) highlights the wide range of responders typically seen under this schedule (Jupp et al., 2013; Fink et al., 2015; Barlow et al., 2018; Higgins et al., 2020a). Importantly, there is reasonable consistency of performance on this measure over repeated tests, as demonstrated by the HI rats having higher PREM scores under the 5 s ITI, albeit at markedly lower levels. PSV responses are also higher in the HI cohort, consistent with the HI rats demonstrating a deficit in inhibitory response control.

Similar findings for both ketamine and psilocybin were noted in this test schedule. While neither drug affected accuracy (measured as % correct), either in all rats or in the HI/LI classified rats, both increased PREM and PSV responses in the LI cohort, supporting an increase in impulsive action. It should be noted that the magnitude of change produced by both ketamine and psilocybin was relatively small (∼2-fold) and confined to the LI subgroup. Certainly, the magnitude of change contrasted sharply with the 4-fold increase noted in rats pretreated with dizocilpine under the same 10 s ITI schedule (see also Higgins et al., 2005, 2016; Benn and Robinson, 2014). Previous studies have also described increased PREM responses following pretreatment with the phenethylamine 5-HT2A agonist DOI (Koskinen et al., 2000; Koskinen and Sirvio, 2001; Blokland et al., 2005; Wischhof and Koch, 2012; Fink et al., 2015), typically at doses lower than those which induce signs of wet dog shakes/back muscle contractions (WDS/BMC) (Fink et al., 2015; Halberstadt and Geyer, 2018).

Impulsivity is a construct that may be viewed in two forms: functional and dysfunctional (Dickman, 1990). Dysfunctional impulsivity is associated with psychiatric conditions such as substance abuse and OCD, and thus carries a negative connotation. For example, associations between high impulsive trait and drug-seeking behaviors have been reported both preclinically and clinically (Grant and Chamberlain, 2004; Jupp et al., 2013). Functional impulsivity has been described as a tendency to make quick decisions when it is beneficial to do so, and may be related to traits such as enthusiasm, adventurousness, activity, extraversion and narcissism. Individuals with high functional impulsivity are also reported to have enhanced executive functioning overall (Dickman, 1990; Zadravec et al., 2005; Burnett Heyes et al., 2012). Viewed in this more positive context, the capacity of psilocybin and ketamine to promote impulsive behavior selectively in an LI cohort may be relevant in supporting a potential to treat depression and other mental disorders.

One advantage of being able to study pharmacological effects at low doses in an experimental setting is the ability to probe for an underlying neurobiological mechanism, which would serve to establish this pattern of use within a scientific framework. Presumably these doses result in a low level of target site occupancy, which in the case of psilocybin is the serotonin 5-HT2A receptor (Vollenweider et al., 1998; Tylš et al., 2014; Nichols, 2016; Kyzar et al., 2017). At higher doses and plasma exposures, and consequently higher levels of target occupancy, psychomimetic effects begin to emerge. In this respect, the recent study of Madsen et al. (2019) is of interest. These workers reported a correlation between the psychedelic effects of psilocybin (40–100% of the Likert scale maximum) and CNS 5-HT2A receptor occupancy (43–72%) and plasma psilocin levels (2–15 ng/ml). Increases in subjective intensity were correlated with increases in both 5-HT2A receptor occupancy and psilocin exposure. Based on these data, it is estimated that at 5-HT2A receptor occupancies up to ∼15%, no perceptual effects occur (Madsen and Knudsen, 2020).
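
As a back-of-envelope check, the quoted occupancy and exposure figures can be run through the standard one-site binding model. This is my arithmetic on the numbers quoted above, not an analysis from either paper, and pairing the endpoints is an assumption.

```python
# One-site binding: Occ = C / (C + EC50)  =>  EC50 = C * (1 - Occ) / Occ.
# Applied to the endpoint figures quoted from Madsen et al. (2019);
# pairing lowest exposure with lowest occupancy is an assumption.
for c_ng_ml, occ in [(2.0, 0.43), (15.0, 0.72)]:
    ec50 = c_ng_ml * (1 - occ) / occ
    print(f"C = {c_ng_ml:4.1f} ng/ml, Occ = {occ:.0%} -> implied EC50 ~ {ec50:.1f} ng/ml")
# The two implied EC50s differ (~2.7 vs ~5.8 ng/ml), so a single-site
# curve only loosely describes the reported range.
```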

5-HT2A receptors are widely distributed within cortical zones, notably layers II–V (Santana et al., 2004; Mengod et al., 2015), and also in subcortical regions such as the DA nigrostriatal and mesocorticolimbic pathways, where they appear to positively regulate tone, at least under certain physiological conditions (Doherty and Pickel, 2000; Nocjar et al., 2002; Bortolozzi et al., 2005; Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017). One plausible hypothesis is that at low nanomolar plasma concentrations, psilocybin (or LSD, mescaline etc.) may preferentially target a subset of 5-HT2A receptors, possibly those localized to subcortical DA systems, where activation has been reported to increase the firing and tonicity of these pathways (see Alex and Pehek, 2007; Howell and Cunningham, 2015; De Deurwaerdère and Di Giovanni, 2017 for reviews). In turn this might be expected to promote behaviors related to motivation, attention and impulse control, as noted in the PR and 5-choice experiments. Activation of cortical 5-HT2A receptors may account for the subjective/perceptual effects once a critical (higher) plasma drug threshold has been reached (Nichols, 2016; Kyzar et al., 2017; Madsen et al., 2019; Vollenweider and Preller, 2020).

In the case of ketamine, the relevant target is most likely the NMDA subtype of glutamate receptor (Lodge and Mercier, 2015; Mathews et al., 2012; Corriger and Pickering, 2019; although note Zanos et al., 2018), a tetrameric receptor complex composed of NR1 subunits combined with NR2A-D subunits and, in some cases, NR3A-B subunits. The NR2A-D subunits exist in an anatomically distinct manner, with the NR2A and NR2B subunits predominant in the forebrain; the NR1 subunit has a broader distribution, being a constituent of all NMDA channels (Kew and Kemp, 2005; Traynelis et al., 2010). Potentially, at low ketamine doses there may be a preferential interaction between ketamine and specific NMDA channel subtypes (see Lodge and Mercier, 2015), and/or regional subpopulations, which underlies the pharmacological effects of these doses of ketamine in preclinical and clinical contexts. We and others have reported on apparently pro-cognitive effects of non-competitive NMDA antagonists, typically dizocilpine, when tested at low doses (Mondadori et al., 1989; Jackson et al., 2004; Higgins et al., 2003, 2016; Guidi et al., 2015). A better understanding of the neurobiological mechanisms that underlie these effects may provide useful insight toward understanding the clinical benefit of low doses of ketamine in humans.

An interesting feature to emerge from this work was the similar profile of ketamine and psilocybin across the PR and 5-choice experiments. Both drugs increased break point in low performers, improved attention in low-performer subgroups, and increased PREM/PSV responses in LI rats. Horsley et al. (2018) also reported a similar pattern for both drugs across various elevated plus maze measures, although the effects were suggestive of a mild anxiogenic profile. Despite their differing pharmacology, there is accumulating evidence from a variety of sources that the NMDA and 5-HT2A receptors are functionally intertwined. Vollenweider has highlighted the overlapping psychotic syndromes produced by serotonergic hallucinogens and psychotomimetic anesthetics, associated with a marked activation of the prefrontal cortex and other overlapping changes in temporoparietal, striatal, and thalamic regions (Vollenweider, 2001; Vollenweider and Kometer, 2010), suggesting that both classes of drugs may act upon a common final pathway. Secondly, 5-HT2A receptor antagonists attenuate a variety of putative psychosis-related behaviors induced by NMDA channel block, including behavioral stereotypy and disrupted PPI (Varty and Higgins, 1995; Varty et al., 1999; Higgins et al., 2003), a property that likely contributes to the antipsychotic efficacy of atypical neuroleptics such as clozapine and risperidone (Meltzer, 1999; Remington, 2003). Furthermore, a cellular coexpression of 5-HT2A and NMDA receptors has been described in multiple brain regions, including the VTA, striatum and cortex (Wang and Liang, 1998; Rodriguez et al., 1999; Rodriguez et al., 2000). Therefore, studying these drugs at the low dose range may also provide further insights into how these receptor systems interact.

In conclusion, the present studies have characterized for the first time a positive effect of ketamine (0.3–3 mg/kg; [plasma] 10–70 ng/ml) and psilocybin (0.05–0.1 mg/kg; [psilocin plasma] 7–12 ng/ml) on behaviors related to endophenotypes of amotivation and anhedonia. The overall effect sizes are modest, which might be expected at the doses and concentrations studied, where the degree of target occupancy is likely to be low and subject to individual differences in drug pharmacodynamics and pharmacokinetics. Each of these factors will impact treatment response across a study population (Levy, 1998; Dorne, 2004). Limitations of the present study include the restriction to male test subjects and to single acute doses. Future studies should extend to both male and female subjects, and to alternative dosing schedules. Nonetheless, the studies are important in that they define a potentially efficacious dose and plasma exposure range, and provide a framework for early safety studies and further scientific investigation into the neurobiology of these drugs in the low dose range.