Wednesday, September 22, 2021

The Role of Genetics for Survival After 80: Heritability is estimated at about 12%, lower than previously reported in older adults

What Matters and What Matters Most for Survival After age 80? A Multidisciplinary Exploration Based on Twin Data. Boo Johansson and Valgeir Thorvaldsson. Front. Psychol., Sep 22 2021. https://doi.org/10.3389/fpsyg.2021.723027

Abstract: Given research and public interest in conditions related to an extended lifespan, we addressed the questions of what matters and what matters most for subsequent survival past age 80. The data were drawn from the population-based and multidisciplinary Swedish OCTO Twin Study, in which a sample (N = 699) consisting of identical and same-sex fraternal twin pairs, followed from age 80 until death, provided detailed data on health, physical functioning, lifestyle, personality, and sociodemographic conditions. Information concerning dates of birth and death was obtained from the population census register. We estimated heritability using an ACE model and evaluated the role of multiple predictors for the mortality-related hazard rate using Cox regression. Our findings confirmed a low heritability of 12%. As expected, longer survival was associated with being female, an apolipoprotein E (APOE) e4 allele non-carrier, and a non-smoker. Several diseases were found to be associated with shorter survival (cerebrovascular disease, dementia, Parkinson's disease, and diabetes), as were certain health conditions (high diastolic blood pressure, low body mass index, and hip fracture). Stronger grip and better lung function, as well as better vision (but not hearing) and better cognitive function (self-evaluated and measured), were related to longer survival. Social embeddedness, better self-evaluated health, and life-satisfaction were also significantly associated with longer survival. After controlling for the impact of comorbidity, functional markers, and personality-related predictors, we found that sex, cerebrovascular disease, compromised cognitive functioning, self-rated health, and life-satisfaction remained strong predictors. Cancer was only associated with the mortality hazard when accounting for other comorbidities. The survival estimates were mostly in the anticipated directions and contained effect sizes within the expected range. Notably, we found that some of the so-called "soft markers" remained strong predictors despite controlling for other factors. For example, self-evaluation of health and ratings of life-satisfaction provide additional and valuable information.

Discussion

In this study, we addressed the questions of what matters and what matters most for survival after age 80. We based our analyses on data from a population-based twin sample of monozygotic (identical) and same-sex fraternal (dizygotic) twins followed from age 80 until death. The fact that we conducted our analyses on a select sample of hardy survivors, born more than 100 years ago, should be considered when comparing our predictive findings with other work and when assessing their relevance at younger ages. The observed median life expectancy (the age to which 50% of a birth cohort survives) for those born in Sweden during the period 1893–1913 was in the range of 65–72 years for males and 70–79 years for females. The probability that individuals in our birth cohorts would be alive at age 80 and beyond was only 2.5–6% for males and 8.5–9.2% for females (see SCB, 2020). This point concerning generation and cohort differences is important to consider in efforts to identify and determine the relative impact of various mortality-related predictors. Longevity predictors may vary in type or differ considerably in magnitude across birth cohorts, which needs to be kept in mind when comparing findings from a sample born more than 100 years ago with data from more recent birth cohorts. Furthermore, predictors of longevity that are informative and relevant from an early age are not necessarily valid for predicting subsequent survival among those who have already survived into a later stage of life. This was evident in our study in that SES and financial status no longer acted as predictors of survival, as they would be expected to in younger samples.

The Role of Sociodemographic Factors for Survival

Studies typically find that SES and education act as relatively strong predictors of longevity (e.g., Stringhini et al., 2017; Steptoe and Zaninotto, 2020). However, we could not replicate these findings, which likely reflects a restricted education range in our sample as well as greater homogeneity in overall socioeconomic status. Later-born cohorts of late-life survivors may therefore show other associations with these two common survival markers. Age at baseline was positively associated with subsequent survival. This implies that, given a comparison of the hazard rate at a specific age (e.g., age 91), those who accepted study participation at later ages showed a lower expected hazard rate. This finding informs us that those who entered the study at a higher age in fact represent "the even more hardy ones," who will survive even longer than their counterparts who accepted participation at younger ages. Less surprising was our finding that women tend to live longer. For marital status, we found only that our small sample of divorced individuals showed a higher mortality risk. This finding needs to be replicated in samples with a higher frequency of divorced individuals, although it is in line with previous reports on the lethal consequences of divorce (e.g., Norgård Berntsen and Kravdal, 2012).

The Role of Genetics for Survival

The analysis revealed a heritability estimate of about 12%, which is lower than previously reported in older adults (e.g., Christensen et al., 2006). This corresponds to claims that the heritability of subsequent survival is likely to be higher in the younger age range. However, Ruby et al. (2018) reported that heritability for birth cohorts across the 1800s and early 1900s is well below 10%. As expected, we could confirm the significant role of APOE status. The association with the APOE e4 allele remained in late life, as those with an e4 allele had a shorter remaining life span compared with non-carriers (e.g., Wolters et al., 2019). Notably, in complementary analyses (not reported), the APOE effect was reduced to non-significance (β = 0.048, SE = 0.122; exp(β) = 1.05, 95% CI [0.83, 1.33]) when we accounted for cognitive status.
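To make the modelling concrete, here is a minimal sketch (not the authors' code) of a Cox proportional-hazards fit of the kind described above, using the Python lifelines package; the data file and column names (time, death, pair_id, female, smoker, apoe_e4_carrier, cognitive_status) are hypothetical. The exp(β) values it reports are the hazard ratios quoted throughout this discussion, and adding cognitive status alongside APOE carrier status is the sort of adjustment that attenuated the APOE effect in the complementary analyses mentioned above.

```python
# Minimal sketch of a Cox proportional-hazards model of the kind described above.
# Not the authors' code; file and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("octo_example.csv")  # one row per twin; 'time' = years survived past the age-80 baseline

covariates = ["female", "smoker", "apoe_e4_carrier", "cognitive_status"]

cph = CoxPHFitter()
cph.fit(
    df[["time", "death", "pair_id"] + covariates],
    duration_col="time",    # follow-up time from baseline
    event_col="death",      # 1 if death was observed during follow-up
    cluster_col="pair_id",  # robust standard errors, since twins come in pairs
)

# exp(beta) is the hazard ratio quoted in the text: values > 1 indicate a higher
# mortality hazard, values < 1 a survival advantage for that predictor.
print(np.exp(cph.params_))
cph.print_summary()
```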

The Role of Diseases and Health-Related Factors for Survival

Among the many diseases analyzed, we confirmed the expected strong associations for dementia, cerebrovascular disease, diabetes, Parkinson's disease, and history of hip fracture. The effect sizes for dementia, CVD, diastolic BP, and BMI remained relatively unaffected when we controlled for comorbidities. The hip fracture effect replicates previous findings of an excess mortality risk after a hip fracture that lasts for many years (e.g., von Friesendorff et al., 2016). This frailty may be associated with immobility preventing a physically active and healthier lifestyle. The effect sizes for hip fracture, as well as for diabetes and Parkinson's disease, were substantially reduced when we controlled for comorbidity (see Table 4).

More surprisingly, we found that the presence of thyroid disease predicted longer survival in our sample, a result that awaits further investigation, as both subclinical hypothyroidism and hyperthyroidism have previously been associated with an increased mortality risk (e.g., Ochs et al., 2008). A similar positive survival effect was found for cataract. These paradoxical findings may be explained as selection effects: we can speculate that individuals who receive a diagnosis for these conditions are more vital and more insistent on appropriate treatment. Interestingly, the predictive value of both thyroid disease and cataract remained relatively unaffected even after controlling for all other diseases (see Table 4), which means that these unexpected results are not accounted for by comorbidities. Also, when we accounted for cognitive status, the thyroid disease effect size remained similar (β = −0.250, SE = 0.126; exp(β) = 0.78, 95% CI [0.61, 0.99]). The effect size for cataract, however, was somewhat reduced (β = −0.092, SE = 0.091; exp(β) = 0.91, 95% CI [0.76, 1.09]).

Depression was not related to subsequent survival, which was an unexpected finding given that many studies show that depression substantially increases the mortality risk (e.g., Wulsin et al., 1999), and that late-life depression is associated with higher risk of both all-cause and cardiovascular mortality (Wei et al., 2019). A possible explanation for our finding is that our depression diagnosis is likely to reflect compromised mental health at earlier ages, rather than in later life.

Further, we found that higher diastolic blood pressure, but not systolic, was associated with shorter survival. This is in line with previous studies showing that higher systolic blood pressure at older ages can be compensatory and in fact associated with better survival, while diastolic pressure is negatively related to all-cause mortality (e.g., Satish et al., 2001). We also found that higher BMI was in fact protective and associated with longer survival. Notably, few individuals in our sample were overweight. Our finding corresponds to previous reports of a U-shaped association between BMI and all-cause mortality (e.g., Cheng et al., 2016). Indeed, when we modeled the hazard rate as a function of BMI with an additional quadratic component, we obtained a quadratic estimate of β = 0.005, SE = 0.002; exp(β) = 1.005, 95% CI [1.003, 1.010], and a linear estimate of β = −0.296, SE = 0.128; exp(β) = 0.74, 95% CI [0.58, 0.96], implying a non-linear U-shaped association. A low BMI is typically found to be accompanied by an increased mortality risk, which in our sample indicates compromised overall health.
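As a rough, back-of-the-envelope reading of those point estimates (ignoring estimation uncertainty and assuming BMI entered the model on its raw, uncentered scale, which the text does not state), the linear-plus-quadratic BMI term in the log hazard is minimized where its derivative is zero:

\[
\frac{\partial}{\partial \mathrm{BMI}}\bigl(\beta_{\text{lin}}\,\mathrm{BMI} + \beta_{\text{quad}}\,\mathrm{BMI}^2\bigr) = 0
\;\Longrightarrow\;
\mathrm{BMI}^{*} = -\frac{\beta_{\text{lin}}}{2\,\beta_{\text{quad}}} = \frac{0.296}{2 \times 0.005} \approx 29.6,
\]

so the fitted hazard would decline with increasing BMI up to roughly that value and rise thereafter, which is the U-shape referred to above.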

Notably, cancer was not a significant predictor when we controlled only for baseline age, sex, and education (shown in Table 3, with an effect size of 1.16). However, when we controlled for other health-related variables and diseases, the effect size became substantially larger, i.e., 1.38 and 1.33, respectively (see Tables 4, 5). This finding implies a suppression effect, which may reflect that the broad malignancy category covers several cancer types among our cancer survivors (26%), who were offered life-promoting treatments. Another explanation relates to comorbidities (e.g., dementia, CVD) that initially hid the effect of cancer.

Our findings largely correspond to previous studies demonstrating differential survival related to various disease conditions in later life. The results also confirm numerous studies showing that self-rated health is an informative marker for subsequent survival. Those who evaluate their own health as better also tend to live longer (e.g., Lyyra et al., 2006a; Feenstra et al., 2020). We may find it remarkable that self-rated health remains a relatively strong predictor of mortality (e.g., Jylhä, 2009) even when we control for multiple health-related variables (as seen in a comparison of effect sizes in Tables 3 and 4, where the effect size only dropped from 1.82 to 1.69). The association between self-rated health and mortality cannot be fully accounted for by individual differences in cognitive status or by personality-related variables like life-satisfaction (as shown in Table 5, where the effect size dropped to 1.39). As previously emphasized, self-rated health reflects a broader assessment of one's own health and functioning with reference to age peers, rather than experiences of a disease burden (Strawbridge and Wallhagen, 1999).

The Role of Lifestyle Factors for Survival

Smoking was, as expected, related to shorter survival. More interestingly, we found that self-reported intellectual engagement and social embeddedness also predicted subsequent survival, pointing toward the importance of maintaining a social life and of acquiring and preserving knowledge in making life worth living. An interesting study in this context, focusing on the valuation of life and more specifically on active attachment, showed that old and very old individuals differ in terms of endorsement and with respect to what makes a life worth living. Whereas health factors were more important among the young-old, social factors were more important in the old-old group (Jopp et al., 2008). Our findings support and extend this interpretation in the context of survival.

The Role of Cognitive Health for Survival

Our cognitive status indicator revealed a clear pattern: those with better cognition also tended to live longer, which was partly accounted for by the fact that individuals in categories 3–5 met the dementia criterion. Notably, better self-rated memory was also positively associated with survival. It has by now been repeatedly shown that cognitive impairment and decline are indicative of a shorter life span, as specifically demonstrated in terminal cognitive decline trajectories for various cognitive abilities (e.g., Thorvaldsson et al., 2008).

The Role of Functional Markers for Survival

Among the functional markers, we found that grip strength and lung function were associated with subsequent survival; those with better performance on these two measures lived longer. This confirms previous findings, for example, McGrath et al. (2018), who showed that decreased handgrip strength was associated with ADL limitations and a higher mortality hazard. Our finding that better self-evaluated visual acuity was positively associated with survival is also in line with studies showing that worse visual acuity is indicative of a higher mortality rate (e.g., Freeman et al., 2005). Hearing was not a significant marker for mortality in our study, which may reflect that relatively few individuals were afflicted with hearing loss serious enough to prevent everyday coping and interaction in social life. Notably, when we included all the functional markers in the same model, the effect sizes dropped for all variables. This may reflect that similar underlying neurophysiological mechanisms are responsible for the mortality-related associations across these markers, in line with the common cause assumption (e.g., Christensen et al., 2001) of aging-related degeneration.

The Role of Personality Characteristics and Life Satisfaction for Survival

Among the examined markers in this category of potential predictors, we found only life-satisfaction to be positively associated with longer subsequent survival. This result is in line with several studies (e.g., Sadler et al., 2012; Hülür et al., 2017). However, in contrast to findings reported by Hülür et al. (2017), we found no associations between our measures of personal control (general or health-related locus of control) and survival, which may partly reflect that those scales were completed only by a select portion of individuals able to comprehend and return the inventories.

Multiple Predictors in Concert and Survival

A strength of the present study is that it allowed a simultaneous examination of the potential roles of multiple predictors. Following the first step of identifying potential predictors, "what matters," we then turned to the question of "what matters most." In doing so, it is important to remember that human functioning is highly inter-related, which makes it unlikely to find isolated health conditions and other markers associated with late-life survival. Interestingly, we could nevertheless identify some disease categories, for example cerebrovascular disease and dementia, that remained strong predictors working against a more extended life span after age 80. In the same manner, we found self-rated health to be a strong survival indicator and life satisfaction to act as a positive marker for subsequent survival at advanced ages.

Although it would seem attractive to present a ranked list in response to the question of "what matters most," it is important to realize that many of the candidate variables evaluated in this study were inter-correlated. The specific effect sizes were therefore often substantially affected by the simultaneous inclusion of several variables in the same model. In addition, scale characteristics and metric properties (such as reliability and validity) differ across measures, rendering the comparison even more difficult. We therefore hesitate to provide a detailed weighting of what matters most. However, as seen in Table 5, our analyses provide strong support for a shortlist that encompasses cerebrovascular disease, cognitive status, self-rated health, and life-satisfaction, in addition to the expected survival advantage among women, non-smokers, and non-carriers of the APOE e4 allele. Our finding of an overall heritability estimate of 12% also emphasizes the importance of multiple non-genetic influences on late-life survival.

Strengths and Limitations

Certain limitations and strengths merit comment. First, our sample comprised late-life twin survivors born in the late 1800s and at the beginning of the 1900s. To test for potential selection effects due to twinship, we compared our twin sample with a population-based community sample of non-twins largely in the same age range with respect to health and overall functioning (Simmons et al., 1997). In this analysis, one member of each twin dyad was randomly selected. Adjustments for age, sex, and type of housing revealed significant differences in only three out of 20 comparisons, in which the twins were more advantaged in health and bio-behavioral functioning. The conclusion from this comparison was that twin pairs surviving into very late life are largely similar to a representative sample of non-twins of the same age (Simmons et al., 1997). Furthermore, the unique experiences and exposures in our select cohort, born more than a hundred years ago, are unlikely to be similar to those of later cohorts, in which the likelihood of survival has increased considerably over the years. Despite this important caveat, the predictors identified in our sample are likely to be valid also for later-born individuals, although this claim needs to be verified in empirical studies. Second, the validity and reliability of our predictors varied, with some relatively brief indices (e.g., a medical history of meeting or not meeting a certain diagnostic category, without accounting for severity) while others reflected more detailed measurements (e.g., grip strength, lung function, blood pressure, and BMI). Third, our predictors do not cover all potential markers, although we originally selected them based on gerontological relevance for a broad population-based longitudinal study. Fourth, we did not examine additive or multiplicative effects of having multiple diseases (i.e., multimorbidity), which was beyond the scope of the present study.

Despite these potential shortcomings, a strength of our study is that we were able to use a rich and comprehensive data set gathered in a population-based sample of twins examined in person for a whole day across a broad range of variables. This allowed analyses of the overall research question of what matters for subsequent survival past age 80, as well as analysis of heritability. Of special importance is the fact that our study encompasses detailed and valid information drawn from official register data on the exact dates of birth and death.

Experimental evidence for the gaze-signaling hypothesis: White sclera enhances the visibility of eye gaze direction in humans and chimpanzees

Experimental evidence for the gaze-signaling hypothesis: White sclera enhances the visibility of eye gaze direction in humans and chimpanzees. Fumihiro Kano, Yuri Kawaguchi, Hanling Yeow. bioRxiv, Sep 21 2021. https://doi.org/10.1101/2021.09.21.461201

Abstract: Hallmark social activities of humans, such as cooperation and cultural learning, involve eye-gaze signaling through joint attentional interaction and ostensive communication. The gaze-signaling and related cooperative-eye hypotheses posit that humans evolved unique external eye morphology, including exposed white sclera (the white of the eye), to enhance the visibility of eye-gaze for conspecifics. However, experimental evidence is still lacking. This study tested the ability of human and chimpanzee participants to detect the eye-gaze directions of human and chimpanzee images in computerized tasks. We varied the level of brightness and size in the stimulus images to examine the robustness of the eye-gaze directional signal against visually challenging conditions. We found that both humans and chimpanzees detected gaze directions of the human eye better than those of the chimpanzee eye, particularly when eye stimuli were darker and smaller. Also, participants of both species detected the gaze direction of the chimpanzee eye better when its color was inverted compared to when its color was normal; namely, when the chimpanzee eye had artificial white sclera. White sclera thus enhances the visibility of eye-gaze direction even across species, particularly in visually challenging conditions. Our findings supported but also critically updated the central premises of the gaze-signaling hypothesis.

Final version: What is unique about the human eye? Comparative image analysis on the external eye morphology of human and nonhuman great apes. Fumihiro Kano et al. Evolution and Human Behavior, December 29 2021. https://doi.org/10.1016/j.evolhumbehav.2021.12.004

Abstract: The gaze-signaling hypothesis and the related cooperative-eye hypothesis posit that humans have evolved special external eye morphology, including exposed white sclera (the white of the eye), to enhance the visibility of eye-gaze direction and thereby facilitate conspecific communication through joint-attentional interaction and ostensive communication. However, recent quantitative studies questioned these hypotheses based on new findings that certain features of human eyes are not necessarily unique among great ape species. Accordingly, there is currently a heated debate over whether external eye features of humans are distinct from those of other apes and how such distinguishable features contribute to the visibility of eye-gaze direction. The present study leveraged updated image analysis techniques to test the uniqueness of human eye features in facial images of great apes. Although many eye features were similar between humans and other great apes, a key difference was that humans have uniformly white sclera which creates clear visibility of both the eye outline and iris—the two essential features contributing to the visibility of eye-gaze direction. We then tested the robustness of the visibility of these features against visual noise, such as shading and distancing, and found that both eye features remain detectable in the human eye, while eye outline becomes barely detectable in other species under these visually challenging conditions. Overall, we identified that humans have unique external eye morphology among other great apes, which ensures the robustness of eye-gaze signals in various visual conditions. Our results support and also critically update the central premises of the gaze-signaling hypothesis.

Keywords: Eye color; Communication; Comparative analysis; Human evolution; Great ape; Sclera; Gaze detection


Specific factors and methodological decisions influencing brain responses to sexual stimuli in women

Specific factors and methodological decisions influencing brain responses to sexual stimuli in women. Sophie Rosa van’t Hof, Nicoletta Cera. Neuroscience & Biobehavioral Reviews, September 21 2021. https://doi.org/10.1016/j.neubiorev.2021.09.013

Highlights

• Several female-specific factors important for sexual arousal neuroimaging research

• Stress and mood could be assessed when analyzing at the individual level

• Methodologies should focus on optimizing sexual arousal

• Sexual stimuli should be selected by women and optimal duration should be piloted

• Brain models of sexual arousal should be updated with data of women

Abstract: Most of the neuroimaging studies on sexual behavior have been conducted with male participants, leading to men-based models of sexual arousal. Here, possible factors and methodological decisions that might influence brain responses to sexual stimuli, specifically regarding the inclusion of women, will be reviewed. Based on this review, we suggest that future studies consider the following factors: menstrual phase, hormonal contraception use, history of sexual or psychiatric disorders or diseases, and medication use. Moreover, when researching sexual arousal, we suggest that future studies assess sexual orientation and preferences, that women select the visual sexual stimuli, and that stimuli be presented for a longer duration than is commonly used. This review is intended as a useful guideline for future research on sexual arousal, which hopefully will lead to a higher inclusion of women and therefore more accurate neurobiological models of sexual arousal.

Keywords: sexual arousal; women; brain; neuroimaging; functional magnetic resonance imaging; positron emission tomography

1. INTRODUCTION

During the last twenty years, several studies investigated the cerebral correlates of human sexual behavior, with the majority using external sexual stimuli to evoke sexual arousal (for meta-analyses and reviews, see: Stoléru et al., 2012; Georgiadis & Kringelbach, 2012; Poeppl et al., 2016; Mitricheva et al., 2019). Human sexual arousal refers to a complex set of social, psychological, and biological processes and therefore investigation of sexual arousal requires a multi-method and interdisciplinary approach (Woodard & Diamond, 2008).

Sexual arousal can be induced both by internal cues, represented by sexual interest, autobiographical memories, fantasies, or simply thoughts, and by external sexual stimuli. External sexual stimuli, of different sensory modalities, have been considered a reliable tool to study the brain underpinnings of sexual arousal in both men and women. Sexual arousal is usually operationalized through the measurement of genital responses and self-reported (i.e., subjective) sexual arousal. Since both genital responses and subjective sexual arousal are activated, and regulated, by brain circuits responding to internal and external stimuli, sexual arousal has been measured by functional neuroimaging as well. Modalities of functional brain imaging include functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG). EEG and MEG have a considerably lower spatial resolution than fMRI and PET. Since this review will focus on brain response patterns to sexual stimuli, results of EEG and MEG will not be discussed.

A wide array of brain regions is involved in processing and experiencing sexual arousal, which is not surprising for a complex task involving multiple sensory modalities and several cognitive functions such as focused attention, working and long-term memory, and emotional appraisal (Stoléru et al., 2012; Georgiadis & Kringelbach, 2012). Two recent meta-analyses showed different results regarding the brain regions involved during visual sexual stimulation (VSS) in women and men. A meta-analysis by Poeppl et al. (2016) showed small between-gender differences in subcortical brain responses to sexual stimuli, whereas Mitricheva et al. (2019) did not find any differences in brain responses to sexual stimuli between men and women. According to Mitricheva et al. (2019), this discrepancy between the meta-analyses could depend on the inclusion of studies using different sensory modalities of sexual stimulation (visual, olfactory, and tactile stimuli).

Although there is a common assumption of large sex differences in brain responses to sexual stimuli, and in the evoked sexual arousal, these meta-analyses show small or null between-gender differences. However, previous behavioral and psychophysiological studies found a significantly higher level of agreement between self-reported sexual arousal and genital response in men than in women (Chivers et al., 2010). Methodological issues, such as differences in the devices and procedures used, or fundamental differences might modulate this. An fMRI study by Parada et al. (2016, 2018) examined both self-reported sexual arousal and genital responses in relation to brain responses in both men and women. Various subregions of the parietal cortex showed significant changes in brain responses corresponding to the degree of self-reported sexual arousal, with no gender differences. The strength of the correlation between brain activation and genital response showed that women had a stronger brain-genital relation than men in the insula, amygdala, posterior cingulate cortex, lateral occipital cortex, and bilateral cerebellum. Conversely, in men, no brain regions showed a strong brain-genital correlation. This study demonstrates that fMRI can be an important addition to psychophysiological and behavioral research in addressing complex questions, such as gender differences in the concordance between genital response and self-reported sexual arousal.

Previous neuroimaging studies on sexual arousal have predominantly included heterosexual male participants. The recent meta-analysis by Mitricheva et al. (2019) included 1184 male participants in contrast to 636 female participants. Of these 1184 male participants, 1054 were heterosexual, making this the largest group included in neuroimaging studies of sexual arousal. Due to the large inclusion of men, one of the most recent and influential models of brain responses to sexual stimuli is based on data from male participants (Stoléru et al., 2012). The overrepresentation of male participants and the overgeneralization of theories and models based on male data is not limited to neurosexology but is, for instance, also present in animal studies (Coiro & Pollak, 2019) and clinical trials (Feldman et al., 2019; Holdcroft, 2007). By including more women, but also more non-heterosexual and non-cis participants, the specificity and clinical utility of future theoretical models could be improved. Besides these theoretical reasons, the larger inclusion of women could lead to a better understanding of female-specific sexual disorders and diseases (e.g., female sexual arousal disorder, genito-pelvic pain/penetration disorder).

It is not clear why there is an overrepresentation of men in previous studies. A potential reason might be female-specific factors and methodological decisions, which could be seen as obstacles. Hence, the present review will examine factors and methodological decisions that could potentially influence brain responses to sexual stimuli when women are included in neuroimaging studies of sexual arousal and genital response. Moreover, we will assess whether previous neuroimaging studies considered these factors.

Tuesday, September 21, 2021

From 2013... Women in STEM: Investments and job rewards that generally stimulate field commitment, such as advanced training and high job satisfaction, fail to build commitment among women in STEM

From 2013... What's So Special about STEM? A Comparison of Women's Retention in STEM and Professional Occupations. Jennifer L. Glass, Sharon Sassler, Yael Levitte, Katherine M. Michelmore. Social Forces, Volume 92, Issue 2, December 2013, Pages 723–756, https://doi.org/10.1093/sf/sot092

Abstract: We follow female college graduates in the National Longitudinal Survey of Youth 1979 and compare the trajectories of women in science, technology, engineering, and mathematics (STEM)-related occupations to other professional occupations. Results show that women in STEM occupations are significantly more likely to leave their occupational field than professional women, especially early in their career, while few women in either group leave jobs to exit the labor force. Family factors cannot account for the differential loss of STEM workers compared to other professional workers. Few differences in job characteristics emerge either, so these cannot account for the disproportionate loss of STEM workers. What does emerge is that investments and job rewards that generally stimulate field commitment, such as advanced training and high job satisfaction, fail to build commitment among women in STEM.


Discussion

Our results suggest that there are few significant differences in the demographic and family characteristics of women in STEM jobs compared to women in non-STEM professional jobs, or in the measured work conditions they face (hours, job satisfaction, and job flexibility). Despite stereotypical notions about women in STEM not having families, the women in STEM jobs in our sample are just as likely to be married and bear children as women in professional jobs. Women in STEM jobs do show slightly more egalitarian gender attitudes, higher earnings, and better work-life amenities, but this should make them less likely to leave STEM employment relative to women in professional jobs, especially for non-market pursuits like homemaking. Yet our findings reveal that women in STEM fields are dramatically less likely to persist in them over time compared to women in other professional fields and that this occurs because women in STEM move to non-STEM jobs at very high rates, not because women in STEM fields disproportionately move out of the labor force. Moves out of the labor force are in fact quite rare for both groups, confirming analyses that show growing labor force attachment among professionals in all fields over time, particularly when workplace supports for parenting exist (Herr and Wolfram 2009; Percheski 2008).

Moreover, the women who leave STEM occupations are unlikely to return; only a handful of women ever moved back into a STEM job following a job move out of the field. However, some of these STEM women could be moving from scientific or technical work into the management of scientific or technical work. To check, we looked at the distribution of jobs taken following the last STEM job and report the results in appendix C. Only about 21 percent of moves out of STEM are moves into managerial or administrative ranks; the vast majority are not. While some move into health professions (4 percent become health technologists, 1 percent become dietitians, and 1 percent become physicians) or teaching (11 percent), most go into non-professional jobs (50 percent).

One reason that so few moves led to management careers may be that these moves occurred early in the respondent's STEM career, most in the first five years of employment. This suggests not only that promotions into management are unlikely to be the sources of moves out of the field, but that marriage and children are not the primary propellants of moves out of STEM either. We turn to our multivariate models for clues about this early erosion from STEM employment into other fields among women who have persevered through the educational process to get STEM degrees.

While we expected that women's token status in STEM fields could be isolating and lead to dissatisfaction with STEM work environments, neither of our measures of tokenism (occupation proportion female or motherhood status) significantly affects retention in our multivariate models. In addition, results show that most of the workplace characteristics, including hours of work, earnings, and parental leave policies, affect retention in similar ways for women in STEM and professional employment. However, women in STEM fields do not react as positively to increasing job satisfaction, job tenure, and advancing age, suggesting that climate issues or lack of "fit" between worker and job persist for longer periods of time in STEM careers. This helps explain the widening retention deficit that STEM women experience over time relative to professional women.

The effects of educational credentials on retention, which we initially considered to be another indicator of commitment to STEM, bolster this interpretation. While holding an advanced degree does not affect the odds of leaving professional employment for either destination status (different type of job or labor force exit), increasing educational investment in STEM actually decreases retention and increases the odds of leaving STEM employment, suggesting that the STEM jobs held by advanced-degree holders are either more noxious or more isolating than those held by bachelor's degree recipients. While unexpected, this is consistent with both the competition/demands and token status explanations proffered for the weaker retention of women in STEM employment. Whatever the origin of these effects, the fact that advanced training, increasing job tenure, job satisfaction, and aging do not deepen commitment to STEM fields as they do for most other workers in most other fields is particularly troubling.

Family formation events and family characteristics that might decrease occupational commitment appear to be more closely associated with leaving STEM employment than with leaving professional employment. Early aspirations to avoid or postpone family obligations emerge as important for STEM employees' retention in the field while having neutral or negative impacts on field leaving among professionals. Actually getting married negatively affects retention in the field for STEM employees, but having a spouse employed in the same field emerges as surprisingly important in discouraging both changing fields and exiting the labor force among women in STEM, while having virtually no effect on field leaving among professional women.

The patterning of these results supports the perspective that there may be peculiar unmeasured features of STEM jobs that are difficult to combine with family life, and that these are exacerbated as one goes up the hierarchy of skill and authority in STEM employment. But we hesitate to exaggerate the importance of these indicators of occupational commitment (family statuses and spouse characteristics) because the biggest problems in STEM retention occur so early in STEM careers. The large residual unexplained difference in moves out of field between STEM and professional women eludes explanation by family factors and simple job characteristics like earnings or work hours. Even work-life amenities such as flexible scheduling and telecommuting matter little in accounting for the lower retention rates of STEM workers.
We suspect that the retention deficit in STEM may be due to the team organization of scientific work combined with the attitudes and expectations of coworkers and supervisors who hold more traditional beliefs about the competencies of women in these rapidly changing fields. The token status of women at higher skill levels, which we could not test, may also contribute to their disproportionate loss compared to skilled professionals.

We acknowledge that our longitudinal data on a single cohort of highly educated women at mid-career cannot capture possible trends in reactions to STEM work environments among women college graduates from the mid-1990s and beyond. Younger women in STEM may differ from the pioneering cohorts of the 1980s and early 1990s, and may hold more conventional desires for marriage and family that discourage continuity in STEM careers. This may be counterbalanced, however, by the fact that attitudes toward mothers' employment, nonmarital childbearing, and cohabitation have liberalized among women at all levels of education and occupation since the early 1980s. Perhaps women in STEM jobs are more conventional now than in the past, and their family attitudes are more salient in explaining why women leave STEM employment in contemporary cohorts. Recent evidence from college-bound women in 2002, however, shows little evidence that the family plans of young women deter either majoring in STEM or aspiring to STEM occupations (Morgan, Gelbgiser, and Weeden 2013).

The focus for future work should be, we believe, on the first few years of employment in STEM jobs, when the greatest attrition out of the field occurs. Our analysis suffers from a lack of detailed information on the characteristics of jobs and the organizational environment in which STEM women labor post-graduation. The interaction patterns between new STEM entrants and supervisors and coworkers may be especially relevant, along with the skill content of the job and the prospects for future upward mobility. The distinction between organizational provision of work-life amenities and the ability of employees to actually use amenities without negative consequence may also be important in understanding why women might leave fields that initially seem to have better pay and benefits and greater flexibility.

Higher education liberalizes moral concerns for most students, but it also departs from the standard liberal profile by promoting moral absolutism rather than relativism; more for individuals majoring in the humanities, arts, or social sciences

College and the “Culture War”: Assessing Higher Education’s Influence on Moral Attitudes. Miloš Broćić, Andrew Miles. American Sociological Review, September 18, 2021. https://doi.org/10.1177/00031224211041094

Abstract: Moral differences contribute to social and political conflicts. Against this backdrop, colleges and universities have been criticized for promoting liberal moral attitudes. However, direct evidence for these claims is sparse, and suggestive evidence from studies of political attitudes is inconclusive. Using four waves of data from the National Study of Youth and Religion, we examine the effects of higher education on attitudes related to three dimensions of morality that have been identified as central to conflict: moral relativism, concern for others, and concern for social order. Our results indicate that higher education liberalizes moral concerns for most students, but it also departs from the standard liberal profile by promoting moral absolutism rather than relativism. These effects are strongest for individuals majoring in the humanities, arts, or social sciences, and for students pursuing graduate studies. We conclude with a discussion of the implications of our results for work on political conflict and moral socialization.

Keywords: moral attitudes, higher education, culture war, socialization, political sociology

According to Bloom (1987:26), behind the curriculum of every educational system lies a latent moral purpose to "produce a certain kind of human being." Yet recent scholarship has questioned whether the collegiate experience is indeed a deeply formative period. Researchers have demonstrated that differences prior to enrollment explain much of the variation in outcomes across educational levels (Campbell and Horowitz 2016; Elchardus and Spruyt 2009; Gross 2013), a finding that resonates with work emphasizing the importance of early-life social experiences in forming moral dispositions (Killen and Smetana 2015; Vaisey and Lizardo 2016). We test whether higher education shapes morality using four waves of data that follow respondents from high school into young adulthood and models that test or control for selection processes. We find that moral attitudes remain malleable into young adulthood and that higher education is an important institution that facilitates change.

The most consistent predictors of moral change were pursuing graduate education and majoring in the humanities, arts, or social sciences. These educational experiences increased belief that moral principles should adapt to changes in society (moral progressivism), but—in contrast to the typical liberal moral profile—they also decreased moral relativism, suggesting some students are emerging from higher education with a greater conviction in absolute rights and wrongs. However, our data indicate this moral absolutism looks different than the moral absolutism of religious and political conservatives. Rather than supporting traditional norms, these students emerge from university with a moral profile characterized by high concern for others and weak commitment to traditional social order. One interpretation of these results is that some university students—particularly those majoring in HASS or who continue on to graduate education—come to believe that the morals of society must change to remedy historical (and current) injustices (i.e., moral progressivism), but that the moral principles they have learned through their studies represent the real moral truth (moral absolutism).

Evidence of decreased relativism is noteworthy in that it contrasts with prior critiques of higher education by religious and conservative commentators, as well as earlier scholarly accounts that described relativistic tendencies among academics (Hunter 1991; Wuthnow 1988). Lazarsfeld and Thielens's (1958) pioneering study of the U.S. professoriate, for instance, described social scientists as relativists whose keen awareness of historical variation in morality led to contingency in their own beliefs. Consistent with this, we find HASS majors believe morals should be adjusted to social changes, suggesting a more contextual and relativistic moral understanding. However, these students differ from earlier relativists in their willingness to claim there are definite moral truths. This lends prima facie support to recent claims that the moral relativism of years past is transforming into a form of liberal moral puritanism (Campbell and Manning 2018; Lukianoff and Haidt 2018).

The apparent discrepancies between our findings and earlier work invite the question of whether key socializing processes in higher education have changed. Our study's focus on individual-level change limits our ability to assess this directly, but suggestive research allows us to speculate. Growing social closure along the lines of political ideology among university faculty and administrators may partly explain the rise in moral absolutism among students (Gross 2013). In 1969, 28 percent of professors described themselves as conservative, but by 2013 this decreased to 12 percent (Eagan et al. 2014; Ladd and Lipset 1975). Data on college administrators are harder to come by, but a recent survey found that among "student-facing" college administrators—those who are most responsible for shaping student experiences on campus—liberals outnumber conservatives by as much as 12 to 1 (Abrams 2018a, 2018b). Increasing political homogeneity among faculty and/or administrators could create a sense of moral consensus that leaves shared liberal beliefs unchallenged or might even make them seem naturally true. Lack of interpersonal engagement with members of an outgroup can in turn make individuals less politically tolerant, less likely to regard opposing views as legitimate, and more likely to hold extreme attitudes (Huckfeldt, Mendez, and Osborn 2004; Mutz 2002)—all traits that coincide with stronger moral conviction (Skitka et al. 2021). These processes could contribute to a sense of liberal moral certitude among students to the extent that university messaging, course content, the types of faculty mentors available, or even informal interactions with faculty and staff communicate moral consensus.

This narrative may be incomplete, however, given that moral certainty also increases for students enrolled in majors that are not heavily associated with liberal moral concerns.11 Another possibility is that growth in moral certainty might also be explained by socialization into the official culture of dominant institutions. According to scholarship in this area, universities are the primary institution for mobility into the professional classes. Consequently, their latent function is to socialize students into dominant status culture by teaching proper etiquette, aesthetic tastes, and moral evaluations that serve to legitimize their advantaged class position (Bourdieu 1984; Collins 1971; Jackman and Muha 1984). Moral justifications may differ across fields, with educated elites variously casting themselves as "enlightened cosmopolitans" (see Johnston and Baumann 2007; Lizardo and Skiles 2015; Ollivier 2008) or winners of "meritocratic struggle" (Bourdieu and Passeron 1979; Mijs 2016; Piketty 2020), but strong moral self-assurance appears to form a common sentiment. Importantly, as cultivation combines with a growing sense of expertise from formal training, educational attainment may impart moral beliefs with a stamp of objectivity (cf. Bottum 2014). Seen this way, moral righteousness might be a consequence of rising social class rather than liberal socialization alone. Of course, the two need not be mutually exclusive—professionalization and liberal attitudes could reinforce one another to the extent that dominant institutions adopt liberal values, policies, or agendas. Some evidence suggests this process might be well under way.12

Recent events suggest higher education's role in liberalizing moral concerns could have important consequences for social conflict. Scholars have noted the growing salience of the "diploma divide" in politics, with educational attainment being among the strongest predictors of voting against Donald Trump, Brexit, and other events (Gidron and Hall 2017; Lind 2020; Piketty 2020). Our study speaks to the moral dimension of this divide. When conflict pits nativism against cosmopolitanism and "vulgar" populism against "technocratic" expertise, an educational system that promotes commitment to liberal sensibilities will likely stratify voters according to educational attainment.13 Moral stratification of this sort could pose several risks to civil society. If individuals on the political right come to regard the primary credentialing institution as hostile to their interests, partisan segregation could further escalate by deterring conservative enrollment (Gross 2013). This, in turn, could deepen the distrust toward government, media, and other institutions that employ the credentialled classes that is already evident among the less-educated (Rainie and Perrin 2019). Finally, deliberative democracy could suffer if educational attainment is accompanied by a rising moral conviction that views opposition as too dangerous to engage with or even tolerate (Skitka 2010; Skitka, Bauman, and Sargis 2005).14

However, we must be careful not to overstate the political consequences of moral change. Partisans often differ in their moral attitudes (Miles and Vaisey 2015), but it is unclear whether higher education's effects on moral attitudes will necessarily lead to demonstrable shifts in political behavior. A student leaving the university might well emerge with less regard for traditional conservative morality, yet still vote Republican for economic, foreign policy, or other reasons. Some research even finds that partisan identification precedes moral change, suggesting moral differences may express rather than constitute partisan allegiances (Hatemi, Crabtree, and Smith 2019; Smith et al. 2017). The fact that higher education also shapes eventual class position complicates matters further by leaving open the possibility that material interests underlie conflict that on the surface appears morally motivated (Lasch 1994; Lind 2020; Piketty 2020). Given these considerations, it would be premature to conclude that morality is the only or even necessarily the primary predictor of political behavior. Future research should continue to explore how moral, economic, and political interests intersect among the highly educated, and the effects these have on political behavior. Such research could build on older sociological analyses of the "New Class" emerging from the knowledge economy (Bazelon 1967; Bell 1979; Gouldner 1978), variously treated as the "Creative Class" (Florida 2002), the "Elect" (Bottum 2014), or the "Brahmin Left" (Piketty 2020) in contemporary discussions.

Our study also speaks to work on moral socialization (Guhin, Calarco, and Miller-Idriss 2021). Contrary to recent accounts emphasizing selection effects, we find that moral socialization occurs within universities in a meaningful way. Consider higher education's effect as it compares to religious practices. Scholars often depict religion as the defining cleavage of cultural conflict (Castle 2019; Gorski 2020; Wuthnow 1989), yet our analysis finds that the effect of higher education on moral concerns is comparable to the moral influence of adolescent religion and imparts a sense of moral absolutism that rivals the effect of religiosity. Evidence of moral change invites additional research into what aspects of early morality are stable, and which are open to revision. Theories of moral socialization often acknowledge the possibility of later moral change, but in practice focus on innate moral impulses or moral learning processes that occur early in life (Graham et al. 2009; Killen and Smetana 2015). Scholars who consider attitude development during adulthood, moreover, find greater support for a "settled disposition model" emphasizing stability rather than change (Kiley and Vaisey 2020; Vaisey and Lizardo 2016). However, our results suggest adolescence and young adulthood remain important periods of moral change worthy of scholarly attention (cf. Hardy and Carlo 2011).

Further work is also needed to understand the processes whereby educational attainment influences moral attitudes. Consistent with the socialization hypothesis, moral change was strongest for HASS students, and comparatively weaker and in some cases absent for other majors. This suggests curricular content matters for moral change. The traditional socialization hypothesis holds that moral relativism is the natural by-product of exposure to cultural diversity, but this was not borne out by our analyses. Instead, we observed an increase in moral absolutism, which may suggest students are being actively taught moral ideals. This, however, remains speculative and requires systematic exploration. Furthermore, the fact that moral relativism decreases across all fields suggests socialization effects likely are not due to curricular content alone and may indicate social learning through noncurricular aspects of the university experience. As discussed earlier, we speculate that formal and informal socialization into official culture might explain this effect, with institutional validation and expertise giving students moral self-assurance, and the mostly liberal direction of this change signaling the elevation of social justice and related liberal concerns within major institutions (Campbell and Manning 2018; Lind 2020).

Ideally, future research would address the limitations of this study. For example, future work should use larger samples to increase statistical power to detect effects when cross-classifying educational categories. Furthermore, we believe our research supports a causal interpretation, but this interpretation is necessarily provisional, particularly for our results linking higher education to changing moral concerns for order, given that these were measured only at wave 4. Researchers should collect data on moral concerns at multiple waves so that correlated-random-effects models or equivalent methods can be used to test for and—if needed—correct for the influence of unobserved time-constant confounds. Future analysis could also unpack the causal mechanisms involved by incorporating direct measures of course content and noncurricular aspects of the academic environment (e.g., campus messaging, programming, friendship networks; see Rauf 2021; Strother et al. 2020). The moral consequences of cognitive sophistication could also be clarified. Indeed, absolute moral certitude appears at odds with the cognitive hypothesis, which predicts greater intellectual flexibility as a result of sophistication (cf. Adorno et al. 1950; Altemeyer 1996; Jost et al. 2003). Finally, it is important to replicate our results using recent samples of college-aged adults. Although victimhood culture (under various names) has been discussed since at least the 1980s (Bloom 1987), some scholars argue that manifestations of this moral culture increased sharply beginning in the mid-2010s (Campbell and Manning 2018; Lukianoff and Haidt 2018). The final wave of data for the NSYR was collected in 2012 to 2013, which places our data relatively early in these developments. More recent data would allow our findings to be tested in a sample that more closely aligns with the theorized timeline and could provide important insights into the underlying mechanisms.

Developmental Noise Is an Overlooked Contributor to Innate Variation in Psychological Traits

Mitchell, Kevin J. 2021. “Developmental Noise Is an Overlooked Contributor to Innate Variation in Psychological Traits.” PsyArXiv. September 21. doi:10.31234/osf.io/qnams

Abstract: Stochastic developmental variation is an additional important source of variance – beyond genes and environment – that should be included in considering how our innate psychological predispositions may interact with environment and experience, in a culture-dependent manner, to ultimately shape patterns of human behaviour.

---

The target article (Uchiyama et al., in press) presents a very welcome and much-needed overview of the importance of cultural context in the interpretation of heritability. The authors discuss a range of complex interactions that can occur between cultural and genetic effects, illustrating how already complicated gene-environment correlations and interactions can vary at a higher level as a function of cultural factors or secular trends.

However, framing genes and environment as the only sources of variance ignores an extremely important third component of variance: stochastic developmental variation (Vogt, 2008). Genetic effects on our psychological traits are mainly developmental in origin, but genetic differences are not the only source of variance in developmental outcomes (Mitchell, 2018).

The genome does not specify a precise phenotype – there is not enough information in the three billion letters of our DNA to encode the position of every cell or the connections of every neuron. Rather, the genome encodes a set of biochemical rules and cellular processes through which some particular outcome from a range of possible outcomes is realized (Mitchell, 2007).

These processes of development are intrinsically noisy at a molecular and cellular level (Raj and van Oudenaarden, 2008), creating substantial phenotypic variation even from identical starting genotypes (Kan et al., 2010). The importance of chance as a contributor to individual differences was recognised by Sewall Wright as early as 1920 (Wright, 1920) and is ubiquitously observed for all kinds of morphological and behavioural traits across diverse species (Honegger, 2018; Vogt, 2019). For brain development in particular, the contingencies and non-linearities of developmental trajectories mean that such noise can manifest not just as quantitative, but sometimes as qualitative variation in the outcome (Honegger, 2018; Linneweber et al., 2020; Mitchell, 2018).

The implication is that individual differences in many traits are more (sometimes much more) innate than the limits of the heritability of the trait might suggest. In other words, not all of the innate sources of variation are genetic in origin, and not all of the non-genetic components of variance are actually “environmental”.

Indeed, a sizeable proportion of the confusingly named “non-shared environmental” component of variance may have nothing to do with factors outside the organism at all, but may be attributable instead to inherently stochastic developmental variation (Barlow, 2019; Kan et al., 2010; Mitchell, 2018). This may be especially true for psychological traits, where heritability tends to be modest, but systematic environmental factors that might explain the rest of the variance have remained elusive (Mitchell, 2018). Proposals that idiosyncratic experiences should somehow have more of an effect than systematic ones (Harris, 1995) provide no convincing evidence that this is the case, nor any persuasive arguments for why it might be so.
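To make the bookkeeping concrete, here is a toy decomposition using our own notation (not taken from the commentary or the target article): write the phenotypic variance as

$$ V_P = V_G + V_{Env} + V_{DN}, \qquad h^2 = \frac{V_G}{V_P}, \qquad \text{innate share} = \frac{V_G + V_{DN}}{V_P} \ge h^2. $$

With purely illustrative values $V_G = 0.4$, $V_{DN} = 0.3$, and $V_{Env} = 0.3$ (normalized so that $V_P = 1$), heritability is 0.4, yet 70% of the variance is innate. A classical twin design would assign the 0.3 of developmental noise to the "non-shared environment" term, because that variation is not shared even by monozygotic co-twins.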

This does not overturn any of the important points that the authors make but does suggest an important reframing. Rather than thinking solely of genetic versus environmental sources of variance, and the interaction between them, we can think of the interplay between innate predispositions – which reflect both genetic and developmental variation – and experience. Culture can have a huge influence on this interplay, especially on how much scope it gives for individual differences in psychology to be expressed or even amplified through experience.

However, if such predispositions do not solely reflect genetic influences, then the implications of such effects for heritability become less obvious. If genetic variance predominates at early stages, then heritability may increase across the lifespan, as is observed for cognitive ability. On the other hand, if the influence of stochastic developmental variance (included in the non-shared environment term) is larger, then heritability may decrease with age, as observed for example for many personality traits (Briley and Tucker-Drob, 2017). In both cases, innate differences may be amplified, as observed in mice (Freund et al., 2013).
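Continuing the toy numbers above (ours, purely illustrative): if amplification through experience doubles the genetic component, $h^2$ rises from $0.4/1.0$ to $0.8/1.4 \approx 0.57$; if it instead doubles the developmental-noise component, $h^2$ falls to $0.4/1.3 \approx 0.31$. In both scenarios the innate share of variance increases, even though the heritability estimates move in opposite directions.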

An already complicated picture of interactions and meta-interactions thus becomes even more so. In addition, there may be further interactions at play, as the degree of developmental variability is often itself a genetic trait. This has been observed in various experimental systems, which have found that variability of a trait can be affected by genetic variation and even selected for, with no concomitant effect on the phenotypic mean (e.g., Ayroles et al., 2015).

More generally, the developmental program has evolved to robustly produce an outcome within a viable range (Wagner, 2015). However, that robustness depends on all of the elements of the genetic program and the multifarious feedforward and feedback interactions between them. Increasing genetic variation is therefore expected to not just affect various specific phenotypes, but also to degrade the general robustness of the overall program and thus increase the variability of outcomes from some genotypes more than others.

This is illustrated by the special case of increased variance in many traits in males compared to females, observed across diverse phenotypes in many different species (Lehre et al., 2008). A proposed explanation is that hemizygosity of the X chromosome in males reduces overall robustness of the programs of development and physiology and thus increases variance in males. Strong support for this hypothesis comes from the evidence that the direction of this effect is reversed in species, such as birds, where females are the heterogametic sex and show increased phenotypic variance (Reinhold and Engqvist, 2013). Sex is thus another factor that may affect patterns of variation of human traits through this kind of general influence on developmental variability. In addition, of course, cultural factors differ hugely between the sexes, which may differentially influence how innate predispositions are expressed by males and females.

One final complication is that environmental conditions may either buffer or further challenge the developmental program, reducing or exposing variability, as demonstrated in classic experiments (Waddington, 1957; Wagner, 2007). Overall then, the already complex interactions very thoroughly discussed by the authors should be expanded to include the often overlooked but hugely important third component of variance: noise inherent in the developmental processes by which genotypes become realized as specific phenotypes.


Epistemic hubris is prevalent, bipartisan, & associated with both intellectualism (an identity marked by ruminative habits and learning for its own sake) & anti-intellectualism (negative affect toward intellectuals & the intellectual establishment)

Intellectualism, Anti-Intellectualism, and Epistemic Hubris in Red and Blue America. David C Barker, Ryan Detamble, Morgan Marietta. American Political Science Review, September 13 2021. https://www.cambridge.org/core/journals/american-political-science-review/article/abs/intellectualism-antiintellectualism-and-epistemic-hubris-in-red-and-blue-america/37C9F95A5DF4F81BAF69677DC6C9C972

Abstract: Epistemic hubris—the expression of unwarranted factual certitude—is a conspicuous yet understudied democratic hazard. Here, in two nationally representative studies, we examine its features and analyze its variance. We hypothesize, and find, that epistemic hubris is (a) prevalent, (b) bipartisan, and (c) associated with both intellectualism (an identity marked by ruminative habits and learning for its own sake) and anti-intellectualism (negative affect toward intellectuals and the intellectual establishment). Moreover, these correlates of epistemic hubris are distinctly partisan: intellectuals are disproportionately Democratic, whereas anti-intellectuals are disproportionately Republican. By implication, we suggest that both the intellectualism of Blue America and the anti-intellectualism of Red America contribute to the intemperance and intransigence that characterize civil society in the United States.