Sunday, December 22, 2019

Pathogen defence is a potential driver of social evolution in beetles: Daughters prolonged their cooperative phase within their mothers' nest, increasing hygienic behaviors (allogrooming & cannibalism)

Pathogen defence is a potential driver of social evolution in ambrosia beetles. Jon A. Nuotclà, Peter H. W. Biedermann and Michael Taborsky. Proceedings of the Royal Society B, Volume 286, Issue 1917, December 18 2019. https://royalsocietypublishing.org/doi/10.1098/rspb.2019.2332

Abstract: Social immunity—the collective behavioural defences against pathogens—is considered a crucial evolutionary force for the maintenance of insect societies. It has been described and investigated primarily in eusocial insects, but its role in the evolutionary trajectory from parental care to eusociality is little understood. Here, we report on the existence, plasticity, effectiveness and consequences of social pathogen defence in experimental nests of cooperatively breeding ambrosia beetles. After an Aspergillus spore buffer solution or a control buffer solution had been injected in laboratory nests, totipotent adult female workers increased their activity and hygienic behaviours like allogrooming and cannibalism. Such social immune responses had not been described for a non-eusocial, cooperatively breeding insect before. Removal of beetles from Aspergillus-treated nests in a paired experimental design revealed that the hygienic behaviours of beetles significantly reduced pathogen prevalence in the nest. Furthermore, in response to pathogen injections, female helpers delayed dispersal and thus prolonged their cooperative phase within their mother's nest. Our findings of appropriate social responses to an experimental immune challenge in a cooperatively breeding beetle corroborate the view that social immunity is not an exclusive attribute of eusocial insects, but rather a concomitant and presumably important feature in the evolutionary transitions towards complex social organization.


1. Introduction

Pathogens pose a major risk to highly social animals. Insect societies, for instance, provide ideal conditions for their dissemination [1,2], because a large number of closely related individuals with potentially very similar immune defences live together in intimate contact and under homogeneous, environmentally buffered conditions. Low genetic variance has been shown to reduce the chances of successfully resisting severe fungus infections in honeybees, and in ants it reduces the effectiveness of anti-pathogen behaviours [3,4]. To counter pathogen risk, social insects evolved various physiological and behavioural strategies to inhibit the spread of diseases [5].
The innate immune system, pathogen avoidance and self-cleaning behaviours are probably the most common anti-pathogen strategies in insects. In addition to such traits that might be termed ‘non-social’, many social insects were found to express social immunity, which refers to cooperative sanitation involving the joint mechanical and chemical removal of bacterial and fungal pathogens. Originally, social immunity was regarded as a nest-wide parasite and pathogen defence mechanism that evolved in eusocial insects to counter the aforementioned inherent risks of infection caused by the social lifestyle and genetic homogeneity [5]. Importantly, this concept has highlighted the parallels between the innate non-social immune system of a single multicellular organism and a nest-wide ‘social immune system’ of a complex insect society. This idea relates to the concept of superorganismality, where a whole nest of social insects is regarded as a single reproducing entity (the ‘superorganism’ [6,7]). Groups of nest members take on specialized roles, which corresponds to the differentiated cell tissues of a multicellular organism [8,9]. In the best-studied societies of ants and bees, for example, this is proposed to have led to the evolution of sophisticated group-level social immune defences by workers, including application of antimicrobial substances onto contaminated areas, removal of corpses and diseased brood, social fever and allogrooming [10–12]. Such sanitation behaviour is not restricted to eusocial insects, however, as its precursors are already present in subsocial insects with parental care (e.g. [13–15]), although empirical data from such systems are scarce. This sparked a debate about whether the concept of social immunity should be extended to include cooperative sanitation tasks performed in non-eusocial group-living species, to better understand the evolutionary origins of social immunity [11,16].
This recent debate highlights that the evolutionary origin of social immunity remains unclear. Either social immunity evolved as a result of increased pathogen transmission in eusocial organisms (termed the eusocial framework [5,11,16]) or sociality and social immunity co-evolved in a close feedback interaction (termed the group living framework [15–17]). In some taxa, the suppression of pathogens is a very important social task, not only exhibited by parents towards offspring, but sometimes even between all individuals in a nest or aggregation. Hence, it is conceivable that under certain circumstances, pathogens themselves may be important drivers of sociality. This might be true especially in taxa that live in permanent close contact with a decaying food source and are thus frequently in contact with various microbes (e.g. involving parental care in burying beetles, larval aggregations in Drosophila or worker specialization in attine ants [17]). Our study explored this possibility further by introducing fungus-farming bark beetles as a system for the experimental study of social immunity.
These so-called ambrosia beetles offer a unique opportunity for studying the evolution of social traits because closely related species express various social structures ranging from uniparental care to eusociality [18]. Cooperative breeders are of particular interest for experimentally studying social evolution, as here adult females delay dispersal and act as helpers or temporary workers. The length of dispersal delay is affected by the presence and quantity of dependent offspring in the colony and the level of nutrition [19,20], but it might also be affected by the presence and load of pathogens.
All ambrosia beetles live in close mutualistic relationships with different fungi and bacteria, which they farm as their sole source of nutrition within tunnel systems in the heartwood of trees. The main mutualists are so-called ambrosia fungi, primarily from the ascomycete orders Microascales and Ophiostomatales [21–23]. These fungi are taken up from the natal nest by dispersing adult females in special spore-carrying organs called mycetangia and subsequently spread on the walls of newly excavated tunnel systems. Finally, they are cultivated and possibly protected from other (fungal) pathogens or competitors [18,23,24]. In addition to these fungal mutualists, several other fungi have been isolated from beetle nests, many of which are pathogens for the beetles or at least competitors of the beetles' fungal mutualists [21,23]. The genera Aspergillus and Beauveria, for example, can directly infect and kill adults and brood of ambrosia beetles [23,25–28]. Other fungi compete with the ambrosia fungi and thus deplete the food source of the beetles (e.g. Penicillium sp., Chaetomium sp., Nectria sp. [29]). Such pathogens and competitors are probably the primary threat for the beetles because within the wood, they are well protected from most other natural enemies.
Morphological castes such as those in eusocial insects do not exist in ambrosia beetles. Instead, many species show division of labour among totipotent adult and larval offspring, with adults taking on nest protection and sanitation, and larvae engaging in nest enlargement and packing of frass (i.e. sawdust, faeces and possibly pathogens) [19]. Larvae and adults join forces to pack and expel pellets of waste through the nest entrance. One of the most common behaviours in both adults and larvae is allogrooming of each other and the brood, probably against pathogens. Diseased individuals are either cannibalized or removed from the nest [19]. Currently, however, it is unknown whether ambrosia beetle larvae and/or adults can detect pathogens and actively suppress their load within nests. Some bark beetles have been shown to exude secretions from their mouth to kill pathogenic fungi [30]. Others, like the species Dendroctonus frontalis, are associated with bacteria that produce antibiotics which selectively kill antagonistic microorganisms threatening their fungal associates [31]. Indications for such a bacterial defence mechanism that specifically targets fungal pathogens and not the fungal cultivars have also recently been found in our model species Xyleborinus saxesenii [32].
Recent advancements in laboratory rearing, observation and in situ manipulation techniques [24,33,34] allow studies of social pathogen defence in ambrosia beetles. Previous studies revealed vigorous cleaning behaviours by adult offspring and even larvae. Since all ambrosia beetles live in close contact with a rich microbial environment, similar to some of the best-described models for social immunity in eusocial insects, we expect to find convergent behavioural adaptations to increased pathogen exposure. In addition, the naturally very high inbreeding rate found in cooperatively breeding ambrosia beetles is assumed to create a condition similar to eusocial insects, where the genetic homogeneity of nestmates renders group members highly vulnerable to microbial attack.
To test this idea, we used the cooperatively breeding and naturally highly inbred species X. saxesenii Ratzeburg to determine the effect of Aspergillus fungal pathogens on beetle social behaviours and potential social immunity. This pathogen was chosen because it has been repeatedly isolated from diseased individuals from X. saxesenii nests (see electronic supplementary material, figure S1) and it is well known for its pathogenicity for many insects (including other bark beetles [26,27]), which is a result of produced aflatoxins [35,36]. Aspergillus spores were experimentally injected in laboratory nests, and effects were determined on (i) the social behaviours displayed by larvae and adults and (ii) the timing of dispersal of adult offspring from the natal nest. In addition, (iii) we assessed the effectiveness of the beetles' hygienic behaviours on pathogen spore loads, by comparing pathogen spore loads of nest parts with beetles present against parts where beetles had been experimentally removed after injection of the pathogen. We predicted that group members would increase nest sanitation in response to the introduced pathogen and that this behaviour would reduce pathogen spore loads. Furthermore, we predicted that daughters would either delay their dispersal to help with nest hygiene, thereby increasing their indirect fitness benefits, or disperse earlier to protect their individual health and direct fitness gains.
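The paired removal experiment in (iii) amounts to comparing spore loads between matched nest parts with beetles present versus removed. As a purely illustrative sketch of that comparison (all spore counts below are invented for the example, not the study's data), a paired t-test could be computed as follows:

```python
import math
import statistics

def paired_t(with_beetles, beetles_removed):
    """Paired t-statistic for matched nest parts (toy sketch).

    Each index i pairs the spore load of a nest part where beetles
    stayed with the matched part where beetles were removed.
    """
    diffs = [a - b for a, b in zip(with_beetles, beetles_removed)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))      # t with n - 1 degrees of freedom
    return t, n - 1

# Hypothetical log10 spore counts per nest part (NOT the paper's data)
kept    = [2.1, 2.4, 1.8, 2.0, 2.6, 1.9]   # beetles present: hygienic behaviour
removed = [3.0, 3.2, 2.9, 2.7, 3.5, 2.8]   # beetles removed: pathogen unchecked

t, df = paired_t(kept, removed)
print(f"t = {t:.2f} on {df} df")  # negative t: lower loads with beetles present
```

A negative t-statistic in this toy setup corresponds to the paper's finding that nest parts retaining beetles carried significantly lower pathogen loads; in practice one would use a dedicated routine such as scipy.stats.ttest_rel rather than hand-rolling the statistic.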

Deconstructing sociality: the types of social connections that predict longevity in a group-living primate

Deconstructing sociality: the types of social connections that predict longevity in a group-living primate. Samuel Ellis, Noah Snyder-Mackler, Angelina Ruiz-Lambides, Michael L. Platt and Lauren J. N. Brent. Proceedings of the Royal Society B, Volume 286, Issue 1917, December 18 2019. https://royalsocietypublishing.org/doi/10.1098/rspb.2019.1991

Abstract: Many species use social interactions to cope with challenges in their environment and a growing number of studies show that individuals which are well-connected to their group have higher fitness than socially isolated individuals. However, there are many ways to be ‘well-connected’ and it is unclear which aspects of sociality drive fitness benefits. Being well-connected can be conceptualized in four main ways: individuals can be socially integrated by engaging in a high rate of social behaviour or having many partners; they can have strong and stable connections to favoured partners; they can indirectly connect to the broader group structure; or directly engage in a high rate of beneficial behaviours, such as grooming. In this study, we use survival models and long-term data in adult female rhesus macaques (Macaca mulatta) to compare the fitness outcomes of multiple measures of social connectedness. Females that maintained strong connections to favoured partners had the highest relative survival probability, as did females well-integrated owing to forming many weak connections. We found no survival benefits to being structurally well-connected or engaging in high rates of grooming. Being well-connected to favoured partners could provide fitness benefits by, for example, increasing the efficacy of coordinated or mutualistic behaviours.

4. Discussion

By quantifying the relationship between survival and four of the most common operational definitions of social connectedness in a single system, this study highlights that being ‘well-connected’ is multi-faceted and provides evidence that some aspects of sociality represent more straightforward routes to biological success than others. In particular, we found support for a relationship between survival and dyadic connectedness: adult female rhesus macaques that frequently interacted with their top partners and that had partners that were stable over time were more likely to survive than females that interacted less often with their preferred and stable partners. However, we found no relationship between a female's number of strong connections and her probability of survival. For dyadic connections, at least, it appeared as though quality was more important than quantity. We also found some support for a relationship between social integration and survival: females that had a large number of weak connections experienced a lower mortality hazard. Other predictions of the social integration hypothesis were not supported, and there was little evidence that being structurally or directly well-connected resulted in survival benefits.
Our results add to previous studies linking the quality of dyadic relationships with positive fitness outcomes in social animals (table 1). In this study, rhesus macaque females with the strongest connections to their top partner had an 11% higher probability of survival than females that were less well-connected to their top partner. Repeatedly interacting with the same small number of individuals may facilitate the emergence and maintenance of cooperative relationships, whereby partners exchange behavioural services, such as grooming and coalitionary support, and where the consistency of partner identity may improve coordination of those behaviours and deter cheating [60,61].
Consistent and frequent partners may also result in benefits related to mutual social tolerance. In despotic, hierarchical, societies, like those of many female Old World primates, tolerated access to necessary resources, including food and space, may be beneficial to individuals [6264]. Repeated and stable partnerships may initially arise because of shared needs or preferences amongst pairs of individuals. For example, individuals with similar metabolisms, thermoregulatory needs, or preferences for certain foods, may repeatedly find themselves attempting to access the same resource [65,66]. If alliances between pairs of individuals result in tolerance of that pair when accessing a resource, combined with mutualistic joint defence of that resource against competing groupmates, repeated and stable relationships may emerge. This scenario relies on relative stability in resource availability and in individual differences in needs and preferences. Individuals living outside of those conditions may have little need for stable partners, and may therefore exhibit a divergent relationship between dyadic connectedness and fitness [22,23,30]. In these species, a more flexible and generalized strategy of connectedness—via, for example, social integration—may be a better strategy for coping with the challenges of group-living.
In addition to dyadic connectedness, we found that some aspects of social integration predicted survival in this study; the number of weak connections a female maintained was linked to her mortality hazard. Wide social tolerance derived from these connections may allow a female to feed without disturbance or avoid harassment in a greater number of settings than females with fewer weak connections. Similar to the results presented here, survival in blue monkeys (Cercopithecus mitis) has been shown to be positively associated with both strong-consistent connections and weak-inconsistent connections [51]. In the current population of rhesus macaques, measures of social integration have been positively linked to reproductive output [12] and proxies of social integration (family size) have been linked to survival [6]. Interestingly, correlations (electronic supplementary material, figure S1) and principal component analysis (electronic supplementary material, figure S6) suggest that dyadic connectedness measures and social integration measures are negatively associated in this population. That is, females with strong dyadic connectedness tend to have weak social integration. Taken together, these results may suggest that both dyadic connectedness and social integration can provide fitness benefits (albeit perhaps of different types) within the same system.
There was quantitative and qualitative variation in the relationship between survival and a female's number of strong connections, and between survival and number of weak connections depending on the threshold used to define connections as strong or weak. Choice of the threshold can, therefore, have important implications for the conclusions reached by a study, and we suggest that thresholds either be based on features of the data or behaviour of study species. More generally, connectedness is an individual effect. Defining connections as strong or weak at the population level and then calculating connectedness at the individual level may not best represent the salient features of the social environment experienced by individuals. This is highlighted by our contrasting results for number of strong connections and strength of connection to top associates (which is a measure defined at the individual level).
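The threshold sensitivity described above is easy to demonstrate: counting ‘strong’ connections against a population-level cutoff can rank the same individuals differently depending on where the cutoff falls, while an individual-level measure (strength of the top partner) is threshold-free. A minimal sketch with made-up association weights (none of these values come from the study):

```python
# Edge weights: how often each female associates with each partner (toy data)
networks = {
    "female_A": [0.9, 0.1, 0.1, 0.1, 0.1],   # one very strong partner
    "female_B": [0.4, 0.4, 0.3, 0.3, 0.3],   # several moderate partners
}

def n_strong(weights, threshold):
    """Population-level definition: a connection is 'strong' above threshold."""
    return sum(w > threshold for w in weights)

def top_partner_strength(weights):
    """Individual-level definition: strength of the single favoured partner."""
    return max(weights)

for thr in (0.2, 0.35, 0.5):
    counts = {f: n_strong(w, thr) for f, w in networks.items()}
    print(f"threshold {thr}: strong connections = {counts}")

# The individual-level measure needs no threshold:
print({f: top_partner_strength(w) for f, w in networks.items()})
```

With a cutoff of 0.2, female_B has more ‘strong’ connections than female_A; with a cutoff of 0.5 the ranking reverses, which is exactly why conclusions can hinge on the threshold chosen.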
We found no evidence of a relationship between an individual's position in the broader social network and their probability of surviving. Individuals that are well-connected to their broader social worlds have been suggested to benefit from being among the first to receive useful information when it enters the system. For example, in resident-ecotype killer whales, indirect network position predicts male survival, potentially because well-positioned males are more likely to receive information about the presence and location of resources [13]. The rhesus macaques in our study were provisioned at regular intervals and predictable locations and have no predators. The opportunities for individuals to gain survival benefits from social information in this population may, therefore, be limited. Although information about the social environment, such as mating opportunities or changes in group membership or dominance rank, is probably important for the success of these animals, the benefits of this information might be more tightly borne out in terms of reproductive success [12] and less so in terms of survival.
Measures of direct connectedness were also not important predictors of survival in female rhesus macaques: neither a greater amount of time spent in proximity to others, engaged in grooming, nor the relative amount of grooming received were associated with increased probability of survival. In some primate species, grooming rates have been linked to lower parasite loads (e.g. [35]). Our findings suggest that the benefits of sociality are not directly derived from the behaviours involved in sociality, at least in this population. This interpretation aligns with suggestions that relationships are a commodity or resource that are promoted and maintained in some social animals.
Other social factors not considered in detail here are also likely to influence mortality. Dominance rank has been shown to be an important predictor of fitness and health (e.g. [10]) and a source of variation in social behaviour [67] in primates, including in rhesus macaques [6,42,68]. Dominance rank did not significantly predict survival when evaluated as a term on its own and it was therefore not included as a main effect in subsequent models. Dominance rank was also not included as an interaction term with social connectedness because of concerns of overfitting. The analyses—in essence—represent the fitness consequences of sociality in females of ‘average’ rank. Including the interaction between connectedness and rank in future analyses may reveal important subtleties in the relationship between sociality and fitness. It is conceivable, for example, that the importance of social connectedness differs for females of high and low rank, though it should be noted that including rank has increased the observed benefits of sociality in this study system [6]. Further analyses based on longer observations and increased sample sizes would be needed to reveal how rank, and other behavioural and ecological constraints, influence the relationship between connectedness and longevity.
Overall, the results presented here demonstrate the value of understanding what exactly is meant by being ‘socially well-connected’. Although ‘sociality’ and ‘connectedness’ are useful catch-all terms, the methods used to measure them can influence the results revealed and the conclusions reached. We have highlighted how different aspects of sociality can result in different biological conclusions. Future work in other species is needed to understand the generality of the conclusions reached here. Testing whether different conceptualizations of being well-connected are related to proxies of fitness other than survival, such as reproductive success, is also required, as are studies investigating how different aspects of connectedness interact in other systems.

One unique feature of digital emotion contagion is that it is mediated by digital media platforms that are motivated to upregulate users’ emotions

Goldenberg, Amit, and James Gross. 2019. “Digital Emotion Contagion.” OSF Preprints. October 2. doi:10.31219/osf.io/53bdu

Abstract: People spend considerable time on digital media, and during this time they are often exposed to others’ emotion expressions. This exposure can lead their own emotion expressions to become more like others’ emotion expressions, a process we refer to as digital emotion contagion. This paper reviews the growing literature on digital emotion contagion. After defining emotion contagion, we suggest that one unique feature of digital emotion contagion is that it is mediated by digital media platforms that are motivated to upregulate users’ emotions. We then turn to measurement, and consider the challenges of demonstrating that digital emotion contagion has occurred, and how these challenges have been addressed. Finally, we call for a greater focus on understanding when emotion contagion effects will be strong versus weak or non-existent.

Killifish evolution of brain cell proliferation: higher proliferation under a higher degree of predation

Predation drives the evolution of brain cell proliferation and brain allometry in male Trinidadian killifish, Rivulus hartii. Kent D. Dunlap, Joshua H. Corbo, Margarita M. Vergara, Shannon M. Beston and Matthew R. Walsh. Proceedings of the Royal Society B, Volume 286, Issue 1917, December 18 2019. https://doi.org/10.1098/rspb.2019.1485

Abstract: The external environment influences brain cell proliferation, and this might contribute to brain plasticity underlying adaptive behavioural changes. Additionally, internal genetic factors influence the brain cell proliferation rate. However, to date, researchers have not examined the importance of environmental versus genetic factors in causing natural variation in brain cell proliferation. Here, we examine brain cell proliferation and brain growth trajectories in free-living populations of Trinidadian killifish, Rivulus hartii, exposed to contrasting predation environments. Compared to populations without predators, populations in high predation (HP) environments exhibited higher rates of brain cell proliferation and a steeper brain growth trajectory (relative to body size). To test whether these differences in the wild persist in a common garden environment, we reared first-generation fish originating from both predation environments in uniform laboratory conditions. Just as in the wild, brain cell proliferation and brain growth in the common garden were greater in HP populations than in no predation populations. The differences in cell proliferation observed across the brain in both the field and common garden studies indicate that the differences are probably genetically based and are mediated by evolutionary shifts in overall brain growth and life-history traits.

1. Introduction
Researchers have devoted much attention to assessing whether changes in adult neurogenesis in response to the environment might be a mechanism of adaptive brain plasticity [1,2]. While the precise functional significance of adult neurogenesis is still debated, there is substantial evidence from many model systems that environmental stimuli alter neurogenic rates in specific brain regions, and that such neurogenic changes have behavioural consequences [3]. For example, complex odour environments increase neurogenesis in the olfactory bulb of rodents [4], and these new neurons enhance odour discrimination abilities [5]. Similarly, seasonal changes in day length promote neurogenesis in the song nuclei in the brains of several bird species, and these neurons are linked to seasonal song production [6].

Most of our understanding of environmental influences on adult neurogenesis comes from laboratory studies in which researchers manipulate environmental stimuli and document effects over the timescale of days to months. That is, they demonstrate an external factor driving phenotypic plasticity in the neurogenic rate. However, the neurogenic rate can also be influenced by intrinsic genetic factors [7–9], and thus, over evolutionary timescales, the environment can modify the neurogenic rate via natural selection acting within populations. Selection could act directly on the neurogenic rate if enhanced (or reduced) brain plasticity confers an advantage in responding to environmental change. Additionally, in species with indeterminate growth, such as most fishes, the brain grows in tandem with the body throughout adulthood [10,11], and selection on body growth trajectories could indirectly affect brain growth and the underlying cellular processes of brain growth [12]. Thus, population variation in the neurogenic rate could arise from phenotypic responses to different environments, or from evolved genetic divergence owing to direct selection on brain growth rate or as an indirect, correlated response to selection on body growth (figure 1). We evaluated these alternative explanations by examining one stage of adult neurogenesis, brain cell proliferation, in killifish (Rivulus hartii) populations from different predator environments. By measuring brain cell proliferation rates in populations exposed to differential predation pressure in the field as well as those same populations reared in a common laboratory environment, we assessed whether population variation in brain cell proliferation is attributable to natural environmental differences versus intrinsic population differences. 
Finally, we evaluated the predator effects on brain cell proliferation within the context of lifetime growth trajectories of the brain and body in populations [13–16] to assess how population variation in brain cell proliferation fits into the overall evolved difference in life history.

[Figure 1. Three alternative causal chains linking the environment with variation in brain cell proliferation.]

In Trinidad, R. hartii are found in sites where they are the only species present (Rivulus only (RO) sites) and lack predators as well as in sites where they are exposed to predatory fish such as Hoplias malabaricus and Crenicichla frenata (high predation (HP) sites). RO sites are typically located upstream from HP sites above barrier waterfalls that truncate the distribution of large piscivores [13,14,16,17]. These sites are located near each other and thus do not differ in physical habitat and environmental variables (i.e. water temperature and dissolved oxygen) [14]. In HP sites, Rivulus suffer increased mortality, are found at lower densities, and, in turn, they exhibit faster rates of individual growth (HP sites also have a more open canopy) [13,18]. Rivulus can also be bred and reared in the laboratory, allowing us to identify intrinsic (probably genetic) differences between populations that are independent of the environment, and many previous studies have indeed demonstrated that increased predation pressure is associated with evolutionary changes in life-history traits [13,15,19].

Recent work on Rivulus showed that divergent patterns of predation lead to evolutionary shifts in brain size [20]. Increased predation rates in HP sites are associated with the evolution of smaller brains in male (but not in female) Rivulus. Given this negative association between predator environment and brain size in male Rivulus and the negative effect of predators on brain cell proliferation in another freshwater teleost [21,22], we predicted that Rivulus from HP populations would have lower rates of brain cell proliferation than those from RO populations. In fact, we found the opposite: brain cell proliferation was higher in HP populations than in RO populations. These differences were maintained in first-generation laboratory-reared fish, indicating that they probably arise from evolved genetic divergence rather than through phenotypic plasticity. Population differences in cell proliferation were found across all sampled brain regions and correlated with population differences in overall brain allometry, suggesting that they evolved as part of broader evolutionary changes in overall brain growth rather than as a mechanism serving a specific behavioural adaptation.

Harsh and unpredictable environments and adverse internal states in childhood are each uniquely associated with fast life history behaviour: Aggression, impulsivity, and risk-taking in adolescence

External environment and internal state in relation to life-history behavioural profiles of adolescents in nine countries. Lei Chang et al. Proceedings of the Royal Society B, Volume 286, Issue 1917, December 18 2019. https://doi.org/10.1098/rspb.2019.2097

Abstract: The external environment has traditionally been considered as the primary driver of animal life history (LH). Recent research suggests that animals' internal state is also involved, especially in forming LH behavioural phenotypes. The present study investigated how these two factors interact in formulating LH in humans. Based on a longitudinal sample of 1223 adolescents in nine countries, the results show that harsh and unpredictable environments and adverse internal states in childhood are each uniquely associated with fast LH behavioural profiles consisting of aggression, impulsivity, and risk-taking in adolescence. The external environment and internal state each strengthened the LH association of the other, but overall the external environment was more predictive of LH than was the internal state. These findings suggest that individuals rely on a multitude and consistency of sensory information in more decisively calibrating LH and behavioural strategies.


4. Discussion

Existing research has shown, in separate studies, that both external environments and internal body states affect animal LH. Investigating these two factors in the same individuals, recent studies have demonstrated unique LH predictions by the two predictors (e.g. [33]). Findings of the present study add to the literature by showing that the two factors interact in reinforcing LH calibration in the same direction. Childhood harsh and unpredictable environments and adverse internal states were each uniquely associated with fast LH behavioural profiles in adolescence, and each factor strengthened the LH association of the other factor. Specifically, at higher, compared with lower, levels of internal adversity, external harshness and unpredictability were more predictive of fast LH. Similarly, adverse internal state was predictive of fast LH when the external environment was harsh and unpredictable but not when it was benign. These conditional predictions suggest that external environments and internal body states likely reinforce the same cues about mortality–morbidity in accelerating fast LH. In the present study, we obtained information about childhood environmental harshness and unpredictability and adverse internal state at approximately the same time, 3 years before measuring LH profiles, thereby establishing longitudinal, unconditional, and conditional associations between the external and internal predictors and LH. Additionally, the external environment was more strongly associated with fast LH than was the internal state, as evidenced both by the two main effects (β = 0.50 versus 0.15) and by the four conditional associations (figure 2).
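Statistically, the pattern described here (each factor strengthening the other's association with fast LH) corresponds to a positive interaction term in a moderated regression. A minimal sketch of that idea in Python, using simulated data only: the variable names, coefficients, and sample are illustrative assumptions loosely echoing the reported effect sizes, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1223  # sample size reported in the study

# Hypothetical standardized predictors (illustrative only): external
# harshness/unpredictability and adverse internal state in childhood.
external = rng.standard_normal(n)
internal = rng.standard_normal(n)

# Simulate a fast-LH behavioural profile with main effects echoing the
# reported betas (0.50, 0.15) plus a positive interaction term (0.10),
# so that each factor strengthens the other's association.
fast_lh = (0.50 * external + 0.15 * internal
           + 0.10 * external * internal
           + rng.standard_normal(n))

# Fit y = b0 + b1*external + b2*internal + b3*(external*internal) by OLS.
X = np.column_stack([np.ones(n), external, internal, external * internal])
beta, *_ = np.linalg.lstsq(X, fast_lh, rcond=None)

# "Conditional association": the simple slope of `external` depends on
# the level of `internal` (here, one SD above vs below the mean).
slope_high_internal = beta[1] + beta[3] * 1.0
slope_low_internal = beta[1] + beta[3] * -1.0
print(beta.round(2), slope_high_internal > slope_low_internal)
```

With a positive b3, the external environment is more predictive of fast LH at high internal adversity than at low internal adversity, which is the ordinal interaction the discussion describes.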
External environment may be an underlying cause of both fast LH behavioural profiles and internal body states [1] and, when this common variance was statistically accounted for in the present study by integrating the mediation modelling of the relation of external environment to internal state, LH prediction by the internal but not the external predictor was attenuated. These comprehensive findings support the conclusion that external environment has a stronger impact in shaping LH than internal state. Nonetheless, the internal state is still predictive of a fast LH behavioural profile overall and when external harshness and unpredictability are severe rather than benign.
Traditional LH theory emphasizes the environment as the sole actor in activating and formulating animal LH. Specifically, mortality-causing harshness and unpredictability, as well as resource conditions (which were not investigated in the present study and have rarely been studied by other human LH research [8]), cause physiological (e.g. endocrine and homeostatic) and psychological (e.g. cognitive and behavioural) tuning that is oriented either towards growth and development (slow LH) or towards mating and reproduction (fast LH). Recent research has introduced the animal's internal state as a complementary actor in forming LH strategies [13,22,67]. In the light of the present findings, external environment and internal body state should not be distinguished as categorically separate drivers of LH. The two can instead be regarded as representing different and unique sensory inputs. For humans, the external environment is sensorially processed mainly as visual, auditory, and tactile data, whereas internal state is processed mainly as interoceptive, including visceroceptive, information. Humans and other animals processing more, and more consistent, sensory information should calibrate LH more decisively than those having less, or less consistent, information. Such is the implication of the present findings regarding the significant ordinal interaction effect. In the human case, for example, a child who constantly feels pain in her stomach while witnessing adult neighbours falling ill to a strange parasite should be more readily set on a faster LH path than one who receives only one of the two sensory inputs, or who receives two inconsistent inputs by experiencing external mortality but internal homeostasis, or vice versa.
A proviso to the present findings and discussion is that, like most of the existing human LH literature (e.g. [68]), the external environment operationalized in the present study was mostly individually specific, representing a person's unique home and family environment rather than a larger ecology shared with other conspecifics, thus increasing the apparent consistency between external and internal conditions in their impact on LH. This is not necessarily a limitation of the present study but, rather, reflects the reality of LH research on contemporary human participants. Because humans have long mastered the external environment [69], most ecology-wide variables, such as mortality, pandemics, and even intraspecific conflict, which the species-general LH literature traditionally defines as extrinsic risks beyond individuals' survival effort, in fact operate depending on modern human individuals' survival abilities or, more precisely, their resource conditions. Thus, they do not fully satisfy the definition of extrinsic risks [70]. Because of these difficulties, individual-level variables, such as chaos at home, family dysfunction, and unpredictable life events, have been substituted as proxies of extrinsic risks in human LH research (e.g. [40]). Another related weakness is the use of multi-informant reporting to measure internal states, as well as external environments, because individual differences in interoception may be correlated with LH (e.g. [71]) and may in general confound the effects of body conditions. However, cognitive assessment of both internal and external information may itself be an integral part of human LH calibration. An appreciation of these human-specific conditions is needed when applying species-general models to study human LH. The present study represents such an appreciation and, to our knowledge, one of the first attempts to investigate how external environments and internal states interact in calibrating human LH behavioural profiles.

The germline—soma barrier seems leaky, & information is transferred from soma to germline; moreover, the germline, which also ages, is influenced by an age-related deterioration of the soma

The deteriorating soma and the indispensable germline: gamete senescence and offspring fitness. Pat Monaghan and Neil B. Metcalfe. Proceedings of the Royal Society B, Volume 286, Issue 1917, December 18 2019. https://doi.org/10.1098/rspb.2019.2187

Abstract: The idea that there is an impenetrable barrier that separates the germline and soma has shaped much thinking in evolutionary biology and in many other disciplines. However, recent research has revealed that the so-called ‘Weismann Barrier’ is leaky, and that information is transferred from soma to germline. Moreover, the germline itself is now known to age, and to be influenced by an age-related deterioration of the soma that houses and protects it. This could reduce the likelihood of successful reproduction by old individuals, but also lead to long-term deleterious consequences for any offspring that they do produce (including a shortened lifespan). Here, we review the evidence from a diverse and multidisciplinary literature for senescence in the germline and its consequences; we also examine the underlying mechanisms responsible, emphasizing changes in mutation rate, telomere loss, and impaired mitochondrial function in gametes. We consider the effect on life-history evolution, particularly reproductive scheduling and mate choice. Throughout, we draw attention to unresolved issues, new questions to consider, and areas where more research is needed. We also highlight the need for a more comparative approach that would reveal the diversity of processes that organisms have evolved to slow or halt age-related germline deterioration.


1. Introduction

While a mechanism whereby offspring inherit beneficial traits from their parents is central to the theory of evolution by natural selection, robust scientific information on the processes of heredity was lacking when Darwin put forward his theory in 1859 [1]. Being apparently unaware of the pioneering work of Mendel on inheritance, Darwin later suggested that inheritance might occur via ‘gemmules’, tiny particles that circulate around the body and accumulate in the gonads, a developmental process he termed ‘Pangenesis’ [2]. Attempts to test this idea, notably by Galton, provided no support and it fell by the wayside [3]. Towards the end of the nineteenth century, August Weismann put forward his ‘germ plasm’ theory, based on the idea of continuity of the germline, its high level of protection, and its isolation from the somatic cells [4,5]. In contrast to Darwin, he proposed that there was no transfer of genetic information between the soma and the germline, a separation which came to be termed the Weismann Barrier. This distinction between germline and soma became central to the neo-Darwinian evolutionary theories developed in the early twentieth century. It has also been central to key theories of the evolution of ageing in animals, such as the disposable soma theory [6], with the soma being seen as the vehicle that prioritizes, protects, and preserves the integrity of the germline, passing it on to future generations. The central argument is that, while the soma is allowed to degenerate with age, the germline is protected and damage to it should not be allowed to accumulate, either within the individual or from generation to generation.
However, we now know that Darwin's gemmule idea may not be entirely fanciful [3,7], and that the Weismann Barrier is not so impenetrable as previously thought [8]: various potential carriers of epigenetic hereditary information from the soma to the germline have been identified, particularly those involving DNA methylation, chromatin modification, small RNAs and proteins that can influence gene expression, and extracellular vesicles that potentially move from the soma to the germline [7–12]. Investigating the transfer of epigenetic information across the generations by both sexes is a fast-growing field of research. Moreover, while it appears that germline DNA is indeed afforded special protection [13], germline mutations do occur, since neither DNA replication nor repair is a perfect process, and external insults can also inflict significant damage.
So to what extent is the germline imperfectly isolated from the age-related deterioration generally evident in the soma? Does the germline itself also age, and if so in what way? Is this different in male and female germ cells? How does this affect the germline DNA and other hereditary processes? Is it also the case that the material passed via the cytoplasm of the oocyte is adversely influenced by the passage of time, both by deterioration in the oocyte itself and in the somatic tissue that exists to protect it? Does all of this have implications for the shaping of animal life histories?
These questions are the focus of this review. First, we consider briefly whether there is evidence of a negative effect of parental age on offspring health and longevity, and the routes whereby such an effect of paternal and maternal age could occur. We then focus on the germline itself, examine the evidence that it can deteriorate as the soma ages, and review the mechanisms by which this occurs. We then consider what this means for relevant aspects of life-history evolution, in particular, the scheduling of reproduction and mate choice. Throughout, we highlight and discuss the most critical gaps in our current understanding.


2. Negative effects of parental age on offspring longevity

One of the first studies to demonstrate parental age effects on offspring health and longevity was undertaken by Alexander Graham Bell, inventor of the telephone. Towards the end of his life, he developed an interest in heredity (unfortunately combined with one in eugenics). Using data from the family tree of William Hyde, one of the early English settlers in Connecticut, USA, Bell showed in 1918 that children born to older mothers and fathers had reduced lifespans [14]. Jennings & Lynch followed up this idea experimentally by using parthenogenetically reproducing rotifers Proales sordida [15]; their results also suggested (while not being statistically significant) that the offspring of old females do not live as long as those of young females. This was taken further by Albert Lansing, using clones of the rotifer Philodina citrina. In 1947, he showed, through selecting old animals as breeders, that the offspring of old parents had a reduced lifespan [16], an effect that has become known as the Lansing effect. Furthermore, by creating parthenogenetic selection lines in which he continually used the offspring of old or young individuals as parents for the next generation, his experiments appeared to show that this adverse parental age effect became magnified over generations, leading to the relatively rapid extinction of the old breeder line. By contrast, there was no change in lifespan or viability in lines based on selecting offspring produced only by young individuals [16].
It is important to note that almost all recent studies of the Lansing effect only consider two generations (i.e. they test whether offspring of old parents have a shortened lifespan), and so cannot test whether (or how) the effect is or is not cumulative over successive generations, as suggested in Lansing's original experiments. A partial exception is a study showing a cumulative negative effect of maternal age on offspring in Drosophila: the lowest proportion of eggs that reached adulthood came from old mothers that also had old grandmothers [17]. The extent to which a parental age effect on offspring fitness persists beyond the F1 generation, and whether it is truly cumulative, is little known in other taxa. However, a substantial body of evidence does exist to show that the age of the parents at reproduction can reduce offspring longevity in the F1 generation. Early investigations of effects of parental age on offspring in sexually reproducing species (mostly Drosophila spp.) gave inconsistent results (see [18] for a critical appraisal of these early studies), but more recent studies have frequently found a negative effect on offspring longevity in a wide range of species including humans [19–23], other mammals [24,25], birds [26–29], rotifers, crustaceans, numerous insects, yeast, and nematodes [30–32]. These include studies where animals were raised in consistent and benign laboratory conditions, such that the shorter lifespan of offspring appears to be due to faster ageing independent of environmental conditions (e.g. [24]). A reduced reproductive performance in offspring of older parents has also been reported in some cases [26,27]; while this is much less frequently reported than effects on lifespan (and may not always be apparent [25]), it should be noted that studies of lifetime reproductive effects of parental age under natural conditions are very limited ([25] and references therein).
Establishing the effects of parental age on offspring viability, and teasing apart their causes, is not straightforward. In sexually reproducing animals, both maternal and paternal age can potentially adversely affect the offspring; in practice, however, it can be difficult to tease apart the two since the ages of the two parents are often correlated under natural conditions. There are many different pre- and post-natal routes for such effects. However, it is important to mention that there can be causes of a negative relationship between parental age and offspring viability that do not involve ageing of the germline—or indeed any ageing process at all. For instance, previous reproductive effort could have effects independent of parental age [33]. Many of the studies to date, particularly in long-lived species, are non-experimental and cross-sectional (i.e. comparing young versus old members of the population at a given time) rather than longitudinal (comparing the same parents when they are young versus when they are old), and thus differential survival of parental phenotypes into old age could mask or enhance effects, as could cohort effects, since in many studies the capacity to compare aged individuals born in different years is limited [34].
Germline senescence is a wide-reaching, multidisciplinary topic. We restrict our review to mechanisms related to the ageing of the germline in animals where there is a separation of the germline and the soma. We also confine ourselves to sexually reproducing animals (noting the current bias in the literature towards vertebrates), and consider effects operating via both eggs and sperm. We now briefly describe relevant aspects of the production and storage of the gametes before discussing the evidence that they deteriorate with parental age, focusing in particular on age-related changes in levels of de novo DNA mutation and aneuploidy, telomere length, and mitochondrial function since these are key factors that could give rise to both transmissible and cumulative negative effects on offspring health and longevity.


Not so important for the children: Women attached significantly greater importance to social status, personality, & physical appearance in a desired life partner than to those traits in a sperm donor

Choosing genes without jeans: do evolutionary psychological mechanisms have an impact on thinking distortions in sperm donor preferences among heterosexual sperm recipients? Emad Gith & Ya’arit Bokek-Cohen. Human Fertility, Dec 18 2019. https://doi.org/10.1080/14647273.2019.1700560

Abstract: The objective of the project was to compare the importance of traits desired in a life partner to traits desired in a sperm donor. A survey was distributed via internet support groups to women undergoing donor insemination; the questionnaire consisted of 35 traits of a desired life partner and of a desired sperm donor. The respondents comprised 278 unmarried childless heterosexual women over 38 years old undergoing donor insemination treatments. The 35 traits of a desired life partner and a desired sperm donor were grouped by confirmatory factor analysis (CFA) into four factors: (i) personality; (ii) physical appearance; (iii) genes and health; and (iv) socio-economic status. Paired-sample t-tests showed that patients attached significantly greater importance to social status, personality, and physical appearance in a desired life partner than to those traits in a desired sperm donor. No differences were found regarding the genetic quality of the desired life partner versus the sperm donor. These findings contribute to the understanding of fertility patients’ preferences in sperm donors.

Keywords: Donor insemination, genes, mate selection, parental investment theory, single women, sperm donor



Saturday, December 21, 2019

Identity as Dependent Variable: Americans Shift Their Identities to Align with Their Politics

Identity as Dependent Variable: How Americans Shift Their Identities to Align with Their Politics. Patrick J. Egan. American Journal of Political Science, December 20 2019. https://doi.org/10.1111/ajps.12496

Abstract: Political science generally treats identities such as ethnicity, religion, and sexuality as “unmoved movers” in the chain of causality. I hypothesize that the growing salience of partisanship and ideology as social identities in the United States, combined with the increasing demographic distinctiveness of the nation's two political coalitions, is leading some Americans to engage in a self‐categorization and depersonalization process in which they shift their identities toward the demographic prototypes of their political groups. Analyses of a representative panel data set that tracks identities and political affiliations over a 4‐year span confirm that small but significant shares of Americans engage in identity switching regarding ethnicity, religion, sexual orientation, and class that is predicted by partisanship and ideology in their pasts, bringing their identities into alignment with their politics. These findings enrich and complicate our understanding of the relationship between identity and politics and suggest caution in treating identities as unchanging phenomena.

---
From the September 10, 2018 version:

Conclusion

These findings yield new insight into the nature of politically salient American identities and how they can be shaped by the liberal–conservative, Democrat–Republican divide. Inter-temporal stability varies greatly among identities, running from relatively high (for race, Latino origin, and most religions) to moderate (for party identification and some national origins) to low (for most national origins, sexual orientation, and class). Many of the identities commonly understood to be highly stable can in fact shift over time, and those who have switched in or will soon switch out of identities make up very large shares of those identifying as sexual minorities, religious “nones,” and any economic class.
These analyses permit us to see for the first time the extent to which over-time instability in identification is associated with politics, with liberalism and Democratic party identification predicting shifts toward identification as Latino; as lesbian, gay, or bisexual; as nonreligious; as lower class; and toward claiming a national origin associated with being non-white; and with conservatism and Republican party ID yielding movement toward identification as a member of a Protestant faith and as having had an experience as a born-again Christian. This is no small discovery: many of these identities are at the center of important American policy debates, and those who claim these identities are key blocs of voters, party activists, and political donors. The data show us how, in our era, which is so polarized that political affiliations become identities in themselves, politics can create and reinforce even identities thought to be as fixed as racial and ethnic categories. They thus reveal that “social sorting,” while predominantly the result of individuals changing their politics to align with their identities, is also due in some part to people shifting their identities to better align with their politics.
Nearly sixty years ago, the “Michigan school” authors of The American Voter noted that the influence of group membership on political behavior might be overstated, as members of many identity groups often “come to identify with the group on the basis of preexisting beliefs and sympathies” (Campbell et al. 1960, 323). The findings presented here join mounting evidence that this concern was well placed, and that richer discoveries await those who continue to make use of powerful tools and data to understand the origins of important identities in American politics.