Thursday, August 19, 2021

Paradoxical intention has been considered an evidence-based treatment for insomnia since the 1990s; it seems to result in great reductions in sleep-related performance anxiety & marked clinical improvements

Paradoxical intention for insomnia: A systematic review and meta-analysis. Markus Jansson-Fröjmark, Sven Alfonsson, Benjamin Bohman, Alexander Rozental, Annika Norell-Clarke. Journal of Sleep Research, August 17 2021. https://doi.org/10.1111/jsr.13464

Summary: Paradoxical intention (PI) has been considered an evidence-based treatment for insomnia since the 1990s, but it has not been evaluated with modern review techniques such as meta-analysis. The present study aimed to conduct the first systematic review and meta-analysis of studies that explore the effectiveness of PI for insomnia on insomnia symptomatology and theory-derived processes. A systematic review and meta-analysis was conducted by searching for eligible articles or dissertations in six online bibliographic databases. Randomised controlled trials and experimental studies comparing PI for insomnia to active and passive comparators and assessing insomnia symptoms as outcomes were included. A random effects model was estimated to determine the standardised mean difference Hedges’ g at post-treatment. A test for heterogeneity was performed, fail-safe N was calculated, and study quality was assessed. The study was pre-registered at the International Prospective Register of Systematic Reviews (PROSPERO, CRD42019137357). A total of 10 trials were identified. Compared to passive comparators, PI led to large improvements in key insomnia symptoms. Relative to active comparators, the improvements were smaller, but still moderate for several central outcomes. Compared to passive comparators, PI resulted in great reductions in sleep-related performance anxiety, one of several proposed mechanisms of change for PI. PI for insomnia resulted in marked clinical improvements, large relative to passive comparators and moderate compared to active comparators. However, methodologically stronger studies are needed before more firm conclusions can be drawn.

4 DISCUSSION

4.1 Summary of main results

The present study is the first comprehensive systematic review and meta-analysis of the effectiveness of PI for insomnia. Relative to passive comparators, PI resulted in large improvements in several central insomnia symptoms. Although the effectiveness of PI was smaller compared to active comparators, the effects were still moderate for several key outcomes. Relative to previous reviews, the present study extends the quantitative assessment of PI as an evidence-based intervention in that it compared PI with passive versus active comparators and included both night-time and daytime symptoms (Jansson-Fröjmark & Norell-Clarke, 2018; Morin et al., 1999, 2006). A unique finding was support for great reductions in sleep-related performance anxiety by PI. This finding strengthens the notion that decreased performance anxiety is a mechanism through which PI might work.

Cumming and Finch (2001) have recommended that effect sizes should be compared to other relevant estimates in the literature to grasp their significance. In one of the largest and most recent meta-analyses, cognitive and behavioural interventions (e.g. CBT-I, relaxation, stimulus control, psychoeducation, and sleep restriction) were compared with passive comparators (van Straten et al., 2018). Comparing the effect sizes from van Straten et al. (2018) for cognitive and behavioural therapies with the present study’s effect sizes for PI relative to passive comparators, the effects were larger in the present study for PI on SOL (0.57 versus 0.82), NAW (0.28 versus 1.10), and TST (0.16 versus 0.51), and smaller on SE (0.71 versus 0.00). Although inferences from comparisons of this sort are difficult to draw from a methodological viewpoint, a reasonable conclusion would be to state that PI tentatively has similar effectiveness to other cognitive and behavioural interventions. At the same time, this conclusion is hampered by several limitations in the trials exploring the effectiveness of PI. The relatively few studies, limited number of study participants, and other methodological characteristics of the studies make an overall conclusion about effectiveness and generalisability of PI uncertain.
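To make the quantities behind these comparisons concrete, the sketch below shows how a standardised mean difference (Hedges' g) and a DerSimonian-Laird random-effects pooled estimate of the kind reported in the review can be computed. This is a minimal illustration, not the authors' analysis code, and the means, SDs, and sample sizes in it are invented placeholder values.

```python
# Minimal sketch (not the authors' code): Hedges' g for a single trial and a
# DerSimonian-Laird random-effects pooled estimate across trials.
# All numbers below are made-up illustrative values, not data from the review.
import numpy as np

def hedges_g(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Standardised mean difference with the small-sample correction."""
    sd_pooled = np.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                        / (n_tx + n_ctrl - 2))
    d = (mean_ctrl - mean_tx) / sd_pooled          # e.g. reduction in sleep-onset latency
    j = 1 - 3 / (4 * (n_tx + n_ctrl) - 9)          # Hedges' correction factor
    g = j * d
    var_g = j**2 * ((n_tx + n_ctrl) / (n_tx * n_ctrl) + d**2 / (2 * (n_tx + n_ctrl)))
    return g, var_g

def random_effects_pool(gs, vars_):
    """DerSimonian-Laird pooling of per-study effect sizes."""
    gs, vars_ = np.asarray(gs), np.asarray(vars_)
    w_fixed = 1 / vars_
    q = np.sum(w_fixed * (gs - np.average(gs, weights=w_fixed))**2)
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)       # between-study variance
    w = 1 / (vars_ + tau2)
    pooled = np.sum(w * gs) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return pooled, se, tau2

# Hypothetical per-study effects (g, variance) for PI versus a passive comparator
studies = [hedges_g(25, 45, 20, 22, 15, 15), hedges_g(30, 50, 25, 24, 20, 20)]
pooled, se, tau2 = random_effects_pool([s[0] for s in studies], [s[1] for s in studies])
print(f"pooled g = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```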

4.2 Methodological considerations and quality of evidence

The present review identified 10 studies that evaluated the effectiveness of PI. There were a number of notable methodological limitations of the studies. The study quality assessment showed that the quality of the 10 studies ranged from 15 to 20 points out of 26, implying a moderate study quality. The methodological quality was particularly weak in two areas. First, no studies reported using blinding of subjects, even though it appeared as if this would have been possible. Second, few studies appeared to have sufficient power to detect group differences. While some of these limitations were noted in the study quality assessment, others will be underscored more specifically below.

Across the 10 studies, there was diversity concerning the design. In nine trials, PI was compared with a passive comparator, which means that non-specific factors (e.g. therapist contact) were not controlled for in the estimations comparing PI with passive comparators. Concerning design, it is also worth underscoring that the aggregation of various active comparators into one active comparator category was based on the fact that they all provided study participants with active treatment content. This aggregation could, however, have resulted in comparators with differing effects being combined, making the comparison between PI and active comparators uncertain.

Another limitation regards the patient characteristics. The total sample size was limited to <400 participants, and none of the trials reported that power calculations were made prior to study start. In all, Type 2 errors are likely, particularly when active treatments were compared. Further, all participants were recruited from the community, which might make the present findings less generalisable to health settings, as patients in clinical settings tend to display elevated symptoms (Davidson et al., 2009). Another observation is that, in almost all of the studies, we categorised the participants as meeting criteria for sleep-onset insomnia or primary insomnia. Therefore, it is uncertain whether PI should be viewed as an effective intervention for other types of insomnia, such as comorbid insomnia. It is also worth noting that there might be specific insomnia profiles that are particularly susceptible to PI. For example, Espie et al. (2006) have proposed that PI might be specifically suited for patients with psychophysiological insomnia, as this profile of patients is believed to be characterised by attentional bias, preoccupation with sleep, and using several strategies to avoid sleeplessness. In future research, the study of PI and the effectiveness for different insomnia profiles might also be based on recent empirical attempts to subtype insomnia (Blanken et al., 2019). On a related note, we observed that comorbidity was not formally assessed in the included studies. Although several studies used certain criteria to assess and/or exclude comorbidity, the lack of validated assessments of psychiatric and somatic conditions limits generalisability. As comorbid problems are more common than “pure” insomnia (Stepanski & Rybarczyk, 2006), the lack of assessment of comorbid conditions and the exclusion of participants with comorbid problems are problematic.

Another issue of methodological uncertainty concerns the administration of PI. There were slight variations concerning several features of the delivery. The rationale and instructions varied across studies, although the original approach by Ascher and Efran (1978) was most commonly employed. Also, the delivery format was mixed, with individual, self-help, and group formats identified. Further, for several treatment-related parameters, sufficient information was rarely provided; this concerned whether a treatment manual was used, who delivered PI, whether the therapists were trained and/or supervised, and whether treatment integrity was assessed. Also, the dose of PI varied across studies. Often, PI was delivered across 2–4 weeks, but longer treatment periods were also identified. Based on the limited number of studies in the present review, we were unable to investigate whether certain formats of delivery of PI were more effective than others. During the review process, we also noted that none of the studies assessed treatment-relevant domains that might have importance for the interpretation of findings, such as acceptability, adherence, credibility and expectancy ratings, and perceived usefulness of PI. It should also be emphasised that worsened sleep after PI has been reported in the research literature (Espie & Lindsay, 1985). As none of the included studies in the present review reported on adverse events or deterioration, more research is warranted to examine whether PI produces negative effects among patients with insomnia in general or in subgroups of patients.

An inclusion criterion for the present review was that trials must report insomnia-related outcomes (i.e. night-time and/or daytime symptoms). Across studies, it was less common to index objective sleep outcomes, daytime symptoms, theory-derived processes, and global insomnia symptoms [e.g. with the Insomnia Severity Index (Bastien et al., 2001)]. Due to the lack of studies assessing several outcome domains, all meta-analytical estimations were based on sleep diary or questionnaire data assessing sleep performance anxiety. As a result, we can only draw conclusions for PI concerning sleep diary-assessed night-time symptoms and, to a lesser extent, sleep performance anxiety. A related limitation is that the longer-term effectiveness of PI could not be estimated, as there were not sufficient data for such calculations.

A further limitation is that sensitivity and moderator analyses were not employed due to the limited number of studies. For example, it would have been interesting to explore the effects of the addition or removal of lower-quality studies and to examine whether insomnia symptomatology at baseline and PI administration might moderate the effectiveness of PI. A final limitation is that the included studies were required to be published in English, thereby introducing a possible language bias.

4.3 Putative mechanisms

In the present study, we identified three studies that assessed sleep-related performance anxiety as a putative mechanism, and no trial that indexed other potential mechanisms (e.g. sleep intention). As a whole, performance anxiety was reduced to a large degree after PI in the included trials. However, it is important to emphasise that this does not imply that performance anxiety has been demonstrated to act as a putative mechanism. As all trials in the present review analysed sleep-related performance anxiety only as pre- to post-treatment changes, future research might design studies so that mediational analyses become possible. In such studies, mediators need to be assessed repeatedly, and analyses should then test whether changes in mediators precede improvements in insomnia symptoms. This would pave the way for evidence-based explanations for how PI produces improvements (Kazdin, 2007).
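As an illustration of the temporal-precedence logic described above, the sketch below regresses later symptom change on earlier change in the putative mediator. This is a minimal, assumed analysis on simulated data, not a method used in the review; formal mediation models would be more involved.

```python
# Minimal sketch (an assumption about how such a design could be analysed, not a
# method from the review): with repeated assessments, temporal precedence can be
# probed by regressing later symptom change on earlier change in the putative
# mediator (sleep-related performance anxiety). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
anxiety_change_early = rng.normal(size=n)                                # week 0 -> week 2
insomnia_change_late = 0.5 * anxiety_change_early + rng.normal(size=n)   # week 2 -> week 4
df = pd.DataFrame({"anxiety_change_early": anxiety_change_early,
                   "insomnia_change_late": insomnia_change_late})

# If earlier reductions in performance anxiety predict later symptom improvement,
# that is consistent with (though not proof of) a mediating role.
print(smf.ols("insomnia_change_late ~ anxiety_change_early", data=df).fit().params)
```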

Another important methodological aspect of the research literature on performance anxiety is that the self-report scales used in the three studies have not been systematically validated in psychometric terms (Broomfield & Espie, 2003; Buchanan, 1988; Fogle & Dyal, 1983). As a result, it is uncertain whether the scales adequately capture the construct, and thus whether conclusions about sleep performance anxiety can be drawn in the present review. Concerning the measurement of sleep performance anxiety, it should be noted that validated self-report scales are available, such as the Glasgow Sleep Effort Scale (Broomfield & Espie, 2005; Meia-Via et al., 2016; Vand et al., 2020), and such instruments are recommended for future research. The use of validated measures in future trials would enable stronger conclusions about the effectiveness of PI on sleep performance anxiety, as well as make it possible to examine mediation more rigorously and to explore moderation (e.g. whether PI is particularly effective among insomnia patients with elevated sleep performance anxiety).

One should note that sleep-related performance anxiety is not the only candidate as a putative mechanism for PI. First, PI could be viewed as an intervention that exposes patients to learned, feared stimuli in the bed or bedroom (Lundh, 1998), which enables extinction and the formation of new learning (Craske et al., 2014). However, this notion has not yet been articulated in detail in the research literature, nor examined empirically. A second putative mechanistic pathway is described in the attention–intention–effort model (Espie et al., 2006). Although the pathway by Espie et al. (2006) appears to have high face validity, the model has not, to our knowledge, been explicitly tested in its full complexity in the realm of PI treatment.

4.4 Future directions

There are several important areas that future research could focus on to enhance the understanding of PI. Following from the limitations and uncertainties described above, we recommend future research to use active comparators, sample sizes based on power calculation, samples from clinical settings, a variety of insomnia types (including insomnia disorder), formal assessments of comorbidity, different delivery formats, broad assessments of insomnia symptoms and correlates as outcomes, and different mediators to examine mechanistic pathways.

One unknown dimension of PI is the optimal dosing and administration. Although PI has commonly been implemented by patients during a 2–4-week period, it could be argued that shorter administration of PI could be beneficial as well. Based on the theoretical rationale, that is, breaking a vicious cycle of sleep intention and associated performance anxiety, PI could potentially also be delivered as a behavioural experiment, during which patients test their predictions (e.g. “If I do not try to fall asleep, I will remain awake all night”), followed by testing PI for a limited number of nights. Another topic for future research is the optimal treatment rationale and instructions for PI. Based on two studies included in the present review (Ascher & Turner, 1980; Ott et al., 1983), it appears likely that PI with a desensitisation rationale or with feedback is less beneficial than the original approach by Ascher and Efran (1978). Beyond that, the ideal rationale and instructions for delivering PI remain unknown.

Based on the findings in the present review, the notion of how PI should be used warrants reflection. On the one hand, we believe that CBT-I should still be regarded as the first-line intervention for insomnia disorder (Riemann et al., 2017). On the other hand, PI might play a role in some cases. For example, if a patient remains unimproved after CBT-I, PI could be one option. Also, if the patient reports high sleep-related performance anxiety, and this appears as the primary maintaining factor, PI could be used in isolation or in combination with other efficacious CBT-I components, such as sleep restriction (Miller et al., 2014). To date, current CBT manuals do not include PI as a treatment component (van Straten et al., 2018). Whether the addition of PI could add efficacy to CBT-I is currently unknown. Future research could explore the notion of combining PI with CBT-I to explore potential additive effects, but also whether there are subgroups of patients who benefit more from PI.

Madam Speaker: Are Female Presenters Treated Worse in Econ Seminars? The evidence seems statistically weak & conceptually inconclusive

Madam Speaker: Are Female Presenters Treated Worse in Econ Seminars? Uri Simonsohn. Data Colada, April 30, 2021. http://datacolada.org/96

A recent NBER paper titled "Gender and the Dynamics of Economics Seminars" (https://www.bipartisanalliance.com/2021/02/economics-seminars-women-presenters-are.html) reports analyses of audience questions asked during 462 economics seminars, concluding that

“women are asked more questions . . . and the questions asked of women are more likely to be patronizing or hostile . . . suggest[ing] yet another potential explanation for their under-representation at senior levels within the economics profession” (abstract)

In this post I explain why my interpretation of the data is different.

My prior, before reading this paper, was that women were probably treated worse in seminars, especially in economics. But, after reading this paper I am less inclined to believe that.

[...]

Do Female Speakers Get More Antagonistic Questions?

Another result highlighted in the abstract is that the questions female speakers get “are more likely to be patronizing or hostile”.

Unlike the optimal number of total questions, the optimal number of hostile and patronizing questions is zero. So noticeable differences in hostility are easier to interpret.

But the evidence behind those claims seems insufficiently clear, in my opinion, to be interpretable, let alone actionable. Specifically, the evidence is:

*  Statistically weak. The estimates are arguably small in magnitude (e.g., women get 0.1 extra hostile questions on average), and evidentially weak (the patronizing difference is p=.1, the hostile difference p=.02). Moreover, these two results were selected post-hoc from a larger set of measures collected, and the rest were not significant (e.g., if questions were critical, or disruptive, or fair). Statistically at least, this is not strong evidence against the null that all observed differences are caused by chance.

*  Conceptually inconclusive. Other estimates in the paper are conceptually inconsistent with the conclusion that female speakers receive worse treatment. For example, they get directionally fewer “criticism” questions than do male speakers (see Table 6). While this result is p > .1, so we cannot rule out a zero difference, the estimate is precise enough to rule out that women get ¼ of an additional critical question per talk (see the sketch after this list). Women also get an additional 1.7 clarification questions (p < .01), and half an additional suggestion (p < .1). Personally, I like getting these kinds of questions, as they often signal audience engagement and can, of course, be useful.
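The sketch below illustrates, with invented numbers (not the paper's estimates), how a difference can be statistically non-significant while its confidence interval still excludes a substantively large value such as a quarter of an extra question per talk.

```python
# Minimal sketch of the logic "p > .1 but precise enough to rule out +0.25":
# a difference can be non-significant yet its confidence interval can exclude
# a substantively large value. The estimate and SE below are hypothetical,
# not the paper's numbers.
from scipy import stats

est, se = -0.05, 0.10     # hypothetical: women get 0.05 fewer critical questions
ci_low, ci_high = est - 1.96 * se, est + 1.96 * se
p_value = 2 * (1 - stats.norm.cdf(abs(est / se)))

print(f"95% CI: [{ci_low:.2f}, {ci_high:.2f}]  p = {p_value:.2f}")
print("zero inside CI:", ci_low <= 0 <= ci_high)       # not significant
print("+0.25 inside CI:", ci_low <= 0.25 <= ci_high)    # ruled out
```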

---

More at the original link

Migration during the last 500 years induced differences in contemporary health outcomes: Due to vitamin D deficiency, migration from high UV radiation places to low radiation ones takes a toll on health

Historical migration and contemporary health. Thomas Barnebeck Andersen, Carl-Johan Dalgaard, Christian Volmer Skovsgaard, Pablo Selaya. Oxford Economic Papers, Volume 73, Issue 3, July 2021, Pages 955–981, https://doi.org/10.1093/oep/gpaa047

Abstract: We argue that migration during the last 500 years induced differences in contemporary health outcomes. The theory behind our analysis builds on three physiological facts. First, vitamin D deficiency is directly associated with higher risk of all-cause mortality. Second, the ability of humans to synthesize vitamin D from sunlight (UV-R) declines with skin pigmentation. Third, skin pigmentation is the result of an evolutionary compromise between higher risk of vitamin D deficiency and lower risk of skin cancer. When individuals from high UV-R regions migrate to low UV-R regions, the risk of vitamin D deficiency rises markedly. We develop a measure that allows us to empirically explore the aggregate health consequences of such migration in a long historical perspective. We find that the potential risk of vitamin D deficiency induced by migration during the last half millennium is a robust predictor of present-day aggregate health indicators.

JEL codes: I1 - Health; J1 - Demographic Economics; J15 - Economics of Minorities, Races, Indigenous Peoples, and Immigrants; Non-labor Discrimination


5. Conclusion

We have examined whether a migration-induced imbalance between the intensity of skin pigmentation and ambient UV-R holds explanatory power vis-à-vis present-day global health differences. We find that it does. Consequently, our results suggest that low UV-R regions that have received substantial immigration from high UV-R regions experience lower life expectancy than would have been the case in the absence of such migration flows.

The underlying theory derives from the life sciences. Conditional on ambient UV-R, individuals with intense skin pigmentation (deriving from high ancestral UV-R exposure) are more susceptible to vitamin D deficiency, which is a leading cause of a range of afflictions that cause premature death. The contribution of the present study lies in exploring whether this theory holds explanatory power in the aggregate. The weight of the evidence presented above suggests it does.

Although the economic significance of our measure of the risk of vitamin D deficiency (if taken at face value) is relatively strong, it is also clear that its ability to account for cross-country variation in life expectancy is modest. However, if current movements of people continue, which to a large extent represent movements from ‘South to North’, much more variation is likely to become visible during the 21st century. As such, vitamin D deficiency may become an increasing public health issue in the years to come, at least in the absence of preventive public health measures.

We believe the present study could be usefully extended in the direction of studying within country migration. For example, Black et al. (2015) find that the Great Migration within the USA reduced the health of African Americans significantly. While the authors suggest that some of the impact may be linked to changes in the intake of alcohol and cigarette smoking, it is worth noting that migrants also experienced changes in the environment. For example, moving from Georgia to New York would imply a reduction in ambient UV-R of roughly 43%, implying in turn a considerable increase in the risk of vitamin D deficiency for an African American. Whether a vitamin D mechanism could be contributing to the decline in health outcomes in the aftermath of the Great Migration seems to be an interesting topic for future research.


Wednesday, August 18, 2021

When forming personality impressions from faces, people rely on features that resemble "frozen" emotional expressions

Which Facial Features Are Central in Impression Formation? Bastian Jaeger, Alex L. Jones. Social Psychological and Personality Science, August 17, 2021. https://doi.org/10.1177/19485506211034979

Abstract: Which facial characteristics do people rely on when forming personality impressions? Previous research has uncovered an array of facial features that influence people’s impressions. Even though some (classes of) features, such as resemblances to emotional expressions or facial width-to-height ratio (fWHR), play a central role in theories of social perception, their relative importance in impression formation remains unclear. Here, we model faces along a wide range of theoretically important dimensions and use machine learning techniques to test how well 28 features predict impressions of trustworthiness and dominance in a diverse set of 597 faces. In line with overgeneralization theory, emotion resemblances were most predictive of both traits. Other features that have received a lot of attention in the literature, such as fWHR, were relatively uninformative. Our results highlight the importance of modeling faces along a wide range of dimensions to elucidate their relative importance in impression formation.

Keywords: social perception, personality impressions, overgeneralization theory, emotional expressions, facial width-to-height ratio

Which facial characteristics do people rely on when forming impressions of others? Some facial features, such as resemblances to emotional expressions and fWHR, occupy a central role in theories of social perception (Todorov et al., 2008; Zebrowitz, 2017). However, it is not clear whether this focus is justified, as little is known about the relative importance of different characteristics. Faces can be modeled along many dimensions, and many facial features are correlated. Yet, prior work has mostly examined one feature or a few features in isolation. These approaches cannot provide strong evidence for the claim that people rely on certain facial features in impression formation, as it remains unclear whether people relied on the facial feature in question, or on other correlated ones. In short, even though studies have identified a long list of facial features that are correlated with impressions, the question of which facial features are actually central in impression formation remains largely unaddressed. Here, we used methods from machine learning (i.e., cross-validation, regularization) to estimate and compare the extent to which a wide range of facial features predict trustworthiness and dominance impressions for a large and demographically diverse set of faces. We tested facial characteristics that have been theorized to be important in impression formation (resemblances to emotional expressions, attractiveness, babyfacedness, familiarity, and fWHR; Geniole et al., 2014; Stirrat & Perrett, 2010; Zebrowitz, 2017). We also tested a large set of other facial characteristics that have received less attention or are often held constant in social perception studies, even though they might be important in impression formation (e.g., gender, race, age, eye size, lip fullness).
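A minimal sketch of the kind of cross-validated, regularized workflow described here (assumed, not the authors' code): an elastic net is fit to one class of facial features at a time and its out-of-sample R² is compared across feature classes. The feature matrices below are random placeholders, so the printed R² values will hover around zero.

```python
# Minimal sketch (assumed workflow, not the authors' code) of comparing how well
# different classes of facial features predict an impression rating using
# cross-validated, regularized regression. Feature matrices here are random
# placeholders standing in for the measured characteristics.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_faces = 597
X_emotion = rng.normal(size=(n_faces, 6))      # e.g. resemblance to 6 emotional expressions
X_fwhr = rng.normal(size=(n_faces, 1))         # facial width-to-height ratio
trustworthiness = rng.normal(size=n_faces)     # mean impression rating per face (placeholder)

def cv_r2(X, y):
    """Out-of-sample R^2 of an elastic net, averaged over 10 folds."""
    model = make_pipeline(StandardScaler(), ElasticNetCV(cv=5, random_state=0))
    return cross_val_score(model, X, y, cv=10, scoring="r2").mean()

print("emotion resemblances R^2:", cv_r2(X_emotion, trustworthiness))
print("fWHR alone R^2:          ", cv_r2(X_fwhr, trustworthiness))
```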

When comparing different classes of facial features, we found that emotion resemblances were most predictive of both trustworthiness and dominance impressions, outperforming all other theory-driven models. When examining the importance of all 28 facial characteristics simultaneously, we found that perceptions of trustworthiness were best predicted by a face’s resemblance to a happy expression. Emotionally neutral faces were perceived as more trustworthy when facial features resembled a facial expression of happiness. Perceptions of dominance were best predicted by targets’ gender (with women being perceived as less dominant than men) and by resemblance to a facial expression of anger. Together, our results support the notion that resemblances to emotional expressions are central for explaining how people form personality impressions from facial features. Our findings are in line with overgeneralization theory (and the emotion overgeneralization hypothesis in particular; Todorov et al., 2008; Zebrowitz, 2017), which posits that personality impressions of faces are driven by an oversensitive emotion detection system: Due to their social relevance, people even perceive emotions (and associated personality traits) in emotionally neutral faces that structurally resemble emotional expressions.

Support for the importance of other facial characteristics evoked by overgeneralization theory (i.e., attractiveness, babyfacedness, and familiarity; Zebrowitz, 2012, 2017) was mixed. Facial attractiveness was the second-most informative predictor of trustworthiness impressions, whereas babyfacedness and familiarity were less informative. None of the three characteristics were among the most informative predictors of dominance impressions.

We also found that demographic factors (i.e., gender, age, and race)—which have received less attention as predictors of personality impressions—were in some instances among the most important predictors of impressions. This highlights potential problems associated with keeping features like gender and race constant when studying social perception. Certain features may guide impression formation when demographic characteristics do not vary, but they may be uninformative when more diagnostic cues such as demographic characteristics do vary.

A wealth of studies has examined the influence of fWHR on personality judgments (e.g., Geniole et al., 2014; Ormiston et al., 2017; Stirrat & Perrett, 2010). Yet, the current results suggest that fWHR is not an informative predictor of trustworthiness or dominance impressions. When comparing the predictive fit of fWHR to the four characteristics that form the basis of overgeneralization theory, fWHR emerged as the weakest predictor. When modeled alongside all other facial features that we included in our analyses, fWHR was again among the least informative predictors. Similar results were obtained in additional analyses when examining impressions of male and female targets separately and when all other variables that included some measurement of face length or width were omitted from analyses (see Supplemental Materials). Together, these findings suggest that the importance of fWHR for impression formation may have been overstated in previous studies. Previously observed associations between fWHR and personality impressions may have been due to the fact that people rely on facial features that are correlated with fWHR, but not on fWHR per se.

Interestingly, all seven classes of predictors showed better predictive accuracy for trustworthiness perceptions than for dominance perceptions. It has been suggested that emotion resemblances are particularly important for trustworthiness impressions, whereas morphological characteristics, such as fWHR, are more important for dominance impressions (Hehman et al., 2015). The current results are not in line with this notion and suggest that emotion resemblances are the most important determinant of both trustworthiness and dominance impressions. It should also be noted that even though emotion resemblances were the most important class of predictors, not all emotion resemblances were equally meaningful. Resemblance to a happy expression was the most important predictor of trustworthiness impressions, whereas resemblance to an angry expression was the most important predictor of dominance impressions.

Limitations and Future Directions

Despite the relatively good performance of some of our models, results also suggest that our list of relevant features was not exhaustive. Emotion resemblances explained 53% and 42% of the variance in trustworthiness and dominance perceptions. Even the optimized Elastic Net models explained around 68% of the variance, indicating there are other important factors contributing to personality impressions. Other facial features that might show independent contributions to personality impressions include skin texture (Jaeger et al., 2018; A. L. Jones et al., 2012) and perceived weight (Holzleitner et al., 2019). Examining the role of additional predictors will show how generalizable the present results are, as the relative importance of facial features ultimately depends on the specific set of features that is modeled. In order to conclusively establish that certain facial features are central in impression formation (and that observed associations are not due to other, unmeasured dimensions), faces need to be modeled along all potentially meaningful dimensions. From a practical perspective, achieving this goal may be unfeasible at best and impossible at worst. Still, future work should strive to test the relative importance of different features by comparing them against large sets of other features that have been shown to predict impressions.

Future studies could also investigate characteristics of the perceiver which explain a nontrivial amount of variance in impressions (Hehman et al., 2019). Moreover, while the current set of faces was relatively large and diverse in terms of gender, age, and race, we only examined U.S. individuals who were photographed in a controlled lab setting. Future studies could test whether the current findings replicate when using more naturalistic images of individuals from different nationalities (Sutherland et al., 2013).

Episodic-like memory (what, where and when specific things happened) is preserved with age in cuttlefish, molluscs that lack a hippocampus, maybe due to reproductive pressure

Episodic-like memory is preserved with age in cuttlefish. Alexandra K. Schnell, Nicola S. Clayton, Roger T. Hanlon and Christelle Jozet-Alves. Proceedings of the Royal Society B, August 18 2021. https://doi.org/10.1098/rspb.2021.1052

Abstract: Episodic memory, remembering past experiences based on unique what–where–when components, declines during ageing in humans, as does episodic-like memory in non-human mammals. By contrast, semantic memory, remembering learnt knowledge without recalling unique what–where–when features, remains relatively intact with advancing age. The age-related decline in episodic memory likely stems from the deteriorating function of the hippocampus in the brain. Whether episodic memory can deteriorate with age in species that lack a hippocampus is unknown. Cuttlefish are molluscs that lack a hippocampus. We test both semantic-like and episodic-like memory in sub-adults and aged-adults nearing senescence (n = 6 per cohort). In the semantic-like memory task, cuttlefish had to learn that the location of a food resource was dependent on the time of day. Performance, measured as proportion of correct trials, was comparable across age groups. In the episodic-like memory task, cuttlefish had to solve a foraging task by retrieving what–where–when information about a past event with unique spatio-temporal features. In this task, performance was comparable across age groups; however, aged-adults reached the success criterion (8/10 correct choices in consecutive trials) significantly faster than sub-adults. Contrary to other animals, episodic-like memory is preserved in aged cuttlefish, suggesting that memory deterioration is delayed in this species.

Popular version: https://www.sciencedaily.com/releases/2021/08/210817193055.htm

"The old cuttlefish were just as good as the younger ones in the memory task -- in fact, many of the older ones did better in the test phase. We think this ability might help cuttlefish in the wild to remember who they mated with, so they don't go back to the same partner," said Schnell.

Cuttlefish only breed at the end of their life. By remembering who they mated with, where, and how long ago, the researchers think this helps the cuttlefish to spread their genes widely by mating with as many partners as possible.

From 2019... In the U.S. and Côte d’Ivoire, highly educated people make decisions that are less consistent with the rational model while low-income respondents make decisions more consistent with the rational model

From 2019... Are We All Predictably Irrational? An Experimental Analysis. John A. Doces & Amy Wolaver. Political Behavior, volume 43, pp. 1205–1226. Dec 18, 2019. https://link.springer.com/article/10.1007/s11109-019-09579-0

Abstract: We examine the question of rationality, replicating two core experiments used to establish that people deviate from the rational actor model. Our analysis extends existing research to a developing country context. Based on our theoretical expectations, we test if respondents make decisions consistent with the rational actor framework. Experimental surveys were administered in Côte d’Ivoire and Ghana, two developing countries in West Africa, focusing on issues of risk aversion and framing. Findings indicate that respondents make decisions more consistent with the rational actor model than has been found in the developed world. Extending our analysis to test if the differences in responses are due to other demographic differences between the African samples and the United States, we replicated these experiments on a nationally representative analysis in the U.S., finding results primarily consistent with the seminal findings of irrationality. In the U.S. and Côte d’Ivoire, highly educated people make decisions that are less consistent with the rational model while low-income respondents make decisions more consistent with the rational model. The degree to which people are irrational thus is contextual, possibly western, and not nearly as universal as has been concluded.

Introduction

Are we all predictably irrational? Since the seminal work of Tversky and Kahneman (1974, 1981, 1986, 1991) and other behavioral economists (Akerlof and Kranton 2000; Ariely 2010; Camerer 2003; Thaler 1980), the usefulness of models assuming rationality has been questioned, if not entirely dismissed in some cases (Green and Shapiro 1994; Sen 1977). However, the vast majority of the empirical work establishing consistently irrational behaviors has been conducted on populations in the West, dubbed WEIRD for Western, educated, industrialized, rich and democratic (Henrich et al. 2010). In their review of the psychology literature, Henrich et al. (2010) indicate that 68% of subjects were from the United States, 96% from the Western Industrialized world, with 80% of the Western sample being undergraduate students. If other populations behave differently than these groups, then the implications of the new models of behavior may not be as widely applicable as we thought.

There have been some forays examining deviations from the predictions based on the rational actor model across different populations, notably studies of the impact that poverty has on decision-making. While some studies establish ways in which poverty decreases cognitive ability through the additional stresses associated with living in poverty (Mani et al. 2013; Haushofer and Fehr 2014), others argue that the influence of poverty on decisions is related more to additional constraints and a constant presence of risk in the lives of the poor (Duflo 2006; Banerjee and Duflo 2007; Carvalho et al. 2016). There is a growing body of literature that establishes that the poor are less subject to some of the cognitive biases found by Tversky and Kahneman (Shah et al. 2015, 2018). Possible explanations for the differences in decision-making by the poor are that poverty may increase attention to costs, and/or it exposes one to more risk, which causes people to give more weight to current versus future outcomes.

To determine whether these predictable irrationalities are applicable in other parts of the world, we replicate some of the most important experiments conducted by Tversky and Kahneman (1981) in Côte d’Ivoire and Ghana, and compare these samples to those from Western populations. We find that respondents from Côte d’Ivoire and Ghana make decisions that are closer to the model of rationality than Westerners. Building on this key finding, we also examine the effects of individual characteristics on decision-making to determine whether there are systematic differences within these populations. Here, we find that most sub-groups, with some exceptions, make decisions that are relatively more consistent with the predictions of the rational model than has been found in prior research. Finally, to ascertain why the differences exist between the original results from 1981 and our data, we re-consider two of the original experiments in a nationally representative sample of American adults. This sample provides support for the 1981 results of systematic irrationality, with important exceptions, and helps to contrast our findings from West Africa that the rational actor model is most applicable in the developing world.

Our empirical results, in sum, suggest there is more merit to the rational choice paradigm than perhaps has been thought, and that existing studies concluding people are predictably irrational are overstated in a number of ways. This is an important finding with implications for several areas of academic scholarship. The rational actor model has served as the cornerstone assumption about the behavior of political actors, influencing research in political science on voter choice, foreign policy making, conflict, and international political economy amongst others (de Mesquita and Smith 2011; Mansfield et al. 2000; Powell 1991; Slantchev and Tarar 2011). Recent work has extended the paradigm to explicitly non-western contexts (Hollyer et al. 2015). Nevertheless, debates about its utility in political science have been especially spirited (de Mesquita and Morrow 1999; Walt 1999), with one enduring criticism being the lack of empirical support that people behave as the model assumes, a point which even supporters acknowledge (Kahler 1998; Snidal 2002). In economics, as well, the core mainstream model assumes rationality, with applications to the law (Posner 2014) and even addiction (Becker et al. 1991). By addressing the empirical underpinnings of rational choice, we help fill an important gap in our understanding of rationality and show that the model might be most relevant for non-western populations.


Uncommon case of complete loss of hunger following an isolated left insular stroke

Uncommon case of complete loss of hunger following an isolated left insular stroke. Benjamin Hébert-Seropian, Olivier Boucher, Didier Jutras-Aswad & Dang Khoa Nguyen. Neurocase: The Neural Basis of Cognition, Aug 16, 2021. https://doi.org/10.1080/13554794.2021.1966044

Abstract: The insula has long been among the least understood regions of the human brain, in part due to its restricted accessibility. Mounting evidence suggests that the insula is a prominent player in gustatory, interoceptive, and emotional processing, and likely integrates these different functions to contribute to the homeostatic control of food intake. Here we report the case of a young adult patient who lost the subjective experience of hunger following an ischemic stroke localized in the posterior left insula. The loss of hunger was not attributable to medication, substance use, or a clinical disorder, and lasted for a period of 15 months. In line with the role attributed to the insula in gustation and interoception, we suggest that the insula integrates information about taste, interoception, and the hedonic value of food in the service of homeostatic regulation.

KEYWORDS: Hunger, appetite, insula, stroke, case report


Found some evidence that higher income is associated with less happiness and no substantive benefit to higher household income in the US after $35-40K and in Germany after €14-18K (in daily life, not as an assessment of the whole)

Kudrna, Laura, and Kostadin Kushlev. 2021. “Money Does Not Always Buy Happiness, but Are Richer People Less Happy in Their Daily Lives? It Depends on How You Analyze Income.” PsyArXiv. August 18. doi:10.31234/osf.io/4jvh5

Abstract: Do people who have more money feel happier during their daily activities? Some prior research has found no relationship between income and daily happiness when treating income as a continuous variable in OLS regressions, although results differ between studies. We re-analyzed existing data, treating household income as a categorical variable and using lowess and spline regressions to explore non-linearities. Our analyses reveal that these methodological decisions provide new insights into the relationship between income and happiness. We find some evidence that higher income is associated with less happiness and no substantive benefit to higher household income in the US after $35-40K and in Germany after €14-18K. Not all analytic approaches generate the same conclusions, which may explain discrepant results.
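A minimal sketch of the two analytic moves the abstract describes, using simulated data rather than the authors' datasets: a lowess smooth of happiness on household income, and an OLS model with income entered as a categorical (banded) variable. The band cut-points below are arbitrary illustrations.

```python
# Minimal sketch (assumptions, not the authors' code) of the two analytic moves
# described in the abstract: a lowess smooth of happiness on income, and an OLS
# model with income entered as a categorical (banded) variable. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
income = rng.uniform(5_000, 120_000, size=2_000)
happiness = 3 + 0.6 * np.log(income / 5_000) - 0.2 * (income > 40_000) + rng.normal(0, 1, 2_000)
df = pd.DataFrame({"income": income, "happiness": happiness})

# 1) Non-parametric: lowess curve of daily happiness over household income
smooth = sm.nonparametric.lowess(df["happiness"], df["income"], frac=0.3)

# 2) Categorical: income bands instead of a single continuous slope
df["income_band"] = pd.cut(df["income"],
                           bins=[0, 20_000, 40_000, 60_000, 120_000],
                           labels=["<20K", "20-40K", "40-60K", "60K+"])
banded = smf.ols("happiness ~ C(income_band)", data=df).fit()
print(banded.summary().tables[1])
```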


From 2019... Greater male cognitive variability has implications for both tails of the distribution: Danish data (n = 1.3 million) finds that twice as many boys as girls are diagnosed with intellectual disability

From 2019... Incidence Rates and Cumulative Incidences of the Full Spectrum of Diagnosed Mental Disorders in Childhood and Adolescence. Søren Dalsgaard et al. JAMA Psychiatry. 2020;77(2):155-164. Nov 20, 2019, doi:10.1001/jamapsychia

Key Points

Question  What are the age- and sex-specific incidence rates and cumulative incidences of the full spectrum of diagnosed mental disorders during childhood and adolescence?

Findings  In this nationwide cohort study of 1.3 million individuals in Denmark, the risk (cumulative incidence) of being diagnosed with a mental disorder before 18 years of age was 14.63% in girls and 15.51% in boys. Distinct age- and sex-specific patterns of occurrence were found across mental disorders in children and adolescents.

Meaning  These findings suggest that precise estimates of rates and risks of all mental disorders during childhood and adolescence are essential for future planning of services and care and for etiological research.


Abstract

Importance: Knowledge about the epidemiology of mental disorders in children and adolescents is essential for research and planning of health services. Surveys can provide prevalence rates, whereas population-based registers are instrumental to obtain precise estimates of incidence rates and risks.

Objective  To estimate age- and sex-specific incidence rates and risks of being diagnosed with any mental disorder during childhood and adolescence.

Design  This cohort study included all individuals born in Denmark from January 1, 1995, through December 31, 2016 (1.3 million), and followed up from birth until December 31, 2016, or the date of death, emigration, disappearance, or diagnosis of 1 of the mental disorders examined (14.4 million person-years of follow-up). Data were analyzed from September 14, 2018, through June 11, 2019.

Exposures: Age and sex.

Main Outcomes and Measures  Incidence rates and cumulative incidences of all mental disorders according to the ICD-10 Classification of Mental and Behavioral Disorders: Diagnostic Criteria for Research, diagnosed before 18 years of age during the study period.

Results  A total of 99 926 individuals (15.01%; 95% CI, 14.98%-15.17%), including 41 350 girls (14.63%; 95% CI, 14.48%-14.77%) and 58 576 boys (15.51%; 95% CI, 15.18%-15.84%), were diagnosed with a mental disorder before 18 years of age. Anxiety disorder was the most common diagnosis in girls (7.85%; 95% CI, 7.74%-7.97%); attention-deficit/hyperactivity disorder (ADHD) was the most common in boys (5.90%; 95% CI, 5.76%-6.03%). Girls had a higher risk than boys of schizophrenia (0.76% [95% CI, 0.72%-0.80%] vs 0.48% [95% CI, 0.39%-0.59%]), obsessive-compulsive disorder (0.96% [95% CI, 0.92%-1.00%] vs 0.63% [95% CI, 0.56%-0.72%]), and mood disorders (2.54% [95% CI, 2.47%-2.61%] vs 1.10% [95% CI, 0.84%-1.21%]). Incidence peaked earlier in boys than girls in ADHD (8 vs 17 years of age), intellectual disability (5 vs 14 years of age), and other developmental disorders (5 vs 16 years of age). The overall risk of being diagnosed with a mental disorder before 6 years of age was 2.13% (95% CI, 2.11%-2.16%) and was higher in boys (2.78% [95% CI, 2.44%-3.15%]) than in girls (1.45% [95% CI, 1.42%-1.49%]).

Conclusions and Relevance  This nationwide population-based cohort study provides a first comprehensive assessment of the incidence and risks of mental disorders in childhood and adolescence. By 18 years of age, 15.01% of children and adolescents in this study were diagnosed with a mental disorder. The incidence of several neurodevelopmental disorders peaked in late adolescence in girls, suggesting possible delayed detection. The distinct signatures of the different mental disorders with respect to sex and age may have important implications for service planning and etiological research.


Check also Greater male variability is currently universal in internationally comparable assessments; some of this heterogeneity can be attributed to some species-universal mechanism or some other social/cultural phenomenon

Sex differences in variability across nations in reading, mathematics and science: a meta-analytic extension of Baye and Monseur (2016). Helen Gray, Andrew Lyth, Catherine McKenna, Susan Stothard, Peter Tymms and Lee Copping. Large-scale Assessments in Education (An IEA-ETS Research Institute Journal), 2019, 7:2. https://www.bipartisanalliance.com/2019/03/greater-male-variability-is-currently.html

Faecal transplants from young mice can enhance cognitive function in older animals

Microbiota from young mice counteracts selective age-associated behavioral deficits. Marcus Boehme et al. Nature Aging, volume 1, pages 666–676. Aug 9, 2021. https://www.nature.com/articles/s43587-021-00093-9

Abstract: The gut microbiota is increasingly recognized as an important regulator of host immunity and brain health. The aging process yields dramatic alterations in the microbiota, which is linked to poorer health and frailty in elderly populations. However, there is limited evidence for a mechanistic role of the gut microbiota in brain health and neuroimmunity during aging processes. Therefore, we conducted fecal microbiota transplantation from either young (3–4 months) or old (19–20 months) donor mice into aged recipient mice (19–20 months). Transplant of a microbiota from young donors reversed aging-associated differences in peripheral and brain immunity, as well as the hippocampal metabolome and transcriptome of aging recipient mice. Finally, the young donor-derived microbiota attenuated selective age-associated impairments in cognitive behavior when transplanted into an aged host. Our results reveal that the microbiome may be a suitable therapeutic target to promote healthy aging.

Popular version: Faecal transplants from young mice can enhance cognitive function in older animals. https://www.nature.com/articles/d41586-021-02184-4


Challenging the binary: Gender/sex and the bio-logics of normalcy

Challenging the binary: Gender/sex and the bio-logics of normalcy. L. Zachary DuBois, Heather Shattuck-Heidorn. American Journal of Human Biology, June 6 2021. https://doi.org/10.1002/ajhb.23623

Abstract

Background: We are witnessing renewed debates regarding definitions and boundaries of human gender/sex, where lines of genetics, gonadal hormones, and secondary sex characteristics are drawn to defend strict binary categorizations, with attendant implications for the acceptability and limits of gender identity and diversity.

Aims: Many argue for the need to recognize the entanglement of gender/sex in humans and the myriad ways that gender experience becomes biology; translating this theory into practice in human biology research is essential. Biological anthropology is well poised to contribute to these societal conversations and debates. To do this effectively, a reconsideration of our own conceptions of gender/sex, gender identity, and sexuality is necessary.

Methods: In this article, we discuss biological variation associated with gender/sex and propose ways forward to ensure we are engaging with gender/sex diversity. We base our analysis in the concept of “biological normalcy,” which allows consideration of the relationships between statistical distributions and normative views. We address the problematic reliance on binary categories, the utilization of group means to represent typical biologies, and document ways in which binary norms reinforce stigma and inequality regarding gender/sex, gender identity, and sexuality.

Discussion and Conclusions: We conclude with guidelines and methodological suggestions for how to engage gender/sex and gender identity in research. Our goal is to contribute a framework that all human biologists can use, not just those who work with gender or sexually diverse populations. We hope that in bringing this perspective to bear in human biology, that novel ideas and applications will emerge from within our own discipline.


1 | INTRODUCTION

Biological anthropologists are experts at teasing apart the complexities of biocultural interactions that inform what it is to be human, examining how broad-ranging factors such as market acculturation (Godoy et al., 2005; Liebert et al., 2013), parenting strategies (McKenna et al., 2007; Nelson, 2016), or socially constructed categories of race (Dressler & Bindon, 2000; Gravlee, 2009) relate to physiology including growth and development, immune function, and endocrinology. Yet we have not fully engaged with cutting-edge understandings of variation in gender, sex, and sexuality. This is a critical gap, especially given renewed debates regarding the boundaries of human sex, where lines of genetics, “sex hormones,” and secondary sex characteristics are drawn to defend a strict biologically based sex binary, with attendant implications for the acceptability and limits of gender identity and expression for all people. Whether regulating testosterone levels and bodies of women and girls in sports, legislating the use of gender-specific bathrooms, or enacting broad-sweeping federal definitions of sex, bodily “norms” are being weaponized as a means to discriminate (Karkazis et al., 2012; Nondiscrimination in Health and Health Education or Activities, 2020). Biological anthropology is well poised to contribute to these societal conversations, but first, we need to more deeply consider our own conceptions of sex, gender, and sexuality, and how we implement such understandings in our research.

In this article, we discuss biological variation associated with sex and gender and possible ways forward for conceptualizing and operationalizing these constructs within biological anthropology. We base our analysis in the concept of “biological normalcy,” which allows consideration of the relationships between “statistical distributions of biological traits and normative views about what bodies ‘should’ be like or what constitutes a ‘normal’ body” (Wiley & Cullin, 2020: p. 1; Wiley, 2021). A classic example of how bionormalcy enables critical interrogation of norms is seen in the case of dietary recommendations normalizing milk consumption culturally as “healthy” and even necessary, despite the statistical norm of lactase nonpersistence (Wiley, 2021). This can be seen as normalizing and even moralizing a biological trait present only in some individuals in some populations (Wiley & Allen, 2017; Wiley & Cullin, 2020). This example aptly demonstrates the fact that many of the statistical distributions that end up being “normalized” are based on samples drawn from predominantly white, “Western” populations (Clancy & Davis, 2019; Henrich et al., 2010), with the psychological, behavioral, and biological traits of these populations referenced as the standard from which other populations deviate (e.g., body size and growth, Thompson et al., 2014).

The model of biological normalcy (Figure 1) is circular. Cultural norms and assumptions inform the development of research questions, methods of data collection, and analyses as well as interpretations of data. Statistical norms are also leveraged (albeit sometimes unconsciously) to create, reinforce, or otherwise inform those very cultural norms and assumptions. However, normalcy has not always been conceptualized in this way. The word “normal” as reflective of something to be desired in reference to an “abnormal” state arose only in the mid to late 19th century (Cryle & Stephens, 2017; Hacking, 1990).

Initially, the term “normal” did not represent statistical distributions nor did it carry the morality it is imbued with today. Instead, norms provided a way to reference something “in its own right” and not necessarily through comparison to an ideal. In this way, even anomalies could be understood within a framework of “normal.” With the emergence of statistics in the late 19th century, the concept of the normal became hitched to statistical distributions and to the racist and eugenicist ideas imposed on population traits (Cryle & Stephens, 2017). And with this shift, the concept of the normal intertwines with the history of biological anthropology, as eugenic and white supremacist concepts of human traits and the categorical position of white men as both unmarked and ideal are the very foundation of much of our field (Blakey, 2020; Caspari, 2018; Marks, 2012). Racism and colonialism are equally culpable in the development of value-laden categories of sex and gender and the behavioral norms to which they are often tied. For example, conceptualizations of femininity and masculinity themselves were initially intertwined with racialized categories in an effort to hierarchically demarcate rank, reflecting a colonialist project with the “white ideal” as most differentiated between the sexes (Markowitz, 2001). As a field, biological anthropology continues to suffer from how our history influences who practices biological anthropology (e.g., Bolnick et al., 2019).

As biocultural anthropologists, in this article, we aim to broaden the way that human biology engages with categorical thinking about gender and sex and to push for greater recognition of variation in these domains. We are inspired by the decades of strong work into race as a social construct with biological outcomes (Armelagos & Goodman, 1998; Dressler et al., 2005; Graves Jr, 2003; Graves Jr, 2015; Gravlee, 2009; Williams & Mohammed, 2013), and by recent work contextualizing how concepts such as violence are gendered and raced (e.g., Nelson, 2021; Smith, 2021). In our own work, we have grappled with how to better conceptualize and operationalize sex and gender, whether examining energetics and immune function in pubertal girls (Shattuck-Heidorn, Reiches, & Richardson, 2020), sexual decision making among queer adolescent cis men (DuBois et al., 2015), or immune marker and environmental conditions for (cis) men and women (Shattuck-Heidorn, Eick, et al., 2020). In some of our prior work, the category “cis” was unmarked, and at times, in our analytical strategies, we have statistically compared cis men to cis women without a clear justification as to why the sample should be divided by sex as opposed to some other trait(s). Much of our recent scholarship integrates theoretical insights from gender and feminist theory and presents challenges to simple gender/sex binaries through our research questions, study designs, and hypotheses. This is reflected, for example, in work expanding understandings of stigma and embodied inequalities among trans and gender diverse people (DuBois, 2012; DuBois et al., 2017), furthering our methodological and theoretical approaches to better encompass gender/sex and sexual diversity (DuBois et al., 2021; Shattuck-Heidorn & Richardson, 2019), and interrogations into the basis for disparities in COVID-19 outcomes (Gibb et al., 2020; Rushovich et al., 2021; Shattuck-Heidorn, Reiches, & Richardson, 2020).
Such interdisciplinary merging has enabled us to better conceptualize human gender/sex and enhanced our understanding of variation in embodiment and health. In this article, we address the following critical areas: (1) the problematic reliance on binary sex categories used as a priori biological categories across traits; (2) the attendant focus on group means to represent typical “male” and “female” behaviors and biology and accompanying fixation on “difference;” (3) the ways in which binary sex norms reinforce stigma and inequality regarding sex, gender, gender identity, and sexuality; (4) the need for “best practices” to effectively engage sex and gender in research; and (5) methodological suggestions to address the lack of inclusive data collection needed to enhance our understanding of gender and sex and sexual variation. Our goal is to contribute to a framework that all human biology researchers, not just those who work with gender or sexually diverse populations, can use to inform their thinking as well as decisions about best practices for whether and how to implement sex and gender analyses within their research, both theoretically and methodologically.


6 | CONCLUSIONS

As human biologists, gender/sex is central to how we understand and organize our thinking about human evolution as well as health in contemporary and historic contexts. The entwinement of gender and sex is complex, as is much of the science exploring this variation and how it develops. It is increasingly necessary for human biologists to engage novel methodologies to ensure we are capturing and engaging with gender/sex diversity. As detailed above, research in human biology and other disciplines challenges the understanding and the use of binary sex as a meaningful category explaining human biological variation across contexts. The work reviewed here is a small part of a large field of research that pushes us to continue to consider the ways in which human bodies and identities resist static categorization. Hormones vary and function in complex ecological and social environments, brains and bodies develop over time in response to varied experiences and inputs, and societal structures of gender norms, race and racism, and sexuality influence and mediate human biology. As the common-sense notion of binary categories for human gender/sex are destabilized, our discipline is well-positioned to meaningfully explore the complexity of gender/sex in terms of human variation and to understand that variation within a sociocultural context, including race, sexuality, and gender diversity. Our field has contributed substantially to an understanding of human biology in a socioecological context. We look forward to a generation of work from biological anthropologists who are incorporating intersectional analyses of gender/sex and gender identity into our understandings of human diversity.