Thursday, August 19, 2021

Women were substantially more likely than men to wear painful, restricting, or distracting clothing, including clothing that requires ongoing monitoring or adjusting

These Boots Weren’t Made for Walking: Gendered Discrepancies in Wearing Painful, Restricting, or Distracting Clothing. Renee Engeln & Anne Zola. Sex Roles, Aug 19 2021. https://rd.springer.com/article/10.1007/s11199-021-01230-9

Abstract: Using the framework of objectification theory (Fredrickson & Roberts in Psychology of Women Quarterly 21(2): 173–206, 1997), the current studies explored how often women (vs. men) reported wearing clothing that is painful, distracting, and/or restricting (PDR clothing). Additionally, we examined differences in body surveillance (i.e., chronically monitoring the appearance of one’s body) and body appreciation between those who reported wearing various types of PDR clothing and those who did not. In both a sample of U.S. college students (n = 545) and a broader sample of U.S. adults (n = 252), results indicated that women were substantially more likely to wear PDR clothing than men. Across both samples, the largest differences between men and women were in wearing uncomfortable or painful shoes and in wearing clothing that is distracting because it requires ongoing monitoring or adjusting. Women and men with higher body surveillance were more likely to report wearing PDR clothing. Though some findings pointed toward a negative association between body appreciation and wearing PDR clothing, these results were inconsistent. Overall, results were consistent with the notion that the gendered nature of clothing might reflect and provoke chronic vigilance of the body’s appearance. Gendered differences in the extent to which clothing promotes comfort and movement vs. discomfort and distraction have clear implications for women’s quality of life.

Discussion

Once again, we found that women were substantially more likely to wear PDR clothing than men. Wearing PDR clothing was linked to greater body surveillance among both women and men. In general, there was no pattern to suggest that body appreciation differed significantly between those who do and do not wear PDR clothing.

Coding of themes in the open-ended responses to the question about why participants wore PDR clothing suggested two areas of gender discrepancy. Consistent with evidence that women score higher than men on measures of the salience of appearance in their lives (Cash et al., 2004), and with arguments that women face more rigid appearance ideals than men (Buote et al., 2011), women who wore PDR clothing were more likely than men who wore PDR clothing to indicate that they did so in order to appear more attractive. On the other hand, men were more likely than women to indicate that when they wore PDR clothing, they did so because it was a workplace requirement (e.g., wearing a tie or suit jacket). This finding was somewhat surprising given the attention in both popular media and legal settings to sexist workplace apparel requirements (e.g., requiring women to wear heels or skimpy uniforms; Aamodt, 2017). It is possible that for some men, PDR clothing in the workplace (e.g., wearing a tie or blazer) can be a means of projecting power and financial success, both of which are tied to masculinity pressures (Berdahl et al., 2018). However, a more parsimonious explanation for this pattern (and one that is consistent with objectification theory) is that for women, the pressure to “look good” extends across all settings. In other words, if one’s reason for wearing PDR clothing is to look attractive to others, that reason might supersede any specific reference to work or particular social settings. Unfortunately, the brief responses to this exploratory, open-ended question did not provide us with enough detail to more fully examine these possibilities.

General Discussion

Across two studies, we demonstrated that women are significantly more likely than men to wear clothing that is painful, distracting, or movement-restricting. Additionally, results revealed that overall, men and women who wear PDR clothing engage in more body surveillance than men and women who do not wear this type of clothing. Finally, we found that when they wear PDR clothing, women are more likely to indicate that their reason for doing so is to look attractive to others, whereas men are more likely to indicate that they do so out of a workplace obligation. This descriptive, exploratory research is the first we are aware of that directly examines how often men vs. women wear PDR clothing.

These results may appear obvious to many readers. One would need only a passing familiarity with women’s fashions to ascertain that they regularly show little regard for comfort or function. As just one example, consider widespread popular media coverage of the claim that the lack of pockets in women’s clothing is an issue of gender equality (Basu, 2014), and that designers leave useful pockets off women’s clothing primarily because pockets are viewed as unflattering to the lower body. Despite how easy it may be to casually observe the gender difference in wearing PDR clothing, documenting this pattern is a necessary first step in building an understanding of how often individuals wear PDR clothing, the psychological (or practical) factors involved in decisions to wear such clothing, and the psychological outcomes that follow.

Certainly, men’s clothing can fall under the umbrella of painful, distracting, or restricting as well. For example, neckties are a common source of fashion-related discomfort for men. However, as workplaces become more casual, fewer men are required to wear ties on a regular basis. A 2007 Gallup poll found that two in three men never wear a tie to work and only nine percent wear a tie most days (Carroll, 2007). Rates are likely substantially lower today.

Though men in the current studies were less likely than women to wear PDR clothing, men and women who wore PDR clothing tended to have greater body surveillance than those who did not wear such clothing. The link between body surveillance and wearing PDR clothing could be conceptualized as moving in two directions. Some types of PDR clothing literally require body surveillance (e.g., clothing that must be adjusted/monitored in order to avoid showing more of your body than you mean to). For example, if a woman wears a low-cut blouse but does not wish to expose her breasts, that blouse will cause her to monitor her body in order to determine how much of it is visible to other people. Other types of PDR clothing may be more of a reflection of ongoing body surveillance. For example, women may wear “shapewear” in part because they are sensitive to how the shape of their body appears to others. Of course, these effects could also act in a feedback loop, where trait levels of body surveillance prompt a person to choose PDR clothing, and the PDR clothing itself then draws more of that person’s attention to the appearance of their body.

The chronic appearance monitoring assessed by the measure of body surveillance used in these studies is strongly linked to self-objectification (Calogero, 2012). Self-objectification has negative psychological outcomes for men as well as women (e.g., Hebl et al., 2004; Martins et al., 2007), suggesting that the potential psychological toll of body surveillance is relevant regardless of gender. However, because women report wearing PDR clothing substantially more frequently than men do, PDR clothing can be conceptualized as a factor that may partially explain the gender gap in rates of self-objectification (with women consistently reporting higher levels; Frederick et al., 2007).

Because one component of body appreciation is a focus on and appreciation for the functions of one’s body (Tylka & Wood-Barcalow, 2015), and because many types of PDR clothing can limit some of the body’s functions (e.g., comfortable movement, taking deep breaths), we anticipated that wearing PDR clothing would be negatively associated with body appreciation. However, we found inconsistent support for this prediction. This may be because appreciation for the body’s functionality is only one of several components of body appreciation. Other components (e.g., body acceptance and rejecting unhealthy or rigid appearance ideals; Tylka & Wood-Barcalow, 2015) may be less relevant to decisions around PDR clothing. An alternative explanation for the inconsistency of results regarding the link between PDR clothing and body appreciation is the complicating factor of choice. Regardless of whether you freely choose to wear PDR clothing or are required to do so (by a workplace, for example), body surveillance is a logical outcome of PDR clothing if it draws your attention to how you look. On the other hand, one can imagine a person with high levels of body appreciation who wears PDR clothing out of obligation. In this case, there is no reason to suspect that wearing PDR clothing would necessarily lower one’s body appreciation.

The pattern of gender differences across these two studies with respect to how often men vs. women wear PDR clothing was clear: women wear such clothing more often. However, some categories of PDR clothing showed larger and more consistent gender differences. Across both studies, some of the largest differences between men and women were in wearing shoes that cause pain/blisters and wearing shoes that limit the time one can comfortably stand. The findings regarding shoes may speak to gender differences in taking a functional perspective on one’s body (Alleva & Tylka, 2021). When it comes to facilitating movement, shoes are arguably the single most important article of clothing. Shoes affect how quickly and confidently one can walk and how long one can stand without breaks. Though men’s shoes vary to some extent in terms of how comfortable they are (e.g., dress shoes vs. running shoes), only in women’s fashion do we see the dominance of a type of shoe (the high heel) that clearly impedes movement (Jeffreys, 2015). Previous research has found that women report wearing high heels in order to look sexy (Smolak et al., 2014), suggesting that shoes may be a key area where women negotiate trade-offs between comfort and appearance pressures. Consistent with this argument, across both studies, those who reported wearing shoes that cause pain/blisters or limited the time they could comfortably stand scored significantly higher on body surveillance.

A second area of notable gender differences was in wearing clothing that requires adjusting or monitoring throughout the day: women were much more likely to indicate that they wore this type of clothing. This finding is consistent with Fredrickson and Roberts’ (1997) argument that certain women’s fashions require women to be “chronically vigilant” of their bodies (p. 182). Monitoring your clothing provides ample opportunity to bring your attention back to your appearance. Interestingly, this type of PDR clothing was the only one to show a significant link with body appreciation across both studies. Men and women who indicated they wore clothing that requires this type of ongoing monitoring reported lower body appreciation.

One of several questions the current research leaves unanswered is the extent to which women freely choose to wear PDR clothing. This is a complicated question to tackle. In a culture in which women are taught that their primary form of social currency is their appearance (Fredrickson & Roberts, 1997), the behaviors women engage in in order to appear attractive or sexy are at best viewed as constrained choices. Some women in Study 2 directly stated that they choose to endure fashion-related pain and discomfort because that is what it takes to look sexy. Even in settings where specific types of apparel are not explicitly required, social pressures to follow fashion trends can be fierce. A norm-enforced unofficial dress code (e.g., wearing tight, short dresses in order to gain entry to a trendy bar or wearing heels for an important work presentation) can still exert a substantial pull on behavior.

An important point of difference between men’s and women’s PDR clothing is that for women, PDR clothing is often revealing (e.g., tight, short, or low-cut clothing; Goodin et al., 2011), not just distracting or uncomfortable. In other words, much of women’s PDR clothing seems intended to draw the (potentially sexually objectifying) gaze of others, whereas men’s PDR clothing is often intended to signal competence or power (e.g., a suit coat and tie). Consistent with this trend, in the current studies, the only PDR clothing type men were more likely than women to report wearing at least once a week was clothing that makes one too hot or too covered for weather conditions (Study 1). This difference between revealing and non-revealing PDR clothing likely matters in terms of the subjective experience of wearing such clothing. A suit can hide perceived bodily flaws and make a person feel (and be perceived as) more powerful (Kraus & Mendes, 2014); highly revealing clothing can prompt body consciousness and make a person more likely to be perceived as a sexual object (Gray et al., 2011).

Limitations and Future Research Directions

The current studies were primarily exploratory and cannot provide conclusive evidence about the direction of the association between wearing PDR clothing and body surveillance. Additionally, the limited data about reasons why men and women wear PDR clothing suggests that a more thorough analysis on this topic is warranted. Some reasons for wearing PDR clothing (e.g., to look good) seem to indicate free (or at least, somewhat free) choice. Other responses suggest bowing to social norms or following explicit guidelines for different work/social settings. Many participants listed both types of reasons. Future work on this topic should include a more nuanced set of questions about when, why, and how often men and women wear PDR clothing. This is especially important given the relatively informal process by which the list of PDR clothing types used in these studies was generated. Future researchers could consider using these initial data to inform the development of a formal measure of behaviors and attitudes around PDR clothing. The general categories of PDR clothing examined in these studies could also be used as a starting point for a more detailed analysis of specific articles of clothing and their psychological effects. For example, researchers could examine what types of clothing participants are thinking of when they respond to questions about clothing that leaves welts or makes it difficult to breathe. Of particular interest would be any gender differences in the extent to which PDR categories are capturing rarely worn types of clothing (e.g., formalwear) vs. more everyday types of clothing (e.g., undergarments, shoes).

We recommend that future work examining reasons why individuals wear PDR clothing employ focus groups or semi-structured interviews in order to more carefully interrogate how people make decisions around PDR clothing. Though many participants in the current study indicated that they wore PDR clothing to be more attractive to others, we were not able to explore how (or to what extent) men and women understood these choices in terms of gender roles or gendered sociocultural appearance ideals. In addition to this type of qualitative work, researchers should consider using experimental methods to test the extent to which wearing PDR clothing might lead to trade-offs between momentary boosts in self-esteem (e.g., feeling sexy or confident while wearing heels) and disruptions in the ability to focus (e.g., when one’s attention is drawn to foot pain or the need to adjust one’s clothing).

The current studies are also limited by their reliance on participants’ memory and on participants’ rough estimates of how often they wear different types of apparel. Additionally, our online survey did not include attention checks (beyond evaluating the open-ended responses in Study 2). Observational or field studies could provide more detailed data on the types of PDR clothing men and women wear in their everyday lives and how PDR clothing choices vary by setting or context. Some researchers have argued that those whose bodies least resemble cultural body ideals (typically people in marginalized bodies) may feel the greatest pressure to engage in appearance surveillance (Frederick et al., 2007). Relatedly, others have pointed to appearance management behaviors as a means for women who are poor to attempt to improve their status or financial situation (Edmonds, 2007). Together, these findings suggest that the links between social status and choices around PDR clothing would be a rich area for future research.

The current studies were not designed in a way that allows for a rigorous examination of how age (or the interaction between age and gender) might be related to wearing PDR clothing. However, this could be an interesting area for future work: there are theoretical and empirical reasons to predict that women may be less likely to wear PDR clothing as they age. For example, Fredrickson and Roberts (1997) argued that because older women tend to receive less sexualized attention from their culture, women may find themselves more able to “step out of the objectification limelight” (p. 195) as they age. To the extent that they do so, they may feel less pressure to wear PDR clothing. This possibility would be consistent with evidence that older women report lower levels of self-objectification than young women (Tiggemann & Lynch, 2001).

On the other hand, Tiggemann (2004) argued that, unlike age-related body changes, appearance management behaviors like clothing choices remain largely under one’s control as one ages. For that reason, clothing choices designed to maximize attractiveness may become more important for women as they age. This perspective suggests that PDR clothing could be more common among older women.

Practice Implications

Therapists and other practitioners working with individuals who struggle with body image-related issues might consider clothing choices as a worthwhile topic to address. Previous research has suggested that a more functional approach to understanding one’s body can help reduce body image disturbance (Alleva et al., 2015). To the extent that more comfortable clothing choices allow one to focus more on how one’s body moves and how it feels, opting out of PDR clothing could be a healing choice for some (assuming they have the freedom and means to do so). This may be particularly true for women, both because women are more likely than men to wear PDR clothing and because women tend to engage in more body surveillance than men. Of course, practitioners should take care to avoid shaming people over any clothing choices, instead considering how one might select apparel that is both confidence-inducing and allows for comfortable freedom of movement and less distraction. Activists working in this space can continue to push fashion designers and clothing manufacturers to provide comfortable clothing that does not require monitoring and adjustment throughout the day – and insist that such options be available to all genders and all body shapes and sizes.

Paradoxical intention has been considered an evidence-based treatment for insomnia since the 1990s; it seems to result in great reductions in sleep-related performance anxiety & marked clinical improvements

Paradoxical intention for insomnia: A systematic review and meta-analysis. Markus Jansson-Fröjmark, Sven Alfonsson, Benjamin Bohman, Alexander Rozental, Annika Norell-Clarke. Journal of Sleep Research, August 17 2021. https://doi.org/10.1111/jsr.13464

Summary: Paradoxical intention (PI) has been considered an evidence-based treatment for insomnia since the 1990s, but it has not been evaluated with modern review techniques such as meta-analysis. The present study aimed to conduct the first systematic review and meta-analysis of studies that explore the effectiveness of PI for insomnia on insomnia symptomatology and theory-derived processes. A systematic review and meta-analysis was conducted by searching for eligible articles or dissertations in six online bibliographic databases. Randomised controlled trials and experimental studies comparing PI for insomnia to active and passive comparators and assessing insomnia symptoms as outcomes were included. A random effects model was estimated to determine the standardised mean difference Hedges’ g at post-treatment. Test for heterogeneity was performed, fail-safe N was calculated, and study quality was assessed. The study was pre-registered at International Prospective Register of Systematic Reviews (PROSPERO, CRD42019137357). A total of 10 trials were identified. Compared to passive comparators, PI led to large improvements in key insomnia symptoms. Relative to active comparators, the improvements were smaller, but still moderate for several central outcomes. Compared to passive comparators, PI resulted in great reductions in sleep-related performance anxiety, one of several proposed mechanisms of change for PI. PI for insomnia resulted in marked clinical improvements, large relative to passive comparators and moderate compared to active comparators. However, methodologically stronger studies are needed before more firm conclusions can be drawn.
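To make the review's estimation pipeline concrete, here is a minimal sketch of the two computations named in the summary: Hedges' g for each trial and a DerSimonian-Laird random-effects pooled estimate. All study numbers below are invented for illustration and are not taken from the meta-analysis; only numpy is assumed.

```python
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardised mean difference with small-sample correction (Hedges' g)."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

def random_effects_pool(gs, vs):
    """DerSimonian-Laird random-effects model with heterogeneity statistic Q."""
    gs, vs = np.asarray(gs), np.asarray(vs)
    w = 1 / vs
    q = np.sum(w * (gs - np.sum(w * gs) / np.sum(w))**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)  # between-study variance
    w_star = 1 / (vs + tau2)
    pooled = np.sum(w_star * gs) / np.sum(w_star)
    return pooled, np.sqrt(1 / np.sum(w_star)), tau2, q

# Hypothetical trials: (mean_PI, mean_control, sd_PI, sd_control, n_PI, n_control)
# for sleep-onset latency in minutes, where lower means better sleep
trials = [(35, 55, 20, 25, 15, 15), (30, 48, 18, 22, 20, 20), (40, 50, 25, 24, 12, 12)]
gs, vs = zip(*(hedges_g(*t) for t in trials))
pooled, se, tau2, q = random_effects_pool(gs, vs)
print(f"pooled g = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.2f}, Q = {q:.2f}")
```

With this sign convention a negative pooled g favours PI (shorter sleep-onset latency); the review's actual analysis additionally computed fail-safe N and assessed study quality.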

4 DISCUSSION

4.1 Summary of main results

The present study is the first comprehensive systematic review and meta-analysis of the effectiveness of PI for insomnia. Relative to passive comparators, PI resulted in large improvements in several central insomnia symptoms. Although the effectiveness of PI was smaller compared to active comparators, the effects were still moderate for several key outcomes. Relative to previous reviews, the present study extends the quantitative assessment of PI as an evidence-based intervention in that it compared PI with passive versus active comparators and included both night-time and daytime symptoms (Jansson-Fröjmark & Norell-Clarke, 2018; Morin et al., 1999, 2006). A unique finding was support for large reductions in sleep-related performance anxiety after PI. This finding strengthens the notion that decreased performance anxiety is a mechanism through which PI might work.

Cumming and Finch (2001) have recommended that effect sizes be compared to other relevant estimates in the literature to grasp their significance. In one of the largest and most recent meta-analyses, cognitive and behavioural interventions (e.g. CBT-I, relaxation, stimulus control, psychoeducation, and sleep restriction) were compared with passive comparators (van Straten et al., 2018). Comparing the effect sizes from van Straten et al. (2018) for cognitive and behavioural therapies with the present study’s effect sizes for PI relative to passive comparators, the effects were larger in the present study for PI on sleep-onset latency (SOL; 0.57 versus 0.82), number of awakenings (NAW; 0.28 versus 1.10), and total sleep time (TST; 0.16 versus 0.51), and smaller on sleep efficiency (SE; 0.71 versus 0.00). Although inferences from comparisons of this sort are difficult to draw from a methodological viewpoint, a reasonable conclusion is that PI tentatively has effectiveness similar to that of other cognitive and behavioural interventions. At the same time, this conclusion is hampered by several limitations in the trials exploring the effectiveness of PI. The relatively few studies, limited number of study participants, and other methodological characteristics of the studies make an overall conclusion about the effectiveness and generalisability of PI uncertain.

4.2 Methodological considerations and quality of evidence

The present review identified 10 studies that evaluated the effectiveness of PI. These studies had a number of notable methodological limitations. The study quality assessment showed that the quality of the 10 studies ranged from 15 to 20 points out of 26, implying moderate study quality. The methodological quality was particularly weak in two areas. First, no studies reported blinding of subjects, even though it appeared that this would have been possible. Second, few studies appeared to have sufficient power to detect group differences. While some of these limitations were noted in the study quality assessment, others are underscored more specifically below.

Across the 10 studies, there was diversity in design. In nine trials, PI was compared with a passive comparator, which means that non-specific factors (e.g. therapist contact) were not controlled for in the estimates comparing PI with passive comparators. Concerning design, it is also worth underscoring that various active comparators were aggregated into one category on the grounds that they all provided study participants with active treatment content. This aggregation could, however, have combined comparators with differing effects, making the comparison between PI and active comparators uncertain.

Another limitation concerns the patient characteristics. The total sample size was limited to <400 participants, and none of the trials reported that power calculations were made prior to study start. Type II errors are therefore likely, particularly when active treatments were compared. Further, all participants were recruited from the community, which might make the present findings less generalisable to health settings, as patients in clinical settings tend to display elevated symptoms (Davidson et al., 2009). Another observation is that, in almost all of the studies, we categorised the participants as meeting criteria for sleep-onset insomnia or primary insomnia. It is therefore uncertain whether PI should be viewed as an effective intervention for other types of insomnia, such as comorbid insomnia. It is also worth noting that there might be specific insomnia profiles that are particularly susceptible to PI. For example, Espie et al. (2006) have proposed that PI might be specifically suited for patients with psychophysiological insomnia, as this profile of patients is believed to be characterised by attentional bias, preoccupation with sleep, and the use of several strategies to avoid sleeplessness. In future research, the study of PI and its effectiveness for different insomnia profiles might also be based on recent empirical attempts to subtype insomnia (Blanken et al., 2019). On a related note, we observed that comorbidity was not formally assessed in the included studies. Although several studies used certain criteria to assess and/or exclude comorbidity, the lack of validated assessments of psychiatric and somatic conditions limits generalisability. As comorbid problems are more common than “pure” insomnia (Stepanski & Rybarczyk, 2006), the lack of assessment of comorbid conditions and the exclusion of participants with comorbid problems are problematic.

Another issue of methodological uncertainty concerns the administration of PI. There were slight variations in several features of the delivery. The rationale and instructions varied across studies, although the original approach by Ascher and Efran (1978) was most commonly employed. The delivery format was also mixed, with individual, self-help, and group formats identified. Further, for several treatment-related parameters, sufficient information was rarely provided: whether a treatment manual was used, who delivered PI, whether the therapists were trained and/or supervised, and whether treatment integrity was assessed. The dose of PI also varied across studies. Often, PI was delivered across 2–4 weeks, but longer treatment periods were also identified. Based on the limited number of studies in the present review, we were unable to investigate whether certain delivery formats of PI were more effective than others. During the review process, we also noted that none of the studies assessed treatment-relevant domains that might matter for the interpretation of findings, such as acceptability, adherence, credibility and expectancy ratings, and perceived usefulness of PI. It should also be emphasised that worsened sleep after PI has been reported in the research literature (Espie & Lindsay, 1985). As none of the included studies reported on adverse events or deterioration, more research is warranted to examine whether PI produces negative effects among patients with insomnia in general or in subgroups of patients.

An inclusion criterion for the present review was that trials must report insomnia-related outcomes (i.e. night-time and/or daytime symptoms). Across studies, it was less common to index objective sleep outcomes, daytime symptoms, theory-derived processes, and global insomnia symptoms (e.g. with the Insomnia Severity Index; Bastien et al., 2001). Due to the lack of studies assessing several outcome domains, all meta-analytical estimations were based on sleep diary data or on questionnaire data assessing sleep performance anxiety. As a result, we can only draw conclusions for PI concerning sleep diary-assessed night-time symptoms and, to a lesser extent, sleep performance anxiety. A related limitation is that the longer-term effectiveness of PI could not be estimated, as there were insufficient data for such calculations.

A further limitation is that sensitivity and moderator analyses were not conducted due to the limited number of studies. For example, it would have been interesting to explore the effects of adding or removing lower-quality studies, and to examine whether insomnia symptomatology at baseline and PI administration might moderate the effectiveness of PI. A final limitation is that included studies were required to be published in English, introducing a possible language bias.

4.3 Putative mechanisms

In the present study, we identified three studies that assessed sleep-related performance anxiety as a putative mechanism, and no trials that indexed other potential mechanisms (e.g. sleep intention). As a whole, performance anxiety was reduced to a large degree after PI in the included trials. However, it is important to emphasise that this does not imply that performance anxiety has been demonstrated to act as a mechanism. As all trials in the present review analysed sleep-related performance anxiety only as pre- to post-treatment change, future research might design studies so that mediational analyses become possible. Such studies require repeated assessment of mediators, followed by analysis of whether changes in mediators precede improvements in insomnia symptoms. This would pave the way for evidence-based explanations of how PI produces improvements (Kazdin, 2007).
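As a minimal sketch of the kind of regression-based mediation analysis this paragraph calls for (the simulated data, variable names, and effect sizes below are our illustrative assumptions, not the review's method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
treat = rng.integers(0, 2, n)  # 0 = comparator, 1 = PI (hypothetical trial arms)
# Simulate the proposed mechanism: PI lowers performance anxiety,
# and lower performance anxiety in turn lowers insomnia severity.
anxiety = 5.0 - 1.5 * treat + rng.normal(0, 1, n)
severity = 10.0 + 0.8 * anxiety + rng.normal(0, 1, n)

# Path a: treatment -> mediator
a = np.linalg.lstsq(np.column_stack([np.ones(n), treat]), anxiety, rcond=None)[0][1]
# Path b: mediator -> outcome, controlling for treatment
b = np.linalg.lstsq(np.column_stack([np.ones(n), treat, anxiety]), severity, rcond=None)[0][2]

# Indirect (mediated) effect; in practice one would bootstrap a confidence
# interval and, per Kazdin (2007), show that mediator change precedes outcome change
print(f"indirect effect a*b = {a * b:.2f}")
```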

Another important methodological aspect of the research literature on performance anxiety is that the self-report scales used in the three studies have not been systematically validated in psychometric terms (Broomfield & Espie, 2003; Buchanan, 1988; Fogle & Dyal, 1983). As a result, it is uncertain whether the construct validity of the self-report scales is sufficiently captured, so that conclusions about sleep performance anxiety can be drawn in the present review. Concerning the measurement of sleep performance anxiety, it should be noted that validated self-report scales are available, such as the Glasgow Sleep Effort Scale (Broomfield & Espie, 2005; Meia-Via et al., 2016; Vand et al., 2020), and such instruments are recommended for future research. The use of validated measures in future trials would enable stronger conclusions about the effectiveness of PI on sleep performance anxiety as well as the possibility to examine mediation in a more rigorous way and explore moderation (e.g. whether PI is particularly effective among insomnia patients with elevated sleep performance anxiety).

One should note that sleep-related performance anxiety is not the only candidate mechanism for PI. First, PI could be viewed as an intervention that exposes patients to learned, feared stimuli in the bed or bedroom (Lundh, 1998), which enables extinction and the formation of new learning (Craske et al., 2014). However, this notion has not yet been articulated in detail in the research literature, nor examined empirically. A second putative mechanistic pathway is described in the attention–intention–effort model (Espie et al., 2006). Although this pathway appears to have high face validity, the model has not, to our knowledge, been explicitly tested in its full complexity in the realm of PI treatment.

4.4 Future directions

There are several important areas on which future research could focus to enhance the understanding of PI. Following from the limitations and uncertainties described above, we recommend that future research use active comparators, sample sizes based on power calculations, samples from clinical settings, a variety of insomnia types (including insomnia disorder), formal assessments of comorbidity, different delivery formats, broad assessments of insomnia symptoms and correlates as outcomes, and different mediators to examine mechanistic pathways.

One unknown dimension of PI is its optimal dosing and administration. Although PI has commonly been implemented by patients over a 2–4-week period, shorter administration could be beneficial as well. Given the theoretical rationale (breaking a vicious cycle of sleep intention and associated performance anxiety), PI could potentially also be delivered as a behavioural experiment, during which patients state their predictions (e.g. “If I do not try to fall asleep, I will remain awake all night”) and then test them by applying PI for a limited number of nights. Another topic for future research is the optimal treatment rationale and instructions for PI. Based on two studies included in the present review (Ascher & Turner, 1980; Ott et al., 1983), it appears likely that PI with a desensitisation rationale or with feedback is less beneficial than the original approach of Ascher and Efran (1978). Beyond that, the ideal rationale and instructions remain unknown.

Based on the findings in the present review, the notion of how PI should be used warrants reflection. On the one hand, we believe that CBT-I should still be regarded as the first-line intervention for insomnia disorder (Riemann et al., 2017). On the other hand, PI might play a role in some cases. For example, if a patient remains unimproved after CBT-I, PI could be one option. Also, if the patient reports high sleep-related performance anxiety, and this appears as the primary maintaining factor, PI could be used in isolation or in combination with other efficacious CBT-I components, such as sleep restriction (Miller et al., 2014). To date, current CBT manuals do not include PI as a treatment component (van Straten et al., 2018). Whether the addition of PI could add efficacy to CBT-I is currently unknown. Future research could explore the notion of combining PI with CBT-I to explore potential additive effects, but also whether there are subgroups of patients who benefit more from PI.

Madam Speaker: Are Female Presenters Treated Worse in Econ Seminars? The evidence seems statistically weak & conceptually inconclusive

Madam Speaker: Are Female Presenters Treated Worse in Econ Seminars? Uri Simonsohn. Data Colada, April 30, 2021. http://datacolada.org/96

A recent NBER paper titled "Gender and the Dynamics of Economics Seminars" (https://www.bipartisanalliance.com/2021/02/economics-seminars-women-presenters-are.html) reports analyses of audience questions asked during 462 economics seminars, concluding that

“women are asked more questions . . . and the questions asked of women are more likely to be patronizing or hostile . . . suggest[ing] yet another potential explanation for their under-representation at senior levels within the economics profession” (abstract)

In this post I explain why my interpretation of the data is different.

My prior, before reading this paper, was that women were probably treated worse in seminars, especially in economics. But, after reading this paper I am less inclined to believe that.

[...]

Do Female Speakers Get More Antagonistic Questions?

Another result highlighted in the abstract is that the questions female speakers get “are more likely to be patronizing or hostile”.

Unlike the optimal number of total questions, the optimal number of hostile and patronizing questions is zero. So noticeable differences in hostility are easier to interpret.

But the evidence behind those claims seems insufficiently clear, in my opinion, to be interpretable, let alone actionable. Specifically, the evidence is:

*  Statistically weak. The estimates are arguably small in magnitude (e.g., women get 0.1 extra hostile questions on average), and evidentially weak (the patronizing difference is p=.1, the hostile difference p=.02). Moreover, these two results were selected post-hoc from a larger set of measures collected, and the rest were not significant (e.g., whether questions were critical, disruptive, or fair). Statistically at least, this is not strong evidence against the null that all observed differences are caused by chance.

*  Conceptually inconclusive. Other estimates in the paper are conceptually inconsistent with the conclusion that female speakers receive worse treatment. For example, they get directionally fewer “criticism” questions than do male speakers (see Table 6). While this result is p > .1, so it does not rule out a zero difference, the estimate is precise enough to rule out that women get ¼ of an additional critical question per talk (see the sketch below). Women also get an additional 1.7 clarification questions (p < .01), and half an additional suggestion (p < .1). Personally I like getting these kinds of questions, as they often signal audience engagement and can, of course, be useful.
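As a minimal illustration of the “precise enough to rule out” logic (the post does not report the standard error, so the estimate and SE below are invented for illustration):

```python
from scipy import stats

# Hypothetical: female speakers get 0.10 fewer "criticism" questions per talk
est, se = -0.10, 0.07  # invented numbers; the paper's Table 6 has the real ones
lo, hi = stats.norm.interval(0.95, loc=est, scale=se)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # roughly [-0.24, 0.04]
# The interval contains 0 (consistent with p > .1), but it excludes +0.25,
# so an extra quarter of a critical question per talk can be ruled out.
```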

---

More at the original link

Migration during the last 500 years induced differences in contemporary health outcomes: due to vitamin D deficiency, migration from high-UV-radiation regions to low-radiation ones takes a toll on health

Historical migration and contemporary health. Thomas Barnebeck Andersen, Carl-Johan Dalgaard, Christian Volmer Skovsgaard, Pablo Selaya. Oxford Economic Papers, Volume 73, Issue 3, July 2021, Pages 955–981, https://doi.org/10.1093/oep/gpaa047

Abstract: We argue that migration during the last 500 years induced differences in contemporary health outcomes. The theory behind our analysis builds on three physiological facts. First, vitamin D deficiency is directly associated with higher risk of all-cause mortality. Second, the ability of humans to synthesize vitamin D from sunlight (UV-R) declines with skin pigmentation. Third, skin pigmentation is the result of an evolutionary compromise between higher risk of vitamin D deficiency and lower risk of skin cancer. When individuals from high UV-R regions migrate to low UV-R regions, the risk of vitamin D deficiency rises markedly. We develop a measure that allows us to empirically explore the aggregate health consequences of such migration in a long historical perspective. We find that the potential risk of vitamin D deficiency induced by migration during the last half millennium is a robust predictor of present-day aggregate health indicators.

JEL: I1 (Health); J1 (Demographic Economics); J15 (Economics of Minorities, Races, Indigenous Peoples, and Immigrants; Non-labor Discrimination)


5. Conclusion

We have examined whether a migration-induced imbalance between the intensity of skin pigmentation and ambient UV-R holds explanatory power vis-à-vis present-day global health differences. We find that it does. Consequently, our results suggest that low UV-R regions that have received substantial immigration from high UV-R regions experience lower life expectancy than would have been the case in the absence of such migration flows.

The underlying theory derives from the life sciences. Conditional on ambient UV-R, individuals with intense skin pigmentation (deriving from high ancestral UV-R exposure) are more susceptible to vitamin D deficiency, which is a leading cause of a range of afflictions that cause premature death. The contribution of the present study lies in exploring whether this theory holds explanatory power in the aggregate. The weight of the evidence presented above suggests it does.

Although the economic significance of our measure of the risk of vitamin D deficiency (if taken at face value) is relatively strong, it is also clear that its ability to account for cross-country variation in life expectancy is modest. However, if current movements of people continue, which to a large extent represent movements from ‘South to North’, much more variation is likely to become visible during the 21st century. As such, vitamin D deficiency may become an increasing public health issue in the years to come, at least in the absence of preventive public health measures.

We believe the present study could be usefully extended in the direction of studying within country migration. For example, Black et al. (2015) find that the Great Migration within the USA reduced the health of African Americans significantly. While the authors suggest that some of the impact may be linked to changes in the intake of alcohol and cigarette smoking, it is worth noting that migrants also experienced changes in the environment. For example, moving from Georgia to New York would imply a reduction in ambient UV-R of roughly 43%, implying in turn a considerable increase in the risk of vitamin D deficiency for an African American. Whether a vitamin D mechanism could be contributing to the decline in health outcomes in the aftermath of the Great Migration seems to be an interesting topic for future research.


Wednesday, August 18, 2021

When forming personality impressions from faces, people rely on features that resemble "frozen" emotional expressions

Which Facial Features Are Central in Impression Formation? Bastian Jaeger, Alex L. Jones. Social Psychological and Personality Science, August 17, 2021. https://doi.org/10.1177/19485506211034979

Abstract: Which facial characteristics do people rely on when forming personality impressions? Previous research has uncovered an array of facial features that influence people’s impressions. Even though some (classes of) features, such as resemblances to emotional expressions or facial width-to-height ratio (fWHR), play a central role in theories of social perception, their relative importance in impression formation remains unclear. Here, we model faces along a wide range of theoretically important dimensions and use machine learning techniques to test how well 28 features predict impressions of trustworthiness and dominance in a diverse set of 597 faces. In line with overgeneralization theory, emotion resemblances were most predictive of both traits. Other features that have received a lot of attention in the literature, such as fWHR, were relatively uninformative. Our results highlight the importance of modeling faces along a wide range of dimensions to elucidate their relative importance in impression formation.

Keywords: social perception, personality impressions, overgeneralization theory, emotional expressions, facial width-to-height ratio

Which facial characteristics do people rely on when forming impressions of others? Some facial features, such as resemblances to emotional expressions and fWHR, occupy a central role in theories of social perception (Todorov et al., 2008; Zebrowitz, 2017). However, it is not clear whether this focus is justified, as little is known about the relative importance of different characteristics. Faces can be modeled along many dimensions, and many facial features are correlated. Yet, prior work has mostly examined one feature or a few features in isolation. These approaches cannot provide strong evidence for the claim that people rely on certain facial features in impression formation, as it remains unclear whether people relied on the facial feature in question or on other, correlated ones. In short, even though studies have identified a long list of facial features that are correlated with impressions, the question of which facial features are actually central in impression formation remains largely unaddressed. Here, we used methods from machine learning (i.e., cross-validation, regularization) to estimate and compare the extent to which a wide range of facial features predict trustworthiness and dominance impressions for a large and demographically diverse set of faces. We tested facial characteristics that have been theorized to be important in impression formation (resemblances to emotional expressions, attractiveness, babyfacedness, familiarity, and fWHR; Geniole et al., 2014; Stirrat & Perrett, 2010; Zebrowitz, 2017). We also tested a large set of other facial characteristics that have received less attention or are often held constant in social perception studies, even though they might be important in impression formation (e.g., gender, race, age, eye size, lip fullness).
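A minimal sketch of the cross-validated, regularized approach the authors describe, using scikit-learn’s Elastic Net. The feature matrix and ratings below are random placeholders standing in for the 28 facial-feature scores and mean trait ratings; the specific pipeline choices are our assumptions, not the paper’s exact analysis:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_faces, n_features = 597, 28
X = rng.normal(size=(n_faces, n_features))    # placeholder facial-feature scores
y = 0.6 * X[:, 0] + rng.normal(size=n_faces)  # placeholder mean trait ratings

# Elastic Net combines L1 and L2 penalties; penalty strength and mixing
# ratio are chosen by internal cross-validation.
model = make_pipeline(StandardScaler(), ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))

# Out-of-sample predictive fit: the quantity compared across feature classes
r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f}")

# Relative feature importance: rank standardized coefficients by magnitude
model.fit(X, y)
coefs = model.named_steps["elasticnetcv"].coef_
print("most informative feature indices:", np.argsort(np.abs(coefs))[::-1][:5])
```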

When comparing different classes of facial features, we found that emotion resemblances were most predictive of both trustworthiness and dominance impressions, outperforming all other theory-driven models. When examining the importance of all 28 facial characteristics simultaneously, we found that perceptions of trustworthiness were best predicted by a face’s resemblance to a happy expression. Emotionally neutral faces were perceived as more trustworthy when facial features resembled a facial expression of happiness. Perceptions of dominance were best predicted by targets’ gender (with women being perceived as less dominant than men) and by resemblance to a facial expression of anger. Together, our results support the notion that resemblances to emotional expressions are central for explaining how people form personality impressions from facial features. Our findings are in line with overgeneralization theory (and the emotion overgeneralization hypothesis in particular; Todorov et al., 2008; Zebrowitz, 2017), which posits that personality impressions of faces are driven by an oversensitive emotion detection system: Due to their social relevance, people even perceive emotions (and associated personality traits) in emotionally neutral faces that structurally resemble emotional expressions.

Support for the importance of other facial characteristics evoked by overgeneralization theory (i.e., attractiveness, babyfacedness, and familiarity; Zebrowitz, 2012, 2017) was mixed. Facial attractiveness was the second-most informative predictor of trustworthiness impressions, whereas babyfacedness and familiarity were less informative. None of the three characteristics were among the most informative predictors of dominance impressions.

We also found that demographic factors (i.e., gender, age, and race)—which have received less attention as predictors of personality impressions—were in some instances among the most important predictors of impressions. This highlights potential problems associated with keeping features like gender and race constant when studying social perception. Certain features may guide impression formation when demographic characteristics do not vary, but they may be uninformative when more diagnostic cues such as demographic characteristics do vary.

A wealth of studies has examined the influence of fWHR on personality judgments (e.g., Geniole et al., 2014; Ormiston et al., 2017; Stirrat & Perrett, 2010). Yet, the current results suggest that fWHR is not an informative predictor of trustworthiness or dominance impressions. When comparing the predictive fit of fWHR to the four characteristics that form the basis of overgeneralization theory, fWHR emerged as the weakest predictor. When modeled alongside all other facial features that we included in our analyses, fWHR was again among the least informative predictors. Similar results were obtained in additional analyses when examining impressions of male and female targets separately and when all other variables that included some measurement of face length or width were omitted from analyses (see Supplemental Materials). Together, these findings suggest that the importance of fWHR for impression formation may have been overstated in previous studies. Previously observed associations between fWHR and personality impressions may have been due to the fact that people rely on facial features that are correlated with fWHR, but not on fWHR per se.

Interestingly, all seven classes of predictors showed better predictive accuracy for trustworthiness perceptions than for dominance perceptions. It has been suggested that emotion resemblances are particularly important for trustworthiness impressions, whereas morphological characteristics, such as fWHR, are more important for dominance impressions (Hehman et al., 2015). The current results are not in line with this notion and suggest that emotion resemblances are the most important determinant of both trustworthiness and dominance impressions. It should also be noted that even though emotion resemblances were the most important class of predictors, not all emotion resemblances were equally meaningful. Resemblance to a happy expression was the most important predictor of trustworthiness impressions, whereas resemblance to an angry expression was the most important predictor of dominance impressions.

Limitations and Future Directions

Despite the relatively good performance of some of our models, results also suggest that our list of relevant features was not exhaustive. Emotion resemblances explained 53% and 42% of the variance in trustworthiness and dominance perceptions, respectively. Even the optimized Elastic Net models explained around 68% of the variance, indicating that there are other important factors contributing to personality impressions. Other facial features that might show independent contributions to personality impressions include skin texture (Jaeger et al., 2018; A. L. Jones et al., 2012) and perceived weight (Holzleitner et al., 2019). Examining the role of additional predictors will show how generalizable the present results are, as the relative importance of facial features ultimately depends on the specific set of features that is modeled. In order to conclusively establish that certain facial features are central in impression formation (and that observed associations are not due to other, unmeasured dimensions), faces need to be modeled along all potentially meaningful dimensions. From a practical perspective, achieving this goal may be unfeasible at best and impossible at worst. Still, future work should strive to test the relative importance of different features by comparing them against large sets of other features that have been shown to predict impressions.

Future studies could also investigate characteristics of the perceiver which explain a nontrivial amount of variance in impressions (Hehman et al., 2019). Moreover, while the current set of faces was relatively large and diverse in terms of gender, age, and race, we only examined U.S. individuals who were photographed in a controlled lab setting. Future studies could test whether the current findings replicate when using more naturalistic images of individuals from different nationalities (Sutherland et al., 2013).

Episodic-like memory (what, where and when specific things happened) is preserved with age in cuttlefish, molluscs that lack a hippocampus, maybe due to reproductive pressure

Episodic-like memory is preserved with age in cuttlefish. Alexandra K. Schnell, Nicola S. Clayton, Roger T. Hanlon and Christelle Jozet-Alves. August 18 2021. https://doi.org/10.1098/rspb.2021.1052

Abstract: Episodic memory, remembering past experiences based on unique what–where–when components, declines during ageing in humans, as does episodic-like memory in non-human mammals. By contrast, semantic memory, remembering learnt knowledge without recalling unique what–where–when features, remains relatively intact with advancing age. The age-related decline in episodic memory likely stems from the deteriorating function of the hippocampus in the brain. Whether episodic memory can deteriorate with age in species that lack a hippocampus is unknown. Cuttlefish are molluscs that lack a hippocampus. We test both semantic-like and episodic-like memory in sub-adults and aged-adults nearing senescence (n = 6 per cohort). In the semantic-like memory task, cuttlefish had to learn that the location of a food resource was dependent on the time of day. Performance, measured as proportion of correct trials, was comparable across age groups. In the episodic-like memory task, cuttlefish had to solve a foraging task by retrieving what–where–when information about a past event with unique spatio-temporal features. In this task, performance was comparable across age groups; however, aged-adults reached the success criterion (8/10 correct choices in consecutive trials) significantly faster than sub-adults. Contrary to other animals, episodic-like memory is preserved in aged cuttlefish, suggesting that memory deterioration is delayed in this species.

Popular version: https://www.sciencedaily.com/releases/2021/08/210817193055.htm

"The old cuttlefish were just as good as the younger ones in the memory task -- in fact, many of the older ones did better in the test phase. We think this ability might help cuttlefish in the wild to remember who they mated with, so they don't go back to the same partner," said Schnell.

Cuttlefish only breed at the end of their life. The researchers think that by remembering who they mated with, where, and how long ago, cuttlefish can spread their genes widely by mating with as many partners as possible.

From 2019... In the U.S. and Côte d’Ivoire, highly educated people make decisions that are less consistent with the rational model while low-income respondents make decisions more consistent with the rational model

Are We All Predictably Irrational? An Experimental Analysis. John A. Doces & Amy Wolaver. Political Behavior, volume 43, pp. 1205–1226, Dec 18 2019. https://link.springer.com/article/10.1007/s11109-019-09579-0

Abstract: We examine the question of rationality, replicating two core experiments used to establish that people deviate from the rational actor model. Our analysis extends existing research to a developing country context. Based on our theoretical expectations, we test if respondents make decisions consistent with the rational actor framework. Experimental surveys were administered in Côte d’Ivoire and Ghana, two developing countries in West Africa, focusing on issues of risk aversion and framing. Findings indicate that respondents make decisions more consistent with the rational actor model than has been found in the developed world. Extending our analysis to test if the differences in responses are due to other demographic differences between the African samples and the United States, we replicated these experiments on a nationally representative analysis in the U.S., finding results primarily consistent with the seminal findings of irrationality. In the U.S. and Côte d’Ivoire, highly educated people make decisions that are less consistent with the rational model while low-income respondents make decisions more consistent with the rational model. The degree to which people are irrational thus is contextual, possibly western, and not nearly as universal as has been concluded.

Introduction

Are we all predictably irrational? Since the seminal work of Tversky and Kahneman (1974, 1981, 1986, 1991) and other behavioral economists (Akerlof and Kranton 2000; Ariely 2010; Camerer 2003; Thaler 1980), the usefulness of models assuming rationality has been questioned, if not entirely dismissed in some cases (Green and Shapiro 1994; Sen 1977). However, the vast majority of the empirical work establishing consistently irrational behaviors has been conducted on populations in the West, dubbed WEIRD for Western, educated, industrialized, rich and democratic (Henrich et al. 2010). In their review of the psychology literature, Henrich et al. (2010) indicate that 68% of subjects were from the United States, 96% from the Western industrialized world, with 80% of the Western sample being undergraduate students. If other populations behave differently than these groups, then the implications of the new models of behavior may not be as widely applicable as we thought.

There have been some forays examining deviations from the predictions based on the rational actor model across different populations, notably studies of the impact that poverty has on decision-making. While some studies establish ways in which poverty decreases cognitive ability through the additional stresses associated with living in poverty (Mani et al. 2013; Haushofer and Fehr 2014), others argue that the influence of poverty on decisions is related more to additional constraints and a constant presence of risk in the lives of the poor (Duflo 2006; Banerjee and Duflo 2007; Carvalho et al. 2016). There is a growing body of literature that establishes that the poor are less subject to some of the cognitive biases found by Tversky and Kahneman (Shah et al. 2015, 2018). Possible explanations for the differences in decision-making by the poor are that poverty may increase attention to costs, and/or it exposes one to more risk, which causes people to give more weight to current versus future outcomes.

To determine whether these predictable irrationalities are applicable in other parts of the world, we replicate some of the most important experiments conducted by Tversky and Kahneman (1981) in Côte d’Ivoire and Ghana, and compare these samples to those from Western populations. We find that respondents from Côte d’Ivoire and Ghana make decisions that are closer to the model of rationality than Westerners. Building on this key finding, we also examine the effects of individual characteristics on decision-making to determine whether there are systematic differences within these populations. Here, we find that most sub-groups, with some exceptions, make decisions that are relatively more consistent with the predictions of the rational model than has been found in prior research. Finally, to ascertain why the differences exist between the original results from 1981 and our data, we re-consider two of the original experiments in a nationally representative sample of American adults. This sample provides support for the 1981 results of systematic irrationality, with important exceptions, and helps to contrast our findings from West Africa that the rational actor model is most applicable in the developing world.

Our empirical results, in sum, suggest there is more merit to the rational choice paradigm than perhaps has been thought, and that existing studies concluding people are predictably irrational are overstated in a number of ways. This is an important finding with implications for several areas of academic scholarship. The rational actor model has served as the cornerstone assumption about the behavior of political actors, influencing research in political science on voter choice, foreign policy making, conflict, and international political economy amongst others (Bueno de Mesquita and Smith 2011; Mansfield et al. 2000; Powell 1991; Slantchev and Tarar 2011). Recent work has extended the paradigm to explicitly non-western contexts (Hollyer et al. 2015). Nevertheless, debates about its utility in political science have been especially spirited (Bueno de Mesquita and Morrow 1999; Walt 1999), with one enduring criticism being the lack of empirical support that people behave as the model assumes, a point which even supporters acknowledge (Kahler 1998; Snidal 2002). In economics, as well, the core mainstream model assumes rationality, with applications to the law (Posner 2014) and even addiction (Becker et al. 1991). By addressing the empirical underpinnings of rational choice, we help fill an important gap in our understanding of rationality and show that the model might be most relevant for non-western populations.
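To make concrete how departures from the rational actor model are detected in framing experiments like those replicated here: the same expected-value problem is posed in a gain frame and a loss frame, and under the rational model the distribution of choices should not depend on the frame. A minimal sketch of the corresponding test, with made-up counts (the numbers are hypothetical, not the paper's data):

```python
from scipy.stats import chi2_contingency

# Rows: gain frame, loss frame; columns: chose the sure option, chose the gamble.
# Under the rational actor model, choice shares should be independent of frame.
# Counts below are hypothetical.
table = [[120, 80],   # gain frame
         [60, 140]]   # loss frame

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p indicates a framing effect
```

In a sample behaving consistently with the rational model, the two rows would show similar proportions and the test would not reject independence.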


Uncommon case of complete loss of hunger following an isolated left insular stroke

Uncommon case of complete loss of hunger following an isolated left insular stroke. Benjamin Hébert-Seropian, Olivier Boucher, Didier Jutras-Aswad & Dang Khoa Nguyen. Neurocase: The Neural Basis of Cognition, Aug 16 2021. https://doi.org/10.1080/13554794.2021.1966044

Abstract: The insula has long been among the least understood regions of the human brain, in part due to its restricted accessibility. Mounting evidence suggests that the insula is a prominent player in gustatory, interoceptive, and emotional processing, and likely integrates these different functions to contribute to the homeostatic control of food intake. Here we report the case of a young adult patient who lost the subjective experience of hunger following an ischemic stroke localized in the posterior left insula. The loss of hunger was not attributable to medication, substance use, or a clinical disorder, and lasted for a period of 15 months. In line with the role attributed to the insula in gustation and interoception, we suggest that the insula integrates information about taste, interoception, and the hedonic value of food in the service of homeostatic regulation.

KEYWORDS: Hunger; appetite; insula; stroke; case report


Found some evidence that higher income is associated with less happiness, and no substantive benefit of higher household income after $35-40K in the US and after €14-18K in Germany (for happiness in daily life, not evaluations of life as a whole)

Kudrna, Laura, and Kostadin Kushlev. 2021. “Money Does Not Always Buy Happiness, but Are Richer People Less Happy in Their Daily Lives? It Depends on How You Analyze Income.” PsyArXiv. August 18. doi:10.31234/osf.io/4jvh5

Abstract: Do people who have more money feel happier during their daily activities? Some prior research has found no relationship between income and daily happiness when treating income as a continuous variable in OLS regressions, although results differ between studies. We re-analyzed existing data, treating household income as a categorical variable and using lowess and spline regressions to explore non-linearities. Our analyses reveal that these methodological decisions provide new insights into the relationship between income and happiness. We find some evidence that higher income is associated with less happiness and no substantive benefit to higher household income in the US after $35-40K and in Germany after €14-18K. Not all analytic approaches generate the same conclusions, which may explain discrepant results.
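The methodological point of the abstract, that lowess and spline fits can reveal non-linearities a single linear OLS slope hides, is easy to illustrate. A minimal sketch using statsmodels, with simulated data; the variable names and data-generating process are assumptions, not the authors' analysis code:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: daily happiness rises with income up to ~$40K, then flattens.
income = rng.uniform(5_000, 120_000, 2_000)
happiness = np.minimum(income, 40_000) / 8_000 + rng.normal(0, 1, income.size)

# Lowess: nonparametric smooth of happiness on income.
smooth = sm.nonparametric.lowess(happiness, income, frac=0.3)

# Linear spline with a single knot at $40K:
# happiness ~ b0 + b1*income + b2*max(income - 40000, 0);
# b2 close to -b1 indicates the slope flattens past the knot.
knot = 40_000
X = sm.add_constant(np.column_stack([income, np.maximum(income - knot, 0)]))
fit = sm.OLS(happiness, X).fit()
print(fit.params)  # intercept, slope below the knot, change in slope above it
```

A single linear OLS fit to the same data would return one positive slope and mask the plateau, which is the discrepancy the abstract attributes to analytic choices.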


From 2019... Greater male cognitive variability has implications for both tails of the distribution: Danish data (n = 1.3 million) show that twice as many boys as girls are diagnosed with intellectual disability

From 2019... Incidence Rates and Cumulative Incidences of the Full Spectrum of Diagnosed Mental Disorders in Childhood and Adolescence. Søren Dalsgaard et al. JAMA Psychiatry. 2020;77(2):155-164. Nov 20, 2019, doi:10.1001/jamapsychia

Key Points

Question: What are the age- and sex-specific incidence rates and cumulative incidences of the full spectrum of diagnosed mental disorders during childhood and adolescence?

Findings: In this nationwide cohort study of 1.3 million individuals in Denmark, the risk (cumulative incidence) of being diagnosed with a mental disorder before 18 years of age was 14.63% in girls and 15.51% in boys. Distinct age- and sex-specific patterns of occurrence were found across mental disorders in children and adolescents.

Meaning: These findings suggest that precise estimates of rates and risks of all mental disorders during childhood and adolescence are essential for future planning of services and care and for etiological research.


Abstract

Importance: Knowledge about the epidemiology of mental disorders in children and adolescents is essential for research and planning of health services. Surveys can provide prevalence rates, whereas population-based registers are instrumental to obtain precise estimates of incidence rates and risks.

Objective: To estimate age- and sex-specific incidence rates and risks of being diagnosed with any mental disorder during childhood and adolescence.

Design: This cohort study included all individuals born in Denmark from January 1, 1995, through December 31, 2016 (1.3 million), and followed up from birth until December 31, 2016, or the date of death, emigration, disappearance, or diagnosis of 1 of the mental disorders examined (14.4 million person-years of follow-up). Data were analyzed from September 14, 2018, through June 11, 2019.

Exposures: Age and sex.

Main Outcomes and Measures: Incidence rates and cumulative incidences of all mental disorders according to the ICD-10 Classification of Mental and Behavioral Disorders: Diagnostic Criteria for Research, diagnosed before 18 years of age during the study period.

Results: A total of 99 926 individuals (15.01%; 95% CI, 14.98%-15.17%), including 41 350 girls (14.63%; 95% CI, 14.48%-14.77%) and 58 576 boys (15.51%; 95% CI, 15.18%-15.84%), were diagnosed with a mental disorder before 18 years of age. Anxiety disorder was the most common diagnosis in girls (7.85%; 95% CI, 7.74%-7.97%); attention-deficit/hyperactivity disorder (ADHD) was the most common in boys (5.90%; 95% CI, 5.76%-6.03%). Girls had a higher risk than boys of schizophrenia (0.76% [95% CI, 0.72%-0.80%] vs 0.48% [95% CI, 0.39%-0.59%]), obsessive-compulsive disorder (0.96% [95% CI, 0.92%-1.00%] vs 0.63% [95% CI, 0.56%-0.72%]), and mood disorders (2.54% [95% CI, 2.47%-2.61%] vs 1.10% [95% CI, 0.84%-1.21%]). Incidence peaked earlier in boys than girls in ADHD (8 vs 17 years of age), intellectual disability (5 vs 14 years of age), and other developmental disorders (5 vs 16 years of age). The overall risk of being diagnosed with a mental disorder before 6 years of age was 2.13% (95% CI, 2.11%-2.16%) and was higher in boys (2.78% [95% CI, 2.44%-3.15%]) than in girls (1.45% [95% CI, 1.42%-1.49%]).

Conclusions and Relevance: This nationwide population-based cohort study provides a first comprehensive assessment of the incidence and risks of mental disorders in childhood and adolescence. By 18 years of age, 15.01% of children and adolescents in this study were diagnosed with a mental disorder. The incidence of several neurodevelopmental disorders peaked in late adolescence in girls, suggesting possible delayed detection. The distinct signatures of the different mental disorders with respect to sex and age may have important implications for service planning and etiological research.
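For readers unfamiliar with the two measures: an incidence rate divides new diagnoses by person-years at risk, while a cumulative incidence is the probability of being diagnosed by a given age. A back-of-the-envelope sketch using the abstract's headline figures (illustrative arithmetic only, not a reproduction of the paper's survival-analysis estimates):

```python
# Headline figures from the abstract.
cases = 99_926
person_years = 14_400_000
cohort = 1_300_000

rate_per_10k = cases / person_years * 10_000
print(f"crude incidence rate ~ {rate_per_10k:.1f} diagnoses per 10,000 person-years")

# The crude proportion diagnosed understates the cumulative incidence by age 18
# (15.01%) because most cohort members (born 1995-2016) had not yet reached
# 18 years of age by the end of follow-up; the paper's estimates account for
# this censoring.
print(f"crude proportion diagnosed ~ {cases / cohort:.2%}")
```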


Check also: Greater male variability is currently universal in internationally comparable assessments; some of this heterogeneity can be attributed to a species-universal mechanism or to some other social/cultural phenomenon

Sex differences in variability across nations in reading, mathematics and science: a meta-analytic extension of Baye and Monseur (2016). Helen Gray, Andrew Lyth, Catherine McKenna, Susan Stothard, Peter Tymms and Lee Copping. Large-scale Assessments in Education (An IEA-ETS Research Institute Journal), 2019, 7:2. https://www.bipartisanalliance.com/2019/03/greater-male-variability-is-currently.html
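The statistic behind greater-male-variability claims is typically the variance ratio, VR = s²_male / s²_female, with VR > 1 indicating greater male variability; tail ratios then show why a modest VR matters at both ends of the distribution, as in the intellectual-disability headline above. A minimal sketch under normality, with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def variance_ratio(male_scores, female_scores):
    # VR > 1 indicates greater male variability.
    return np.var(male_scores, ddof=1) / np.var(female_scores, ddof=1)

# Tail ratio under normality: equal means, males with a 10% larger SD
# (illustrative values, not estimates from the paper).
mean, sd_m, sd_f, cutoff_sd = 0.0, 1.10, 1.00, 2.0
low  = norm.cdf(-cutoff_sd, mean, sd_m) / norm.cdf(-cutoff_sd, mean, sd_f)
high = norm.sf(cutoff_sd, mean, sd_m) / norm.sf(cutoff_sd, mean, sd_f)
print(f"males per female below -2 SD: {low:.2f}, above +2 SD: {high:.2f}")
```

With equal means and a 10% larger male SD, roughly 1.5 males per female fall beyond 2 SD on either tail, so even a small variance ratio produces visibly skewed sex ratios at the extremes.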

Faecal transplants from young mice can enhance cognitive function in older animals

Microbiota from young mice counteracts selective age-associated behavioral deficits. Marcus Boehme et al. Nature Aging, volume 1, pages 666–676. Aug 9 2021. https://www.nature.com/articles/s43587-021-00093-9

Abstract: The gut microbiota is increasingly recognized as an important regulator of host immunity and brain health. The aging process yields dramatic alterations in the microbiota, which is linked to poorer health and frailty in elderly populations. However, there is limited evidence for a mechanistic role of the gut microbiota in brain health and neuroimmunity during aging processes. Therefore, we conducted fecal microbiota transplantation from either young (3–4 months) or old (19–20 months) donor mice into aged recipient mice (19–20 months). Transplant of a microbiota from young donors reversed aging-associated differences in peripheral and brain immunity, as well as the hippocampal metabolome and transcriptome of aging recipient mice. Finally, the young donor-derived microbiota attenuated selective age-associated impairments in cognitive behavior when transplanted into an aged host. Our results reveal that the microbiome may be a suitable therapeutic target to promote healthy aging.

Popular version: Faecal transplants from young mice can enhance cognitive function in older animals. https://www.nature.com/articles/d41586-021-02184-4