Saturday, September 11, 2021

Adoptive & biological families with 30-year-old offspring: Proportion of variance in IQ attributable to environmentally mediated effects of parental IQs was estimated at 0.01; heritability was estimated to be 0.42

Genetic and environmental contributions to IQ in adoptive and biological families with 30-year-old offspring. Emily A. Willoughby, Matt McGue, William G. Iacono, James J. Lee. Intelligence, Volume 88, September–October 2021, 101579. https://doi.org/10.1016/j.intell.2021.101579

Highlights

• Genetic and environmental sources of variance in IQ were estimated from 486 adoptive and biological families

• Families include 419 mothers, 201 fathers, 415 adopted and 347 biological fully-adult offspring (M age = 31.8 years; SD = 2.7)

• Proportion of variance in IQ attributable to environmentally mediated effects of parental IQs was estimated at 0.01 [95% CI 0.00, 0.02]

• Heritability was estimated to be 0.42 [95% CI 0.21, 0.64]

• Parent-offspring correlations for educational attainment polygenic scores show no evidence of adoption placement effect

Abstract: While adoption studies have provided key insights into the influence of the familial environment on IQ scores of adolescents and children, few have followed adopted offspring long past the time spent living in the family home. To improve confidence about the extent to which shared environment exerts enduring effects on IQ, we estimated genetic and environmental effects on adulthood IQ in a unique sample of 486 biological and adoptive families. These families, tested previously on measures of IQ when offspring averaged age 15, were assessed a second time nearly two decades later (M offspring age = 32 years). We estimated the proportions of the variance in IQ attributable to environmentally mediated effects of parental IQs, sibling-specific shared environment, and gene-environment covariance to be 0.01 [95% CI 0.00, 0.02], 0.04 [95% CI 0.00, 0.15], and 0.03 [95% CI 0.00, 0.07] respectively; these components jointly accounted for 8% of the IQ variance in adulthood. The heritability was estimated to be 0.42 [95% CI 0.21, 0.64]. Together, these findings provide further evidence for the predominance of genetic influences on adult intelligence over any other systematic source of variation.
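As a quick arithmetic check on the abstract's estimates, the three environmental/covariance components and the heritability can be summed to show how much IQ variance remains for nonshared environment and error. This is a sketch using point estimates only (confidence intervals ignored), not part of the paper's analysis:

```python
# Point estimates from the abstract (confidence intervals ignored).
components = {
    "environmentally mediated parental IQ": 0.01,
    "sibling-specific shared environment": 0.04,
    "gene-environment covariance": 0.03,
}
heritability = 0.42

env_total = sum(components.values())  # the "8%" jointly accounted for... minus heritability
explained = env_total + heritability  # all modeled systematic sources
residual = 1.0 - explained            # nonshared environment + measurement error

print(f"environment + covariance components: {env_total:.2f}")
print(f"total modeled variance: {explained:.2f}")
print(f"residual variance: {residual:.2f}")
```

On these point estimates, the modeled sources account for half of the adult IQ variance, leaving the other half to nonshared environment and measurement error.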

Keywords: Intelligence, Adoption, Heritability, Vocabulary, Polygenic scores


Asking the mayor's office how to start a business: Chinese & German cities responded to 36-37% of requests; American cities responded to 22%; Chinese cities were more responsive to requests from men

Köhler, Ekkehard A. and Matsusaka, John G. and Wu, Yanhui, Street-Level Responsiveness of City Governments in China, Germany, and the United States (August 19, 2021). SSRN: http://dx.doi.org/10.2139/ssrn.3907862

Abstract: This paper presents evidence from parallel field experiments in China, Germany, and the United States. We contacted the mayor’s office in over 6,000 cities asking for information about procedures for starting a new business. Chinese and German cities responded to 36-37 percent of requests; American cities responded to only 22 percent of requests. We randomly varied the text of the request to identify factors that affect the likelihood of receiving a response. American and German cities were more responsive to requests from citizens than foreigners; Chinese cities did not discriminate on this basis. Chinese cities were more responsive to requests from men than women; German cities did not discriminate on this basis and American cities had a slight bias in favor of women. Cities in all three countries were more responsive to requests associated with starting a construction business than a green business, but especially Chinese cities. Chinese cities were more responsive when the mayor was being considered for promotion than after a promotion decision, suggesting the importance of promotion incentives in China, but low responsiveness to green investment suggests limited incentives for environmental improvement. We argue that the response patterns are consistent with simple political economy theories of democracy and autocracy.

Keywords: Responsiveness, bureaucracy, democracy, autocracy, environment, discrimination, China, Germany

JEL Classification: H7, H83, O38, P5, R5


Friday, September 10, 2021

Older Taiwanese people had slower processing speeds than their US peers; younger Taiwanese people had faster processing speeds than their US peers; physiological, environmental, and genetic factors contribute

Development of processing speed in the United States and Taiwan: A brief report. Hsinyi Chen, Yung-Kun Liao, Richard Lynn. Personality and Individual Differences, Volume 184, January 2022, 111227. https://doi.org/10.1016/j.paid.2021.111227

Highlights

• We compared processing speed using collective US and Taiwanese data.

• Older Taiwanese people had slower processing speeds than their US peers.

• Younger Taiwanese people had faster processing speeds than their US peers.

• Physiological, environmental, and genetic factors contribute to these differences.

Abstract: Processing speed is a critical component of intelligence, and an important cognitive ability in the facilitation of human learning. Previous studies have examined the development of processing speed across cultures. This study investigated processing speed performance using collective data from standardized Wechsler scales in the United States and Taiwan from the past two decades. Drawing comparisons between national norms from ages 4 to 80 years, the results revealed that older people in Taiwan recorded slower processing speeds than their US counterparts. Conversely, younger people in Taiwan recorded faster processing speeds than their US counterparts. This evidence shows that physiological, environmental, and genetic factors contribute to group differences in the development of processing speed.

Keywords: Age, Cross-cultural comparison, Intelligence, Processing speed, Wechsler tests


Helping behavior, bystander intervention, violence, emergency, danger, systematic video observation

Does Danger Level Affect Bystander Intervention in Real-Life Conflicts? Evidence From CCTV Footage. Marie Rosenkrantz Lindegaard et al. Social Psychological and Personality Science, September 9, 2021. https://doi.org/10.1177/19485506211042683

Abstract: In real-life violence, bystanders can take an active role in de-escalating conflict and helping others. Recent meta-analytical evidence of experimental studies suggests that elevated danger levels in conflicts facilitate bystander intervention. However, this finding may lack ecological validity because ethical concerns prohibit exposing participants to potentially harmful situations. Using an ecologically valid method, based on an analysis of 80 interpersonal conflicts unobtrusively recorded by public surveillance cameras, the present study confirms that danger is positively associated with bystander intervention. In the presence of danger, bystanders were 19 times more likely to intervene than in the absence of danger. The study extends this knowledge by finding that incremental changes in the severity level of the danger (low, medium, and high) were not associated with bystander intervention. These findings confirm the importance of further investigating the role of danger for bystander intervention, in larger samples, and involving multiple types of real-life emergencies.

Keywords: helping behavior, bystander intervention, violence, aggression, emergency, danger, systematic video observation

The results provide support for a discrete effect of danger on bystander intervention. The odds of bystander intervention are 19 times higher when conflict parties display targeted aggression than when they do not. This result is in line with theories of altruism and prosocial behavior that suggest that increases in potential harm motivate bystander interventions. It is also in line with studies suggesting that the urgency of the need for help facilitates intervention. However, we also found that the aggression intensity level neither facilitated nor deterred bystander intervention. Serious forms of aggression were not more likely to provoke bystander intervention than minor forms of aggression. This finding runs counter to the idea that increased danger boosts the motivation to intervene because nonintervention might be experienced as too high a cost in serious danger situations. It also runs counter to the idea that increased danger reduces the bystanders’ motivation to intervene for fear of being harmed themselves in the act of intervention. A potential explanation for the absence of an overall aggression-intensity effect in our findings is that the two mechanisms that stimulate and deter intervention in response to danger might cancel out. According to this argument, increasing danger raises the benefits of intervention by reducing victim harm, while simultaneously raising the costs of intervention by increasing bystander harm. In nonexperimental conflict situations, danger affects the risks for victims and bystanders alike. Therefore, our observational data cannot disentangle both mechanisms.
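The "19 times more likely" figure is an odds ratio, and how much it moves the probability of intervention depends on the baseline rate. A minimal sketch of the odds-to-probability conversion (the baseline probabilities below are hypothetical, not values from the paper):

```python
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Convert a baseline probability to the probability implied by an odds ratio."""
    odds = p_baseline / (1.0 - p_baseline)   # probability -> odds
    new_odds = odds * odds_ratio             # apply the odds ratio
    return new_odds / (1.0 + new_odds)       # odds -> probability

OR_DANGER = 19.0  # odds of intervention with vs. without targeted aggression

# Hypothetical baseline intervention probabilities, for illustration only
for p0 in (0.05, 0.20, 0.50):
    p1 = apply_odds_ratio(p0, OR_DANGER)
    print(f"baseline {p0:.2f} -> with danger {p1:.2f}")
```

Note that the same odds ratio implies very different absolute shifts: a 5% baseline rises to about 50%, while a 50% baseline rises to 95%.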
Another, less substantive but possible, explanation is that our observational design lacks the statistical power to identify the effects of low and high aggression levels because these levels occur infrequently in the data (eight intervals with low aggression and seven intervals with high aggression), whereas medium aggression is much more common (45 instances of medium aggression). A larger sample size would be needed to identify potential effects of low and high aggression levels and also to demonstrate differences in the effect sizes of all three aggression levels.

Our within-person design rules out all stable between-person and between-situation confounders as explanations for these mixed results, but unobserved time-varying factors, such as changes in the communication of urgency, could play a role. We found no moderating effect of the number of bystanders present, preexisting social relationships, or gender on the effect of danger on bystander intervention. In Amsterdam, interveners responded more strongly to variation in aggression level than in Cape Town. Future studies should consider the effect size of danger compared to stable personal and situational characteristics of conflict situations, for example, by evaluating the extent to which bystander intervention is driven by dynamic or stable factors. They should also consider whether danger has a different effect on different kinds of individuals (e.g., do people with low self-control vs. high self-control respond similarly to danger?) and in different kinds of situations (e.g., does danger also facilitate bystander intervention in robberies and sexual assaults?).

A limitation of our aggression-level measures is that they are objective physical actions and exclude verbal communications. Obviously, bystanders respond to subjective impressions, and their actions may be driven by expectations based on threats or other verbal expressions that we could not capture and thus could not code as aggression. Future studies might improve on the measurement of aggression by using footage that includes sound, for example, from body-worn cameras, and by considering communication of urgency other than aggression. Another limitation of our study is that we operationalized bystander intervention as physical actions aimed at stopping the conflict, while bystanders engage in nonphysical intervention behavior too, including more indirect ways of intervening, such as phoning the police. Future work could also improve our test of the effect of danger on bystander intervention by including more high-intensity cases (in the current study, we only had seven incidents in which the highest aggression level was observed), for example, those with visible injuries or use of weapons. Future studies should consider the role of danger for various types of bystander intervention behavior.

In summary, we conclude that bystanders are more likely to intervene when danger is present than when danger is absent but that there is not enough evidence to conclude that the intensity of the danger makes their intervention either more or less likely.

Centenarians: 31.4% expressed willingness to live longer, 30.6% did not, & 38% presented no clear positioning; annoyance, uselessness, loss of meaning, disconnection, & loneliness were common justifications for reluctance

To Live or Die: What to Wish at 100 Years and Older. Lia Araújo et al. Front. Psychol., September 10 2021. https://doi.org/10.3389/fpsyg.2021.726621

Abstract: Previous research has shown that will to live is a strong predictor for survival among older people, irrespective of age, gender, and comorbidities. However, research on whether life at age 100 is perceived as worth living is limited. The available literature has presented evidence for good levels of positive attitudes and life satisfaction at such an advanced age, but it has also suggested that a longing for death is common. This study aimed to add to the existing data on this matter by exploring centenarians' will to live and the associated factors. The sample comprised 121 centenarians (mean age, 101 years; SD, 1.63 years), 19 (15.7%) of whom were males, from two centenarian studies (PT100). Answers to open questions were analyzed to identify the centenarians' will to live and the reasons behind it. Three groups were created (willing to live longer, not willing to live longer, no clear positioning) and further analyzed in terms of sociodemographic characteristics, health status, social functioning, and well-being. Of the total sample, 31.4% expressed willingness to live longer, 30.6% did not, and 38% presented no clear positioning. The presence of the Catholic religion (God) was referred to by centenarians in all three groups. Annoyance, uselessness, loss of meaning, disconnection, and loneliness were the most common justifications for being reluctant to live longer. Positive valuation of life and good self-rated health, followed by having a confidant and reduced pain frequency, were the factors associated with being willing to live longer. The results of the study contribute to the understanding of the psychological functioning of individuals with exceptional longevity, particularly concerning the factors behind willingness to live at such an advanced age.

Discussion

Centenarians are an elite group, significantly exceeding the average life expectancy. This study explored the will to live and associated factors in a sample of these long-lived individuals by considering both quantitative indicators and qualitative data. The proportions of participants willing and unwilling to live longer were similar (roughly 31% each) but lower than the proportion without a clear positioning (38%). Compared with the findings of studies of preferred life expectancy that also considered a non-response group, this was a very high percentage. For instance, in a study of 1,631 younger and middle-aged adults, Bowen and Skirbekk (2017) found that 15.9% of the sample did not clarify their preferred life expectancy. However, due to the lack of studies similar to the present one, whether the greater percentage was related to the centenarians' characteristics or the methodology of the study cannot be determined. Nevertheless, this group may have represented a stoic mindset in which individuals express a valuation of life per se and “as it comes”, with discomfort or unwillingness to reflect about lifetime extension (Lang and Rupprecht, 2019).

Still, the qualitative exploration of this group's answers showed that almost everyone justified their lack of answer/positioning by mentioning God, stating that their remaining time to live was a matter that was not in “their hands” (i.e., one they could not control). This follows the idea that a sense of control through the sacred may come when life seems out of control (Wong et al., 2014). Although a sense of control is recognized as an important source of human life-strength, individuals who accept that declining control over environment comes with aging and focus on their ability to control their own internal states and behaviors demonstrate a more successful adjustment to aging (Hyer et al., 2011). Indeed, this group presented a satisfaction with life score very close to the group reporting willingness to live longer.

Regarding the reference to God, which was also present in the other two groups, religion and spirituality play an important role in the lives of older adults, as they help older people find meaning in later life (Frankl, 1963; Atchley, 2009; Wong et al., 2018) and are thus associated with how long one desires to live (Lang and Rupprecht, 2019). Different studies focusing on the centenarian population have confirmed the positive impact of religion and spirituality on well-being, which may be even more significant since this age group may fail to derive basic resources (Bishop, 2011). Archert et al. (2005) found that religiosity was one of the major themes that emerged from a qualitative analysis of adaptation and coping in the lives of centenarians; when asked about the most important thing in their lives, 58% of the respondents mentioned church and/or God. Interestingly, a female centenarian shared a sentence very similar to one of the participants in the present study, arguing that the future is held in God's hands (Archert et al., 2005). Furthermore, Manning et al. (2012) found that centenarians place considerable importance on divine support in their lives. That study found an interconnectedness of spirituality with religion for centenarians; in other words, these two constructs overlap. Through a phenomenological examination of life satisfaction and compensatory strategies in Jewish-Canadian centenarians, Milevsky (2021) found that compensatory, cultural, and religious processes were imbued in several of the themes, such as “Maintaining connections with family, friends, and God” (p. 101) and “Remaining positive and kind” (p. 104). Individuals can be expected to have an intrinsic need for hope that goes beyond this life and for faith in something or someone, whether religiously oriented or not (Saarelainen et al., 2020).
But the strong presence of religion in centenarians’ discourses found in the present study may echo the importance of the church and the military in shaping the lives of this older Portuguese generation (Birmingham, 2003), as well as the overall impact of religious beliefs, practices, and culture (Boerner et al., 2019).

The quantitative results of the present study revealed the significant contributions of health (pain frequency and SRH), social functioning (friends as confidants), and well-being (positive valuation of life). These findings confirmed that will to live is the summation of individuals' biopsychosociospiritual dimensions (Bornet et al., 2021) and depends on both external (e.g., social networks) and intra-personal factors (e.g., health and self-perceptions; Lawton et al., 1999). No sociodemographic variable was found to be relevant, which agreed with the findings of a scoping review on will to live conducted by Bornet et al. (2021). Positive VOL had the strongest significant association with will to live, as expected. Despite the great importance attributed to variables like physical and health functionality in longevity and quality of life (Rowe and Kahn, 1997), some studies have emphasized the importance of psychological functioning and well-being, especially for very old individuals. For instance, a comparison of components of the World Health Organization's (WHO) active aging model by age group (< 75 years vs. ≥ 75 years) revealed the major relevance of the psychological component for the older age group (Paúl et al., 2017). Likewise, the operationalization of the successful aging model (Rowe and Kahn, 1997) in centenarians revealed the importance of subjective appraisals and psychological variables (Araújo et al., 2016).

The fact that the number of health conditions and levels of fatigue and functional capacity presented no significant association with will to live in this study supported the argument that individuals with problems related to physical health and functioning may be able to maintain subjective well-being. This agreed with the paradox of well-being, i.e., reporting experiences of positive psychological functioning despite decline in physical health, as evidence of resilience in old age (Wiesmann and Hannich, 2014). Interestingly, pain and SRH, the two health factors with a significant impact on will to live, have also been associated with resilience in centenarians (Amaral et al., 2020). Gu and Feng (2018) argued that higher resilience could yield greater protection of SRH and life satisfaction among centenarians compared with younger elderly groups. Thus, resilience may also be associated with will to live, as identified in younger groups (Bornet et al., 2021). This was supported by the qualitative analysis: aspects related to health, such as dependency, sense of burden, and fear of suffering, were referred to less often than uselessness, annoyance, and loss of meaning by centenarians who were unwilling to live longer.

In advanced age, some aspects of purpose in life are more difficult to fulfill, such as having goals for the far future or feeling useful (Pinquart, 2002). This study found that these variables continue to be very important, specifically for (un)willingness to live. Conversely, will to live has strong, direct effects on well-being, including life and aging satisfaction (Jopp et al., 2017). The large influence of positive VOL on will to live is consistent with Lawton et al.'s (1999) assumption that years of desired life are mediated by VOL, in an age group in which this issue had not been studied. Both existential beliefs and perceived control were higher in the group willing to live longer—that is, they represented important aspects of centenarians' reasons for living, even under difficult conditions of functional impairment and disease. This confirmed that a “possible mechanism for the potency of VOL as a determinant of Years of Desired Life is the ability of people to adjust their standards for what is acceptable in everyday life in accord with changes in both their personal characteristics and the circumstances under which they live” (Lawton et al., 2001, p. 25).

Social factors also seem to contribute to will to live (Bornet et al., 2021) and are an important source of meaning in life among older people (Saarelainen et al., 2020). Social relationships and support may be particularly important for centenarians (Boerner et al., 2016). The loss of friends and relatives that is typical of these long-lived individuals reduces opportunities for (intra- and intergenerational) relationships, which can make the remaining social contacts even more significant (Randall et al., 2010). In the present study, the only social variable significantly associated with will to live was the number of friends as confidants. Thus, support—rather than size—may be the most significant aspect of social networks. Previous studies of the oldest old have shown the importance of having a close friend for independence (Pin et al., 2005) and well-being (Johnson and Barer, 1997). Indeed, maintaining a confidant is suggested as a strategy that centenarians use to compensate for losses and increase well-being (Araújo and Ribeiro, 2012). In the qualitative data, (social) disconnection and loneliness emerged as important motives for losing the will to live, as did feeling like a burden.

This last reason, which is typically referred to in studies on will to live (Bornet et al., 2021) since it is related to the high burden of caring for a person at the end of life, shows the need to acknowledge those who are supporting centenarians. The few studies that investigated will to live and end-of-life issues in centenarians' caregivers and offspring indicated concerns of family members that they would become a burden for caregivers and would face the unavailability of family support if they became centenarians (Brandão et al., 2019). The fact that caregivers value this aspect so much reinforces their potential burdens and needs. If being willing to live at 100 years old depends on the availability of social support, more must be invested in these caregiving offspring, who are confronted with their own advanced age and the burdens of their parents' very old age (Eggert et al., 2020).

Despite the richness of this study's findings, some limitations should be considered. The cross-sectional design of the study prevented the ability to determine the direction of the relationships between variables, which could be of particular interest in this topic since willingness to live could be a predictor of well-being. Furthermore, those who shared their opinion (i.e., mostly individuals with mild or no cognitive impairment) represented only a part of the original sample, so these findings should not be generalized to the centenarian population.

Higher nut intake was associated with reductions in body weight and body fat; current evidence suggests the concern that nut consumption contributes to increased adiposity is unwarranted

Are fatty nuts a weighty concern? A systematic review and meta-analysis and dose–response meta-regression of prospective cohorts and randomized controlled trials. Stephanie K. Nishi, Effie Viguiliouk, Sonia Blanco Mejia, Cyril W. C. Kendall, Richard P. Bazinet, Anthony J. Hanley, Elena M. Comelli, Jordi Salas Salvadó, David J. A. Jenkins, John L. Sievenpiper. Obesity Reviews, September 8 2021. https://doi.org/10.1111/obr.13330

Summary: Nuts are recommended for cardiovascular health, yet concerns remain that nuts may contribute to weight gain due to their high energy density. A systematic review and meta-analysis of prospective cohorts and randomized controlled trials (RCTs) was conducted to update the evidence, provide a dose–response analysis, and assess differences by nut type, comparator, and other factors in subgroup analyses. MEDLINE, EMBASE, and Cochrane were searched, along with manual searches. Data from eligible studies were pooled using meta-analysis methods. Interstudy heterogeneity was assessed (Cochran Q statistic) and quantified (I2 statistic). Certainty of the evidence was assessed by Grading of Recommendations Assessment, Development, and Evaluation (GRADE). Six prospective cohort studies (7 unique cohorts, n = 569,910) and 86 RCTs (114 comparisons, n = 5873) met eligibility criteria. Nuts were associated with a lower incidence of overweight/obesity (RR 0.93 [95% CI 0.88 to 0.98], P < 0.001, “moderate” certainty of evidence) in prospective cohorts. RCTs showed no adverse effect of nuts on body weight (MD 0.09 kg [95% CI −0.09 to 0.27 kg], P < 0.001, “high” certainty of evidence). Meta-regression showed that higher nut intake was associated with reductions in body weight and body fat. Current evidence suggests that the concern that nut consumption contributes to increased adiposity is unwarranted.
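The summary's heterogeneity statistics can be illustrated with a short sketch: Cochran's Q is the weighted sum of squared deviations of study effects from the inverse-variance pooled estimate, and I² = max(0, (Q − df)/Q) expresses the share of variability beyond chance. The effect sizes and standard errors below are invented for illustration, not data from the review:

```python
# Fixed-effect inverse-variance pooling with Cochran's Q and I².
# Hypothetical mean differences in body weight (kg) and their SEs.
effects = [0.10, -0.05, 0.30, 0.02, -0.12]
ses = [0.10, 0.08, 0.12, 0.09, 0.11]

weights = [1.0 / se**2 for se in ses]                       # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0               # I² statistic

print(f"pooled MD = {pooled:.3f} kg, Q = {q:.2f} (df = {df}), I² = {100 * i2:.0f}%")
```

With these made-up inputs, roughly half the observed variability exceeds what chance alone would produce, illustrating why reviews quantify heterogeneity rather than report only the pooled estimate.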

4 DISCUSSION

The present systematic review and meta-analysis of nut consumption and adiposity, involving six prospective cohort studies and 86 RCTs (114 trial comparisons), showed no increased risk of overweight/obesity and no increase in the other measures of adiposity studied in adults.

Based on the long-term findings from the prospective cohort studies, a significant inverse association was observed across the outcomes assessed. These findings align with those of the systematic review of prospective studies by Eslami and colleagues,35 suggesting that nut consumption may have a protective effect on the risk of adiposity accumulation. This is further supported by the results of the present aggregate analyses of the RCTs, which showed a lack of a causal effect of nut consumption on the reported measures of adiposity. Previous systematic reviews and meta-analyses of trials involved differing inclusion and exclusion criteria yet showed similar findings with regard to a lack of effect of nut consumption on body weight, BMI, or waist circumference.36,37 The lack of effect of nut consumption on waist circumference is further supported by Blanco Mejia and colleagues in their systematic review and meta-analysis assessing nuts and metabolic syndrome.3

Significant heterogeneity did exist in the current analysis. While this heterogeneity could not be adequately assessed categorically for the cohorts, as there were too few cohort studies, subgroup analyses and meta-regression of the trials identified potential sources of heterogeneity. For the trials, as in previous publications,36,37 energy balance was identified as a potential source of heterogeneity. However, in the current analysis, incorporating nuts into a dietary pattern involving an overall negative energy balance, compared with a negative energy balance without nuts, favoured nuts with respect to not increasing body weight, BMI, or waist-to-hip ratio. Inclusion of nuts as part of a dietary pattern without concern for increased body weight or adiposity measures is further supported by findings from the PREDIMED trial, in which inclusion of nuts as part of a Mediterranean dietary pattern slightly reduced body weight and adiposity measures, with no significant differences compared with the Mediterranean dietary pattern with olive oil or the low-fat dietary pattern.144 A sensitivity analysis involving the inclusion of the PREDIMED trial did not significantly affect the magnitude or direction of the current findings. In addition to energy balance, nut dose was detected as a potential effect modifier of body weight and body fat, with greater reductions observed at higher nut doses. In categorical analyses, nut doses ≥45.5 g/day indicated lower adiposity measures compared with lower doses.
As nut doses of 1 to 1.5 ounces (~28 to 42.5 g) per day are often noted in dietary guidelines, as well as in the FDA qualified health claim for coronary heart disease risk reduction, this suggests that the proviso typically attached to nut recommendations, and stated at the end of the applicable qualified health claims (“see nutrition information for fat [and calorie] content”), with its implied message that foods high in fat and calories lead to increased adiposity, may be unwarranted.17-19 Likewise, continuous linear meta-regression identified dose-dependent relationships of nut consumption with both body weight and body fat, with nut dose inversely correlated with body weight and fat. However, significant departures from linearity were observed for BMI, waist circumference, and waist-to-hip ratio, where the maximum protective dose appeared to be around 50 g/day based on waist-to-hip ratio. The waist-to-hip ratio result may, however, have been confounded by the nonsignificant positive correlation observed between waist circumference and nut consumption. This positive association between nut consumption and waist circumference differs from findings in the literature, where nut and seed consumption has been associated with significantly decreased pericardial fat and trends toward decreased visceral fat,145 and monounsaturated fat intake, which is prevalent in nuts, has been shown to prevent central fat redistribution compared with carbohydrate intake.146

4.1 Strengths and limitations

Strengths of the present systematic review and meta-analysis include its comprehensive design, comprising both prospective cohort studies and RCTs, using the GRADE approach to evaluate the certainty of evidence. The prospective cohort studies provide assessment of nut consumption over the long term in a large sample of participants in free-living conditions in relation to adiposity. The design of RCTs provides the best protection against bias; there were also a substantial number of trials identified (106 trial comparisons) for the primary outcome of body weight; the median follow-up period was 8 weeks, which allows for the assessment of a moderate duration of intervention. In addition, the meta-regression and subgroup analyses provide further insight as to various factors that have previously been hypothesized to influence the impact of nut consumption on adiposity.

These analyses are not without limitations. For the prospective cohort studies, we downgraded the certainty of the evidence for serious inconsistency in the estimates of body weight change across studies, as there was evidence of unexplained heterogeneity (92%). The inconsistency may have been related to measurement error: intake of nuts was not measured repeatedly, the food frequency questionnaire used was not specifically validated for nut intake, and adiposity measures were mainly self-reported by participants. Risk of bias was also noted for body weight change, as the samples consisted primarily of well-educated individuals, many of whom were health professionals, including university graduates from SUN and health professionals recruited from NHS, NHS II, and HPFS; the findings thus may not be generalizable to other populations.

For the RCTs, we downgraded the certainty of evidence for serious inconsistency in the estimates due to unexplained heterogeneity in all the outcomes assessed except BMI. Subgroup analyses indicated potential sources of heterogeneity; however, significant subgroup differences were often observed when the covariate was unevenly distributed, and the differences in treatment effects between subgroups are unlikely to alter clinical decisions.

Weighing these strengths and limitations using GRADE, the certainty of evidence ranged from "very low" to "high." One reason "very low" certainty was observed is that the GRADE approach starts observational studies at "low" certainty. Overall, the prospective cohort studies showed mostly "moderate" certainty of evidence, while the RCTs were split evenly between "high" and "moderate."

4.2 Potential mechanisms of action

There are several biological mechanisms which may explain the association, or more specifically the lack of adverse association, observed between nut consumption and overweight/obesity risk and other measures of adiposity, including: (1) unsaturated fatty acid content, (2) satiating effect, and (3) physical structure, each related to the bioavailability of nuts when consumed. Nuts are rich in unsaturated fatty acids (monounsaturated fatty acids [MUFAs] and polyunsaturated fatty acids [PUFAs]), which are suggested to be more readily oxidized147 and to have a greater thermogenic effect148 than saturated fatty acids, leading to less fat accumulation. Nuts are also rich in protein and fiber, dietary components associated with increased satiety.149-151 In addition to the protein and dietary fiber content of nuts, their physical structure may also contribute to their satiating effect, since the mastication process involved in mechanically reducing nuts to a particle size small enough to swallow activates signaling systems that may modify appetite sensations.152 The physical structure of nuts may also contribute to fat malabsorption, because the fat in nuts is contained within walled cellular structures that are incompletely masticated and/or digested.153-156 Thus, owing to these mechanisms of decreased bioavailability, the Atwater factors (a system for determining the energy value of foods established over a century ago) may overestimate the calories the body obtains from nuts by approximately 16% to 25%, depending on the nut type and form.157-159 This may partly explain the present findings of no adverse effect of nut consumption on measures of adiposity.
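To make the size of the Atwater discrepancy concrete, the following sketch back-calculates the energy actually obtained from a serving, given the 16%-25% overestimation range cited above. The label value of 170 kcal per 28 g serving is a hypothetical example for illustration, not a figure from the review.

```python
# Sketch: calories actually obtained from nuts when the Atwater-based
# label overstates metabolizable energy by a given fraction
# (label = true * (1 + fraction), per the 16%-25% range cited).

def effective_kcal(label_kcal, overestimate_fraction):
    """Energy the body obtains if the label overstates it by the fraction."""
    return label_kcal / (1 + overestimate_fraction)

label = 170  # hypothetical label kcal for a ~28 g serving of nuts
for over in (0.16, 0.25):
    print(f"{over:.0%} overestimate -> ~{effective_kcal(label, over):.0f} kcal obtained")
```

On these assumptions, a serving labeled 170 kcal would deliver roughly 136-147 kcal, a gap of some 20-35 kcal per serving that accumulates with habitual intake.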

4.3 Practical implications

Current clinical practice guidelines already suggest the incorporation of nuts for the improvement of glycemic control and cardiovascular risk factors; however, there are often qualifiers regarding their fat content and energy density.14-16 With overweight and obesity affecting 39% and 13% of adults globally, respectively, and increased adiposity being a modifiable risk factor for diabetes and cardiovascular diseases, body weight management is an important consideration in dietary and lifestyle recommendations.160 Evidence from this systematic review and meta-analysis suggests that nuts may continue to be highlighted as a nutrient-dense component of dietary patterns for their cardiometabolic benefits without concerns of an adverse effect on weight control. Nuts are currently recommended as part of the Mediterranean, Portfolio, and DASH dietary patterns, yet despite tree nut and peanut intake increasing over the past 10 years, intake worldwide remains low at an estimated 16.7 g/day, with about 15.2 g contributed by peanuts.20 This is far below current recommendations of 1 to 1.5 ounces per day (approximately 28.3 to 42.5 g/day).6,17-19 Based on the median nut intake in the trials of the current analyses and FDA qualified health claims, a dose of 42.5 g/day of nuts could easily be integrated into a daily dietary pattern by incorporating them into meals and/or consuming them as snacks. Except for individuals with nut allergies, no increase in side effects compared with control groups was reported in any of the cohort studies or trials, suggesting that dietary patterns which incorporate nuts as a regularly consumed component are safe. Future research may further assess the impact of different varieties of nuts, the formats in which they may be consumed, and how they are incorporated into the diet.
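The ounce-to-gram conversion behind these dose figures, and the shortfall of current worldwide intake against them, can be checked with a few lines (the 16.7 g/day worldwide estimate is the figure cited from reference 20):

```python
# Sketch: convert the recommended nut dose from ounces to grams and
# compare against the estimated worldwide intake cited in the text.
OZ_TO_G = 28.3495  # grams per avoirdupois ounce

recommended_oz = (1.0, 1.5)
recommended_g = tuple(round(oz * OZ_TO_G, 1) for oz in recommended_oz)
current_intake_g = 16.7  # estimated worldwide intake (ref. 20)

print(f"Recommended: {recommended_g[0]}-{recommended_g[1]} g/day")
print(f"Shortfall vs lower bound: {recommended_g[0] - current_intake_g:.1f} g/day")
```

Worldwide intake thus falls more than 10 g/day short of even the lower bound of the recommended range.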

Thursday, September 9, 2021

Average generation time is 26.9 years across the past 250,000 years, with fathers consistently older (30.7 y) than mothers (23.2 y), a disproportionate increase in female generation times over the past several thousand years

Human generation times across the past 250,000 years. Richard J. Wang et al. bioRxiv Sep 7 2021. https://doi.org/10.1101/2021.09.07.459333

Abstract: The generation times of our recent ancestors can tell us about both the biology and social organization of prehistoric humans, placing human evolution on an absolute timescale. We present a method for predicting historic male and female generation times based on changes in the mutation spectrum. Our analyses of whole-genome data reveal an average generation time of 26.9 years across the past 250,000 years, with fathers consistently older (30.7 years) than mothers (23.2 years). Shifts in sex-averaged generation times have been driven primarily by changes to the age of paternity rather than maternity, though we report a disproportionate increase in female generation times over the past several thousand years. We also find a large difference in generation times among populations, with samples from current African populations showing longer ancestral generation times than non-Africans for over a hundred thousand years, reaching back to a time when all humans occupied Africa.



Babbling in bat pups is characterized by the same eight features as babbling in human infants, including the conspicuous features reduplication and rhythmicity

Babbling in a vocal learning bat resembles human infant babbling. Ahana A. Fernandez, Lara S. Burchardt, Martina Nagy, Mirjam Knörnschild. Science, Aug 20 2021, Vol 373, Issue 6557, pp. 923-926. https://www.science.org/lookup/doi/10.1126/science.abf9279

Abstract: Babbling is a production milestone in infant speech development. Evidence for babbling in nonhuman mammals is scarce, which has prevented cross-species comparisons. In this study, we investigated the conspicuous babbling behavior of Saccopteryx bilineata, a bat capable of vocal production learning. We analyzed the babbling of 20 bat pups in the field during their 3-month ontogeny and compared its features to those that characterize babbling in human infants. Our findings demonstrate that babbling in bat pups is characterized by the same eight features as babbling in human infants, including the conspicuous features reduplication and rhythmicity. These parallels in vocal ontogeny between two mammalian species offer future possibilities for comparison of cognitive and neuromolecular mechanisms and adaptive functions of babbling in bats and humans.


The ultimatum and dictator games were developed to help identify the fundamental motivators of human behavior, typically by asking participants to share windfall endowments with other persons

If you've earned it, you deserve it: ultimatums, with Lego. Adam Oliver. Behavioural Public Policy, September 9 2021. https://www.cambridge.org/core/journals/behavioural-public-policy/article/if-youve-earned-it-you-deserve-it-ultimatums-with-lego/EB5907A941220FB244234AC8C355DBA5

Abstract: The ultimatum and dictator games were developed to help identify the fundamental motivators of human behavior, typically by asking participants to share windfall endowments with other persons. In the ultimatum game, a common observation is that proposers offer, and responders refuse to accept, a much larger share of the endowment than is predicted by rational choice theory. However, in the real world, windfalls are rare: money is usually earned. I report here a small study aimed at testing how participants react to an ultimatum game after they have earned their endowments by either building a Lego model or spending some time sorting out screws by their length. I find that the shares that proposers offer and responders accept are significantly lower than that typically observed with windfall money, an observation that is intensified when the task undertaken to earn the endowment is generally less enjoyable and thus perhaps more effortful (i.e., screw sorting compared to Lego building). I suggest, therefore, that considerations of effort-based desert are often important drivers behind individual decision-making, and that laboratory experiments, if intended to inform public policy design and implementation, ought to mirror the broad characteristics of the realities that people face.

The policy relevance

My small study of course has many limitations, several of which have already been acknowledged. The participants, for example, were chosen for their convenience and are hardly representative of the general population. Moreover, to reiterate, some of the questions were not financially incentivized; this choice, it might be argued, is defensible after considering the merits and demerits of the different methods, but the potential problems with the approach adopted are nonetheless fully appreciated.

Limitations aside, I contend that the results suggest that effort-based desert matters to people, and that if, rather than receiving windfalls, they have to earn their endowments, then, if asked, they will be willing to share, and be expected to share, a lower proportion of their endowments with others. This general conclusion applies not only to windfall versus earned endowments but also across different earnings-related tasks. For example, a task (or indeed a job) that is perceived to be generally more effortful (or less enjoyable) may provoke lower levels of generosity and less punishment for an apparent lack of generosity than those that generally require less effort. Or at least this will be the observation at face value, for if the different levels of effort are controlled for, we may find that generosity and punishment remain quite stable.

The recognition of the importance of effort-based desert leads me to propose that rewarding people for their effort sustains their effort. This was reflected in Akerlof's (1982) contention that a wage higher than the minimum necessary is met by employee effort that is higher than egoism dictates, because employees now think that employers deserve a fair return. In real work scenarios, there is a general acceptance of desert-based rewards that results in unequal distributions (Starmans et al., 2017), but, as noted above, the voluminous literature on the dictator and ultimatum games that uses windfall endowments fails to acknowledge the importance of desert. That being the case, this body of research lacks real-world policy relevance in relation to people's propensities to share their resources with others or, in the case of the ultimatum game, propensities to punish others for perceived insufficiencies in sharing, at least beyond the limited circumstances where one might experience windfalls. At most, this research offers only very general conclusions that might be relevant to policy design, principally that people often appear to be strategically self-interested when they are aware that they may be punished for blatant acts of selfishness, but, at the same time, many people like to see an element of distributional fairness over final outcomes if no party can claim property rights over an endowment.

In short, the research using windfall endowments decontextualises decision-making too much, which is a little ironic if one is interested in real-world implications, given that the essence of behavioral public policy is that context matters. Of course, the research that uses earned outcomes also in many ways departs from the circumstances that people actually face – in terms of the small study reported in this article, for instance, there are very few people who earn an income from constructing Lego models. (NB. Sorting screws might be different – quite a few participants asked me if I was paying them to tidy up my garage.) But by requiring participants to at least do something to earn their endowments the study – like those principally focussed on the dictator game summarized in Table 1 – took them one step closer to reality. The policy lesson emerging from this body of work is that people respect property rights and that there is broad recognition and acceptance of effort-based desert. Consequently, when considering an endowment that one party to an exchange has earned, the willingness of that party to share, and the tendency for other parties to punish a perceived lack of generosity by that person, are much closer to the predictions of rational choice theory than the evidence using windfall endowments, where close to no effort is expended by participants, typically implies.

More generally, for laboratory studies of human motivations to hold relevance for policy design and implementation the context of the study ought to match, as far as possible, the circumstances that people actually face. I fear that insufficient attention is sometimes paid to this basic premise. For instance, in the real world, some people suffer extreme shortages, others face moderate scarcity, and still others enjoy abundance, and different motivational forces will come to the fore to facilitate flourishing, or even survival, in these different circumstances. Behavioral experiments ought to aim to reflect these (and other) circumstances to enable their results to offer better insights into what drives people as they navigate their way through life.

Our analyses do not establish causality; the small effect sizes suggest that increased screen time is unlikely to be directly harmful (mental health, behavioral problems, academic performance, peer relationships) to 9 & 10-yo children

Paulich KN, Ross JM, Lessem JM, Hewitt JK (2021) Screen time and early adolescent mental health, academic, and social outcomes in 9- and 10- year old children: Utilizing the Adolescent Brain Cognitive Development ℠ (ABCD) Study. PLoS ONE 16(9): e0256591, Sep 8 2021. https://doi.org/10.1371/journal.pone.0256591

Abstract: In a technology-driven society, screens are being used more than ever. The high rate of electronic media use among children and adolescents begs the question: is screen time harming our youth? The current study draws from a nationwide sample of 11,875 participants in the United States, aged 9 to 10 years, from the Adolescent Brain Cognitive Development Study (ABCD Study®). We investigate relationships between screen time and mental health, behavioral problems, academic performance, sleep habits, and peer relationships by conducting a series of correlation and regression analyses, controlling for SES and race/ethnicity. We find that more screen time is moderately associated with worse mental health, increased behavioral problems, decreased academic performance, and poorer sleep, but heightened quality of peer relationships. However, effect sizes associated with screen time and the various outcomes were modest; SES was more strongly associated with each outcome measure. Our analyses do not establish causality and the small effect sizes observed suggest that increased screen time is unlikely to be directly harmful to 9-and-10-year-old children.

Discussion

These results have important implications. The lack of consistently significant interactions between screen time and sex, despite often significant main effects of both, demonstrates that screen time and sex generally predict the outcome variables independently: the effect of screen time on an outcome typically does not depend on sex, and vice versa. A potential reason for this pattern could be sex differences in how screens are being used. The only outcome measure demonstrating a significant interaction term, in both Part 1 and Part 2, is number of close friends who are male. It is possible that, because males in this study tend to use screen time for video gaming, which is often a social activity, more than females do (refer to Table 1), screen time and sex interact such that the effect of screen time (e.g., using screens for video gaming) on number of close male friends depends on the sex of the participant, with male participants who spend more screen time video gaming having more male friends.

Screen time—above and beyond both SES and race/ethnicity—is a significant predictor of some internalizing symptoms, behavioral problems, academic performance, sleep quality and quantity, and the strength of peer relationships for 9- to 10-year-old children, in both boys and girls. However, the effect of screen time was small (<2% of the variance explained) for all outcomes, with SES, which was a significant predictor for nearly all outcome variables of interest, accounting for much more of the variance (~5%), perhaps because parent SES contributes to nearly every facet of children's physical and mental health outcomes [28]. Taken together, our results imply that too much time spent on screens is associated with poorer mental health, behavioral health, and academic outcomes in 9- and 10-year-old children, but that the negative impact is likely not clinically harmful at this age.
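To translate those variance figures into the more familiar correlation scale, note that a proportion of variance explained R² corresponds to a correlation of magnitude √R². The specific |r| values below are back-calculated from the stated R² figures, not reported in the paper:

```python
# Sketch: variance explained (R^2) implies a correlation of magnitude
# sqrt(R^2); applied to the effect sizes stated in the text.
import math

for label, r2 in [("screen time (<2% of variance)", 0.02),
                  ("SES (~5% of variance)", 0.05)]:
    print(f"{label}: |r| of at most ~{math.sqrt(r2):.2f}")
```

On this reading, the screen-time associations correspond to correlations below roughly 0.14, conventionally a small effect, while SES sits near 0.22.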

The significant association between screen time and externalizing disorder symptoms was in line with previous research [13]. However, this association is not necessarily causal; for example, it has been suggested that parents/guardians of children who display externalizing disorder symptoms, along with oppositional defiance disorder and conduct disorder, are more likely to place their child in front of a screen as a distraction [29], so it is possible that externalizing disorder symptoms feed into additional screen time rather than the reverse.

The negative association between screen time and academic performance may be of some concern to parents; another group of researchers reported a similar trend in a sample of Chinese adolescents [30]. We speculate that more time dedicated to recreational screen use detracts from time spent on schoolwork and studying for exams, though this proposed explanation should be examined further. In data collection for the ABCD Study, academic screen time (e.g., using a computer to complete an academic paper) was not recorded; it is possible that academic screen time could be positively associated with academic performance, suggesting, as previous studies [22,23] point out, that the type of screen time use is more important to consider than screen time itself.

The negative association between screen time and amount of sleep has been demonstrated previously [17] and, as in the case of academic performance, it is possible that time on screens takes away from time asleep. The positive association between sleep disorder score and screen time is of interest, though how that relationship is mediated is a topic of future research. It could be that when children and adolescents struggle with sleep, they turn to electronic media as a way to distract themselves or in an attempt to lull themselves back to sleep, or that screen use contributes to delayed bedtime, as has been suggested in previous literature [17].

The lack of significant relationships between screen time and internalizing disorder symptoms (i.e., depression and anxiety) was surprising and does not align with prior findings by researchers who also used the ABCD Study to examine screen time as a predictor variable. To examine the discrepancy, we conducted a replication of their study [11] using the early release data of 4528 participants, less than half the sample size used in the current study. We replicated their findings closely, which suggests that the discrepancy in our results primarily arises from differences in the sample as it doubled in size. Overall, both the current study and the previous one [11] find only weak associations of screen time with internalizing problems in the baseline ABCD sample. It is possible that because internalizing disorders typically develop throughout childhood and adolescence [31,32], 9- and 10-year-old children are simply not yet displaying noticeable internalizing symptoms.

The finding that more screen time is associated with a greater number of close friends, both male and female, is in line with previous research [21] and suggests that when on screens, adolescents are communicating with their friends via texting, social media, or video chat, and the social nature of such screen time use strengthens relationships between peers and allows them to stay connected even when apart.

The current study is not without limitations. Because participants are 9 and 10 years old, they simply are not using screens as much as their older peers; means for screen time use are low, especially for texting and social media, two aspects of screen time that may have the most impact on peer relationships and mental health outcomes [21]. The frequencies of mature gaming and viewing of R-rated movies are also low. Similarly, due to the age of the sample, the majority of participants do not display signs of mental ill health. Follow-up interview studies conducted as the sample ages would likely be better powered, as adolescents increase their screen use and evidence more mental health issues at older ages. However, the longitudinal nature of the ABCD Study will allow these potential associations to be studied over the course of the participants' adolescence. Next, the measures used by the ABCD Study at baseline have some limitations. By restricting the maximum screen time label to "4+ hours" for all subsets of screen time apart from total screen time, it was not possible to examine extremes in screen time (e.g., the present data do not differentiate between four hours of texting and 15 hours). Additionally, the majority of outcome measures were evaluated through parent report rather than child self-report, and it is possible that parent evaluations are inaccurate, especially for more subtle symptoms such as internalizing problems. However, for the majority of outcome variables, parents responded to the Child Behavior Checklist, which demonstrates strong psychometric validity [33]. Additionally, parent report is preferred for assessing some outcome measures of interest; for externalizing problems and attention problems specifically, the positive illusory bias skews youth self-report toward overly positive assessments of performance relative to criteria that reflect actual performance [34,35].