Monday, August 23, 2021

Rhesus monkeys also choke under pressure; this indicates that there may be shared neural mechanisms that underlie the behavior in both humans and monkeys

Monkeys exhibit a paradoxical decrease in performance in high-stakes scenarios. Adam L. Smoulder et al. Proceedings of the National Academy of Sciences, August 31, 2021, 118 (35) e2109643118; https://doi.org/10.1073/pnas.2109643118

Significance: Choking under pressure is a frustrating phenomenon sometimes experienced by skilled performers as well as in everyday life. The phenomenon has been extensively studied in humans, but it had not previously been shown whether animals also choke under pressure. Here we report that rhesus monkeys also choke under pressure, indicating that shared neural mechanisms may underlie the behavior in both humans and monkeys. Introducing an animal model of choking under pressure opens opportunities to study the neural causes of this paradoxical behavior.

Abstract: In high-stakes situations, people sometimes exhibit a frustrating phenomenon known as “choking under pressure.” Usually, we perform better when the potential payoff is larger. However, once potential rewards get too high, performance paradoxically decreases—we “choke.” Why do we choke under pressure? An animal model of choking would facilitate the investigation of its neural basis. However, it could be that choking is a uniquely human occurrence. To determine whether animals also choke, we trained three rhesus monkeys to perform a difficult reaching task in which they knew in advance the amount of reward to be given upon successful completion. Like humans, monkeys performed worse when potential rewards were exceptionally valuable. Failures that occurred at the highest level of reward were due to overly cautious reaching, in line with the psychological theory that explicit monitoring of behavior leads to choking. Our results demonstrate that choking under pressure is not unique to humans, and thus, its neural basis might be conserved across species.


Keywords: motor control, reward, animal behavior, reaching, motivation


Despite commonly experiencing empathy in daily life, older adults are not more prosocial than other age cohorts

Pollerhoff, Lena, Julia Stietz, Gregory J. Depow, Michael Inzlicht, Philipp Kanske, Shu-Chen Li, and Andrea M. Reiter. 2021. “Investigating Adult Age Differences in Real-life Empathy, Prosociality, and Well-being Using Experience Sampling.” PsyArXiv. August 23. doi:10.31234/osf.io/983ey

Abstract: While the importance of social affect and cognition is indisputable throughout the adult lifespan, findings of how empathy and prosociality develop and interact across adulthood are mixed, and real-life data are scarce. Research using ecological momentary assessment recently demonstrated that adults commonly experience empathy in daily life. Furthermore, predictors of empathy were linked to higher prosocial behavior and subjective well-being. However, to date, it is not clear whether there are adult age differences in daily empathy and daily prosociality and whether age moderates the relationship between empathy and prosociality across adulthood. Here we analyzed experience-sampling data collected from participants across the adult lifespan to study age effects on empathy, prosocial behavior, and well-being under real-life circumstances. Linear and quadratic age effects were found for the experience of empathy, with increased empathy across the three younger age groups (18 to 45 years) and a slight decrease in the oldest group (55 years and older). Neither prosocial behavior nor well-being showed significant age-related differences. We discuss these findings with respect to (partially discrepant) results derived from lab-based or traditional survey studies. We conclude that studies linking in-lab experiments with real-life experience-sampling might be a promising avenue for future lifespan studies.




Residues of glyphosate in food and dietary exposure

Residues of glyphosate in food and dietary exposure. John L. Vicini, Pamela K. Jensen, Bruce M. Young, John T. Swarthout. Comprehensive Reviews in Food Science and Food Safety, August 16 2021. https://doi.org/10.1111/1541-4337.12822

Abstract: Glyphosate is the active ingredient in Roundup® brand nonselective herbicides, and residue testing for food has been conducted as part of the normal regulatory processes. Additional testing has been conducted by university researchers and nongovernmental agencies. Presence of residues needs to be put into the context of safety standards. Furthermore, to appropriately interpret residue data, analytical assays must be validated for each food sample matrix. Regulatory agency surveys indicate that 99% of glyphosate residues in food are below the European maximum residue limits (MRLs) or U.S. Environmental Protection Agency tolerances. These data support the conclusion that overall residues are not elevated above MRLs/tolerances due to agricultural practices or usage on genetically modified (GM) crops. However, it is important to understand that MRLs and tolerances are limits for legal pesticide usage. MRLs only provide health information when the sum of MRLs of all foods is compared to limits established by toxicology studies, such as the acceptable daily intake (ADI). Conclusions from dietary modeling that use actual food residues, or MRLs themselves, combined with consumption data indicate that dietary exposures to glyphosate are within established safe limits. Measurements of glyphosate in urine can also be used to estimate ingested glyphosate exposure, and studies indicate that exposure is <3% of the current European ADI for glyphosate, which is 0.5 mg glyphosate/kg body weight. Conclusions of risk assessments, based on dietary modeling or urine data, are that exposures to glyphosate from food are well below the amount that can be ingested daily over a lifetime with a reasonable certainty of no harm.

7 DISCUSSION

Calculating dietary consumption of glyphosate can be done using two disparate methods. In the first method, residues are measured in individual food items and summed based on consumption data for the foods that people eat. This approach requires answering several questions to make reasonable modeling assumptions, such as: (1) what residue value is used for a food; (2) is the food processed or cooked; (3) what residue value is used when an analyte's concentration is < LOD; (4) are mean, median, or 90th percentile values used for consumption; and (5) what and how much do people eat? In the second approach, for a pesticide like glyphosate, with known absorption from the gut, lack of metabolism, and elimination from the body, sampling of urine is an accurate way to calculate ingestion and exposure within the body (Acquavella et al., 2004; Niemann et al., 2015). Whichever method is used, the resulting values need to be compared to a regulatory-derived safety standard, such as the ADI or RfD. Results of determining exposure to glyphosate by dietary modeling or urinary glyphosate are presented in Table 5, Table 6, and Figure 2. Modeling allows different scenarios using estimates of consumption and data derived from market surveys, whereas urine is a surrogate for estimating actual dietary exposure to glyphosate. The results from these two methods are in relatively close agreement. Dietary estimates range from 0.03% to 18% of the ADI, depending on the assumptions; an especially critical assumption is which residue levels are used. Urinary glyphosate estimates of exposure are 0.02%–2.67% of the ADI and do not require assumptions about residues on individual foods.
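To make the arithmetic behind these comparisons concrete, the following minimal Python sketch works through both estimation routes against the European ADI of 0.5 mg glyphosate/kg body weight cited in the review. All residue, consumption, urine-concentration, and excretion values below are hypothetical placeholders chosen only for illustration; they are not data from the review or from the underlying surveys.

# Minimal sketch of the two exposure-estimation routes described above.
# Only the ADI value (0.5 mg/kg bw/day) comes from the review; every other
# number here is a hypothetical placeholder.

ADI_MG_PER_KG_BW = 0.5    # European acceptable daily intake for glyphosate
BODY_WEIGHT_KG = 60.0     # assumed adult body weight
adi_mg_per_day = ADI_MG_PER_KG_BW * BODY_WEIGHT_KG

# Route 1: dietary modeling -- sum residue x consumption over the foods eaten.
# Residues in mg/kg food, consumption in kg food/day (placeholder values).
foods = {
    "wheat products": {"residue_mg_per_kg": 0.10, "consumption_kg_per_day": 0.25},
    "pulses":         {"residue_mg_per_kg": 0.50, "consumption_kg_per_day": 0.05},
}
dietary_intake_mg = sum(f["residue_mg_per_kg"] * f["consumption_kg_per_day"]
                        for f in foods.values())
dietary_pct_adi = 100 * dietary_intake_mg / adi_mg_per_day

# Route 2: urinary back-calculation -- treat excreted glyphosate as a surrogate
# for the ingested dose, assuming a fixed fraction of the dose reaches urine.
urine_conc_mg_per_l = 0.001       # placeholder urinary glyphosate concentration
urine_volume_l_per_day = 1.5      # assumed daily urine volume
fraction_of_dose_in_urine = 0.2   # assumed absorbed-and-excreted fraction

ingested_mg = urine_conc_mg_per_l * urine_volume_l_per_day / fraction_of_dose_in_urine
urinary_pct_adi = 100 * ingested_mg / adi_mg_per_day

print(f"Dietary-model estimate: {dietary_pct_adi:.2f}% of the ADI")
print(f"Urinary-based estimate: {urinary_pct_adi:.3f}% of the ADI")

In both routes the final step is the same: the estimated daily intake (mg/day) is divided by ADI x body weight (mg/day) to express exposure as a percentage of the ADI, which is how the 0.03%-18% (dietary modeling) and 0.02%-2.67% (urinary) figures above are expressed.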

In spite of that, testing for and publishing about glyphosate residues, whether in peer-reviewed journals, in internet postings, or in the news media, have become somewhat common in the last decade. Unfortunately, many of the popular press reports are accompanied by value-judgment words like “high” or “contaminate,” or they make scientifically inappropriate comparisons to other standards (i.e., concentrations in urine vs. regulatory-defined residue levels for drinking water). Furthermore, some of these reports imply that glyphosate residues were not previously known to exist in a given food or in urine, and therefore the findings are presented as novel. Recent publications, such as Winter and Jara (2015), Winter et al. (2019), and Reeves et al. (2019), have attempted to provide more information about the process for risk assessment of pesticides conducted by regulatory agencies. Moreover, timely communications from regulatory agencies, such as the BfR responding to reports of residues in food, German beer, or urine (BfR, 2016, 2017), provide helpful information from what should be a trustworthy source in the face of widespread social media communications about food and agriculture (Ryan et al., 2020).

One statistic often encountered in publications, and one that might also generate concern among consumers, is the reported increase over time in usage of a pesticide, either through expanded adoption or as the result of new technology such as herbicide-tolerant crops (Benbrook, 2016). These statistics intimate that pesticide use has exceeded the safe levels established by the original regulatory assessments. In one residue study, the authors suggested that MRL values appeared to have been adjusted in response to actual observed increases in residues rather than on the basis of toxicity (Bøhn et al., 2014). This is precisely how MRLs are derived. It is important to highlight that alterations in the use of a previously approved pesticide, such as usage of glyphosate on newly approved GT crops, require new residue data to be submitted by the pesticide registrant(s) prior to regulatory approval. These new residue data are reviewed by regulators to ensure that the previous ADI or RfD is not exceeded. Additionally, MRLs or tolerances are derived from empirical data under real-world conditions, and, once established, MRLs represent for any crop the agricultural practice that results in the highest residue. EPA (1996) stated in its guidance that changes in pesticide use patterns, such as changes in the preharvest interval and/or postharvest treatment, are likely to require residue studies and potentially another petition for a new tolerance. Usage of a pesticide might expand, but because the assessment conservatively assumes that 100% of a crop will be treated under the agricultural practice with the highest residue, the conclusion that exposure remains below the ADI is not sensitive to changes in commercial adoption. If a new exposure resulted in the sum of all exposures exceeding the ADI, some use would need to be restricted. Moreover, since regulatory authorities use data collected prior to authorization of cultivation or import of the crop, combined with periodic testing to ensure that tolerances are not being exceeded, these media reports of residues do not necessarily provide unexpected data. When properly conducted, independent, peer-reviewed studies are published, they can corroborate the accuracy of previously reported regulatory residue studies.

Putting pesticide residues into context by converting these values to percentages of the EFSA- or EPA-derived ADI or RfD helps one understand the margin of safety, but many consumers want food that is free of synthetic pesticides (Krystallis & Chryssohoidis, 2005). According to Currie (1999), many believe that with improved assays a concentration of zero might be detected, but that is not scientifically feasible.

More than ever, as in other areas of science, transparency on residues of pesticides and on their assessment by the global regulatory authorities entrusted by the public to ensure food safety is needed to address complex scientific information (OECD, 2020). The scientific publication process, which requires peer review of data and conclusions, has largely provided the basis for science-based regulatory assessments for the past century (Codex, 2004). Although peer review is not a foolproof process, its intent is to ensure that the results and conclusions of published studies are based on well-conducted and well-documented scientific experiments. This is in sharp contrast to the essentially unreviewed environment of media and online publications. Judging the adequacy of peer review has become increasingly confusing with the rise of predatory journals and electronic publishing (Kelly et al., 2014). Since the public lacks the training to distinguish information from peer-reviewed journals and science-based regulatory authorities from information they see in media reports and predatory journals, this review has included results on glyphosate residues from both sources to provide a single-point reference for an informed discussion of this subject.

Less intelligent Americans & British voters were more likely to have nationalist attitudes; the tendency to be more nationalist & belligerent may, among other things, form the microfoundation of democratic peace in international relations

Possible Evolutionary Origins of Nationalism. Satoshi Kanazawa. Political Behavior, Aug 23 2021. https://rd.springer.com/article/10.1007/s11109-021-09741-7

Abstract: Why do some individuals support nationalist policies while others don’t? The Savanna-IQ Interaction Hypothesis in evolutionary psychology suggests that more intelligent individuals may be more likely to acquire and espouse evolutionarily novel values whereas less intelligent individuals may be more likely to hold evolutionarily familiar values. Nationalism is evolutionarily familiar, so the Savanna-IQ Interaction Hypothesis suggests that less intelligent individuals may be more likely to be nationalist. The analyses of the General Social Survey (GSS) data in the US and the National Child Development Study (NCDS) data in the UK confirmed the prediction. Less intelligent Americans were more likely to have nationalist attitudes, and less intelligent British voters were more likely to support nationalist parties in five general elections over three decades. The tendency of less intelligent individuals to be more nationalist and belligerent may, among other things, form the microfoundation of democratic peace in international relations.


Changes in sexual behavior observed in the early months of the pandemic have continued, with small but significant decreases in many partnered sexual behaviors & a small increase in men's solitary sexual behaviors

The Impact of the COVID-19 Pandemic on Sexual Behaviors: Findings from a National Survey in the United States. Neil Gleason et al. The Journal of Sexual Medicine, August 23 2021. https://doi.org/10.1016/j.jsxm.2021.08.008

Abstract

Background: Studies from the first months of the COVID-19 pandemic and the resulting lockdown and social distancing measures have shown that there have been decreases in sexual frequency and relationship satisfaction.

Aim: To evaluate the ongoing impact of the COVID-19 pandemic on sexual behavior, relationship satisfaction, and intimate partner violence in the U.S. using a large national convenience sample.

Methods: 1,051 participants across the U.S. were recruited in October 2020 to complete a cross-sectional online survey.

Outcomes: Participants were asked to retrospectively report their sexual behavior frequency, relationship satisfaction, and intimate partner violence during the pandemic and prior to the pandemic.

Results: There was a small but significant decrease in some retrospectively-reported partnered sexual activities, and men reported a small increase in masturbation and pornography use. There was no evidence for a change in relationship satisfaction or intimate partner violence, but both men and women reported a small decrease in sexual pleasure, and women reported a small decrease in sexual desire. The sexual behaviors with the greatest reduction were casual sex, hookups, and number of partners, and the most diminished aspect of sexual functioning was sexual enjoyment. Depression symptoms, relationship status, and perceived importance of social distancing emerged as predictors of these reductions. Less than half of the individuals who engaged with casual sex partners before the start of the pandemic ceased this behavior completely after the start of the pandemic. Individuals waited on average 6-7 weeks before reengaging in casual sex.

Clinical translation: These results inform public health response to the effects of the pandemic and add to our understanding of how the pandemic has continued to impact sexual behavior.

Strengths and Limitations: This is the first known study to evaluate sexual behavior several months into the COVID-19 pandemic using a large national sample. However, the results of this study are limited by its convenience sampling method and cross-sectional design.

Conclusions: These results indicate that the changes in sexual behavior observed in the early months of the pandemic have continued, with small but significant decreases in many partnered sexual behaviors and a small increase in men's solitary sexual behaviors.

Keywords: COVID-19, sexual behavior, sexual functioning, sexual frequency, relationship satisfaction, sexual satisfaction, intimate partner violence


Happiness has positive effects both on income generation and on the preference for more leisure time; the net effect on income generation is positive and significant

Effects of Happiness on Income and Income Inequality. Satya Paul. Journal of Happiness Studies, Aug 22 2021. https://rd.springer.com/article/10.1007/s10902-021-00439-5

Abstract: This paper examines the effects of happiness on income and income inequality. We postulate that happiness impacts upon the income generating capacity of individuals directly by stimulating work efficiency, and indirectly by affecting their allocation of time for paid work. These direct and indirect effects of happiness on income are tested in a regression model and the implication of these effects for income distribution is explored using an inequality decomposition framework. An empirical exercise based on Australian HILDA panel survey data (2001–2014) reveals that happiness has a positive and significant effect on income generation and contributes to the reduction of inequality.


Androgynous individuals are less likely to suffer depression while undifferentiated individuals are more susceptible to depression; masculinity traits seem to be a robust protective factor for depression regardless of gender

Does gender role explain a high risk of depression? A meta-analytic review of 40 years of evidence. Jingyuan Lin et al. Journal of Affective Disorders, Volume 294, Nov 1 2021, Pages 261-278. https://doi.org/10.1016/j.jad.2021.07.018

Highlights

• Androgynous individuals are less likely to suffer depression while undifferentiated individuals are more susceptible to depression.

• Masculinity traits seem to be a robust protective factor for depression regardless of gender. Of note, the dominance of masculinity has declined as life expectancy increases.

• The protective effect of femininity against depression starts to emerge with the gradual increase in educational attainment and income level from 1990 to 2019.

Abstract

Background: This meta-analytic review aimed to systematically evaluate associations of depression with multiple gender role dimensions (masculinity, femininity, androgyny, and undifferentiated traits) and to determine potential moderators (participant characteristics, study instruments and sociocultural factors) of the relationship.

Methods: Of 4481 initially identified records in three electronic databases, 58 studies published from 1978 to 2021 were included in the meta-analysis.

Results: (1) The association between depression and gender role is moderated by study year and human development indices. (2) Masculinity is a protective factor for depression, although its dominance has declined as life expectancy increases. (3) A negative, weak but significant association between depression and femininity is observed in women and college students, which starts to emerge with the gradual increase in the national education and income index from 1990 to 2019. (4) Androgynous individuals reported the lowest level of depression compared with the other gender role orientations (masculine, feminine, and undifferentiated trait groups). This disparity becomes more extreme as life expectancy and the per capita income index increase.

Limitations: Only English-language studies were included in this review.

Conclusions: Androgyny might be the most ideal gender role protecting both women and men from depression.

Keywords: Depression, Gender role, Masculinity, Femininity, Androgyny, Human development index

Popular version: Masculinity may have a protective effect against the development of depression -- even for women (psypost.org)