Friday, August 19, 2022

Anointing others is a social antiparasitic behaviour; in capuchin monkeys it is also a beneficial custom through social bonding

The role of anointing in robust capuchin monkey, Sapajus apella, social dynamics. Emily J.E. Messer et al. Animal Behaviour, Volume 190, August 2022, Pages 103-114. https://doi.org/10.1016/j.anbehav.2022.04.017

Highlights

•Social network analysis revealed association changes in group organization.

•Anointing is an antiparasitic behaviour analogous to social grooming.

•Social anointing has evolved within the context of complex social behaviour.

•We conceptualize anointing in capuchins as ‘social medication’.

Abstract: Anointing is a behaviour in which animals apply pungent-smelling materials over their bodies. It can be done individually or socially, in contact with others. Social anointing can provide coverage of body parts inaccessible to the individual, consistent with hypotheses that propose medicinal benefits. However, in highly social capuchin monkeys, Sapajus and Cebus spp., anointing has been suggested to also benefit group members through ‘social bonding’. To test this, we used social network analysis to measure changes in proximity patterns during and shortly after anointing compared to a baseline condition. We presented two capuchin groups with varying quantities of onion, which reliably induces anointing, to create ‘rare resource’ and ‘abundant resource’ conditions. We examined the immediate and overall effects of anointing behaviour on the monkeys' social networks, using patterns of proximity as a measure of social bonds. For one group, proximity increased significantly after anointing over baseline values for both rare and abundant resource conditions, but for the other group proximity increased only following the rare resource condition, suggesting a role in mediating social relationships. Social interactions were affected differently in the two groups, reflecting the complex nature of capuchin social organization. Although peripheral males anointed in proximity to other group members, their weak centrality changed in only one group following anointing bouts, indicating variable social responses to anointing. We suggest that anointing in capuchins is in part analogous to social grooming: both behaviours have an antiparasitic function and can be done individually or socially, the latter requiring contact between two or more individuals. We propose that both have evolved a social function within complex repertoires of social behaviours. Our alternative perspective avoids treating medicinal and social explanations as competing hypotheses and, along with increasing support for medicinal explanations of anointing, allows us to conceptualize social anointing in capuchins as ‘social medication’.


Keywords: anointing, capuchin monkey, fur rubbing, social bonding, social network analysis

Discussion

We observed that capuchin monkeys enthusiastically anointed whether resource density was high or low. However, the effect of anointing on their social dynamics varied by group and the density of resources available. While we found increased levels of association after anointing for the West group regardless of resource density, the East group monkeys increased their associations after anointing only when the resource was rare.

Strength values in the West group (reflecting stronger or more associations with others) were significantly higher after anointing in both resource conditions compared with the baseline, even when resources were sufficiently plentiful for every monkey to have its own piece of onion. This suggests that anointing can mediate social relationships, since monkeys did not need to increase proximity to anoint in the abundant resource condition. Moreover, their associations were highest after anointing in the abundant resource condition compared to the rare resource condition, showing that the monkeys chose to continue to associate together after anointing. Conversely, in the East group, we found that the monkeys' associations were highest after anointing in the rare resource condition, indicating that the monkeys remained closer together than in the baseline condition only when they would have had to come close together to gain access to limited materials. Thus, associations with group members were higher after anointing regardless of the density of available resources for the West monkeys. By contrast, in the East group, we saw changes in social structure emerge after anointing only when resource density was lower, an increase in proximity patterns that could in part be due to the limited resources available.

During anointing, the West group's associations increased above the baseline in the two anointing conditions, but the effect was greater in the abundant resource condition. Thus, when resource density was higher, which could facilitate individual anointing and decrease associations, monkeys in the West group were opting to increase their associations. Conversely, in the East group, we found no differences in the monkeys' associations during anointing in either of the two resource conditions compared to the baseline. Therefore, although the East group engaged in anointing, the effect on their social structure was only evident after anointing, perhaps after the monkeys had to come together to access the limited resource. These group differences are likely to be a reflection of the complex nature of capuchin social organization.

Differing social dynamics within the East and West groups could be contributing to these differing results. The West group was formed of one main matriline plus two unrelated adult males, whereas the East group had two main matrilines and two unrelated adult males. Thus, from the outset, the West group individuals had a higher level of overall relatedness between individuals than the East group. Indeed, Welker, Hoehmann and Schaefer-Witt (1990) have argued that the matrilines in their captive Cebus apella formed the foundation of the group's social structure (Fragaszy et al., 2004). As such, the West group's main single matriline versus the East group's two matrilines could be contributing to the changes in social dynamics we report. Future work examining the function of social bonding in other capuchin monkeys' anointing behaviour should seek to include measures of relatedness between individuals.

In capuchins, matrilines underlie rank structures that affect access to resources and social organization. In our two groups, the more dominant members could monopolize resources while subordinates tended to wait. All the monkeys with the lowest baseline measures of strength (four males: Kato, Toka, Carlos and Manuel; two females: Junon and Pedra) were subordinates. Junon was the only adult female with a baseline strength score less than one, likely because she was from a different matriline and subordinate to the two other adult females. While we might expect the four subordinate males to have had lower baseline strength, perhaps indicating they were the most likely to be peripheral males, Pedra's (our only subadult female) low strength was less expected and may change as she reaches sexual maturity.
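The network measure discussed here, "strength", is simply the sum of an individual's weighted associations with all other group members. A minimal sketch of the idea, using invented proximity weights and a few of the monkey names mentioned above (the numbers are hypothetical, not the study's data):

```python
# Hypothetical symmetric proximity-association weights between pairs of
# individuals; each weight stands in for how often a pair was observed
# in proximity. Values are invented for illustration only.
proximity = {
    ("Diego", "Figo"): 0.8,
    ("Diego", "Toka"): 0.3,
    ("Figo", "Toka"): 0.2,
    ("Diego", "Pedra"): 0.6,
}

def strength(individual, edges):
    """Node strength: sum of association weights on edges touching `individual`."""
    return sum(w for pair, w in edges.items() if individual in pair)

for monkey in ("Diego", "Figo", "Toka", "Pedra"):
    print(monkey, round(strength(monkey, proximity), 2))
```

Under this toy network, a peripheral individual like "Toka" gets a low strength score because its association weights are small, mirroring the low baseline strength the authors report for subordinate males.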

Our social network analysis provides some support for the social-bonding hypothesis (see also Leca et al., 2007; Paukner & Suomi, 2008; Valderrama et al., 2000), particularly in the West group, where there was an increase in group cohesion in the short term after anointing. Analysis of changes in social dynamics over time, following differing access to anointing materials, would provide further insight into any longer-term changes to group social structure.

Only the West group peripheral males' overall integration (connectedness) into their group differed across the five conditions, as peripheral males increased associations during anointing. Although all the peripheral males engaged in social anointing (E. Messer & M. Bowler, personal observation), and potentially gained from the functional benefits of reaching inaccessible and nonvisible areas of their bodies (as we have previously shown with the same group of capuchin monkeys, Bowler et al., 2015), changes in their social integration were not detectable in the East group. This difference might be due to individual differences between the males' positions in the dominance hierarchies of their respective groups. Although both groups had subordinate males with low baseline associations (e.g. Kato and Carlos in the East group and Toka in the West group), both groups also contained subordinate males with higher centrality scores (e.g. Manuel in the East group, and Diego and Figo in the West group). Moreover, in the West group, Diego, a beta male, became the alpha male after the study. Future work excluding beta males from peripheral-male subgroupings (e.g. Izawa, 1980) and collecting more data on the social network positions of other peripheral males to increase the sample size would be useful for examining any subtler changes, for example in competitive friction. When we compared the centrality measures of the rest of the group without the peripheral males, we found no significant effect of anointing, indicating that the remaining monkeys did not become more integrated into their groups after anointing. Thus, although anointing affected the monkeys' strength scores, these associations may be short lived. We surmise that anointing with onions in robust capuchin monkeys impacts individual connectedness rather than group integration.

Because we focused on monkey proximity patterns and collected scan data every 4 min, we could not accurately assess who joined whom and how monkeys reacted to these aggregations during anointing. To provide further insights into the impact of such social influences and the effect of resource density on individuals’ proximal choices, future work could explore the spread of social and individual anointing over time in groups of monkeys, and any contagious effects of the behaviour.

In capuchins, anointing has an apparent role in self-medication (Alfaro et al., 2011). Previous studies of anointing have shown that social anointing may be an entirely functional extension of this, helping to provide medicinal coverage for group members (e.g. Bowler et al., 2015), which may be relatives or potential hosts for infectious parasites. As such, these phenomena may also provide some insight into the basis of human healthcare networks where individuals care for the sick (Kessler, 2020). Future work examining any changes in group structure during social anointing could provide further insights into anointing as social medicine.

Here we have shown that anointing in robust capuchin monkeys affected social behaviour through increased and/or stronger associations. There is a strong partial analogy here with grooming, which in capuchin monkeys serves various hygiene and social functions (Fragaszy et al., 2004). Autogrooming fills the obvious role of removing ectoparasites and other debris, while social grooming (allogrooming) extends this benefit to parts of the body that an individual cannot reach itself (Barton, 1985). In addition, groomers could benefit if they consume the parasites they remove. However, there is also plentiful evidence that social grooming serves additional social functions (di Bitetti, 1997; Dunbar, 1991; Sánchez-Villagra et al., 1998; Nunn & Altizer, 2006), with individuals prioritizing grooming with those ranked slightly higher than themselves (Seyfarth, 1977; but see Parr et al., 1997, which indicates that robust capuchins are more likely to groom other closely ranked individuals). Although grooming likely changes with social organization and varies with ecological conditions (e.g. see Lazaro-Perea, de Fátima Arruda, & Snowdon, 2004), it has also been shown to be a resource to be traded with others, such as for food sharing (de Waal, 1997; but this can be affected by rank differences, e.g. see Jaeggi et al., 2013) or for support in aggressive disputes (Hemelrijk, 1994; Seyfarth, 1977; Seyfarth & Cheney, 1984).

Conclusion

Social and medicinal hypotheses for anointing are not mutually exclusive. While the widespread nature of anointing within the primates and other taxa suggests that there is an underlying nonsocial benefit to the behaviour, anointing in capuchin monkeys, like grooming, has evolved within the context of a highly complex repertoire of social behaviours and may have taken on an additional social function. The complexity of social behaviour in these monkeys may make separating the cause and effect of anointing on social structure challenging. Our alternative perspective departs from treating medicinal and social explanations as alternative hypotheses and, along with increasing support for medicinal explanations of anointing, justifies describing anointing in capuchin monkeys as ‘social medication’.


57pct of people snoozed; the majority of them snoozed for almost half an hour before getting out of bed

Snoozing: An Examination of A Common Method of Waking. Stephen M Mattingly, Gonzalo Martinez, Jessica Young, Meghan K Cain, Aaron Striegel. Sleep, zsac184, August 11 2022. https://doi.org/10.1093/sleep/zsac184

Abstract

Study Objectives: Snoozing was defined as using multiple alarms to accomplish waking and was considered a method of sleep inertia reduction that utilizes the stress system. Surveys measured snoozing behavior, including who, when, how, and why snoozing occurs. In addition, the physiological effects of snoozing on sleep were examined via wearable sleep staging and heart rate activity, both over a long time scale and on the days that it occurs. We aimed to establish snoozing as a construct in need of additional study.

Methods: A novel survey examined snoozing prevalence, how snoozing was accomplished, and explored possible contributors and motivators of snoozing behavior in 450 participants. Trait- and day-level surveys were combined with wearable data to determine if snoozers sleep differently than non-snoozers, and how snoozers and non-snoozers differ in other areas, such as personality.

Results: 57% of participants snoozed. Being female, younger, taking fewer steps, having lower conscientiousness, having more disturbed sleep, and being a more evening chronotype increased the likelihood of being a snoozer. Snoozers had elevated resting heart rate and showed lighter sleep before waking. Snoozers did not sleep less than non-snoozers, nor did they feel more sleepiness or nap more often.

Conclusions: Snoozing is a common behavior associated with changes in sleep physiology before waking, both in a trait- and state-dependent manner, and is influenced by demographic and behavioral traits. Additional research is needed, especially in detailing the physiology of snoozing, its impact on health, and its interactions with observational studies of sleep.

Keywords: Snooze, Sleep, Wearables, Sleep Staging, Heart Rate


Wednesday, August 17, 2022

Attitudes toward public female toplessness appear to be driven more by individual opinions than by context (e.g., beach, park) or structural factors (e.g., region or state-legality)

Objectification and Reactions toward Public Female Toplessness in the United States: Looking Beyond Legal Approval. Colin R. Harbke & Dana F. Lindemann. Sexuality & Culture, Aug 17 2022. https://rd.springer.com/article/10.1007/s12119-022-10005-7

Abstract: Multiple United States federal courts have recently drawn inferences regarding community sentiment as it pertains to public female toplessness. Despite citing common social factors in their rulings, the courts have rendered conflicting decisions to uphold (Ocean City, MD) or to overturn (Fort Collins, CO) female-specific bans. Regional differences in attitudes toward toplessness may in part explain these discrepant legal outcomes. Participants (n = 326) were asked to rate their general impressions of photos depicting topless women in three different public settings. Geographic region was unrelated to reactions toward toplessness; however, participants from states with prohibitive or ambiguous statutes rated the photos differently. Consistent with a body of theoretical and empirical work on cultural objectification of women, female participants, on average, were more critical of the photos of other topless women. Other demographic and attitudinal predictors showed a pattern that suggests moral objections as a likely source of unfavorable reactions. Ascribing morality to the practice of toplessness echoed some of the commentary that surrounded the above legal cases and further substantiates prior objectification research (i.e., Madonna-whore dichotomy). Overall, attitudes toward public female toplessness appear to be driven more by individual opinions than by context (e.g., beach, park) or structural factors (e.g., region or state-legality).


Bored and better: Finding a boring person results in people feeling not only superior to the boring individual, but also to others

Bored and better: Interpersonal boredom results in people feeling not only superior to the boring individual, but also to others. Jonathan Gallegos, Karen Gasper & Nathaniel E. C. Schermerhorn. Self and Identity, Aug 16 2022. https://doi.org/10.1080/15298868.2022.2111341

Abstract: Four experiments tested the hypothesis that meeting someone new who is boring would result in people feeling superior to the boring individual, which would then result in people viewing themselves as better than others and increased confidence. Respondents reported greater feelings of superiority, meaninglessness, and difficulty paying attention when they wrote about meeting a new, boring individual than a new or manipulative individual. Feeling superior, but not meaninglessness and attention, mediated the effect of interpersonal boredom on viewing oneself as better than others, but not on confidence. These findings did not occur when people wrote about a boring task or a disliked, manipulative individual. The experiments elucidate how interpersonal boredom, albeit a negative experience, can enhance people’s sense of self.

Keywords: Interpersonal boredom, superiority, self-enhancement, meaninglessness


Less than 50pct of Psychology research was successfully replicated by preregistered studies

Röseler, Lukas, Taisia Gendlina, Josefine Krapp, Noemi Labusch, and Astrid Schütz. 2022. “Successes and Failures of Replications: A Meta-analysis of Independent Replication Studies Based on the OSF Registries.” MetaArXiv. August 16. doi:10.31222/osf.io/8psw2

Abstract: A considerable proportion of psychological research has not been replicable, and estimates range from 9% to 77% for nonreplicable results. The extent to which vast proportions of studies in the field are replicable is still unknown, as researchers lack incentives for publishing individual replication studies. When preregistering replication studies via the Open Science Framework website (OSF, osf.io), researchers can publicly register their results without having to publish them and thus circumvent file-drawer effects. We analyzed data from 139 replication studies for which the results were publicly registered on the OSF and found that out of 62 reports that included the authors’ assessments, 23 were categorized as “informative failures to replicate” by the original authors. 24 studies allowed for comparisons between the original and replication effect sizes, and whereas 75% of the original effects were statistically significant, only 30% of the replication effects were. The replication effects were also significantly smaller than the original effects (approx. 38% the size). Replication closeness did not moderate the difference between the original and the replication effects. Our results provide a glimpse into estimating replicability for studies from a wide range of psychological fields chosen for replication by independent groups of researchers. We invite researchers to browse the Replication Database (ReD) ShinyApp, which we created to check whether seminal studies from their respective fields have been replicated. Our data and code are available online: https://osf.io/9r62x
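The headline comparison in this abstract (replication effects roughly 38% the size of the originals) amounts to averaging the ratio of replication to original effect sizes over study pairs. A minimal sketch of that arithmetic, with invented effect sizes rather than the study's data:

```python
# Hypothetical (original, replication) effect-size pairs, e.g. Cohen's d.
# These numbers are invented for illustration; they are not from the paper.
pairs = [(0.50, 0.20), (0.40, 0.15), (0.60, 0.22), (0.30, 0.12)]

# Relative size of each replication effect versus its original.
ratios = [rep / orig for orig, rep in pairs]
mean_ratio = sum(ratios) / len(ratios)

print(f"replication effects average {mean_ratio:.0%} the size of the originals")
```

Note that averaging per-pair ratios is only one way to summarize shrinkage; meta-analyses often instead compare pooled effect estimates, which weights studies differently.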


Tuesday, August 16, 2022

From 2018... Against commonly held views, cynical individuals generally do worse on cognitive ability and academic competency tasks

The Cynical Genius Illusion: Exploring and Debunking Lay Beliefs About Cynicism and Competence. Olga Stavrova, Daniel Ehlebracht. Personality and Social Psychology Bulletin, July 11, 2018. https://doi.org/10.1177/0146167218783195

Abstract: Cynicism refers to a negative appraisal of human nature—a belief that self-interest is the ultimate motive guiding human behavior. We explored laypersons’ beliefs about cynicism and competence and to what extent these beliefs correspond to reality. Four studies showed that laypeople tend to believe in cynical individuals’ cognitive superiority. A further three studies based on the data of about 200,000 individuals from 30 countries debunked these lay beliefs as illusionary by revealing that cynical (vs. less cynical) individuals generally do worse on cognitive ability and academic competency tasks. Cross-cultural analyses showed that competent individuals held contingent attitudes and endorsed cynicism only if it was warranted in a given sociocultural environment. Less competent individuals embraced cynicism unconditionally, suggesting that—at low levels of competence—holding a cynical worldview might represent an adaptive default strategy to avoid the potential costs of falling prey to others’ cunning.

Keywords: cynicism, competence, lay theories, social perception

The academic literature has consistently painted a dim picture of cynicism, linking it to bad health outcomes, lower well-being, poor relationship quality, and decreased financial success (Chen et al., 2016; Haukkala, Konttinen, Laatikainen, Kawachi, & Uutela, 2010; Stavrova & Ehlebracht, 2016). In contrast, in popular culture, cynicism seems to have a better reputation. For example, in film and fiction, the most cynical characters (e.g., Sherlock Holmes or Dr. House), although lonely and unhappy, are frequently painted as the most intelligent, witty, experienced, and knowledgeable ones. In the present studies, we explored lay beliefs about the association between cynicism and competence and tested whether these beliefs reflect empirical associations between these traits. Our results revealed that laypeople tend to endorse the “cynical genius” belief—that is, to believe that cynical individuals do better on a variety of cognitive tasks and cognitive ability tests than their less cynical counterparts. An examination of empirical associations between cynicism and competence based on the data of about 200,000 individuals from 30 countries debunked the “cynical genius” belief as illusory. Cynical individuals are likely to do worse (rather than better) on tests of cognitive ability and competence, and tend to be less educated than less cynical individuals.

What is the source of the discrepancy between lay beliefs and reality? Literature on the negativity bias (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001) and loss aversion (Kahneman & Tversky, 1979) might give a clue. Findings from these research fields suggest that pain associated with negative outcomes (e.g., betrayed trust) is stronger than pleasure associated with positive outcomes (e.g., rewarded trust). Consequently, individuals might be more aware of the negative consequences of other people’s gullibility than of the positive consequences that a trusting stance and positive view of human nature often convey.

In addition, according to insights from trust research (Fetchenhauer & Dunning, 2010), when people endorse a cynical stance concerning others and consequently forgo trust, they usually do not even get a chance to learn whether their untrustworthiness assumption was correct and being cynical thus spared them a “loss”—or whether it was incorrect and therefore denied them a “win.” In other words, cynicism often precludes the possibility of experiencing negative outcomes. As a result, it might be perceived as a smarter, more successful strategy and cynical individuals might be attributed higher levels of competence than their less cynical counterparts. After all, they are highly unlikely to be betrayed, deceived, and exploited, whereas it usually remains unknown whether their cynicism resulted in missed opportunities.

Finally, the abundance of smart and witty cynics in fiction might fuel the “cynical genius illusion” as well. As the primary goal of fiction is entertainment, fictional worlds are typically more dangerous, their villains are meaner, and the costs of mistakes are higher than in reality—or, as Barack Obama (2014) put it referring to the House of Cards series: “Life in Washington is a little more boring than displayed on the screen.” In these hostile and dangerous worlds created for our entertainment, cynicism is warranted and often turns out to be essential for survival, suggesting that those who endorse it are likely to be the smart ones. Our cross-cultural analyses indirectly support this idea, showing that the negative association between competence and cynicism gets weaker with increasing levels of environmental hostility, such that in the most corrupt countries in our sample, competent individuals are not necessarily less cynical than their less competent counterparts (see Table 4).

This observation inevitably leads to the conclusion that whether the “cynical genius” belief represents an illusion or not must depend on the sociocultural environment. While we explored the empirical association between cynicism and competence across 30 countries, our conclusions regarding the perceived association are restricted to three Western countries: United States, United Kingdom, and Germany. We acknowledge that it is highly important to explore lay beliefs in other cultural contexts as well. It is possible that cross-cultural differences in the perceived association between cynicism and competence are also explained in part by the degree to which cynicism is warranted in a particular sociocultural context, with a stronger “cynical genius” belief in more versus less corrupt countries. In this case, perceived and actual associations between cynicism and competence might covary at the country level suggesting that there might be some truth to the “cynical genius illusion” after all.

Although our reliance on large-scale publicly available datasets (Studies 4-6) facilitated a precise assessment of the empirical associations between cynicism and competence, it did not allow for a direct comparison between the actual empirical associations and lay beliefs about these associations within a given sample. As we took great care to ensure the conceptual equivalence between the measures of the former (e.g., actual performance on numeracy tests) and the latter (e.g., perceived ability to solve math problems), we are confident in the validity of our conclusions. It is also important to note that even though the “cynical genius” belief emerged consistently across the studies, its effect size showed substantial variation across the measures of cognitive competence, with the strongest effect obtained for items reflecting mathematical competence and the weakest effect obtained for items associated with verbal skills. It seems that people like to think that those who are good at scrutinizing numbers must also be good at scrutinizing other people’s intentions. Finally, besides a belief in cynics’ “cognitive competence,” our participants showed an even stronger belief in cynics’ “social incompetence.” This belief, as well as the question of whether it corresponds to reality, might be worth a separate, more thorough (e.g., using more diverse social tasks) investigation.

While we have shown cynicism to be positively associated with competence in lay beliefs, it is less clear what causal theory people use to explain this association. Do they think that cynicism makes people more competent or that higher levels of competence turn people into cynics? A similar question arises with respect to the causality of the empirical associations between competence and cynicism. However, higher levels of cognitive ability, academic competence, and education might protect from adverse life experiences, not only as they allow discovering potential fraud but also as they increase the chances of living in a safe and friendly environment, providing more evidence for a positive than for a negative view of human nature and consequently preventing cynicism development. Our findings showing that cognitive ability in adolescence contributes to decreased levels of cynicism in adulthood provide some preliminary support for a causal effect of competence. However, another causal direction is possible as well: As cynicism is closely related to distrust (Singelis, Hubbard, Her, & An, 2003), cynical (vs. less cynical) individuals might be more distrustful of the opinions and knowledge of others, a behavior that can eventually prevent them from expanding their knowledge and understanding. We hope that future studies will pick up here and explore the causal directions underlying both perceived (i.e., lay beliefs) and empirical association between competence and cynicism.

To conclude, the idea of cynical individuals being more competent, intelligent, and experienced than less cynical ones appears to be quite common and widespread, yet, as demonstrated by our estimates of the true empirical associations between cynicism and competence, largely illusory. As Stephen Colbert, an American comedian, writer, and television host, phrased it, “Cynicism masquerades as wisdom, but it is the furthest thing from it.”

Monday, August 15, 2022

Instead of the human larynx having increased complexity, it has actually simplified relative to other primates, allowing for clearer sound production with less aural chaos

Evolutionary loss of complexity in human vocal anatomy as an adaptation for speech. Takeshi Nishimura et al. Science, Aug 11 2022, Vol 377, Issue 6607, pp. 760-763. DOI: 10.1126/science.abm1574


When less is more in the evolution of language

Complexity from simplification

Human speech and language are highly complex, consisting of a large number of sounds. The human phonal apparatus, the larynx, has acquired the capability to create a wider array of sounds, even though previous work has revealed many similarities between our larynx and those in other primates. Looking across a large number of primates, Nishimura et al. used a combination of anatomical, phonal, and modeling approaches to characterize sound production in the larynx (see the Perspective by Gouzoules). They found that instead of the human larynx having increased complexity, it has actually simplified relative to other primates, allowing for clearer sound production with less aural chaos. —SNV


Abstract: Human speech production obeys the same acoustic principles as vocal production in other animals but has distinctive features: A stable vocal source is filtered by rapidly changing formant frequencies. To understand speech evolution, we examined a wide range of primates, combining observations of phonation with mathematical modeling. We found that source stability relies upon simplifications in laryngeal anatomy, specifically the loss of air sacs and vocal membranes. We conclude that the evolutionary loss of vocal membranes allows human speech to mostly avoid the spontaneous nonlinear phenomena and acoustic chaos common in other primate vocalizations. This loss allows our larynx to produce stable, harmonic-rich phonation, ideally highlighting formant changes that convey most phonetic information. Paradoxically, the increased complexity of human spoken language thus followed simplification of our laryngeal anatomy.


Why are males not doing these environmental behaviors?: exploring males’ psychological barriers to environmental action

Why are males not doing these environmental behaviors?: exploring males’ psychological barriers to environmental action. Jessica E. Desrochers & John M. Zelenski. Current Psychology, Aug 13 2022. https://rd.springer.com/article/10.1007/s12144-022-03587-w


Abstract: Previous research has reported that females are more likely than males to do pro-environmental behaviors. This research focused on understanding this relationship by exploring individual difference characteristics that may explain the sex difference, specifically traits and psychological barriers to pro-environmental action. Two studies (N = 246 and N = 357) confirm that males were less likely to report doing pro-environmental behaviors; males also reported more of Gifford’s (2011) Dragons of Inaction Psychological Barriers (DIP-Barriers) to pro-environmental action than females. Broad traits predicted pro-environmental attitudes and behaviors similar to past research, but they did not account for the sex difference. In addition, we suggest a new psychological barrier for males: perceptions of femininity may dissuade males from some pro-environmental behaviors. Results provide preliminary support for this idea and complement previous suggestions that environmentalism is perceived as more feminine. We discuss ways that future research can build on these suggestions with the ultimate goal of more effectively promoting environmentalism to males.




Sunday, August 14, 2022

Do Funding Agencies Select and Enable Risky Research? In the European Research Council, applicants with a history of risky research are less likely to be selected for funding than those without such a history

Do Funding Agencies Select and Enable Risky Research: Evidence from ERC Using Novelty as a Proxy of Risk Taking. Reinhilde Veugelers, Jian Wang & Paula Stephan. NBER Working Paper, 30320. Aug 2022. DOI 10.3386/w30320

Abstract: Concern exists that public funding of science is increasingly risk averse. Funders have addressed this concern by soliciting the submission of high-risk research to either regular or specially designed programs. Little evidence, however, has been gathered to examine the extent to which such programs and initiatives accomplish their stated goal. This paper sets out to study this using data from the European Research Council (ERC), a program within the EC, established in 2007 to support high-risk/high-gain research. We examine whether the ERC selected researchers with a track record of conducting risky research. We proxy high-risk by a measure of novelty in the publication records of applicants both before and after the application, recognizing that it is but one dimension of risk. We control and interact the risk measure with high-gain by tracking whether the applicant has one or more top 1% highly cited papers in their field. We find that applicants with a history of risky research are less likely to be selected for funding than those without such a history, especially early career applicants. This selection penalty for high-risk also holds among those applicants with a history of high-gain publications. To test whether receiving a long and generous prestigious ERC grant promotes risk taking, we employ a diff-in-diff approach. We find no evidence of a significant positive risk treatment effect for advanced grantees. Only for early career grantees do we find that recipients are more likely to engage in risky research, but only compared to applicants who are unsuccessful at the second stage. This positive treatment effect is in part due to unsuccessful applicants cutting back on risky research. We cautiously interpret this as a “lesson learned” that risk is not rewarded.


Naive Stoic Ideology, a misinterpretation of stoic philosophy: There is a negative association between stoic ideology and well-being

Misunderstood Stoicism: The Negative Association Between Stoic Ideology and Well-Being. Johannes Alfons Karl, Paul Verhaeghen, Shelley N. Aikman, Stian Solem, Espen R. Lassen & Ronald Fischer. Journal of Happiness Studies, Aug 12 2022. https://rd.springer.com/article/10.1007/s10902-022-00563-w

Abstract: Ancient philosophy proposed a wide range of possible approaches to life which may enhance well-being. Stoic philosophy has influenced various therapeutic traditions. Individuals today may adopt an approach to life representing a naive Stoic Ideology, which nevertheless reflects a misinterpretation of stoic philosophy. How do these interpretations affect well-being and meaning in life? We examine the differential effects of Stoic Ideology on eudaimonic versus hedonic well-being across three cultural contexts. In this pre-registered study, across samples in New Zealand (N = 636), Norway (N = 290), and the US (N = 381) we found a) that Stoic Ideology can be measured across all three contexts and b) converging evidence that Stoic Ideology was negatively related to both hedonic well-being and eudaimonic well-being. Focusing on specific relationships, we found especially pronounced effects for Taciturnity (the desire to not express emotions) and Serenity (the desire to feel less emotions). Despite being a misinterpretation of stoic philosophy, these findings highlight the important role of individuals’ orientations to emotional processing for well-being.


Discussion

Across three cultures we investigated how a naïve stoic ideology, which captures a layperson's misunderstood Stoicism (as expressed in stoic ideology), might be associated with approaches to, and actual levels of, well-being. We initially predicted that stoic ideology would show a more negative association with hedonic compared to eudaimonic aspects of well-being. This was overall not confirmed. While we found that stoic ideology was more negatively associated with hedonic well-being in New Zealand, this was the only relationship in the predicted direction. Our findings, using the stoic ideology scale, are consistent with previous studies using similar measures of hedonic well-being (Bei et al., 2013; Murray et al., 2008). Importantly, on a facet level this effect was mostly driven by Taciturnity and Serenity for Eudaimonia and Hedonia. The exception was hedonic orientation to happiness, which was only associated with Serenity. This pattern implies that the tendency and desire to suppress one's problems, in both experience and expression, is related to lower well-being, both hedonic and eudaimonic. Across the three countries the pattern of relationships was largely identical for higher order stoic ideology, with the potential exception of the association between stoic ideology and hedonic orientation in Norway. The traditional stereotype of Nordic cultures also features a rather stoic outlook on life, which emphasizes emotional control and doing 'your own thing' without complaining or expressing strong emotional reactions (Saville-Troike & Tannen, 1985; Stivers et al., 2009; Tsai & Chentsova-Dutton, 2003); stoic ideology might therefore be less related to orientations to well-being.
Due to the cross-sectional nature of our study, we cannot untangle whether stoic ideology only influences responding, or, as some studies have indicated, has conceptually causal relationships to well-being, theoretically driven by reduced help-seeking for example (Kaukiainen & Kõlves, 2020; Rughani et al., 2011).

It is important to highlight that our hypotheses were based on a measure which captures stoic ideology as a naïve belief system, which does not represent the philosophical ethical system underpinning Stoicism. Current psychological measures of naive stoic ideology do not capture the richness of the wider stoic belief system within classic philosophical discussions. We encourage researchers to make it explicitly clear when they are referring to Stoicism (the philosophical belief system) or stoic ideology (as captured in the Pathak-Wieten Stoic ideology scale) as an expression of a lay stoic ideological system. Future research should clarify the relationship between Stoicism and stoicism, to explore overlaps and divergences. Investigations into this area appear important, especially given the positive well-being effects of the aforementioned therapeutic approaches that are conceptually based in Stoicism (Beck, 1979; Ellis, 1962; Robertson, 2019), and the presumed malleability of stoic ideology (Pathak et al., 2017).

In future research, it would be essential to compare the relationship of the Pathak-Wieten scale empirically with measures incorporating a wider range of stoic attitudes and behaviors (centering around issues of controllability of the environment, and teleology of the universe). This is not to indicate that the Pathak-Wieten scale is not a useful tool to measure stoic ideology (but possibly not Stoicism). As we have shown here, the scale shows good measurement properties across the cultures included in our study, and reliably shows good fit across samples. From a psychometric perspective, it is a reliable and equivalent scale that can be used to compare correlation patterns across samples. The major question to be addressed in further research is what the instrument measures conceptually. As the lack of scalar invariance implies, the items measure potentially additional concepts across the different cultural contexts, which together with the philosophical questions, clearly requires further analyses and development.

At the same time, the measure provides important insight into potential determinants of reduced well-being. Given the consistent negative relationships that we found between stoic ideology and well-being across cultures, clinical practitioners might consider how these naive beliefs could be built upon for beneficial health outcomes. Given the findings of negative relationships between both aspects of well-being and the Taciturnity and Serenity facets in particular, individuals might be encouraged to share personal problems in appropriate ways and to acknowledge emotions, rather than suppressing or ignoring emotional experiences. Our study also supports previous notions (Benita et al., 2020; Gross, 2013) that it might be beneficial for individuals’ well-being to engage in practices that foster an accepting or non-judgmental stance to their emotions, for example mindfulness practices (Dundas et al., 2017), rather than suppressing their emotions. Obviously, we are unable to point towards causal directions, but the therapeutic literature using stoic philosophy principles as well as related philosophical concepts, such as mindfulness, clearly suggests that such behavioral changes may have positive health consequences.

Limitations

Our current study was mostly limited by our samples, based on student populations. This limits the generalizability of our findings to the general population. However, it should be noted that the original instruments were largely developed in student samples, hence, our findings are compatible with previous research contexts. Further, we have no information on participants’ exposure to stoic philosophy, which might alter the observed association between stoic ideology and well-being. Finally, the current study, in line with previous research, focused on well-being, but not specifically on affective components. Given the conceptual overlap between stoic ideology and affective experience, future research should examine the potential link between stoic ideology beliefs, well-being, and the potential mediation role of affective experience.

Saturday, August 13, 2022

Female teams lower their contribution to the public good in the event of low likeability among members, while male teams achieve high levels of co-operation irrespective of the level of mutual likeability

I (Don’t) Like You! But Who Cares? Gender Differences in Same-Sex and Mixed-Sex Teams. Leonie Gerhards, Michael Kosfeld. The Economic Journal, Volume 130, Issue 627, April 2020, Pages 716–739, https://doi.org/10.1093/ej/uez067

Abstract: We study the effect of likeability on women’s and men’s team behaviour in a lab experiment. Extending a two-player public goods game and a minimum effort game by an additional pre-play stage that informs team members about their mutual likeability, we find that female teams lower their contribution to the public good in the event of low likeability, while male teams achieve high levels of co-operation irrespective of the level of mutual likeability. In mixed-sex teams, both women’s and men’s contributions depend on mutual likeability. Similar results are found in the minimum effort game. Our results offer a new perspective on gender differences in labour market outcomes: mutual dislikeability impedes team behaviour, except in all-male teams.


About half of the cobalt ends up being lost during production; indium sees losses hit 70 pct; and many metals have production losses of 95 pct or higher: arsenic, gallium, germanium, hafnium, scandium, selenium, & tellurium

Losses and lifetimes of metals in the economy. Alexandre Charpentier Poncelet, Christoph Helbig, Philippe Loubet, Antoine Beylot, Stéphanie Muller, Jacques Villeneuve, Bertrand Laratte, Andrea Thorenz, Axel Tuma & Guido Sonnemann. Nature Sustainability, corr. May 31 2022. https://www.nature.com/articles/s41893-022-00895-8


Abstract: The consumption of most metals continues to rise following ever-increasing population growth, affluence and technological development. Sustainability considerations urge greater resource efficiency and retention of metals in the economy. We model the fate of a yearly cohort of 61 extracted metals over time and identify where losses are expected to occur through a life-cycle lens. We find that ferrous metals have the longest lifetimes, with 150 years on average, followed by precious, non-ferrous and specialty metals with 61, 50 and 12 years on average, respectively. Production losses are the largest for 15 of the studied metals whereas use losses are the largest for barium, mercury and strontium. Losses to waste management and recycling are the largest for 43 metals, suggesting the need to improve design for better sorting and recycling and to ensure longer-lasting products, in combination with improving waste-management practices. Compared with the United Nations Environmental Programme’s recycling statistics, our results show the importance of taking a life-cycle perspective to estimate losses of metals to develop effective circular economy strategies. We provide the dataset and model used in a machine-readable format to allow further research on metal cycles.


---

Popular version: New study estimates how long mined metals circulate before being lost https://arstechnica.com/science/2022/05/new-study-estimates-how-long-mined-metals-circulate-before-being-lost/

Losses at different stages of a metal's life cycle also varied widely. We're very good at extracting most metals from ores so that most of the losses are incidental—that is, some of the metal happens to be present in an ore we use for other materials. For example, iron ore may contain something like manganese at low concentrations, but the amount of ore we process means that a lot of manganese will end up being thrown away. Overall, these losses tended to be in the area of 15 percent, with the exception of specialty metals, which averaged about 25 percent.


Both those averages obscure some fairly horrifying losses. About half of the cobalt, which is highly desired for many types of batteries, ends up being lost during production. Indium, used in many semiconductor products, sees losses hit 70 percent. And many metals have production losses of 95 percent or higher: arsenic, gallium, germanium, hafnium, scandium, selenium, and tellurium.


Losses in manufacturing are much less scary; they're generally a rounding error compared to the losses in extraction. Manufacturing produces the smallest losses for over half the metals analyzed, and there's none for which it's the highest. Even the worst rate of loss (among non-ferrous metals) only reaches 6 percent. It's clear that manufacturing has been very good at avoiding waste.


Once in use, most metals suffer minimal losses, with averages of about 5 percent or less for everything but specialty metals. But those specialty metals see losses that average over 30 percent during use. Some of them, notably strontium and barium, primarily end up in single-use products that are permanently lost to the environment (they're part of the mud injected into wells during gas and oil drilling). Those two, along with mercury, are the only three for which use is the largest source of loss.
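The stage-by-stage accounting described above can be sketched as a simple sequential loss model: each stage only sees the material that survived the stage before it. The loss fractions below are illustrative placeholders (only the ~50% production loss for cobalt comes from the text), not the paper's fitted values.

```python
def allocate_losses(stages):
    """Follow one unit of extracted metal through ordered life-cycle stages,
    recording the fraction lost at each stage and the fraction still
    retained in the economy at the end."""
    remaining = 1.0
    losses = {}
    for name, loss_fraction in stages:
        losses[name] = remaining * loss_fraction
        remaining *= (1.0 - loss_fraction)
    losses["retained"] = remaining
    return losses

# Illustrative cobalt-like numbers: production loss ~50% as in the text;
# the other fractions are invented for the example.
cobalt = allocate_losses([
    ("production", 0.50),
    ("manufacturing", 0.02),
    ("use", 0.05),
    ("waste management", 0.60),
])
```

Because losses compound multiplicatively, a 50% production loss dominates everything downstream, which is why extraction stands out as the stage to target even when later stages also waste large fractions of what reaches them.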

Meta-analysis: Afternoon naps sharpen the mind; effects are small to medium

Systematic review and meta-analyses on the effects of afternoon napping on cognition. Ruth L.F. Leong, June C. Lo, Michael W.L. Chee. Sleep Medicine Reviews, August 13 2022, 101666. https://doi.org/10.1016/j.smrv.2022.101666

Abstract: Naps are increasingly considered a means to boost cognitive performance. We quantified the cognitive effects of napping in 60 samples from 54 studies. 52 samples evaluated memory. We first evaluated effect sizes for all tests together, before separately assessing their effects on memory, vigilance, speed of processing and executive function. We next examined whether nap effects were moderated by study features of age, nap length, nap start time, habituality and prior sleep restriction. Naps showed significant benefits for the total aggregate of cognitive tests (Cohen's d = 0.379, CI95 = 0.296–0.462). Significant domain specific effects were present for declarative (Cohen's d = 0.376, CI95 = 0.269–0.482) and procedural memory (Cohen's d = 0.494, CI95 = 0.301–0.686), vigilance (Cohen's d = 0.610, CI95 = 0.291–0.929) and speed of processing (Cohen's d = 0.211, CI95 = 0.052–0.369). There were no significant moderation effects of any of the study features. Nap effects were of comparable magnitude across subgroups of each of the 5 moderators (Q values = 0.009 to 8.572, p values > 0.116). Afternoon naps have a small to medium benefit over multiple cognitive tests. These effects transcend age, nap duration and tentatively, habituality and prior nocturnal sleep.

Keywords: Nap; Cognition; Vigilance; Memory; Age effects
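The pooled effect sizes and 95% CIs quoted in the abstract come from standard inverse-variance meta-analysis. The sketch below shows the fixed-effect version of that aggregation with made-up per-study values; the review itself likely uses a random-effects model, which additionally estimates between-study variance.

```python
import math

def pool_cohens_d(studies):
    """Inverse-variance (fixed-effect) pooling of per-study Cohen's d.
    studies: list of (d, variance) pairs.
    Returns (pooled_d, (lo, hi)): the pooled effect and its 95% CI."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical vigilance studies: (Cohen's d, sampling variance)
d, (lo, hi) = pool_cohens_d([(0.3, 0.01), (0.5, 0.02), (0.4, 0.015)])
```

Precise (low-variance) studies get more weight, so the pooled d sits closest to the best-estimated studies, and the CI narrows as studies accumulate.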


Friday, August 12, 2022

Sweet diversity: Colonial goods (tea, coffee, sugar, tobacco) and the welfare gains from global trade after 1492

Sweet diversity: Colonial goods and the welfare gains from global trade after 1492. Jonathan Hersh, Hans-Joachim Voth. Explorations in Economic History, July 23 2022, 101468. https://doi.org/10.1016/j.eeh.2022.101468

Abstract: When did overseas trade start to matter for living standards? Traditional real-wage indices suggest that living standards in Europe stagnated before 1800. In this paper, we argue that welfare may have actually risen substantially, but surreptitiously, because of an influx of new goods. Colonial “luxuries” such as tea, coffee, and sugar became highly coveted. Together with more simple household staples such as potatoes and tomatoes, overseas goods transformed European diets after the discovery of America and the rounding of the Cape of Good Hope. They became household items in many countries by the end of the 18th century. We apply two standard methods to calculate broad orders of magnitude of the resulting welfare gains. While they cannot be assessed precisely, gains from greater variety may well have been big enough to boost European real incomes by 10% or more (depending on the assumptions used).

Keywords: Gains from variety; Global trade; Welfare gains from new goods; Age of discovery; Living standards over the long run

JEL: D12, D60, F10, F15, N33


5. Conclusions

When did globalization begin to matter for living standards? According to the prevailing consensus, the answer is – not before the 19th century. O'Rourke and Williamson (2002) analysed traditional wage indices to show that trade across the Atlantic did not change real incomes before the 1830s. This paper argues that global trade quickly began to matter for living standards. As Europeans rounded the Cape of Good Hope, they brought back tea; from the New World, they brought tobacco, chocolate, and potatoes. In the Caribbean and other tropical colonies, Europeans set up a production system for sugar, tea, and coffee that transformed the supply of these goods. By the eighteenth century at the latest, consumption habits had undergone a profound transformation. New consumption goods offered variety where monotony had once reigned: hot, sweet caffeinated beverages replaced water and ale, and by revealed preference, consumers favoured tea, sugar, and coffee. Sugar also helped to reduce the culinary monotony of winter: it facilitated the making of jam and marmalade, preserving fruit flavours throughout the winter.

The welfare gains from access to new goods can be assessed by asking a counterfactual question – how much would incomes have to go up to compensate for a particular consumer item no longer being available? We use two different methods, pioneered by Hausman and Greenwood and Kopecky, to gauge orders of magnitude. Results are broadly similar. Most estimates, even under pessimistic assumptions, suggest that colonial luxuries made consumers better off by about one tenth of final-period consumption – and perhaps more. We cannot confirm these results using highly granular data on individual demand curves as modern-day studies can, but the closest historical analogues also imply welfare gains of 10% or more.
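One standard way to put a number on such gains is Feenstra's CES variety correction; the Hausman and Greenwood–Kopecky methods the paper uses differ in detail but answer the same counterfactual. With elasticity of substitution \(\sigma > 1\) and \(\lambda\) the post-introduction expenditure share on the pre-existing goods:

```latex
% Exact CES price index with new varieties: prices effectively fall by a
% factor lambda^{1/(sigma-1)}, so real income rises by its inverse.
P_{\text{new}} = P_{\text{old}} \cdot \lambda^{\frac{1}{\sigma-1}},
\qquad
\frac{W_{\text{new}}}{W_{\text{old}}} = \lambda^{-\frac{1}{\sigma-1}}
% Illustration: if colonial goods absorb 10% of spending (lambda = 0.9)
% and sigma = 3, the real-income gain is 0.9^{-1/2} - 1 ~ 5.4%; a lower
% sigma (poorer substitutes for the new goods) pushes the gain higher,
% in line with the ~10% orders of magnitude reported in the text.
```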

Our quantitative results for tea, sugar, and coffee may well constitute a lower bound on the discoveries’ overall effect. An even wider range of ‘new goods’ arrived on European shores as a result of overseas expansion (Nunn and Qian, 2010). The addition of tomatoes, potatoes, chocolate, exotic spices, polenta, and tobacco transformed consumption habits in even more fundamental ways than sugar, tea, and coffee. If data tracking the rise in consumption of all of these colonial goods were available, welfare increases for European consumers after 1492 as a result of growing variety could be even larger than our findings suggest.

Compared to the gains from new goods today, the welfare increases from introducing sugar, tea, and coffee in the past appear large. In Table 3, we compare the impact of recently invented new goods with our results, including welfare gains from tobacco estimated in the Appendix. Even for the contemporary new goods with the biggest impacts, such as personal computers and the internet, welfare gains pale in magnitude compared with those for colonial goods. Goolsbee and Klenow (2006) calculate a gain of approximately 2% for the internet. Our findings suggest welfare gains that are up to an order of magnitude larger (except when compared with personal computers).26 Other studies of modern-day gains from trade through increasing variety also show smaller increases than the ones we derive. Broda and Weinstein (2006) find welfare gains of 2.2-2.6%, approximately 1/4 of our “best-guess” improvement of 10% from sugar, tea, and coffee alone.

[...]

Relatively large gains in the more distant past make sense intuitively: Introducing a new good matters more when the pre-existing range of goods is small. Put another way – adding Apple Cheerios to the range of choices for breakfast cereals has (some) value. However, being able to replace beer soup, porridge and cold cuts with milky, sugary coffee and bread with jam was much nicer, as evidenced by rising budget shares of colonial goods. Exotic new products from the Americas and the Far East – pepper and nutmeg, tea and sugar, coffee and tobacco, chocolate and cloves – improved living standards by far more than modern consumers, sated by an ever-expanding range of new goods, can readily appreciate. The reason why seemingly mundane goods like sugar, coffee and tea probably made a big difference to living standards is that life was not just ‘nasty, brutish, and short’ in Hobbes’ phrase, at their time of introduction – it was also (in culinary terms) boring and bland.

Lower socio-economic status individuals are more active in building up weak ties in the electronic world

Sun, Rui, and Laura Vuillier. 2022. “Socioeconomic Status Predicts Who Initiates and Who Awaits Social Connections.” OSF Preprints. August 12. doi:10.31219/osf.io/jvmhd

Abstract: Weak social ties bring people multiple benefits; however, little is known about who initiates the social connections that give rise to weak ties. We proposed and tested across 3 studies that socioeconomic status (SES) is a powerful predictor of who initiates or awaits connections with whom, and that lower SES individuals are more likely to initiate weak social ties. In Study 1, lower SES individuals reported themselves more likely to initiate and accept Facebook friendship requests. In Study 2, an online dyadic interaction study, people of lower SES were more likely to initiate social connections after a brief chat. In Study 3, across more than 1000 participants, lower SES individuals had more followings (outbound connections) on Twitter, and this result held after controlling for hours spent on Twitter and number of followers (inbound connections) on Twitter. Together, we suggest that lower SES individuals are more active in building up weak ties.


Smartphones are considerably more personal and private than PCs, so using them increases private self-focus; making choices using a personal smartphone, compared with a PC, tends to increase the preference for unique & self-expressive options

Phone and Self: How Smartphone Use Increases Preference for Uniqueness. Camilla Eunyoung Song, Aner Sela. Journal of Marketing Research, August 3, 2022. https://doi.org/10.1177/00222437221120404


Abstract: One of the most dramatic shifts in recent years has been consumers’ increased use of smartphones for making purchases and choices, but does using a smartphone influence what consumers choose? This paper shows that, compared with using a personal computer (PC), making choices using a personal smartphone leads consumers to prefer more unique options. The authors theorize that because smartphones are considerably more personal and private than PCs, using them activates intimate self-knowledge and increases private self-focus, shifting attention toward individuating personal preferences, feelings, and inner states. Consequently, making choices using a personal smartphone, compared with a PC, tends to increase the preference for unique and self-expressive options. Six experiments and several replications examine the effects of personal smartphone use on the preference for unique options and test the underlying role of private self-focus. The findings have important implications for theories of self-focus, uniqueness seeking, and technology’s impact on consumers, as well as tangible implications for many online vendors, brands, and researchers who use mobile devices to interact with their respective audiences.


Keywords: smartphones, online shopping, mobile marketing, uniqueness seeking, self-expression, customization, self-focus


Thursday, August 11, 2022

Average TFP growth declined after 1970 due to constraints on idea processing capability, not idea supply, in particular by policies that affect financial market effectiveness

Ideas, Idea Processing, and TFP Growth in the US: 1899 to 2019. Kevin Roger James, Akshay Kotak & Dimitrios P. Tsomocos. SSRN, July 13 2022. http://dx.doi.org/10.2139/ssrn.4161964

Abstract: Innovativity - an economy's ability to produce the innovations that drive total factor productivity (TFP) growth - requires both ideas and the ability to process those ideas into new products and/or techniques. We model innovativity as a function of endogenous idea processing capability subject to an exogenous idea supply constraint and derive an empirical measure of innovativity that is independent of the TFP data itself. Using exogenous shocks and theoretical restrictions, we establish that: i) innovativity predicts the evolution of average TFP growth; ii) idea processing capability is the binding constraint on innovativity; and iii) average TFP growth declined after 1970 due to constraints on idea processing capability, not idea supply.


Keywords: Innovation, Financial Market Effectiveness, Endogenous Growth, Total Factor Productivity

JEL Classification: O44, O43, O47, O16, O51, O31


V Conclusion

An innovation requires both an exploitable idea and an entrepreneur who transforms that exploitable idea into a new product or process. Innovativity—the economy’s ability to create the innovations that drive TFP growth—is therefore determined by both idea supply and idea processing capability rather than by idea supply alone. Examining US innovativity over the last 120 years, we find that it is plausibly the case that idea processing capability is now and has been the binding constraint on US TFP growth. This finding therefore suggests that idea processing capability plays a central role in the growth process and merits further investigation.

Our innovativity framework creates a new perspective on the debate over the future of economic growth by calling the neo-Malthusian analysis of Gordon (2012, 2014) into question. Starting from the premise that ideas drive TFP growth and the observation that TFP growth has fallen since the Peak regime of 1946/1969, Gordon reaches the seemingly inescapable conclusion that TFP growth is declining because we are running out of ideas. And, if we are running out of ideas, it inevitably follows that "future economic growth may gradually sputter out" (Gordon 2012). Needless to say, the end of growth would have profound and terrible consequences for all aspects of economic, political, and social life.

Our analysis offers a way out of this dismal conclusion. We find that the poor TFP growth performance of the US economy since 1980 is not due to a lack of ideas but to a lack of idea processing capability. Our analysis further suggests that the economy's idea processing capability can be (and has been) influenced by policy, and in particular by policies that affect financial market effectiveness. Consequently, the poor TFP growth performance of the US economy may be due to (cheaply) correctable policy failings rather than to a brute fact of nature that we must simply accept and deal with as best we can.

Our analysis here is exploratory. We focus upon endogenizing idea processing capability in a TFP growth model in which both idea processing capability and idea supply play a central role. To do that, we abstract away from important features of endogenous growth theory. We aim to more fully incorporate these features in future work. It may happen that doing so alters some of the conclusions we reach here. But, given the stakes in the future-of-growth debate, we should find out.

All over the world, gay men are met with more aversion than gay women; gay men are perceived as mentally unhealthy and more threatening (e.g., more likely to be child molesters)

Sexual Orientation as Gendered to the Everyday Perceiver. P. J. Henry & Russell L. Steiger. Sex Roles, Aug 10 2022. https://rd.springer.com/article/10.1007/s11199-022-01313-1

Abstract: We present an integrated interdisciplinary review of people’s tendency to perceive sexual orientation as a fundamentally gendered phenomenon. We draw from psychology and other disciplines to illustrate that, across cultures and over time, people view and evaluate lesbians, gay men, and bisexuals through how they conform or fail to conform to traditional gender expectations. We divide the review into two sections. The first draws upon historical, anthropological, legal, and qualitative approaches. The second draws upon psychological and sociological quantitative studies. A common thread across these disciplines is that gender and sexual orientation are inseparable constructs in the mind of the everyday social perceiver.


Temporal variations in individuals' religiosity did not predict variations in happiness

Temporal Associations between Religiosity and Subjective Well-Being in a Nationally Representative Australian Sample. Mohsen Joshanloo. The International Journal for the Psychology of Religion, Aug 10 2022. https://doi.org/10.1080/10508619.2022.2108257

Abstract: This study examined the between-person and within-person associations between 4 components of subjective well-being (i.e., general life satisfaction, satisfaction with life domains, positive affect, and negative affect) and 2 components of religiosity (i.e., religious salience and religious participation). Data were drawn from the Household, Income, and Labor Dynamics in Australia (HILDA) Survey, collected 5 times between 2004 and 2018. The Random-Intercept Cross-Lagged Panel Model was used to analyze the data. Results showed weak between-person associations between the components of religiosity and subjective well-being. At the within-person level, the cross-lagged associations between religiosity and subjective well-being variables were trivial and nonsignificant. This indicates a lack of robust temporal associations between religiosity and subjective well-being when measured at intervals of a few years.


Higher IQ in adolescence was related to higher openness, lower neuroticism, lower extraversion, lower agreeableness and lower conscientiousness 50 years later

IQ in adolescence and cognition over 50 years later: The mediating role of adult personality. Yannick Stephan et al. Intelligence, Volume 94, September–October 2022, 101682. https://doi.org/10.1016/j.intell.2022.101682

Highlights

• Higher IQ in adolescence was related to better cognition 50 years later.

• Higher IQ was related to higher openness to experience in adulthood.

• Higher openness mediated the link between adolescent IQ and late life cognition.

Abstract: There is substantial evidence for the association between higher early life IQ and better cognition in late life. To advance knowledge on potential pathways, the present study tested whether Five-Factor Model personality traits in adulthood mediate the association between adolescent IQ and later-life cognition. Participants were from the Graduate sample of the Wisconsin Longitudinal Study on Aging (WLS; N = 3585). IQ was assessed in 1957 (about age 17), personality was assessed in 2003–2005 (age = 64), and cognition was assessed in 2011 (age = 71). Controlling for demographic factors, higher IQ in adolescence was related to higher openness, lower neuroticism, lower extraversion, lower agreeableness and lower conscientiousness in adulthood. Higher openness partially mediated the association between higher IQ and better cognition. Additional analyses indicated that the pattern of associations between IQ, personality and cognition was similar when the polygenic score for cognition was included as an additional covariate. Although effect sizes were small, this study provides new evidence that openness in adulthood is on the pathway between early life IQ and later-life cognition.


Keywords: IQ; Personality traits; Cognition; Mediation; Longitudinal study


Wednesday, August 10, 2022

Bonobos were less ready to make coveted fruit juice available to their peers than chimpanzees

Self-interest precludes prosocial juice provisioning in a free choice group experiment in bonobos. Jonas Verspeek, Edwin J. C. van Leeuwen, Daan W. Laméris & Jeroen M. G. Stevens. Primates, Aug 10 2022. https://rd.springer.com/article/10.1007/s10329-022-01008-x

Abstract: Previous studies on prosociality in bonobos have reported contrasting results, which might partly be explained by differences in experimental contexts. In this study, we implement a free choice group experiment in which bonobos can provide fruit juice to their group members at a low cost to themselves. Four out of five bonobos passed a training phase, understood the setup, and provisioned fruit juice in a total of 17 dyads. We show that even in this egalitarian group with a shallow hierarchy, the majority of pushing was done by the alpha female, who monopolized the setup and provided most juice to two adult females, her closest social partners. Nonetheless, the bonobos in this study pushed less frequently than the chimpanzees in the original juice-paradigm study, suggesting that bonobos might be less likely than chimpanzees to provide benefits to group members. Moreover, in half of the pushing acts, subjects obtained juice for themselves, suggesting that juice provisioning was partly driven by self-regarding behavior. Our study indicates that a more nuanced view of the prosocial food provisioning nature of bonobos is warranted, but based on this case study, we suggest that the observed sex differences in providing food to friends correspond with the socio-ecological sex difference in cooperative interactions in wild and zoo-housed bonobos.


Courts do not let offenders with multiple personality disorder, which has imperceptibly morphed into dissociative identity disorder, get off the hook

Dissociative Identity Disorder and the Law: Guilty or Not Guilty? Stefane M. Kabene, Nazli Balkir Neftci and Efthymios Papatzikis. Front. Psychol., August 9 2022. https://doi.org/10.3389/fpsyg.2022.891941


Abstract: Dissociative identity disorder (DID) is a dissociative disorder whose recorded incidence has risen significantly in the past few decades. Fewer than 50 DID cases were recorded between 1922 and 1972, while 20,000 cases had been recorded by 1990. It therefore becomes of great significance to assess the various concepts related to DID to further understand the disorder. The current review aims to understand whether an individual suffering from DID is legally responsible for a crime committed, and whether he or she can be considered competent to stand trial. These two questions are addressed by first shedding light on the nature of the disorder and second by examining past legal cases. Although the very nature of the disorder is characterized by dissociative amnesia, and the host personality may have limited or no contact with the alters, there is no consensus within the legal system on whether DID patients should be held responsible for their actions. Furthermore, courts generally deny insanity claims from patients suffering from DID. In conclusion, future studies in the field should incorporate primary data, as the extensive reliance on secondary data forces us to accept previously drawn conclusions with no opportunity to verify them.

Discussion and Analysis

The literature review suggests a general tendency on the courts’ part not to accept DID claims and hence not to exempt the person from responsibility on the basis of NGRI-DID. The major reasons for this tendency were the unreliability of scientific methods for diagnosing DID, the possibility that a suspect could malinger DID convincingly enough that certain specialists give the desired diagnosis (Ms. Orndorff’s case), the social response to a successful NGRI-DID defense, and the legal immateriality of DID to responsibility (the alter in control being sane and competent to stand trial). Moreover, the case of Maxwell clearly showed that the person can commit the crime again, which makes society hardly willing to accept a verdict of not guilty. Therefore, prosecutors tend to find such defendants responsible in light of past experience and the research done on DID.

The complexity of DID is also reflected in the differing opinions on the reliability of the tests administered to diagnose it. Steinberg suggested that the introduction of the Structured Clinical Interview for DSM-III-R (SCID) and the Schedule for Affective Disorders and Schizophrenia (SADS) increased reliability in diagnosing disorders such as DID (Steinberg et al., 1993). The case of Ms. Orndorff, however, occurred in 2000 and suggests that diagnostic capabilities for DID were still lacking and hence insufficient to diagnose DID accurately.

As mentioned before, the courts tend to deny NGRI-DID claims from DID patients who commit crimes. It is interesting, however, to check whether similar illnesses, such as epileptic seizures, face the same level of denial in the courts. Epileptic seizures resemble DID in terms of legal responsibility in that, during a seizure, a person may engage in “actions such as picking at the clothes, trying to remove them, walking about aimlessly, picking up things, or mumbling” (Farrell, 2011b). Of greater importance is the fact that “following the seizure, there will be no memory of it” (Farrell, 2011a). Because the actions performed during a seizure are involuntary, and the person is unable to appreciate the actions or their consequences and has no memory of the events beyond what ordinary forgetfulness would explain, the court should consider the person insane at the moment of committing the crime. Farrell elaborates on three cases of successful defenses on the basis of “non-insane automatism” (the category under which courts nowadays classify epileptic seizures). In all three cases, the courts declared the defendants not guilty, as their actions were involuntary and they had no memory of the events.

In light of the above-mentioned cases, it is interesting to see the drastic difference in the courts’ opinions about these similar illnesses in terms of legal responsibility. In both cases, the defendants have no memory of the actions committed. However, DID patients generally harbor an identity that was aware of the wrongdoing and carries the memory of it, whereas with epileptic seizures there is no trace suggesting that the defendant retains any memory of the wrongful conduct. One could also argue that in the case of an epilepsy-suffering patient we are concerned with a single identity subject to a biological illness, so it is easy to say that the person’s actions were indeed involuntary, whereas with DID we are talking about entirely different identities, each with its own mindset, within a single individual, and with very limited information regarding the disorder’s etiopathology. This means the court can be reasonably confident that epilepsy truly belongs to an individual, while a DID patient can potentially malinger the illness. Even though a few studies investigating the neurological correlates of DID have emerged within the last few years, research in this domain is still in its infancy.

Looking at the root causes of DID, severe psychological trauma or prolonged abuse in childhood are the most likely triggers that cause the brain to activate self-defense mechanisms and protect itself through the dissociation of identities. Because DID does not arise on its own but follows severe trauma, it should be considered a mental illness and thus a sufficient basis for finding the person not guilty by reason of insanity (NGRI-DID). Moreover, both sexes can be exposed to assault or negative experiences in childhood, and such victimization is correlated with later being diagnosed with DID. Men and women showed similar types of identities and behavior, which leads to the conclusion that these crimes can be committed by anybody regardless of sex (O’Boyle, 1993). Therefore, the framework for justifying or punishing a person who commits wrongdoing should be the same for males and females.

Many psychiatrists tend to question whether a person is really suffering from DID or is pretending in order to obtain an NGRI-DID verdict. Involving only one specialist, however, might not be enough, since we are all human beings who think subjectively based on past experience and beliefs. In the case of Thomas Huskey, the advising psychiatrist already held the strong belief that the murderer was just a great actor, and therefore did not attempt to search for the root cause of behavior that was hard to explain at the time (Haliman, 2015). Moreover, even involving several professionals may not suffice, since opinions can differ based on individual observation, and even the final judgment can be affected by groupthink. In the case of Ms. Moore, it was easier to find her guilty because both identities were directly involved in the act, so even the presence of other minor identities would not excuse her wrongdoing. Indeed, she was not even diagnosed with DID during the trial and was found responsible regardless of her mental illness (Moore, 1988).

Regarding the doubts over the reliability of measures for assessing DID, very few mechanisms are so far available to psychiatrists for evaluating a patient’s dissociative disorders. It has been found that the long interviews used during evaluation allow the different identities present within an individual to emerge. The length of the interviews and evaluation also reduces the possibility of a patient malingering the diagnosis. Kluft (1999) stated that “simulated DID presents crude manifestations of the disorder, such as stereotypical good/bad identity states and a preoccupation with the circumstances the individual hopes to avoid by obtaining a DID diagnosis.” Kluft also suggested that it is difficult for an individual to maintain the voice, set of body gestures, and memory of every personality he or she is trying to simulate. Hence, actually malingering DID is extremely challenging, and cases of malingered DID should be very rare compared to correctly diagnosed DID.

As for a framework for deciding on a person’s liability on the basis of DID, the diagnosis has proven so complex that no universal method can be applied. However, a set of steps should be followed to assess responsibility for the crime committed. Initially, the patient should be evaluated by several independent psychiatrists; in our opinion, the DID diagnosis should be considered valid only when all the psychiatrists involved agree that the defendant suffers from DID. Based on the diagnosis, the question of competency to stand trial must then be answered. Next, the court should select the appropriate method for assessing responsibility: the “host-alter” method is best when a dominant personality is present and the crime was committed by an alter identity, while the “alter-in-control” method should be used when there is no clear evidence of a dominant identity. If the method used supports the conclusion that the identity evaluated was insane at the time of the crime, the defendant should be found not guilty.