Thursday, May 5, 2022

Current study supports previous findings showing that people living in slums tend to report higher levels of life satisfaction than one might expect; factors other than objective poverty make life more, or less, satisfying

Life Satisfaction among the Poorest of the Poor: A Study in Urban Slum Communities in India. Esther Sulkers & Jasmijn Loos. Psychological Studies, May 4 2022. https://link.springer.com/article/10.1007/s12646-022-00657-8

Abstract: This study investigates the level and predictors of life satisfaction in people living in slums in Kolkata, India. Participants of six slum settlements (n = 164; 91% female) were interviewed and data on age, gender, poverty indicators and life satisfaction were collected. The results showed that the level of global life satisfaction in this sample of slum residents did not significantly differ from that of a representative sample of another large Indian city. In terms of life-domain satisfaction, the slum residents were most satisfied with their social relationships and least satisfied with their financial situation. Global life satisfaction was predicted by age, income and non-monetary poverty indicators (deprivation in terms of health, education and living standards) (R2 15.4%). The current study supports previous findings showing that people living in slums tend to report higher levels of life satisfaction than one might expect given the deprivation of objective circumstances of their lives. Furthermore, the results suggest that factors other than objective poverty make life more, or less, satisfying. The findings are discussed in terms of theory about psychological adaptation to poverty.

Discussion

This study investigated the level and predictors of life satisfaction of people living in slums in Kolkata, India. In line with previous research, it was found that slum residents were less dissatisfied with their lives than one would have expected given their dire living conditions. For the prediction of global life satisfaction, income (monetary poverty) was complemented with the Multidimensional Poverty Index (non-monetary poverty) and fear of eviction. The results showed that not only income but also non-monetary indices such as education, living standards and fear of eviction are important correlates of the life satisfaction of people living in slums.

The level of global life satisfaction observed in this study was comparable to that measured in a representative sample from Delhi, another large metropolis in India. Although counterintuitive, our finding of a relatively high level of life satisfaction in a marginalized group is not new. For example, a study among the poorest of the poor in South Africa found that landfill waste pickers scored higher on life satisfaction than the national average (Blaauw et al., 2020). The same study found a sizeable group of waste pickers who were very satisfied with their lives. Our findings also resemble those reported by Biswas-Diener and Diener (2001) and Cox (2012), who found slightly positive global life satisfaction among urban slum residents and dump dwellers in Kolkata, India and Managua, Nicaragua, respectively.

With regard to domain satisfaction, the slum residents were fairly satisfied with three of the five life domains assessed in this study, i.e., their social relationships and their physical and psychological health. They were least satisfied with their financial situation and physical environment. Similar findings have been reported in previous studies addressing domain satisfaction in poor populations (Biswas-Diener & Diener, 2001; Cox, 2012; Sharma et al., 2019). Various scholars have emphasized the importance of social ties for well-being (Diener & Seligman, 2004), especially in poor populations (Boswell & Stack, 1975; Domínguez & Watkins, 2003; Henly, 2007). Social connectedness has been associated with access to various forms of social support and with cognitive processes associated with subjective well-being, such as life satisfaction, enhanced self-esteem, self-worth, and purpose and meaning in life (Thoits, 2011). Social ties may serve as a private safety net that a poor family can fall back on in times of need (Edin & Lein, 1997).

In terms of prediction, higher levels of life satisfaction were related to age, income and deprivation. Due to shared variance with the MPI, fear of eviction did not explain unique variance in life satisfaction. Specifically, younger residents and those with higher incomes and lower scores on the MPI reported higher levels of global life satisfaction. Our findings regarding the relationship between age and global life satisfaction resemble those reported by Cox (2012), who examined age as a predictor of life satisfaction in poor populations in Nicaragua, and those based on data from the Gallup World Poll (Fortin et al., 2015). Our results are in line with previous work emphasizing the role of income in life satisfaction (Whitaker & Moss, 1976). Moreover, the income–life satisfaction relationship in this study was comparable to the average r effect size of 0.28 computed for low-income samples in developing countries in Howell and Howell’s (2008) meta-analysis. The current study also confirms the results of research reporting a negative relationship between the MPI and life satisfaction in people living in the poorest districts of Peru (Mateu et al., 2020) and India (Strotmann & Volkert, 2018).

Overall, data from several studies suggest that slum residents in developing countries, such as India, are more satisfied with their lives than one would expect based on their living conditions. This contradicts the common-sense belief that poor people are unhappy by definition. Such a judgment is, however, an illustration of the “focusing illusion” (Schkade & Kahneman, 1998), which has received a lot of attention in the literature on life satisfaction. The “focusing illusion” occurs when individuals exaggerate the importance of a single factor (e.g., living circumstances or material wealth) for well-being. Going beyond the stereotype that poverty equates to unhappiness may provide a different picture. Research suggests that people living in poverty may consider different aspects of life important for their well-being than people from a more affluent background. For example, extremely poor Nicaraguan garbage dump dwellers in the study by Vásquez-Vera et al. (2017) reported that their happiness did not emerge from job status or income, but rather from meaningful interactions and relationships with others.

Moreover, the explanatory power of objective poverty (as measured by income and the MPI) for life satisfaction was limited. This is in line with a vast body of research showing that objective life conditions explain only a minor part of inter-individual differences in life satisfaction (Argyle, 2013; Diener & Biswas-Diener, 2002). How hardship is perceived, on the other hand, may be of much greater importance for the appraisal of one's life (Veenhoven, 2005). Poverty also has a subjective dimension: people defined as poor by objective standards do not necessarily feel poor. Indeed, results from a recent meta-analysis (Tan et al., 2020) indicate that life satisfaction has a stronger link to subjective socio-economic status than to objectively measured income or education.

Our findings could be interpreted in the light of the human capacity to adapt to environmental demands. Adaptability is a self-regulatory resource that allows individuals to adjust to good and bad phenomena by altering their standards, thoughts, behaviors and emotions to meet the requirements of the situation at hand. Adaptability can help prevent or mitigate the negative impact of challenge and adversity on well-being (Carver & Scheier, 2001). According to multiple discrepancies theory (Michalos, 1985), life satisfaction relates to the discrepancy between what one has and what one wants (desire discrepancy) and between what one has and what relevant others have (social comparison discrepancy) (Brown et al., 2009). Perceived negative discrepancies between one’s standards and one’s actual situation have a negative impact on life satisfaction. In the context of slums, perceived discrepancies between what one has (a slum dwelling) and what one wants (a decent house), or between what one has (no income) and what relevant others have (an improvement in daily wage), could be a source of dissatisfaction with life. The effects of perceived negative discrepancies can be counterbalanced, however, by self-regulatory discrepancy-reducing processes such as choosing a relevant reference group for social comparison and lowering aspirations (Carver & Scheier, 2001).

Regarding social comparison, it has been found that people have a natural tendency to compare themselves with others (Festinger, 1954), in particular with relevant reference groups such as people with a similar ethnicity, background or occupation (Khaptsova & Schwartz, 2016). In the case of low status or minority groups, several studies found that exposure to a successful referent from a low-status group is more pleasant and meaningful than exposure to a referent from a high-status group (Blanton et al., 2000; Leach & Smith, 2006; Mussweiler et al., 2000). This highlights the value of identifying local champions (e.g., former classmates who have excelled in school or sports) to serve as role models for young people living in low resource settings (Kearney & Levine, 2020).

Lowering aspirations is another discrepancy reducing mechanism. This has been observed in deprived neighborhoods including two Kenyan urban slums (Kabiru et al., 2013) where the constraints of the environment had a leveling effect on young people’s occupational and educational aspirations. Similar findings have been reported for youth in disadvantaged neighborhoods in the US and Scotland (Furlong et al., 1996; Stewart et al., 2007). In the case of Kolkata, it is possible that slum residents compare themselves mostly to people within their community and set their aspirations and goals accordingly. Indeed, research has found that expectations of life and oneself are influenced by one’s relative position and social norms within one’s community (Knight et al., 2009). Both social comparison and lowering aspirations are self-protective strategies that may help to ensure subjective well-being in situations in which the remediation of disadvantage is beyond the scope of personal control (Blanton et al., 2000; Leach & Smith, 2006; Mussweiler et al., 2000). Unfortunately, such strategies may also lead to aspiration traps where people under-aspire in occupational and educational goals, thereby contributing to the intergenerational transmission of poverty (Flechtner, 2014).

This study is one of the few examining life satisfaction in people living in a very low resource setting such as an urban slum in India. Other strengths are the relatively large sample size and the inclusion of non-monetary indicators of objective poverty as predictors of life satisfaction. The use of non-monetary poverty indices such as the MPI in life satisfaction research is relatively new. This approach is in line with new perspectives on measuring the material situation, which combine income with a direct measure such as a deprivation index (Christoph, 2010). Our results (showing an incremental contribution by the MPI) suggest the added value of combining monetary (income) and non-monetary (MPI) measures when analyzing the relationship between the material situation and life satisfaction.

Nevertheless, some limitations merit attention. First of all, this study only included objective measures of poverty. The addition of subjective measures of poverty (the individual’s perception of his/her financial and material situation) could have offered a more complete picture of the poverty–life satisfaction relationship. Secondly, the cross-sectional design of this study does not allow causality to be established. Thirdly, because the interviews were conducted in person and in the participants’ homes, where onlookers or family members could be within earshot while the questions were being asked, the research design may have been prone to social desirability bias (Tourangeau et al., 2000). Finally, the fact that the sample was predominantly female was most likely caused by the interviews being conducted during the day, when women were more typically at home. This may limit the generalizability of the results. However, a recent meta-analysis of 281 samples (Batz-Barbarich et al., 2018) did not show significant gender differences in life satisfaction. In addition, the study by Biswas-Diener and Diener (2001), which was conducted in a comparable sample in Kolkata, showed no significant differences in life satisfaction between men and women. This gives us no reason to believe that the gender imbalance in the sample influenced the life satisfaction outcomes in the current study.

The results of the present study highlight the need for further research. A mixed-methods design adding qualitative approaches to the assessment of life satisfaction could provide a more holistic and contextual understanding of slum residents’ perceptions and experiences in daily life (Camfield et al., 2009). Secondly, in addition to measures of objective poverty, further research should also include subjective indices of poverty, as these predict life satisfaction better than objective poverty measures do (Tan et al., 2020). Lastly, it would be valuable to examine in more depth the psychological processes underlying the life satisfaction of people living in slums, such as social comparison and aspiration setting.

In terms of clinical practice, practical assistance such as slum upgrading should be complemented with efforts to improve the life satisfaction of slum residents. Research highlights the benefits of a positive mindset, including a less pronounced stress response (Smyth et al., 2017), better role functioning (Moskowitz et al., 2012) and more efficient decision making (Isen, 2000). This has been explained by research showing that a positive mental state helps build coping resources by broadening the individual’s attention and action repertoire (Fredrickson, 2004). Other research has shown that a positive mindset buffers against the negative psychological impact of adversity (Suldo & Huebner, 2004; Veenhoven, 2008). Psychological interventions aimed at improving the mental health of people living in slums should thus not focus exclusively on the reduction of problems but also on the enhancement of positive mental states. The few studies that have examined the effect of individual and group-based positive psychology interventions in disadvantaged populations in developing countries show promising results, including large increases in life satisfaction, positive affect, positive thoughts and generalized self-efficacy, and reductions in self-reported symptoms of depression and negative affect (Ghosal et al., 2013; Sundar et al., 2016). Efforts to improve the life satisfaction of slum residents may thus be worthwhile, as they may help residents deal with the harsh reality of their lives.

Social desirability bias leads some unvaccinated individuals to claim they are vaccinated; conventional survey studies thus likely overestimate vaccination coverage because of misreporting

Overestimation of COVID-19 Vaccination Coverage in Population Surveys Due to Social Desirability Bias: Results of an Experimental Methods Study in Germany. Felix Wolter et al. Socius: Sociological Research for a Dynamic World, May 4, 2022. https://doi.org/10.1177/23780231221094749

Abstract: In Germany, studies have shown that official coronavirus disease 2019 (COVID-19) vaccination coverage estimated using data collected directly from vaccination centers, hospitals, and physicians is lower than that calculated using surveys of the general population. Public debate has since centered on whether the official statistics are failing to capture the actual vaccination coverage. The authors argue that the topic of one’s COVID-19 vaccination status is sensitive in times of a pandemic and that estimates based on surveys are biased by social desirability. The authors investigate this conjecture using an experimental method called the item count technique, which provides respondents with the opportunity to answer in an anonymous setting. Estimates obtained using the item count technique are compared with those obtained using the conventional method of asking directly. Results show that social desirability bias leads some unvaccinated individuals to claim they are vaccinated. Conventional survey studies thus likely overestimate vaccination coverage because of misreporting by survey respondents.

Keywords: COVID-19, vaccine coverage, sensitive topics, social desirability, item count technique

Different methods tend to produce different results. The fact that COVID-19 vaccination coverage estimates differ depending on the method of data collection should come as no surprise to those in the scientific community. But such discrepancies are troublesome not only because they make it more difficult to develop prognoses and plan policy but also because they can undercut trust in governments and institutions, which is already at a premium in many regions in the ongoing pandemic.

Our analysis of the use of survey data in estimating vaccine coverage underlines those difficulties: although surveys may be useful in the context of many other types of vaccines, we argue that the topic of COVID-19 vaccination in late 2021 is too sensitive to rely solely on survey data for coverage rates. Although it cannot be ruled out that official statistics based on reporting by hospitals and physicians are also biased (perhaps because of incomplete or erroneous reporting), we showed in this investigation that COVID-19 vaccination coverage estimated on the basis of survey data is likely biased upward by social desirability. Providing individuals with an anonymous way to report their unvaccinated status resulted in an estimated vaccination coverage that was significantly lower than the one based on the conventional method of direct questioning (DQ).
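The item count technique behind this comparison has a simple estimator: respondents report only how many items on a list apply to them, and the prevalence of the sensitive item is the mean count gap between the control list and the treatment list (which adds the sensitive item). A minimal sketch with purely made-up counts, not the study's data:

```python
import statistics

# Hypothetical ICT data: each respondent reports only the NUMBER of list
# items that apply to them, never which ones, so the sensitive answer
# ("I am not vaccinated") stays anonymous at the individual level.
control_counts = [2, 1, 3, 2, 0, 2, 1, 3, 2, 1]    # baseline items only
treatment_counts = [2, 2, 3, 2, 0, 3, 1, 3, 2, 1]  # baseline + sensitive item

# Difference-in-means estimator: because respondents are randomized to
# lists, the mean count gap estimates the sensitive trait's prevalence.
prevalence = statistics.mean(treatment_counts) - statistics.mean(control_counts)
print(f"Estimated share with the sensitive trait: {prevalence:.2f}")
```

The anonymity comes at a price: the baseline items add noise to the estimate, which is the source of the inflated standard errors the authors discuss below.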

However, there are some important limitations to note. First, this article is written within the context of Germany in the fall of 2021. The local situation may change in the future, and it may be that the topic of COVID-19 vaccination is not as sensitive in other parts of the world. In some countries, vaccination coverage is extremely high. For example, in Portugal and Singapore, nearly 90 percent of adults are fully vaccinated as of November 2021. This means that in those countries, the question of vaccine status is likely much less sensitive. After all, as Tourangeau and Yan (2007:860) noted, “a question about voting is not sensitive for a respondent who voted,” so a question about one’s vaccination status is likely not sensitive for someone who is vaccinated.

Another limitation of the study is that we cannot compare our survey estimates with the survey estimates of the RKI or any other institute, because our survey was a nonprobability sample of a particular group of Internet users. However, the randomized experimental design means that we can indeed compare estimates between groups (DQ and ICT) within the study. This also entails that although we have an unbiased estimate of the sample treatment effect (with high internal validity resulting from a strict experimental setup), we have no guarantee that the population treatment effect is the same. This relates both to the German population and to other countries. As we noted earlier, question sensitivity probably varies across different populations, and hence the treatment effect of applying ICT to survey questions on COVID-19 vaccination is likely to also vary. Moreover, we cannot make any statements about the validity of estimates on the basis of data collected directly from health officials and workers, such as those collected in the RKI’s Digital Vaccine Coverage Monitor. It is possible that incomplete data transferred from vaccination centers, hospitals, and physicians may lead to other forms of bias (especially if the quality of the reporting is dependent on other unobserved factors).

Another issue is statistical power. If we follow the advice of Blair, Coppock, and Moor (2020), our sample size implies a power of less than 80 percent for a DQ–ICT difference of 10 percentage points. This is a notorious problem of ICT procedures, which always come with highly inflated standard errors compared with conventional estimates. With respect to future studies, we strongly recommend ensuring sufficiently large sample sizes on the one hand and using more advanced ICT setups that help boost the statistical efficiency of the estimates on the other (for some propositions, see Aronow et al. 2015).
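The power problem can be made concrete with a back-of-envelope normal-approximation calculation. All numbers below (sample sizes, baseline prevalence, item variance) are illustrative assumptions, not the study's actual design; the point is only that the ICT arm's variance dominates and drives power down:

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()

def power_dq_vs_ict(n_dq, n_ict, p=0.80, delta=0.10, var_items=1.0, alpha=0.05):
    """Approximate power to detect a DQ-ICT gap of `delta` (two-sided test)."""
    # DQ arm: ordinary binomial sampling variance.
    se_dq2 = p * (1 - p) / n_dq
    # ICT arm: a difference of two list means; the baseline-item variance
    # enters twice, which is why ICT standard errors are inflated.
    se_ict2 = 4 * var_items / n_ict  # n_ict split evenly across the two lists
    se = sqrt(se_dq2 + se_ict2)
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(delta / se - z_crit)

print(f"n=1000 per method: power = {power_dq_vs_ict(1000, 1000):.2f}")
print(f"n=4000 per method: power = {power_dq_vs_ict(4000, 4000):.2f}")
```

Under these assumed numbers, a study of around a thousand respondents per method falls well short of 80 percent power for a 10-point gap, while quadrupling the sample clears it, which is the substance of the authors' recommendation.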

More work is needed to discover further sources of bias (e.g., reaching nonnative speakers in surveys) to get a better idea of the true vaccination coverage. Until then, discrepancies in statistics will continue to exist for well-known reasons such as sampling error, survey biases, systematic under- and overreporting by health organizations, and others. The ultimate goal should be to work toward understanding as many sources of bias and inaccuracy as possible in order to provide the general public with honest and transparent information and avoid confusion and the potential for misrepresentation of statistics.

Heterosexual-identified females were much more likely than males to report two or more lifetime same-sex partners

Heterosexual Identification and Same-Sex Partnering: Prevalence and Attitudinal Characteristics in the USA. Tony Silva. Archives of Sexual Behavior, May 3 2022. https://link.springer.com/article/10.1007/s10508-022-02293-9

Abstract: This paper used the 2011–2017 National Survey of Family Growth to estimate population sizes and attitudinal characteristics of heterosexual-identified men who have sex with men (MSM) and women who have sex with women (WSW) aged 15–44 years. Analyses estimated population sizes in stages: after excluding respondents who reported only one lifetime same-sex partner, which happened before the age of 15; after excluding males who reported nonconsensual male–male sex; after excluding respondents who reported only one lifetime same-sex partner, regardless of the age at which that experience occurred; after excluding respondents who reported only two lifetime same-sex partners, the first of which occurred before age 15; and after excluding males who reported male–male sex work. The broadest criteria included many individuals with limited same-sex sexual histories or those who experienced nonconsensual sex or potentially coerced sex in youth. After excluding those respondents, analyses showed that heterosexual-identified MSM and WSW had a diversity of attitudes about gender and LGB rights; only a distinct minority were overtly homophobic and conservative. Researchers should carefully consider whether to include respondents who report unwanted sexual contact or sex at very young ages when they analyze sexual identity–behavior discordance or define sexual minority populations on the basis of behavior.


A Novel Human Sex Difference: Male Sclera Are Redder and Yellower than Female Sclera

A Novel Human Sex Difference: Male Sclera Are Redder and Yellower than Female Sclera. Sarah S. Kramer & Richard Russell. Archives of Sexual Behavior, May 4 2022. https://link.springer.com/article/10.1007/s10508-022-02304-9

Abstract

In a seminal study, Kobayashi and Kohshima (1997) found that the human sclera—the white of the eye—is unique among primates for its whitish color, and subsequent work has supported the notion that this coloration underlies the human ability to gaze follow. Kobayashi and Kohshima also claimed that there is no significant sex difference in sclera color, though no data were presented to support the claim. We investigated sclera color in a standardized sample of faces varying in age and sex, presenting the first data comparing male and female sclera color. Our data support the claim that indeed there is a sex difference in sclera color, with male sclera being yellower and redder than female sclera. We also replicated earlier findings that female sclera vary in color across the adult lifespan, with older sclera appearing yellower, redder, and slightly darker than younger sclera, and we extended these findings to male sclera. Finally, in two experiments we found evidence that people use sclera color as a cue for making judgements of facial femininity or masculinity. When sclera were manipulated to appear redder and yellower, faces were perceived as more masculine, but were perceived as more feminine when sclera were manipulated to appear less red and yellow. Though people are typically unaware of the sex difference in sclera color, these findings suggest that people nevertheless use the difference as a visual cue when perceiving sex-related traits from the face.


Insurance: In many golf circles, it was (and still is) customary for the lucky golfer to buy drinks for everyone in the clubhouse after landing a hole-in-one; this often resulted in prohibitively expensive bar tabs

I loved this:


In many golf circles, it was (and still is) customary for the lucky golfer to buy drinks for everyone in the clubhouse after landing a hole-in-one. This often resulted in prohibitively expensive bar tabs.


And an industry sprouted up to protect these golfers.


A newspaper archive analysis by The Hustle revealed that hole-in-one insurance firms sprouted up as early as 1933.


Under this model, golfers could pay a fee — say, $1.50 (about $35 today) — to cover a $25 (~$550) bar tab. And as one paper noted in 1937: “The way some of the boys have been bagging the dodos, it might not be a bad idea.”


Though the concept largely faded away in the US, it became a big business in Japan, where golfers who landed a hole-in-one were expected to throw parties “comparable to a small wedding,” including live music, food, drinks, and commemorative tree plantings.


By the 1990s, the hole-in-one insurance industry had a total market value of $220m. An estimated 30% of all Japanese golfers shelled out $50-$70/year to insure themselves against up to $3.5k in expenses.


Source: The strange business of hole-in-one insurance. Zachary Crockett. The Hustle. Apr 22 2022. https://thehustle.co/the-strange-business-of-hole-in-one-insurance/amp/


h/t Tyler Cowen https://marginalrevolution.com/marginalrevolution/2022/05/insurance-markets-in-everything-4.html



Wednesday, May 4, 2022

A large majority of the population agreed that certain people have better aesthetic taste than others; only 1.3% considered good taste to be determined by the taste of experts

De Gustibus Est Disputandum: An Empirical Investigation of the Folk Concept of Aesthetic Taste. Constant Bonard, Florian Cova, Steve Humbert-Droz. Chapter in Perspectives on Taste, 1st ed, Routledge, 2022. ISBN 9781003184225. https://www.taylorfrancis.com/chapters/edit/10.4324/9781003184225-7/de-gustibus-est-disputandum-constant-bonard-florian-cova-steve-humbert-droz

Abstract: Past research on folk aesthetics has suggested that most people are subjectivists when it comes to aesthetic judgment. However, most people also make a distinction between good and bad aesthetic taste. To understand the extent to which these two observations conflict with one another, we need a better understanding of people’s everyday concept of aesthetic taste. In this chapter, we present the results of a study in which participants drawn from a representative sample of the US population were asked whether they usually distinguish between good and bad taste, how they define them, and whether aesthetic taste can be improved. Those who answered positively to the first question were asked to provide their definition of good and bad taste, while those who answered positively to the third question were asked to detail by what means taste can be improved. Our results suggest that most people distinguish between good and bad taste and think taste can be improved. People’s definitions of good and bad taste were varied and were torn between very subjectivist conceptions of taste and others that lent themselves to a more objectivist interpretation. Overall, our results suggest that the tension Hume observed in conceptions of aesthetic taste is still present today.



The relationship of facial width-to-height ratio and aggressiveness is strongest for males at 27–33 and females at 34–61

Tracking sexual dimorphism of facial width-to-height ratio across the lifespan: implications for perceived aggressiveness. Stephanie Summersby, Bonnie Harris, Thomas F. Denson and David White. Royal Society Open Science, Vol 9 Iss 5, May 4 2022. https://doi.org/10.1098/rsos.211500

Abstract: The facial width-to-height ratio (FWHR) influences social judgements like perceived aggression. This may be because FWHR is a sexually dimorphic feature, with males having higher FWHR than females. However, evidence for sexual dimorphism is mixed, little is known about how it varies with age, and the relationship between sexual dimorphism and perceived aggressiveness is unclear. We addressed these gaps by measuring FWHR of 17 607 passport images of male and female faces across the lifespan. We found larger FWHR in males only in young adulthood, aligning with the stage most commonly associated with mate selection and intrasexual competition. However, the direction of dimorphism was reversed after 48 years of age, with females recording larger FWHRs than males. We then examined how natural variation in FWHR affected perceived aggressiveness. The relationship between FWHR and perceived aggressiveness was strongest for males at 27–33 and females at 34–61. Raters were most sensitive to differences in FWHR for young adult male faces, pointing to enhanced sensitivity to FWHR as a cue to aggressiveness. This may reflect a common mechanism for evaluating male aggressiveness from variability in structural (FWHR) and malleable (emotional expression) aspects of the face.

4. General discussion

The present study was a large-scale investigation into sexual dimorphism in the FWHR across the lifespan. We found that sexual dimorphism was present in our sample of over 17 000 Australian citizens. In younger to middle adulthood, the FWHR in men was greater than the FWHR in women. From the age of 48 onwards, this pattern was reversed, such that women had larger FWHRs than men. One qualification is that the differences between FWHRs in men and women were very small across the age groups. These small effect sizes are consistent with the effect size reported for sexual dimorphism in a meta-analysis (d = 0.11; [5]).
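The measure under discussion is a simple ratio of two facial distances: bizygomatic width (cheekbone to cheekbone) divided by upper-face height (upper lip to brow). A minimal sketch with hypothetical landmark coordinates, purely to illustrate the formula rather than the study's measurement pipeline:

```python
# Hypothetical 2D landmark positions in pixels (x, y); real pipelines
# would extract these from annotated face images.
left_zygion = (102.0, 240.0)
right_zygion = (298.0, 240.0)
upper_lip = (200.0, 330.0)
brow = (200.0, 210.0)

# FWHR = bizygomatic width / upper-face height.
bizygomatic_width = abs(right_zygion[0] - left_zygion[0])
upper_face_height = abs(upper_lip[1] - brow[1])
fwhr = bizygomatic_width / upper_face_height
print(f"FWHR = {fwhr:.2f}")
```

Because both distances are measured in the same image, the ratio is scale-invariant, which is what makes it comparable across the 17 607 passport photographs.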

The predicted sexual dimorphism was therefore only observed in a surprisingly narrow age band in early adulthood. Yet this pattern in young adults is consistent with the view that FWHR is an evolutionarily important cue to physical formidability, as sexual dimorphism in this age band aligns with the period of life most commonly associated with mate selection and intrasexual competition. The reversal of this dimorphism in middle and late adulthood, with progressively larger female than male FWHRs, is more difficult to explain. It is possible that broader physical changes in ageing explain the pattern. For example, because body mass index (BMI) is moderately correlated with the FWHR (r = 0.31; [5]), one possibility is that age-related BMI changes differ between males and females. Another is that the reversal in dimorphism is connected to age-related structural changes to the face, such as differences in the rate of face lengthening with age [42]. Further possibilities are that increasingly fewer males with high FWHRs apply for passports later in life (perhaps because many men with the largest FWHRs are removed from society via incarceration [43] or die earlier than women), or that the difference is affected by changes in head pose behaviour in males and females of different ages [44].

We also tested the relationships between the FWHR and perceived aggressiveness in men and women across the lifespan. In their meta-analysis, Geniole et al. [5] found that the relationship between the FWHR and perceived aggressiveness was stronger for younger faces than older faces, but there was no evidence of moderation by sex. By contrast, we found that the relationship between the FWHR and perceived aggressiveness was strongest for male faces in the youngest age group (27–33 years old), whereas from 34 to 61 years old this relationship was strongest for female faces. These results suggest that the effect of FWHR on perceived aggressiveness ratings varies as a function of both age and sex.

Moreover, the effect of FWHR on perceived aggressiveness was somewhat independent of physical variation in FWHR in these age groups. Aggressiveness ratings of faces in the 34 to 40 age range show greater modulation for female faces, despite there being more physical FWHR variation in male faces (see the electronic supplementary material, figure S4). This shows increased sensitivity to FWHR in the youngest male faces when people evaluate perceived aggressiveness, albeit restricted to a relatively narrow age band. This is consistent with results showing that people are more sensitive to threatening emotional expressions in male compared with female faces [45,46] and may point to a common mechanism responsible for processing FWHR-related and expression-related cues to threat. In face perception research more broadly, it has been proposed that our social impressions of structural aspects of faces are shaped by social learning of facial expressions [47,48], for example, that trustworthiness judgements from structural properties of faces are linked to transient changes such as smiling or warm expressions (e.g. [49]). Future work examining whether other threat cues are also modulated by face age can potentially help to resolve whether similar social learning mechanisms are involved in perceived aggressiveness.

Another potential explanation of these findings is that the apparent increase in sensitivity to FWHR cues in young males was owing to participants being mostly undergraduate students. For face identity processing at least, there is consistent evidence that people develop perceptual expertise specifically for faces fitting the viewer's demographic profile, including faces of the same age as the viewer [51,52]. This raises the possibility that the apparent perceptual sensitivity to FWHR in young faces that we observe may be specific to the younger participants in our study. However, we note that this ‘own age effect’ is reported mostly in identity memory-based recognition tasks and is not consistently found for other types of identity processing task formats [53], or for other types of face judgements [54]. Moreover, participants' age is not known to affect perceptions of aggression [56]. Nevertheless, this is an intriguing question that could be addressed in future work.

The enhanced effect of FWHR on aggressiveness for young men is consistent with evolutionary perspectives on the FWHR as a cue to physical formidability. However, the relationship between the FWHR and aggressiveness for women in middle and late adulthood is more difficult to explain. Physical variation in FWHR in these age groups was greater for male faces (electronic supplementary material, figure S4), but the effect this variation had on aggressiveness ratings was larger for female faces (figure 4). The differences in ratings of aggression for younger and older men and women may be related to age-specific facial adiposity (the perception of weight in the face). Higher facial adiposity has been associated with higher perceptions of male facial masculinity [57], and lower perceptions of female facial femininity [58]. Thus, one possibility is that age-related changes in facial adiposity differ for males and females, and could be contributing to the sex differences in FWHR and perceived aggressiveness.

Another plausible explanation is that differences in sensitivity to FWHR cues were mediated by widely held stereotypes of masculinity and femininity in younger and older men and women. Men with relatively larger FWHRs are considered masculine, unattractive and physically formidable [5]. As in previous research, we found that these men were also considered likely to become aggressive if provoked [5]. This perception was strongest for young men and faded with face age. This finding is consistent with the stereotype of men becoming weaker and less formidable with age, while younger men with relatively large FWHRs were probably viewed as ‘fighting fit’. By contrast, younger women are stereotyped as more feminine, attractive and passive than older women. Thus, participants' judgements of aggressiveness may have been relatively unaffected by the FWHR of the young women. However, stereotypes of older women can be particularly harmful, as they lead to appearance-based discrimination [59]. In this case, the larger FWHRs may have elicited a global negative evaluation bias because the women did not fit the stereotype of attractive, feminine women.

These proposals should be treated as speculative for now. Indeed, there should be some caution when interpreting associations between FWHR and social impressions more generally. The use of discrete anthropometric ratios can sometimes be misleading owing to their associations with other surrounding measures in the body and face (e.g. [60]). This can lead to effects emerging as a consequence of interactions with other interrelated traits (e.g. attractiveness, facial masculinity, facial maturity and anger resemblance, [56]) rather than a consequence of the ratio itself. Therefore, it is important for future research to explore potential traits that may be interacting with the FWHR to impact judgements such as perceived aggressiveness in younger and older low- and high-FWHR faces.

Notwithstanding these caveats, our results provide, to our knowledge, the most comprehensive analysis of FWHR across the lifespan to date. We show that the sexual dimorphism of this trait is consistent with a secondary sexual characteristic that signals formidability in young males. We also show that perceivers were particularly sensitive to FWHR variation in young faces when evaluating perceived aggressiveness. Understanding the causes of face age dependency on perceptions of aggressiveness would be a worthwhile focus of future work.

Greedy individuals more often self-selected themselves into business-related environments, which presumably allow them to fulfill their greed-related need to earn a lot of money

The development of trait greed during young adulthood: A simultaneous investigation of environmental effects and negative core beliefs. Patrick Mussel et al. European Journal of Personality, May 3, 2022. https://doi.org/10.1177/08902070221090101

Abstract: Recent models of personality development have emphasized the role of the environment in terms of selection and socialization effects and their interaction. Our study provides partial evidence for these models and, crucially, extends these models by adding a person variable: Core beliefs, which are defined as mental representations of experiences that individuals have while pursuing need-fulfilling goals. Specifically, we report results from a longitudinal investigation of the development of trait greed across time. Based on data from the German Personality Panel, we analyzed data on 1,965 young adults on up to 4 occasions, spanning a period of more than 3 years. According to our results, negative core beliefs that have so far been proposed only in the clinical literature (e.g., being unloved or being insecure) contributed to the development of trait greed, indicating that striving for material goals might be a substitute for unmet needs in the past. Additionally, greedy individuals more often self-selected themselves into business-related environments, which presumably allow them to fulfill their greed-related need to earn a lot of money. Our results expose important mechanisms for trait greed development. Regarding personality development in general, core beliefs were identified as an important variable for future theory building.

Keywords: personality development, trait greed, core beliefs, environment


Tuesday, May 3, 2022

COVID-19: people started to remember more dreams, & dream content changed as themes like sickness, confinement, and—in the English-speaking world—even bugs began to dominate

Dreams and nightmares during the pandemic. Severin Ableidinger, Franziska Nierwetberg & Brigitte Holzinger. Somnologie, May 3 2022. https://link.springer.com/article/10.1007/s11818-022-00351-x

Abstract: The pandemic caused by the coronavirus disease 2019 (COVID-19) had a huge impact on public mental health. This was also reflected in dreams. Not only did people start to remember more dreams, but dream content changed as themes like sickness, confinement, and—in the English-speaking world—even bugs began to dominate. This also led to an increase in nightmare frequency. There are various factors that contributed to this change in the dream landscape. Some people have started to sleep more and thereby spend more time in REM sleep, which is known to increase dream recall and further lead to bizarre and vivid dreams. On the other hand, stress and poor mental health had an impact on sleep, and sleep quality thus dropped in many individuals. Poor sleep quality can also lead to an increase in dream recall. Dreams are known to regulate mood, so the rise in dreams and the change in dream content could also reflect a reaction to the overall rise in stress and decline in mental health. Recent studies have shown that as the pandemic progresses, further changes in mental health, dream recall, and dream content arise, but data are still scarce. Further research could help understand the impact the pandemic still has on mental health and dreams, and how this impact is changing over the course of the pandemic.


Making sense of changed dreams

But why do people remember more dreams? There are various factors and theories on dream recall, but covering all of them would go beyond the scope of this paper; here, only factors identified in the cited studies are discussed (for further factors and theories relevant to dream recall, see [34]). One possible explanation for people remembering more dreams is that they may be getting more sleep. As people started working from home, sometimes with reduced workloads, they no longer needed to commute to work, school, or university. With this time saved, many people could now afford to sleep in. This translated into reports of longer sleep times [27] and better sleep quality [13]. Interestingly, many did not change their time for going to bed (though some went to bed later), but rather changed their time for waking up and getting out of bed, and in this manner increased their total time in bed [1]. This enhances REM sleep, as the proportion of REM sleep in a sleep cycle increases in the second half of the night and before waking [24]. So the relative amount of REM sleep grows later in the night, or more precisely, later in the total time slept, since not everyone sleeps solely during the night. Even though the idea that REM sleep is equivalent to dreaming has since been abandoned, and dream research has shown that dreams also occur in non-REM sleep, REM-sleep dreams are much more vivid, bizarre, and emotional than non-REM-sleep dreams [20]. It is therefore plausible that longer time in bed and more sleep are responsible for higher dream recall. But not everybody is getting more sleep: many studies report insomnia, lower sleep quality, and more awakenings, which should reduce total sleep time instead of increasing it [22, 25, 27].

In a multinational study, worse sleep quality was found in about 20%, while better sleep quality was found in only 5% [25]. A study comparing sleep quality during and after a lockdown found that 46.07% reported low sleep quality during the lockdown. After the lockdown, people had significantly greater ease falling asleep and fewer awakenings during the night [30]. A different study found similar results, as people experienced longer sleep latency and more difficulties falling asleep during lockdown [1]. However, other research found that sleep quality did not increase after the end of a lockdown [11]. Indeed, it has been found that sleep quality and dream recall are associated, as people with lower sleep quality remembered dreams more frequently [8]. These results received further support as a recent study found dream recall and nightmare frequency to be solely associated with bad sleep quality and not with daily worries about COVID-19 [36].

Many different factors contribute to sleep quality, ranging from stress, physical activity, and physical health to various psychological factors [3, 16]. Since many people experienced considerable stress during the pandemic, it is logical to expect their sleep quality to decline. Additionally, it has been suggested that people neglected their sleep hygiene, by drinking more alcohol [26] or having longer screen time, even just before sleep [29].

Another factor related to dreaming is mental health. Although the existing literature is not conclusive on the effect of overall mental health on dream recall, it has been shown that stress is related to dream recall [35] and psychopathology to nightmare frequency [28]. As mentioned before, the stressors of the pandemic had a huge impact on public mental health. This translated into a rise in anxiety, depression, insomnia, and PTSD symptoms [5, 22]. Research has shown that heightened dream recall during the pandemic was especially pronounced in people with poorer mental health [8]. But not everyone was affected equally by the pandemic. Mental health problems and high dream recall were especially frequent in women and younger people [8, 22]. COVID-19 infections also increase the risk for anxiety, depression, and PTSD [31]. Additional pandemic-related factors, like financial burden, social isolation, and subjective risks, have also been identified as contributing to mental health and dream recall during the pandemic. It has been shown that those who were more severely affected also suffered worse mental health consequences, like a heightened risk for insomnia [22], and experienced the biggest changes in their dreams and dream content [33]. Further, anxiety and PTSD are known to be connected to nightmares [21]: 80% of PTSD patients suffer from regular nightmares [21]. So the rise in symptoms of both anxiety and PTSD could also contribute to the rise in dream recall and nightmares. Nightmares themselves can also lead to various disturbances. Decline in sleep quality, mood disturbances, daytime sleepiness, and cognitive impairment are some of the harmful results nightmares can bring [21]. In the context of the pandemic, this could indicate a vicious circle, where daily stress leads to nightmares, which themselves lead to further problems and more daily stress.

Dreams are often seen as a mechanism that regulates our mood [4, 37]. Therefore, increased dream recall could further indicate a natural way of dealing with the overall stress during the pandemic. This would also be consistent with studies reporting changes in dream frequencies after other significant and threatening events (see [24]).

Conclusion

The pandemic has had a huge impact. It has affected mental health and caused various psychological symptoms. Further, it has influenced dream content and led to a rise in dream recall. As dreams started to get more negative, themes of sicknesses and pandemics arose, and nightmare frequency thus also spiked. This demonstrates how our daily lives influence our dreams, and how they reflect our mental health and wellbeing. Further, the quantity of changes shows how much of an impact the pandemic has had on the general population. This also implies that sudden changes in dream recall and dream content can reflect significant events and changes in mental health and sleep quality. So assessing dreams and dream frequency could help detect changes in mental wellbeing in the population.

Although the cited studies suggest that the overall dream landscape has changed, it must be emphasized that participation in all of these studies was voluntary. People who had already experienced a change in dream and nightmare frequency or dream content may therefore have been more likely to participate in studies focused on dreams and COVID-19. However, the study by Callagher and Incelli [9] specifically tested whether this kind of self-selection bias leads to an overestimation of the changes in dream and nightmare frequency and dream content, or even creates the described changes in the first place. They did not give their participants any information about the purpose of the study until after they had participated, and still found the described changes. Nevertheless, all of the studies comparing pre-pandemic to pandemic dreams asked retrospectively, so memory bias remains possible.

In this article, the continuity theory of dreaming has already been mentioned, as has emotion regulation, which is also central to various theories of dreaming. However, there are many more theories regarding the purpose of dreams. A complete listing of all theories and models would go beyond the scope of this review; the reader is referred to [10, 19] for further reading.

As of now, most studies have explored the impact at the beginning of the pandemic, but more recent research has shown that there are differences between lockdown and post-lockdown periods, and that both differ from pre-pandemic times. Data on how dreams change over the course of the pandemic are, however, still lacking. Mental health problems seem to worsen the longer the pandemic lasts, and this should also be reflected in dreams. Assessing dreams could therefore help identify trends in how mental health and wellbeing change throughout the course of the pandemic. Further, as dreams work as an emotion regulation mechanism, assessing nightmare and dream frequency could help determine whether this mechanism still functions properly and could indicate heightened emotional load.

Clothing contributed to person misidentification just as much as the face at first sight but became less important over time; this suggests a dynamic shift in person identification depending on time

Miura, Hiroshi, Daisuke Shimane, and Yuji Itoh. 2022. “Person Misidentification Occurs in One-half of Cases: Demonstration in a Field Experiment.” PsyArXiv. May 3. doi:10.31234/osf.io/2etpm

Abstract: Few studies have examined person misidentification. Due to the lack of experimental studies, the reason person misidentification occurs, and its occurrence rate, remains unclear. Therefore, we aimed to demonstrate in a field experiment that person misidentification occurs with a certain probability. We also wanted to examine whether the similarity between two people affects the occurrence of person misidentification. When 66 undergraduate participants arranged a rendezvous with an acquaintance, another person who wore clothes similar to the acquaintance's or had a similar face appeared. The results showed that in both conditions, approximately half of the participants made the person misidentification error, and one-fourth even spoke to the person mistakenly. Moreover, the results indicated that clothing contributed to person misidentification just as much as the face at first sight but became less important over time. This suggests a dynamic shift in person identification depending on time.