Saturday, February 20, 2021

Treating the weekend like a vacation can increase happiness, and exploratory analyses show support for the underlying role of increased attention to the present moment

Happiness From Treating the Weekend Like a Vacation. Colin West, Cassie Mogilner, Sanford E. DeVoe.  Social Psychological and Personality Science, June 15, 2020.

Abstract: Americans are time-poor. They work long hours and leave paid vacation days unused. An analysis of over 200,000 U.S. workers reveals that not prioritizing vacation is linked to lower happiness. Many people, however, do not feel they can take vacation due to financial and temporal constraints. How might people enjoy the emotional benefits of vacation without taking additional time off or spending additional money? Three preregistered experiments tested the effect of simply treating the weekend “like a vacation” (vs. “like a regular weekend”) on subsequent happiness—measured as more positive affect, less negative affect, and greater satisfaction when back at work on Monday. Although unable to definitively rule out the role of demand characteristics, the study results suggest that treating the weekend like a vacation can increase happiness, and exploratory analyses show support for the underlying role of increased attention to the present moment.

Keywords: happiness, subjective well-being, vacation, time, attention to the present, mindfulness

Intelligence contributes 48–90 times more than grit to educational success and 13 times more to job-market success

In a Representative Sample Grit Has a Negligible Effect on Educational and Economic Success Compared to Intelligence. Chen Zisman, Yoav Ganzach. Social Psychological and Personality Science, July 14, 2020.

Abstract: We compare the relative contribution of grit and intelligence to educational and job-market success in a representative sample of the American population. We find that, in terms of ΔR², intelligence contributes 48–90 times more than grit to educational success and 13 times more to job-market success. Conscientiousness also contributes to success more than grit but only twice as much. We show that the reason our results differ from those of previous studies which showed that grit has a stronger effect on success is that these previous studies used nonrepresentative samples that were range restricted on intelligence. Our findings suggest that although grit has some effect on success, it is negligible compared to intelligence and perhaps also to other traditional predictors of success.

Keywords: intelligence, achievement, grit, educational success

Although people can selectively endorse moral principles about freedom of speech depending on their political agenda, many seek to conceal this bias from others, and perhaps also themselves

Motivated moral judgments about freedom of speech are constrained by a need to maintain consistency. Nikolai Haahjem Eftedal, Lotte Thomsen. Cognition, Volume 211, June 2021, 104623.

Abstract: Speech is a critical means of negotiating political, adaptive interests in human society. Prior research on motivated political cognition has found that support for freedom of speech depends on whether one agrees with its ideological content. However, it remains unclear if people (A) openly hold that some speech should be more free than other speech; or (B) want to feel as if speech content does not affect their judgments. Here, we find support for (B) over (A), using social dominance orientation and political alignment to predict support for speech. Study 1 demonstrates that if people have previously judged restrictions of speech which they oppose, they are less harsh in condemning restrictions of speech which they support, and vice versa. Studies 2 and 3 find that when participants judge two versions of the same scenario, with only the ideological direction of speech being reversed, their answers are strongly affected by the ordering of conditions: While the first judgment is made in accordance with one's political attitudes, the second opposing judgment is made so as to remain consistent with the first. Studies 4 and 5 find that people broadly support the principle of giving both sides of contested issues equal speech rights, also when this is stated abstractly, detached from any specific scenario. In Study 6 we explore the boundaries of our findings, and find that the need to be consistent weakens substantially for speech that is widely seen as too extreme. Together, these results suggest that although people can selectively endorse moral principles depending on their political agenda, many seek to conceal this bias from others, and perhaps also themselves.

Keywords: Motivated reasoning; Moral judgment; Freedom of speech; Self-deception; Social dominance; Political ideology

Users do not universally interpret high numbers of “likes” for messages congruent to their own attitudes as valid evidence for the public agreeing with them, especially if their interest in a topic is high

Luzsa, R., & Mayr, S. (2021). False consensus in the echo chamber: Exposure to favorably biased social media news feeds leads to increased perception of public support for own opinions. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 15(1), Article 3.

Abstract: Studies suggest that users of online social networking sites can tend to preferably connect with like-minded others, leading to “Echo Chambers” in which attitudinally congruent information circulates. However, little is known about how exposure to artifacts of Echo Chambers, such as biased attitudinally congruent online news feeds, affects individuals’ perceptions and behavior. This study experimentally tested if exposure to attitudinally congruent online news feeds affects individuals' False Consensus Effect, that is, how strongly individuals perceive public opinions as favorably biased and in support of their own opinions. It was predicted that the extent of the False Consensus Effect is influenced by the level of agreement individuals encounter in online news feeds, with high agreement leading to a higher estimate of public support for their own opinions than low agreement. Two online experiments (n1 = 331 and n2 = 207) exposed participants to nine news feeds, each containing four messages. Two factors were manipulated: Agreement expressed in message texts (all but one [Exp.1] / all [Exp.2] messages were congruent or incongruent to participants' attitudes) and endorsement of congruent messages by other users (congruent messages displayed higher or lower numbers of “likes” than incongruent messages). Additionally, based on Elaboration Likelihood Theory, interest in a topic was considered as a moderating variable. Both studies confirmed that participants infer public support for their own attitudes from the degree of agreement they encounter in online messages, yet are skeptical of the validity of “likes”, especially if their interest in a topic is high.

Keywords: Echo chambers; social networking; false consensus; selective exposure


While online users appear not to suspect biases in agreement expressed in message texts, they appear critical of endorsement indicated by the numbers of “likes”: They do not universally interpret high numbers of “likes” for messages congruent to their own attitudes as valid evidence for the public agreeing with them, especially if their interest in a topic is high. Instead, they lower their estimate of public agreement. Thus, users appear to be wary of biases in numbers of “likes” and should be somewhat resistant towards attempts to influence their perception of public opinion via manipulated numbers of “likes”.

Deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm

Levine, Emma. 2021. “Community Standards of Deception: Deception Is Perceived to Be Ethical When It Prevents Unnecessary Harm.” PsyArXiv. February 19. doi:10.31234/

Abstract: We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, I find that deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm. I identify eight implicit rules – pertaining to the targets of deception and the topic and timing of a conversation – that clarify systematic circumstances in which deception is perceived to prevent unnecessary harm, and I document the causal effect of each implicit rule on the endorsement of deception. I also explore how perceptions of unnecessary harm influence communicators’ use of deception in everyday life, above and beyond other moral concerns. This research provides insight into when and why people value honesty and paves the way for future research on when and why people embrace deception.

Cleansing effects (one can wash one's hands to recover innocence): Its size has been inflated by dubious research practices

Ross, R., Van Aert, R., Van den Akker, O., & Van Elk, M. (2021). The role of meta-analysis and preregistration in assessing the evidence for cleansing effects. Behavioral and Brain Sciences, 44, E19. doi:10.1017/S0140525X20000606

Abstract: Lee and Schwarz interpret meta-analytic research and replication studies as providing evidence for the robustness of cleansing effects. We argue that the currently available evidence is unconvincing because (a) publication bias and the opportunistic use of researcher degrees of freedom appear to have inflated meta-analytic effect size estimates, and (b) preregistered replications failed to find any evidence of cleansing effects.

Free text: PsyArXiv Preprints | The role of meta-analysis and preregistration in assessing the evidence for cleansing effects

More Questions About Multiple Passions: Who Has Them, How Many Do People Have, and the Relationship Between Polyamorous Passion and Well-being

More Questions About Multiple Passions: Who Has Them, How Many Do People Have, and the Relationship Between Polyamorous Passion and Well-being. Benjamin Schellenberg & Daniel Bailis. Journal of Happiness Studies, Feb 19 2021.

Abstract: People are often passionate toward multiple activities in their lives. However, more has been learned about passion toward any single activity than about passion toward multiple activities. Relying on the dualistic model of passion (Vallerand in The psychology of passion: a dualistic model, Oxford University Press, New York, 2015), this research addressed the antecedents and consequences of polyamorous passion. In three pre-registered studies (total N = 1322) and one mini meta-analysis, we found that (a) people tend to report being passionate for between 2 and 4 activities; (b) harmonious passion becomes a less potent predictor of well-being as it is directed toward less-favored activities; (c) harmonious passion does not contribute to the prediction of well-being beyond a second-favorite activity; and (d) openness to experience is a personality trait that is positively associated with the number of passionate activities that people have in their lives. These results contribute to our understanding of who has multiple passions, how many passionate activities people tend to have, and the relationship between polyamorous passion and well-being.

General Discussion

Research relying on the dualistic model of passion has revealed a great deal about the antecedents and consequences of feeling passion toward a single activity (Vallerand 2015). But it is common for people to feel passionate about multiple activities, and little is known about people who have multiple passions in life. The aim of this research was to contribute to this emerging area by focusing on four specific questions about polyamorous passion. In general, our findings add to our understanding of how many passionate activities people have in their lives, the effect of having multiple passions on well-being, and who becomes polyamorously passionate.

How many passionate activities do people have?

We addressed this question (Question 1) in all four studies and found that people typically have between 2 and 4 passionate activities in their lives. The number of reported passions was closer to 2 when the number of passionate activities was classified based on passion criteria scores, and closer to 4 when the number of passionate activities was freely reported. These results lead to two conclusions. First, most people are polyamorously passionate; it is more common to be passionate for multiple activities than it is to be passionate about one or no activity. People therefore appear to be very capable of engaging in multiple activities that they enjoy, find valuable and meaningful, devote a great deal of time, energy, and resources to, and incorporate into their identities (Vallerand 2015). Second, people typically limit the number of passionate activities they pursue to only a few. There could be many factors that restrict the number of passionate activities people pursue, including the limited time and energy that people are able to devote to different activities. The overall message from these findings is that people are unquestionably passionate, and this passion is most often directed toward more than one activity.
What is the effect of having multiple passions on well-being?

We addressed this question in two ways. In Study 1 we tested whether the relationship between HP and well-being depended on whether HP was directed toward a favorite or fourth-favorite activity (Question 2), and in Studies 2 and 3 we tested if HP for less favored activities predicted well-being beyond what could be predicted by HP for more favored activities (Question 3). In general, the results support the dualistic model by showing that well-being is positively associated with HP, not OP (Vallerand, 2012). But the results contribute to our knowledge about passion by showing that, when directed toward a second-favorite activity, HP contributed to variance in well-being beyond HP for a favorite activity. Having high levels of HP toward two activities may allow people to have two domains in which they can have experiences that contribute to greater well-being, including greater positive affect (Rousseau and Vallerand 2008), flow (Carpentier et al. 2011), and psychological need satisfaction (Verner-Filion et al. 2017). However, the results also showed that HP becomes less predictive of well-being as it is directed toward activities that are less favored. In fact, consistent across Studies 2 and 3, levels of HP did not significantly contribute to the prediction of well-being beyond a second-favorite activity. There must certainly be a limit on the extent to which engaging in passionate activities can enhance well-being (Lyubomirsky et al. 2005), and this research suggests that this benefit is limited to two passionate activities, provided that they are pursued with high HP. Beyond two activities, the benefits of HP for well-being reach a point of diminishing returns.
Perhaps most importantly, the findings also suggest that many adults, especially those higher in openness to experience, consider a greater number of activities to be passions in their lives than they can actually derive increased well-being from, even if those activities are pursued with high HP. We recognize that people may be passionate for numerous activities for good reasons that lie beyond personal well-being, and we would not take the present findings to imply that these people should reduce the number of passions they are trying to pursue. However, we do take the findings to reflect an underlying psychological reality that even in the best-case scenario of consistently high HP, the benefits for well-being do not extend equally and indefinitely to all of the activities one might pursue as passions. Most of these benefits derive from one’s favorite and second-favorite activity.

Who has multiple passions?

To address this final question (Question 4), we began by exploring if the big five personality traits were related to passion quantity. The results of all four studies and a mini meta-analysis found consistent evidence in support of a small-to-medium-sized positive association between openness to experience and number of passionate activities. There are many potential reasons why openness predicts a greater number of passionate activities. People with high levels of openness may engage in more, or more varied, activities (Ihle et al. 2015), thus increasing the chances that multiple activities will develop into passionate activities. People with high levels of openness may also find novel activities or experiences more interesting and pleasurable (Fayn et al. 2015), which could facilitate feelings of passion toward them. Testing these and other potential reasons for why openness is linked with a greater number of passionate activities is an important area for future research.
Limitations and Future Directions

This research is limited by its reliance on self-report assessments and its cross-sectional design. Additional research is needed to replicate these effects using other types of assessments (e.g., interviews, other-reports) or designs (e.g., experimental, longitudinal). There is also evidence that participants recruited on crowdsourcing websites such as Prolific may differ from the general population in several characteristics (see Huff and Tingley 2015). Although a more representative sample was recruited in Study 3, research going forward should focus on other types of samples to address the questions posed in this research. We should also note that the results with HP in Studies 2 and 3 were based on a short, 3-item assessment of HP. Short scales were administered to reduce participant burden and, although others have successfully taken this approach (e.g., Trepanier et al. 2014), our findings should be replicated with the full measure of HP.

Our view is that this research is another step toward gaining a better understanding of the antecedents and consequences of polyamorous passion. But research in this area is still taking its first steps. Although there is still a great deal to learn, we would like to suggest two routes that additional research can take. A first route is to focus on how passion for multiple activities develops over the life course. For instance, does the number of passionate activities people pursue remain stable, or does this number fluctuate throughout life? Do people go through some periods when they have many passionate activities, and other periods when they have none? It should also be emphasized that all participants in this research, and in the studies reported by Schellenberg and Bailis (2015), were adults, meaning that little is known about how passion for multiple activities develops and changes from childhood to adolescence to adulthood. A second route is to focus on extremes. On one extreme are those who are not passionate for any activity in their lives. Vallerand (2015) reports that between 15% and 25% of people are not passionate for any activity in their lives. The results from this research support these figures. On the other extreme are those who are passionate for many activities. Why do some have passion for many activities in their lives, while others have passion for none? Little is known about nonpassionate people (Vallerand 2015), and even less is known about those who are superpolyamorously passionate.

Like all salamanders, newts can re-grow a lost limb or amputated tail. This regenerative ability… is a superpower we are eager to steal, a piece of real animal magic

Torching for Newts. Anita Roy. Dark Mountain Project, Feb 10 2021.

A century ago there were a million ponds in Britain, home to the mighty great crested newt. Now with increasing building work and a hostile government, amphibian life is being squeezed out of its territories. But not entirely. As their first migrations from land to water begin next month, Anita Roy goes into the Somerset night to discover the tiny dragons that live in the liminal spaces of our land and imaginations.

Anita Roy is a writer and editor based in Wellington, Somerset. She is the co-editor of Gifts of Gravity and Light: A Nature Almanac for the 21st century (July 2021) and author of A Year in Kingcombe.


Hannah handed me a torch. It was a serious piece of kit, with a battery-pack the size of a toaster slung on a broad black strap across my shoulder. I strafed the undergrowth enthusiastically until she suggested mildly that I might like to conserve the power until we actually got where we were going. I quickly switched it off, partly to hide my blushes.

I was the only rookie accompanying four ecologists to check for newts in the ponds on the edge of town. Seven years ago, a new housing estate was being built here and the developers had had to create several new ponds in order to relocate a small population of great crested newts. The ecologists I was with – Hannah, Polly, Mark and Paul – had overseen the whole operation and our mission tonight was to check how the newts were faring in their newbuilds. 

We left behind the sodium glow of streetlights, and headed down the slope into the tall grass. The last vestiges of daylight showed as a streak of rose madder beneath clouds as rich as plums. Darkness pooled in dips and ditches and our pupils dilated to drink in what little light lingered. Tattered shadows flitted overhead: bats. 


The pond was one of three that had been created to ‘mitigate’ the effects of the housing estate. Polly – who forged along ahead of me, battery-pack knocking against her hip – was one of the original team who had supervised the gathering up and resettlement of newts before the bulldozers arrived. 

Pond number two was more established and seemed richer with wildlife possibilities. The rushes were fatter and lay in dense beds around the water’s edge. A few passes of the torch through the sepia water and – bingo: newts. ‘Lots of them,’ said Polly. ‘Good.’

These were smooth and palmate newts, the commonest of the UK’s three native species. Slim little creatures, as long as your little finger, they slipped through the water in tiny Chinese brushstrokes, with an elegant economy of movement. With their vertical fish-tails and four legs, they seem not quite fully formed – like adult frogs who have been unable to shake off their tadpoley youth, or an illustration from a children’s encyclopaedia on evolution: your Middle Devonian great-great-great-great-to-the-power-x grandmother, dragging herself out of the primaeval soup and onto the newly formed land. 


Amphibians are well-named: from the Greek ‘amphi’ meaning both, and ‘bios’, life. They live a double life, moving between aquatic and terrestrial realms. Half in the water and half out, creatures of the twilight world between day and night, living on the outskirts of town and the margins of countryside, newts seem most at home in liminal spaces. 

Like all salamanders, newts can re-grow a lost limb or amputated tail. This regenerative ability has long fascinated humans. It is a superpower we are eager to steal, a piece of real animal magic. The mad scientist eager to exploit the process for their own gains is a familiar figure in popular culture. In the 2012 Amazing Spider-Man movie, Rhys Ifans plays Dr Curt Connors, a scientist obsessed with regenerating his own amputated arm using a serum extracted from lizards. Inevitably, the experiment goes horribly wrong, as the mild-mannered doctor morphs into an evil monster, Lizardman, and goes on the rampage. The eponymous hybrid man-fish of the 1954 horror film Creature from the Black Lagoon also proved hard to kill, recovering/regenerating from each bullet-riddled denouement to rise again in sequel after sequel. The creature is reincarnated in Guillermo del Toro’s Shape of Water in 2017. As in the original movie, it has been captured from deep in the Amazon where it was worshipped as a god and brought to the lab, where it is known only as ‘the Asset’: biological raw material for humans to mine and extract knowledge from, at no matter what cost to the beast.

In 1994, Dr Goro Eguchi of the Shokei Educational Institution, Japan, and Panagiotis Tsonis at the University of Dayton, Ohio, decided to investigate this apparently magical ability for real. In the lab, they cut open the eye of a live Japanese fire-bellied newt (Cynops pyrrhogaster) and removed the lens to see if it would regenerate. It did, perfectly. And not from residual lens tissue but from epithelial cells in the iris. So they did it again. And again. Over the course of sixteen years, they cut out the newt’s lens no fewer than eighteen times. And each time, the poor newt grew it back, fresh, complete, in fully working order. The eye of a newt half its age.

Dr Tsonis is quoted in one article as pointing out the ‘good news’ from this study: once we fully understand it, he says, ‘age will not be a problem’ in terms of, for example, wound repair. After all, he says, ‘old people need regeneration, not young ones.’

Perhaps it is no wonder that ‘eye of newt’ is an essential ingredient for the most famous magic potion in history: the hell-broth Macbeth’s witches boil up in order to reveal the future. It is no surprise that the future foretold involves hubris, nemesis and murder most foul. 


Full text at the link above.

Friday, February 19, 2021

Is Conscientiousness Always Associated With Better Health? A U.S.–Japan Cross-Cultural Examination of Biological Health Risk

Is Conscientiousness Always Associated With Better Health? A U.S.–Japan Cross-Cultural Examination of Biological Health Risk. Shinobu Kitayama, Jiyoung Park. Personality and Social Psychology Bulletin, June 18, 2020.

Abstract: In Western societies, conscientiousness is associated with better health. Here, we tested whether this pattern would extend to East Asian, collectivistic societies. In these societies, social obligation motivated by conscientiousness could be excessive and thus health-impairing. We tested this prediction using cross-cultural surveys of Americans (N = 1,054) and Japanese (N = 382). Biomarkers of inflammation (interleukin-6 and C-reactive protein) and cardiovascular malfunction (systolic blood pressure and total-to-HDL cholesterol ratio) were adopted to define biological health risk (BHR). Among Americans, conscientiousness was associated with lower BHR. Moreover, this relationship was mediated by healthy lifestyle. In contrast, among Japanese, the relationship between conscientiousness and BHR was not significant. Further analysis revealed, however, that conscientiousness was associated with a greater commitment to social obligation, which in turn predicted higher BHR. These findings suggest that conscientiousness may or may not be salubrious, depending on health implications of normatively sanctioned behaviors in varying cultures.

Keywords: culture, conscientiousness, biological health risk, healthy lifestyle, social obligation

Gay men were more likely to accept casual sex offers than lesbian women, but both had accepted their most recent casual sex offer more than half of the time

Gender Similarities and Differences in Casual Sex Acceptance Among Lesbian Women and Gay Men. Jes L. Matsick, Mary Kruk, Terri D. Conley, Amy C. Moors & Ali Ziegler. Archives of Sexual Behavior, Feb 18 2021.

Rolf Degen's take: Gay men were more likely to accept casual sex offers than lesbian women, but both had accepted their most recent casual sex offer more than half of the time

Abstract: Popular wisdom and scientific evidence suggest women desire and engage in casual sex less frequently than men; however, theories of gender differences in sexuality are often formulated in light of heterosexual relations. Less is understood about sexual behavior among lesbian and gay people, or individuals in which there is arguably less motivation to pursue sex for reproductive purposes and fewer expectations for people to behave in gender-typical ways. Drawing from scripts theory and pleasure theory, in two studies (N1 =  465; N2 =  487) we examined lesbian and gay people’s acceptance of casual sex. We asked participants who had been propositioned for casual sex whether they accepted the offer and to rate their perceptions of the proposer’s sexual capabilities and sexual orientation. They also reported on their awareness of stigma surrounding casual sex. We found a gender difference in acceptance: Gay men were more likely than lesbian women to have accepted a casual sex offer from other gay/lesbian people, and this difference was mediated by participants’ stigma awareness. We also found the proposer’s sexual orientation played a role in people’s acceptance. Lesbian women and gay men were equally likely to accept offers from bisexual proposers but expressed different acceptance rates with “straight-but-curious” proposers, which was mediated by expected pleasure. We discuss dynamics within lesbian and gay communities and implications for studying theories of sexual behavior and gender differences beyond heterosexual contexts.

Thursday, February 18, 2021

What do prime-age ‘NILF’ men do all day? A cautionary on universal basic income

What do prime-age ‘NILF’ men do all day? Nicholas Eberstadt, Evan Abramsky. AEI, Feb 8 2021.

To date, most of the debate about [the Universal Basic Income] has centered on its affordability—i.e., its staggering expense. But a scarcely less important question concerns the implications of such largesse for the recipients themselves and civil society. What would a guaranteed income mean for the quality of citizenship in our country, given that a UBI would allow some—perhaps many—adult beneficiaries to opt for a life that does not include gainful employment or other comparable work?

As it happens, an experiment of sorts is already underway to help us answer this very question. Thanks to the American Time Use Survey (ATUS) from the Bureau of Labor Statistics, we have detailed, self-reported information each year on how roughly 10,000 adult respondents spend their days—from the moment they wake until they sleep. These surveyed Americans include prime-age men who are not in the labor force (or “NILF” to social scientists), ordinarily in their peak employment years, who are neither working nor looking for work. By examining the self-reported patterns of daily life of these grown men who do not have and are not seeking jobs, we may gain insights into the work-free existence that some UBI advocates hold to be a positive end in its own right.


The portrait of daily life that emerges from time-use surveys for grown men who are more or less entirely disconnected from the world of work is sobering. So far as can be divined statistically, their independence from obligations of the workforce does not translate into any obvious enhancement in their own quality of life or improvement in the well-being of others.

To go by the information they themselves report, quite the contrary seems to be true. Though they have nothing but time on their hands, they are not terribly involved in care for their home or for others in it. They are increasingly disinclined to embark on activities that take them outside the house. The central focus of their waking day is the television or computer screen, to which they commit as much time as many men and women devote to a full-time job. So far as we can tell, moreover, screen time is sucking up a still-increasing portion of their waking hours.

There would seem to be no shortage of anomie, alienation, or even despair in the daily lives of men entirely free from work in America today. Why, then, would we not expect a UBI—which would surely result in a detachment of more men from paid employment—to result in even more of the same?

Arguments can be made, of course, that a UBI would attract a different sort of "unworking" man from those who predominate in the prime-age male NEET population today. But the patterns we have presented on the daily routines of existing work-free men should make proponents of the UBI think long and hard. Instead of producing new community activists, composers, and philosophers, more paid worklessness in America might only further deplete our nation's social capital at a time when good citizenship is already in painfully short supply.

Both men and women were more committed to their relationships if they perceived their partners as attractive; however, people tended to feel less committed the more attractive their partners perceived themselves

Committing to a romantic partner: Does attractiveness matter? A dyadic approach. Tita Gonzalez Aviles et al. Personality and Individual Differences, Volume 176, July 2021, 110765. February 16, 2021.

Abstract: Physical attractiveness is a highly valued trait in prospective romantic partners. However, it is unclear whether romantic partners' attractiveness is associated with commitment to the relationship. We report the results of a study of 565 male-female couples residing in Austria, Germany, or Switzerland. Employing dyadic analytical methods, we show that both men and women were more committed to their relationships if they perceived their partners as attractive. However, attractiveness also had a negative effect on commitment: People tended to feel less committed the more attractive their partners perceived themselves. Furthermore, although partners perceived themselves as similar in attractiveness to their partners, analyses revealed that similarity was not associated with commitment. Together, the findings demonstrate that attractiveness does matter for commitment to existing romantic relationships and emphasize the value of dyadic approaches to studying romantic relationships.

Keywords: Actor-partner interdependence model, Attraction, Attractiveness, Commitment, Dyadic response surface analysis

Even very subtle interactions with strangers yield short-term happiness

van Lange, Paul, and Simon Columbus. 2021. "Vitamin S: Why Is Social Contact, Even with Strangers, So Important to Well-being?" PsyArXiv, February 18. doi:10.31234/

Abstract: Even before COVID-19, it was well-known in psychological science that our well-being is strongly served by the quality of our close relationships. But is our well-being also served by social contact with people we know less well? In this article, we discuss three propositions to support the conclusion that the benefits of social contact also derive from interactions with acquaintances and even strangers. The propositions state that most interaction situations with strangers are benign (Proposition 1), that most strangers are benign (Proposition 2), and that most interactions with strangers enhance well-being (Proposition 3). These propositions are supported, first, by recent research designed to illuminate the primary features of interaction situations, showing that situations with strangers often represent low conflict of interest. Second, in our interactions with strangers, most people exhibit high levels of low-cost cooperation (social mindfulness) and high-cost helping if help to strangers is urgent. We close by sharing research examples which show that even very subtle interactions with strangers yield short-term happiness. Broader implications for COVID-19 and urbanization are discussed.

From 2009... Voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending; we estimate that $1 spent on preparedness is worth about $15 of future damage mitigated

Healy, A., & Malhotra, N. (2009). Myopic Voters and Natural Disaster Policy. American Political Science Review, 103(3), 387-406, Aug 2009.

Abstract: Do voters effectively hold elected officials accountable for policy decisions? Using data on natural disasters, government spending, and election returns, we show that voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending. These inconsistencies distort the incentives of public officials, leading the government to underinvest in disaster preparedness, thereby causing substantial public welfare losses. We estimate that $1 spent on preparedness is worth about $15 in terms of the future damage it mitigates. By estimating both the determinants of policy decisions and the consequences of those policies, we provide more complete evidence about citizen competence and government accountability.


A government responding to the incentives implied by our results will underinvest in natural disaster preparedness. The inability of voters to effectively hold government accountable thus appears to contribute to significant inefficiencies in government spending because the results show that preparedness spending substantially reduces future disaster damage. Voters are, in a word, myopic. They are not, as we have shown, myopic in the sense that they respond more to spending just before an election than to spending a year or two earlier; rather, they are myopic in the sense that they are unwilling to spend on natural disasters before the disasters have occurred. An ounce of prevention would be far more efficient than a pound of cure, but voters seem interested only in the cure. The resulting inconsistencies in democratic accountability reduce public welfare by discouraging reelection-minded politicians from investing in protection, while encouraging them to provide assistance after harm has already occurred.

Although we consider our findings to be relevant to potential underinvestments in preparedness in areas beyond natural disasters such as preventive medicine, the government almost certainly does not underinvest in all kinds of preparedness. For example, after the attacks on September 11, large investments were made in preventing future attacks on passenger jets. One clear difference between airport security and most natural disaster preparedness measures is that airport security is highly observable and salient. Moreover, this example may be the exception that proves the rule we have demonstrated in this article. When voters provide their elected officials with incentives to make mistakes—ranging from insufficient investment in natural disaster preparedness to perhaps excessive attention to airline security—elected officials are likely to provide the inefficient policies that voters implicitly reward. Moreover, it is possible that major events such as Hurricane Katrina can heighten the value of natural disaster preparedness, but this effect may be temporary. For example, California passed Proposition 1E in 2006, a measure that provided bond financing for $4.1 billion in flood control measures, with $3 billion for upgrades to levees in the Central Valley, an area considered by experts to be exposed to catastrophic flooding due to insufficient protection from the existing levee network. Experts characterized the situation as a "ticking time bomb" in January 2005. Bond proceeds were to be used to obtain federal matching funds for the projects, in addition to financial and technical assistance from federal agencies such as the Army Corps of Engineers. Despite repeated warnings about the risk of severe flooding in the Central Valley, large-scale action was implemented only after Hurricane Katrina made the danger salient.
The importance of Hurricane Katrina in ensuring support for Proposition 1E is suggested by the short argument that supporters of the measure included on the ballot. The argument read, “Our nation learned a tragic lesson from Hurricane Katrina— we cannot continue to neglect our unsafe levees and flood control systems” (California Attorney General 2006). The measure passed easily, winning 64% of the vote, including 67% of the vote in Los Angeles County and 56% of the vote in relatively conservative Orange County, despite the fact that neither would be affected directly by the bulk of the proposed spending. For voters in these areas, it appears to be the case that levee repair became a public good that voters were willing to support after Hurricane Katrina made clear the potential costs of inaction.27

A similar phenomenon appears to have occurred at the federal level. Following Hurricane Katrina, Congress passed and President Bush signed the Post-Katrina Emergency Management Reform Act of 2006, which reorganized FEMA and appropriated $3.6 billion for levees and other flood control measures.28 In the immediate aftermath of Katrina, voters in New Orleans also appear to have placed greater value on these preparedness projects. In late 2006, 30% of New Orleans residents said that "repairing the levees, pumps, and floodwalls" should be one of the top two priorities in the rebuilding efforts, ranking this item and crime control as their top two concerns (Kaiser Family Foundation 2007, 55). The increased voter concern for disaster protection appears to have faded significantly since then. By mid-2008, only 2% of New Orleans voters ranked "hurricane protection/rebuilding floodwalls, levees" as the top rebuilding concern (Kaiser Family Foundation 2008, 52). This apparent change in priorities for New Orleans residents suggests that even an event like Hurricane Katrina is likely to increase the salience of preparedness issues only temporarily. Interestingly, the case of Hurricane Katrina may be anomalous with respect to the electoral benefits of relief spending. The federal government provided more than $94.8 billion in relief payments to the Gulf Coast following Katrina (Congressional Budget Office 2007), and the Republican Party suffered heavy losses in the 2006 and 2008 elections. Unlike most disaster events, Hurricane Katrina was highly unusual in the substantial amount of media coverage it received. In an Associated Press poll of U.S. news editors and in the Pew Research Center U.S. News Interest Index, Hurricane Katrina was the top world story of 2005 (Kohut, Allen, and Keeter 2005), and most of this coverage focused on the mishandled immediate logistical response to the disaster as opposed to the generous financial response that came later.
Hence, voters may have been substantially affected by the early negative media coverage and carried those initially formed attitudes about the administration’s competence with them into the voting booth. Nevertheless, the case of Katrina offers two potential extensions to this research. Subsequent studies can explore how the salience of a disaster changes the political effectiveness of relief spending, in addition to more closely examining how logistical response differs from financial response.

Due to the transience of the effect that disasters have on the visibility of preparedness, it is important to note that there is some suggestive evidence that governments may be able to take action to make preparedness salient to voters in a more permanent fashion. In the late 1990s, FEMA introduced Project Impact, a grassroots disaster preparedness initiative that emphasized collaboration between government, businesses, and local community leaders, bypassing state governments (Birkland and Waterman 2008; Wachtendorf and Tierney 2001; Witt 1998). Under Project Impact, FEMA selected a group of 57 communities from all 50 states (as well as Puerto Rico and the District of Columbia) to receive grants of either $500,000 or $1 million to pursue disaster preparedness and mitigation initiatives (Government Accounting Office 2002). The program targeted areas of varying size and disaster risk. Interviews with participants in the program indicate that people valued the program. It was also credited with helping limit damage from the February 2001 Nisqually earthquake in the Puget Sound, ironically on the very day that the program was cancelled by the Bush Administration (Holdeman 2005). Compared to other counties, the change in the Democrats' vote share from 1996 to 2000 was 1.9% higher in Project Impact counties, a significant difference (p = .006) (Healy and Malhotra 2009). This estimate is only suggestive of the possibility that voters may have responded to Project Impact because it is not possible to control for the omitted variables that could be driving this difference.29 Future scholarship could use surveys, as well as lab and field experiments, to determine the extent to which voter decisions can be influenced by government efforts at increasing the salience of issues and policies in areas such as disaster preparedness.

Although our results indicate that the incumbent presidential party has not been rewarded for investing in disaster preparedness, it is possible that voters could credit members of Congress for those initiatives. A natural extension to this analysis is to explore whether similar effects are observed in House and Senate elections. We conducted a preliminary exploration of this question by estimating analogous models predicting the vote share for the incumbent Senate party in the county as the dependent variable. For a variety of potential reasons, we did not obtain precise coefficient estimates from which to draw firm conclusions.30 Across all specifications that we considered, though, preparedness spending entered with a near-zero coefficient. We anticipate that future research more closely examining Congressional elections will find that members of Congress, like presidents, are not rewarded for preparedness spending.

Subsequent research could also apply our empirical strategy of simultaneously examining voting decisions, government policy, and associated outcomes to issues such as education or health care, as well as explore potential ingredients for improved retrospection. A more complete understanding of how citizens value preparedness and relief across a variety of domains could both advance our theoretical understanding of retrospective voting and help inform policy making. Through an analysis of voter responses to disaster relief and preparedness spending, we have addressed outstanding questions in the long-standing and extensive literature on citizen competence in democratic societies. Examining actual decisions by the electorate, we found heterogeneity with respect to the public’s responsiveness to various government policies. However, we have also shown that the mere presence of responsiveness does not necessarily indicate citizen competence and that failures in accountability can lead to substantial welfare losses.

Many sex differences in humans are largest under optimal conditions and shrink as conditions deteriorate; sex differences in growth, social behavior, and cognition illustrate the approach

Now You See Them, and Now You Don’t: An Evolutionarily Informed Model of Environmental Influences on Human Sex Differences. David C. Geary. Neuroscience & Biobehavioral Reviews, February 17 2021.


• The magnitude of human sex differences varies across contexts

• An evolutionarily informed model of these environmental influences is discussed

• Many sex differences are largest under optimal conditions and shrink as conditions deteriorate

• Human sex differences in growth, social behavior, and cognition illustrate the approach

• The approach has implications for better understanding sex-specific vulnerabilities

Abstract: The contributions of evolutionary processes to human sex differences are vigorously debated. One counterargument is that the magnitude of many sex differences fluctuates from one context to the next, implying an environmental origin. Sexual selection provides a framework for integrating evolutionary processes and environmental influences on the origin and magnitude of sex differences. The dynamics of sexual selection involve competition for mates and discriminative mate choices. The associated traits are typically exaggerated and condition-dependent, that is, their development and expression are very sensitive to social and ecological conditions. The magnitude of sex differences in sexually selected traits should then be largest under optimal social and ecological conditions and shrink as conditions deteriorate. The basics of this framework are described, and its utility is illustrated with discussion of fluctuations in the magnitude of human physical, behavioral, and cognitive sex differences.

Keywords: Sex differences, sexual selection, cognition, condition-dependent, stressor

Precarious Manhood Beliefs in 62 Nations: Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action

Precarious Manhood Beliefs in 62 Nations. Bosson, Jennifer K., et al. Journal of Cross-Cultural Psychology, accepted Feb 2021.

Abstract: Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action. Here, we present cross-cultural data on a brief measure of precarious manhood beliefs (the Precarious Manhood Beliefs scale [PMB]) that covaries meaningfully with other cross-culturally validated gender ideologies and with country-level indices of gender equality and human development. Using data from university samples in 62 countries across 13 world regions (N = 33,417), we demonstrate: (1) the psychometric isomorphism of the PMB (i.e., its comparability in meaning and statistical properties across the individual and country levels); (2) the PMB's distinctness from, and associations with, ambivalent sexism and ambivalence toward men; and (3) associations of the PMB with nation-level gender equality and human development. Findings are discussed in terms of their statistical and theoretical implications for understanding widely-held beliefs about the precariousness of the male gender role.

Working outside the home did nothing to help people feel socially connected, nor did video calls with friends and family; people living with a romantic partner were most likely to improve in social connection after social distancing measures

Okabe-Miyamoto K, Folk D, Lyubomirsky S, Dunn EW (2021) Changes in social connection during COVID-19 social distancing: It’s not (household) size that matters, it’s who you’re with. PLoS ONE 16(1): e0245009.

Popular version: Partners help us stay connected during pandemic | News

Abstract: To slow the transmission of COVID-19, countries around the world have implemented social distancing and stay-at-home policies—potentially leading people to rely more on household members for their sense of closeness and belonging. To understand the conditions under which people felt the most connected, we examined whether changes in overall feelings of social connection varied by household size and composition. In two pre-registered studies, undergraduates in Canada (Study 1: N = 548) and adults primarily from the U.S. and U.K. (Study 2: N = 336) reported their perceived social connection once before and once during the pandemic. In both studies, living with a partner robustly and uniquely buffered shifts in social connection during the first phases of the pandemic (β = .22 in Study 1, β = .16 in Study 2). In contrast, neither household size nor other aspects of household composition predicted changes in connection. We discuss implications for future social distancing policies that aim to balance physical health with psychological health.


Across two pre-registered studies that followed the same participants from before the COVID-19 pandemic into its early stages, we found that living with a partner was the strongest predictor of shifts in social connection across time. This finding replicated across two different samples—a sample of undergraduates at a Canadian university and a sample of adults from mostly the U.S. and the U.K. Both of our studies revealed robust positive regression coefficients indicating that people living with a partner were more likely to improve in social connection after social distancing guidelines were in place than those not living with a partner. This finding is consistent with past research demonstrating that being in a relationship is one of the strongest predictors of connection and well-being [11, 45], in part because happier people are more likely to find partners [46, 47]. Additionally, during times of worry and uncertainty, partners have been found to be more valuable for coping than other types of household members [26]. Moreover, recent research has shown that, on average, romantic relationships have not deteriorated over the course of the pandemic; indeed, people are relatively more willing to forgive their partners during COVID-19 [48]. In light of this evidence, it is not surprising that partners showed the strongest effect, especially during a pandemic.

Contrary to our pre-registered hypotheses, changes in loneliness were not predicted by any other aspects of household composition. Furthermore, we found only nonsignificant trends for the impact of household size, including living alone, on social connection during COVID-19, perhaps because both our studies included small samples of those living in large households and households of one. It is important to keep in mind that the pandemic has forced people to spend unusually large amounts of time confined to home. Given that interpersonal interactions must be positive to contribute to one’s overall sense of connectedness [10], those who live in larger households—relative to those who live alone or in smaller households—may have had more interactions that were negative (e.g., due to bickering or lack of privacy and alone time) and, as a result, failed to experience benefits in terms of social connection. Moreover, our studies measured experiences fairly early in the pandemic (April 2020); thus, as people continue to distance over long periods of time, their feelings of social connection may suffer. Going beyond household size and structure, future studies should examine the effects of relationship quality on social connection over time.

When examining how other features of household composition were associated with shifts in social connection during the pandemic, we obtained mixed findings regarding living with pets and null findings for all other household variables. However, because households are multifaceted, larger sample sizes will be needed to fully dissect the household composition findings, as well as to reveal interactions (such as with household size, gender, or country of residence). For example, studies with larger sample sizes may uncover differences in connection between those in households of four (with a partner and two children) versus households of five (with a partner and three children), and so on. Importantly, future investigators may wish to further unpack the role of household dynamics, as some households include unhealthy relationships that may be exacerbated by social distancing measures and others include housemates that minimally interact. As such, the quality and frequency of interaction among household members—perhaps with experience sampling or daily diary measures—is an important factor to explore in future work.

Implications and conclusions

Directed by social distancing interventions in the spring of 2020, millions of people were no longer commuting to work, attending school, or leaving their homes to spend time with friends and family. These extraordinary conditions likely led people to rely more on their household members to fulfill their needs for closeness, belonging, and connection [10]. The results from our two studies revealed that living with a partner—but not how many people or who else one lives with—appeared to confer unique benefits during these uncertain and unprecedented times. Indeed, demonstrating its robustness, this finding replicated across our two studies, despite weak and opposite correlations between household size and living with a partner (r = -.06 in Study 1 and .11 in Study 2).

In light of these results, policy makers might consider developing guidelines for social/physical distancing that protect people's physical health while ensuring they retain a sense of closeness and connection by spending time in close proximity with partners, even outside their households. Some areas of the world, such as New Zealand, have implemented a strategy known as the "social bubble," which eases social distancing to allow close contact with another household [49]. Such approaches might be especially helpful for individuals who have been unintentionally and disproportionately socially isolated by social distancing measures, such as those who are cut off, separated from their partners, or generally struggling with staying at home. However, social bubbles pose a risk of increased infection rates [49]. Hence, just as safe sex education aims to reduce the rate of sexually transmitted diseases and unintended pregnancy, education on safe social distancing (or social bubbling) strategies might guide individuals across the globe on how to connect with others safely while simultaneously curtailing COVID-19 rates. In sum, recommendations that reduce the risk of transmission while prioritizing social connection can ensure that people's physical and psychological health are optimally balanced.

Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike & aversion

“I hate this part right here”: Embodied, subjective experiences of listening to aversive music. Henna-Riikka Peltola, Jonna Katariin Vuoskoski. Psychology of Music, February 17, 2021.

Abstract: Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike and aversion as well. To date, only limited research has been carried out to understand the subjective experience of listening to aversive music. This qualitative study explored people’s negative experiences associated with music listening, with the aim to understand what kinds of emotions, affective states, and physical responses are associated with listening to aversive music. One hundred and two participants provided free descriptions of (1) musical features of aversive music; (2) subjective physical sensations, thoughts and mental imagery evoked by aversive music; (3) typical contexts where aversive music is heard; and (4) the similarities and/or differences between music-related aversive experiences and experiences of dislike in other contexts. We found that responses to aversive music are characterized by embodied experiences, perceived loss of agency, and violation of musical identity, as well as social or moral attitudes and values. Furthermore, two “experiencer types” were identified: One reflecting a strong negative attitude toward unpleasant music, and the other reflecting a more neutral attitude. Finally, we discuss the theoretical implications of our findings in the broader context of music and emotion research.

Keywords: negative emotions, embodiment, emotion, listening, qualitative, valence

Although the main focus of previous research has been on the paradoxical enjoyment of negative emotions, some work on the unpleasant aspects of music and sounds has been carried out. McDermott (2012) summarized neuroscientific findings relating to auditory preferences, and presented typical aversive features of non-musical sounds. In general, loud and distorted sounds are usually considered as unpleasant, and certain frequencies are likely to trigger aversive responses: Sharpness (high-frequency energy of a sound) and roughness (rapid amplitude modulation of a sound) are major determinants of unpleasantness, but they can be less aversive at low volume. However, in the context of music, aversion to sounds is at least partially context-dependent and a matter of exposure and familiarization. For instance, the development of music technology and the introduction of distortion in rock music has challenged the traditional Western concepts of music aesthetics (McDermott, 2012). Cunningham et al. (2005) investigated aversive musical features, and discovered certain features explaining why a piece of music was hated: Bad or clichéd lyrics, catchiness (the "earworm effect"), voice quality of a singer, over-exposure, perceptions of pretentiousness, and extramusical associations (such as the influence of music videos or unpleasant personal experiences) were identified as the main factors making music unpleasant.

Furthermore, listeners’ psychological strategies in relation to musical taste have been preliminarily investigated. Ackermann (2019) used interviews to explore negative attitudes toward disliked music, and synthesized four themes of “legitimization strategies” that are used to justify these attitudes. The themes cover (1) music-specific legitimization strategies, where the focus is on the compositional aspects of music, the interpretation of the musician or composer, the lyrics and semantic content, and other aesthetic criteria; (2) listener-specific legitimization strategies, where the focus is on the emotional or mood-related responses to music, physical reactions, and other aspects relating to the self and identity; (3) social legitimization strategies, where the focus is on in-group and out-group relations; and finally (4) cross-category subject areas, consisting of aspects such as the exaggerated emotionalization (Kitsch) of music, the authenticity and commerciality of music, and differing definitions between music and noise. The first three strategies seem to be applicable for disliking singing voices in popular music as well. Merrill and Ackermann (2020) found that emotional reasons, factual reasons, bodily reactions and urges, and social reasons were rationales for the negative evaluation of pop-singers’ voices (see also Merrill, 2019). The preliminary work of these two scholars shows that, in addition to socio-cultural perspectives and aspects relating to social identity, psychological, emotional, and physical responses play a crucial role in aversive musical experiences.

Krueger (2019) has proposed that music’s materiality is the key reason behind its power over listeners. The fact that we resonate (physically) with sounds explains why humans react to high volume and certain frequencies, but particularly musical sounds “seem to penetrate consciousness in a qualitatively deeper way than input from other perceptual modalities,” as Krueger (2019) states. Thus, music and soundscapes that are not made or chosen by the listener can strongly affect them, and potentially even negate individual agency and consent by “hacking” their self-regulatory system. These mechanisms have been previously investigated in studies focusing on music and affect regulation, highlighting the positive effects of intentional music listening for self-regulative purposes (for a review on different approaches to affective self-regulation through music, see Baltazar & Saarikallio, 2017). According to Krueger (2019), it is possible to weaponize these processes, and thus use music as a technology for “affective mind invasion” and, in the worst case, torture, as was done by the United States military in the so-called “global war on terror.” Recorded cases of the military playing loud rock music from speakers during operations, as well as looping offensive unfamiliar heavy metal music or endless repetitions of Western children’s songs to “soften up detainees prior to questioning” instead of weaponizing sheer noise, suggest that symbolic musical “messages” combined with high-volume sounds are effective and subtle ways of affecting one’s mind compared to more apparent forms of violence (Garratt, 2018, pp. 42–44).

The aim of the present study is to explore people’s negative experiences associated with music listening. We aim to understand what kinds of emotions, affective states, and physical responses are associated with aversive music, identify commonalities in the verbal descriptions, and reflect on the theoretical implications of these aversive musical experiences for the wider music and emotion research community.

Substantial heritability of neighborhood disadvantage: Individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults

Understanding neighborhood disadvantage: A behavior genetic analysis. Albert J. Ksinan, Alexander T. Vazsonyi. Journal of Criminal Justice, Volume 73, March–April 2021, 101782.


Purpose Studies have shown that disadvantaged neighborhoods are associated with higher levels of crime and delinquent behaviors. Existing explanations do not adequately address how individuals select neighborhoods. Thus, the current study employed a genetically-informed design to test whether living in a disadvantaged neighborhood might be partly explained by individual characteristics, including self-control and cognitive ability.

Method A subsample of N = 1,573 Add Health siblings living away from their parents at Wave 4 was used in twin analyses to assess genetic and environmental effects on neighborhood disadvantage. To evaluate which individual-level variables might longitudinally predict neighborhood disadvantage, a sample of N = 12,405 individuals was used.

Results Findings provided evidence of significant heritability (32%) of neighborhood disadvantage. In addition, a significant negative effect by adolescent cognitive ability on neighborhood disadvantage 14 years later was observed (β = −0.04, p = .002). Follow-up analyses showed a genetic effect on the association between cognitive ability and neighborhood disadvantage.

Conclusions Study findings indicate substantial heritability of neighborhood disadvantage, showing that individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults.


Criminologists have extensively focused on the impact of neighborhood social disorganization on crime and deviance since the first half of the 20th century (Shaw & McKay, 1942). Research has provided evidence that neighborhoods with disorganized structural characteristics, including high levels of mobility, high rates of poverty, or high numbers of single-parent families, were associated with higher levels of criminal behavior (Bursik & Grasmick, 1999; Morenoff, Sampson, & Raudenbush, 2001; Sampson, 1985; Sampson, Raudenbush, & Earls, 1997; Wilson, 1987).

These hypothesized neighborhood effects have generally been considered to flow in one direction, namely from neighborhoods to individuals. However, a small number of studies have hypothesized and tested the opposite, namely that individuals select into their neighborhoods. Given that neighborhood variables reflect the aggregation of the qualities and characteristics of individual members, it seems likely that certain individual traits might predict neighborhood characteristics (Hedman & van Ham, 2012). If individual traits do in fact predict neighborhood characteristics and all psychological traits are to a certain extent heritable (Turkheimer, 2000), then it stands to reason that neighborhood characteristics will show some heritable effect as well. The current study used a genetically-informed design to test for both genetic and environmental effects on selecting into certain neighborhoods and to test whether individual characteristics (self-control and cognitive ability) have developmental effects on this selection process.

A neighborhood is defined as a geographically unique subsection or area, part of a larger community. Typically, neighborhoods are operationalized using geographic boundaries defined by an administrative agency (such as the Census Bureau), which partitions neighborhoods into tracts or blocks (Sampson, Morenoff, & Gannon-Rowley, 2002).

The traditional framework for studying neighborhood effects is rooted in social disorganization theory. According to this theory, every individual is prone to engage in some deviant or criminal behaviors. Bonds to society make these behaviors too costly and thus effectively prevent crime from happening. The process through which a neighborhood controls the behaviors of its members is termed collective efficacy (Morenoff et al., 2001), or the ability of individuals sharing a neighborhood to work together and to solve issues related to their neighborhood. In this way, individuals engage in effective indirect social control in order to prevent neighborhoods from deteriorating. A typical example of such indirect social control is when adults monitor youth loitering in the neighborhood and are willing to confront them when they disturb or disrupt a public space (Sampson et al., 1997). A well-functioning neighborhood is a complex and cohesive system of social networks, rooted in both the family as well as the community (Sampson, Morenoff, & Earls, 1999).

Neighborhood structural factors such as high poverty, single-parent families, residential instability, high unemployment, or a high number of minority inhabitants, are associated with lower levels of neighborhood organization or an inability of the community to maintain effective social control, according to social disorganization theory (Sampson, 1997; Sampson & Groves, 1989). The impact of these structural factors might lead to alienation of neighborhood members and low levels of investment in the community, which in turn leads to greater social disorder and thus a higher proneness to crime (Leventhal & Brooks-Gunn, 2000; Leventhal, Dupéré, & Brooks-Gunn, 2009; Molnar, Miller, Azrael, & Buka, 2004; Sampson & Groves, 1989).

Empirical support for social disorganization theory and the concept of collective efficacy in predicting crime and delinquency has been provided by a number of studies that have used hierarchical or multi-level modeling. For example, Sampson et al. (1997) found that concentrated disadvantage, immigration concentration, and residential (in)stability significantly predicted collective efficacy, which in turn mediated the effects of disadvantage and residential (in)stability on several measures of violence. Similarly, Sampson and Raudenbush (1999) found that collective efficacy of a neighborhood predicted lower levels of disorder and crime (see also Molnar et al., 2004; Sampson, 1997; Valasik & Barton, 2017).

In contrast, a more recent approach to studying neighborhood effects has focused on how neighborhood characteristics predict individual-level outcomes (as opposed to predicting neighborhood-level rates). Based on Leventhal and Brooks-Gunn's (2000) review, neighborhoods affect a plethora of individual adjustment measures. Among them, neighborhood SES was found to positively predict educational attainment and mental health, as well as negatively predict individual delinquency and criminal behavior (Leventhal et al., 2009).

Individuals do not randomly allocate into neighborhoods, but rather, they actively seek out and select their neighborhoods. If neighborhoods consist of individual members, it stands to reason that the likelihood of living in a particular place is, to a certain extent, affected by individual characteristics, and thus, that neighborhood characteristics are also affected by individual differences. This is referred to as ‘self-selection’. In the current definition, self-selection refers to a broader concept than simply ‘individuals making deliberate choices when deciding where to live.’ Such a view would be imprecise and potentially harmful, as it might put too much emphasis on personal responsibility for potentially detrimental living conditions. Rather, self-selection refers to a more impersonal process where individuals with different life histories occupy different life trajectories that lead them to different places of residence, and, in many cases, living in a particular neighborhood is not so much a volitional process or act, but rather a situation that cannot be easily changed.

The idea that a self-selection process might be taking place related to an association between an individual (or a family) and neighborhood characteristics is certainly not new. In fact, the issue of non-independence between neighborhood sorting and individual characteristics has been noted by several authors (Sampson & Sharkey, 2008). However, the individual characteristics identified as influencing self-sorting into particular neighborhoods have been of a social nature, such as being a renter versus a homeowner, being single, or being an immigrant, just to name a few (Hedman & van Ham, 2012). At present, however, there does not appear to be a clear understanding about the potential effect of self-selection on neighborhood effects. Some research did not find support for neighborhood effects once self-selection was accounted for (Oreopoulos, 2003), while other studies found that neighborhood effects remained significant after accounting for self-selection (Aaronson, 1998; Dawkins, Shen, & Sanchez, 2005; Galster, Marcotte, Mandell, Wolman, & Augustine, 2007). Thus, the evidence is quite mixed.

Behavior genetic studies partition phenotypic variance into three sources: heritability, shared environment, and nonshared environment. Over the past three decades, studies have consistently shown both environmental and genetic influences on the vast majority of individual traits (Plomin, DeFries, Knopik, & Neiderhiser, 2013; Polderman et al., 2015). However, genetic effects are not limited to individual characteristics. In fact, some presumably environmental effects have also been found to be correlated with genetic predispositions. There are three types of gene-environment correlations: passive rGE, evocative rGE, and active rGE (Plomin, DeFries, & Loehlin, 1977). Particularly relevant to the concept of neighborhood self-selection is active rGE, which refers to individuals actively selecting environments based on their inherent preferences (Moffitt, 2005).
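As a rough illustration of how such a variance partition works, the sketch below applies Falconer's classic formulas to twin correlations. The correlations used here are hypothetical, chosen only so that the heritability estimate echoes the ~32% figure reported in the abstract; they are not taken from the study itself.

```python
# Illustrative ACE variance decomposition (Falconer's formulas).
# Twin correlations below are hypothetical, NOT from the study.

def ace_decomposition(r_mz, r_dz):
    """Estimate A (additive genetic), C (shared environment), and
    E (nonshared environment) variance components from monozygotic
    and dizygotic twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability: h^2 = 2(rMZ - rDZ)
    c = 2 * r_dz - r_mz     # shared environment: c^2 = 2rDZ - rMZ
    e = 1 - r_mz            # nonshared environment: e^2 = 1 - rMZ
    return a, c, e

a, c, e = ace_decomposition(r_mz=0.48, r_dz=0.32)
print(f"A = {a:.2f}, C = {c:.2f}, E = {e:.2f}")  # A = 0.32, C = 0.16, E = 0.52
```

Full twin analyses like the one in the study fit these components with structural equation models rather than these simple difference formulas, but the logic of comparing MZ and DZ resemblance is the same.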

Because individuals are not randomly assigned to environments but are active agents in selecting, modifying, and adapting to them, this process is shaped by their individual characteristics, which are themselves substantially heritable. A review of 55 studies by Kendler and Baker (2007) showed that there are substantial genetic effects (average h2 = 0.27) on measures of the environment, including parenting behaviors, stressful life events, social support, and peer interactions. Nevertheless, there has not been a study that has directly tested the heritability of neighborhood characteristics. Most genetically-informed studies on more distal environmental effects (such as schools or neighborhoods) focused on their moderating effects only (Cleveland, 2003; Rowe, Almeida, & Jacobson, 1999). For example, a study by Connolly (2014) found that neighborhood disadvantage moderated the genetic effect on adolescent delinquency between the ages of 6 and 13 years, and between 14 and 17 years, where greater heritable effects were observed at higher levels of neighborhood disadvantage.

How might individual characteristics be genetically related to the neighborhoods that individuals live in? The key to understanding potential genetic effects on neighborhoods lies in the process of active rGE, according to which individuals actively ‘select’ their environments. In the case of neighborhoods, this selection process involves both the choice of a particular neighborhood to live in and the range of neighborhoods available in the first place, which is itself determined to a certain extent by individual traits.

Neighborhood socioeconomic status reflects the socioeconomic status of individual households and their inhabitants, and, in the context of the United States, socioeconomic status is strongly affected by the level of education, which in turn has been found to be positively associated with cognitive ability or intelligence (L. Gottfredson, 1997a; Neisser et al., 1996; Strenze, 2007). Differences in intelligence have a large heritable component which has been found to increase with age (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990; Devlin & Daniels, 1997; Haworth et al., 2010). Moreover, a more direct link between cognitive ability or intelligence and career success, as well as intelligence and more positive developmental adjustment outcomes in general, was also established by numerous studies (Caspi, Wright, Moffitt, & Silva, 1998; L. Gottfredson, 2004; Judge, Higgins, Thoresen, & Barrick, 1999; Schmidt & Hunter, 2004). Thus, it stands to reason that neighborhood socioeconomic status should have a heritable or genetic component, and individual cognitive ability might partially explain this variance.

Another candidate personality trait, which might play a significant role in affecting neighborhood characteristics, is self-control, or the ability to exercise restraint in delaying immediate gratification and subduing our impulses. Perhaps the most prominent theory emphasizing the role of self-control is self-control theory by Gottfredson and Hirschi (1990). According to Gottfredson and Hirschi, all deviant and criminal behaviors are to some extent related to a lack of self-control. A great number of studies have provided consistent empirical support that low self-control is perhaps the single best predictor of deviant and criminal behaviors (Hay, 2001; Vazsonyi, Mikuška, & Kelley, 2017; Wright, Caspi, Moffitt, & Silva, 1999), while high self-control predicts better health, better career prospects, and less substance use (Casey et al., 2011; Mischel et al., 2011; Moffitt et al., 2011). In this view, the association between neighborhood disorganization and low self-control would consider low self-control as the cause rather than the outcome, as individuals with low self-control would be more likely to self-select into neighborhoods with higher levels of social disorganization (Caspi, Taylor, Moffitt, & Plomin, 2000; Evans, Cullen, Burton Jr., & Dunaway, 1997). Both cognitive ability and (low) self-control have in fact been tested in a longitudinal study by Savolainen, Mason, Lyyra, Pulkkinen, and Kokko (2017); findings showed that childhood differences in cognitive skills as well as childhood antisocial propensity (both measured at age 8) significantly predicted the developmental cascade which led to higher socioeconomic exclusion in midlife.

Wednesday, February 17, 2021

Video game play is positively correlated with well-being

Video game play is positively correlated with well-being. Niklas Johannes, Matti Vuorre and Andrew K. Przybylski. Royal Society Open Science, February 17, 2021.

Abstract: People have never played more video games, and many stakeholders are worried that this activity might be bad for players. So far, research has not had adequate data to test whether these worries are justified and if policymakers should act to regulate video game play time. We attempt to provide much-needed evidence with adequate data. Whereas previous research had to rely on self-reported play behaviour, we collaborated with two games companies, Electronic Arts and Nintendo of America, to obtain players' actual play behaviour. We surveyed players of Plants vs. Zombies: Battle for Neighborville and Animal Crossing: New Horizons for their well-being, motivations and need satisfaction during play, and merged their responses with telemetry data (i.e. logged game play). Contrary to many fears that excessive play time will lead to addiction and poor mental health, we found a small positive relation between game play and affective well-being. Need satisfaction and motivations during play did not interact with play time but were instead independently related to well-being. Our results advance the field in two important ways. First, we show that collaborations with industry partners can be done to high academic standards in an ethical and transparent fashion. Second, we deliver much-needed evidence to policymakers on the link between play and mental health.

4. Discussion

How is video game play related to the mental health of players? This question is at the heart of the debate on how policymakers will act to promote or to restrict games’ place in our lives [7]. Research investigating that question has almost exclusively relied on self-reports of play behaviour, which are known to be inaccurate (e.g. [8]). Consequently, we lack evidence on the relation between play time and mental health that is needed to inform policy decisions. To obtain reliable and accurate play data, researchers must collaborate with industry partners. Here, we aimed to address these shortcomings in measurement and report a collaboration with two games companies, Electronic Arts and Nintendo of America, combining objective measures of game behaviour (i.e. telemetry) with self-reports (i.e. survey) for two games: Plants vs. Zombies: Battle for Neighborville and Animal Crossing: New Horizons. We also explored whether the relation between play time and well-being varies with players' need satisfaction and motivations. We found a small positive relation between play time and well-being for both games. We did not find evidence that this relation was moderated by need satisfactions and motivations, but that need satisfaction and motivations were related to well-being in their own right. Overall, our findings suggest that regulating video games, on the basis of time, might not bring the benefits many might expect, though the correlational nature of the data limits that conclusion.

Our goal was to investigate the relation between play time, as a measure of actual play behaviour, and subjective well-being. We found that relying on objective measures is necessary to assess play time: although there was overlap between the amount of time participants estimated to have played and their actual play time as logged by the game companies, that relation was far from perfect. On average, players overestimated their play time by 0.5 to 1.6 hours. The size of that relation and the general trend to overestimate such technology use are in line with the literature, which shows similar trends for internet use [24] and smartphone use [8,23]. Therefore, when researchers rely on self-reports of play behaviour to test relations with mental health, measurement error and potential bias will necessarily lead to inaccurate estimates of true relationships. Previous work has shown that using self-reports instead of objective measures of technology use can either inflate [45,46] or deflate [44] effects. In our study, associations between objective play time and well-being were larger than those between self-reported play time and well-being. Had we relied on self-reports only, we could have missed a potentially meaningful association.
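The attenuating effect of self-report error on an observed correlation can be illustrated with a small simulation. Everything below is hypothetical (the effect size, the overestimation bias, and the noise level are invented for illustration), not the study's data:

```python
# Minimal simulation: random error in self-reported play time
# attenuates its observed correlation with well-being.
# All numbers here are invented for illustration.
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# True play time (hours over two weeks) with a weak positive link to well-being.
true_hours = [random.gauss(20, 8) for _ in range(5000)]
well_being = [0.02 * h + random.gauss(0, 1) for h in true_hours]

# Self-reports: a systematic overestimate plus substantial random error.
self_report = [h + 1.0 + random.gauss(0, 8) for h in true_hours]

r_true = pearson(true_hours, well_being)
r_self = pearson(self_report, well_being)
print(f"objective r = {r_true:.3f}, self-report r = {r_self:.3f}")
# The self-report correlation comes out noticeably smaller than the
# objective one: classical measurement error biases r toward zero.
```

Note that this sketch only shows the deflation case; as the paragraph above notes, systematic (non-random) reporting biases can also inflate observed effects.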

Players who objectively played more in the past two weeks also reported experiencing higher well-being. This association aligns well with literature that emphasizes the benefits of video games as a leisure activity that contributes to people's mental health [42]. Because our study was cross-sectional, there might also be a self-selection effect: people who feel good might be more inclined to pick up their controller. Such a view aligns well with research that shows reciprocal relations between media use and well-being [64,65]. Equally plausible, there might be factors that affect both game play time and well-being [66,67]. For example, people with high incomes are likely to be healthier and more likely to be able to afford a console/PC and the game.

Even if we were to assume that play time directly predicts well-being, it remains an open question whether that effect is large enough to matter for people's subjective experience. From a clinical perspective, the effect is probably too small to be relevant for clinical treatments. Our effect size estimates were below the smallest effect size of interest for media effects research that Ferguson [68] proposes. For health outcomes, Norman and colleagues [69] argue that we need to observe a large effect size of around half a standard deviation for participants to feel an improvement. In the AC:NH model, 10 h of game play were associated with a 0.06 standard deviation increase in well-being. Therefore, a half standard deviation change would require approximately 80 h of play over the two weeks (translating to about 6 h per day). However, Anvari and Lakens demonstrated that people might subjectively perceive differences of about a third of a standard deviation on a measure of well-being similar to ours [70], suggesting that approximately three and a half hours of play might be associated with subjectively felt changes in well-being. Nevertheless, it is unclear whether typical increases in play go hand in hand with perceivable changes in well-being. However, even small relations might accumulate to larger effects over time, and finding boundary conditions, such as time frames under which effects are meaningful, is a necessary next step for research [71]. Moreover, we only studied one facet of positive mental health, namely affective well-being. Future research will need to consider other facets, such as negative mental health.
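The arithmetic behind these thresholds can be spelled out. The sketch below takes the 0.06-SD-per-10-hours figure quoted above at face value and assumes a simple linear extrapolation over the two-week window:

```python
# Back-of-the-envelope check of the effect-size arithmetic quoted above.
# Assumed input: ~0.06 SD of well-being per 10 hours of play over two
# weeks (the AC:NH estimate), extrapolated linearly.

sd_per_hour = 0.06 / 10        # 0.006 SD per hour of play

# Hours needed for a half-SD change (Norman et al.'s perceptibility bar).
hours_half_sd = 0.5 / sd_per_hour
print(round(hours_half_sd))          # ~83 hours over two weeks
print(round(hours_half_sd / 14))     # ~6 hours per day

# Hours needed for a third of an SD (Anvari & Lakens's lower bar).
hours_third_sd = (1 / 3) / sd_per_hour
print(round(hours_third_sd))         # ~56 hours over two weeks
```

The linear extrapolation is of course an assumption; a correlational estimate at typical play times need not scale to very heavy play.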

Although our data do not allow causal claims, they do speak to the broader conversation surrounding the idea of video game addiction (e.g. [15]). The discussion about video games has focused on fears about a large part of players becoming addicted [14,21]. Given their widespread popularity, many policymakers are concerned about negative effects of play time on well-being [7]. Our results challenge that view. The relation between play time and well-being was positive in two large samples. Therefore, our study speaks against an immediate need to regulate video games as a preventive measure to limit video game addiction. If anything, our results suggest that play can be an activity that relates positively to people's mental health—and regulating games could withhold those benefits from players.

We also explored the role of people's perceptions in the relation between play time and well-being. Previous work has shown that gamers' experience probably influences how playing affects mental health [51,52]. We explored such a possible moderation through the lens of self-determination theory [50]: We investigated whether changes in need satisfaction, enjoyment and motivation during play changed the association between play time and well-being. We found no evidence for moderation. Neither need satisfaction, nor enjoyment, nor extrinsic motivation significantly interacted with play time in predicting well-being. However, conditional on play time, satisfaction of the autonomy and relatedness needs, as well as enjoyment, were positively associated with well-being. Extrinsic motivation, by contrast, was negatively associated with well-being. These associations line up with research demonstrating that experiencing need satisfaction and enjoyment during play can be a contributing factor to user well-being, whereas an extrinsic motivation for playing probably does the opposite (e.g. [56]).

Although we cannot rule out that these player experiences had a moderating role, the estimates of the effect size suggest that any moderation is likely to be too small to be practically meaningful. In other words, our results do not suggest that player experience modulates the relation between play time and well-being, but rather that it contributes to well-being independently. For example, players who experience a high degree of relatedness during play will probably experience higher well-being, but a high degree of relatedness is unlikely to strengthen the relation between play time and well-being. Future research, focused on granular in-game behaviours such as competition, collaboration and advancement will be able to speak more meaningfully to the psychological affordances of these virtual contexts.

Conditional on those needs and motivations, play time was no longer significantly related to well-being. We are cautious not to put too much stock in this pattern. A predictor becoming non-significant when controlling for other predictors can have many causes. Need satisfaction and motivations might mediate the relation between play time and well-being; conditioning on the mediator could mask the effect of the predictor [67]. Alternatively, if play time and player experiences are themselves related, including them all as predictors would result in some relations being overshadowed by others. We need empirical theory-driven research grounded in clear causal models and longitudinal data to dissect these patterns.

4.1. Limitations

We are mindful to emphasize that we cannot claim that play time causally affects well-being. The goal of this study was to explore whether and how objective game behaviour relates to mental health. We were successful in capturing a snapshot of that relation and gaining initial insight into the relations between video games and mental health. But policymakers and public stakeholders require evidence which can speak to the trajectory of play and its effect over time on well-being. Video games are not a static medium; both how we play them and how we discuss them are in constant flux [72]. To build on the work we present here, there is an urgent need for collaborations with games companies to obtain longitudinal data that allow investigating all the facets of human play and its effects on well-being over time.

Longitudinal work would also address the question of how generalizable our findings are. We collected data during a pandemic. It is possible that the positive association between play time and well-being we observed only holds during a time when people are naturally playing more and have less opportunity to follow other hobbies. Selecting two titles out of a wide range of games puts further limitations on how generalizable our results are. Animal Crossing: New Horizons, in particular, is considered a casual game with little competition. Therefore, although those two titles were drawn from different genres, we cannot generalize to players across all types of games [73]. The results might be different for more competitive games. Different games have different affordances [74] and, therefore, likely different associations with well-being. To be able to make recommendations to policymakers on making decisions across the diverse range of video games, we urge video game companies to share game play data from more titles from different genres and of different audiences. Making such large-scale data available would enable researchers to match game play with existing cohort studies. Linking these two data sources would enable generalizable, causal tests of the effect of video games on mental health.

Another limiting factor on the confidence in our results is the low response rate observed in both of our surveys. It is possible that various selection effects might have led to unrepresentative estimates of well-being, game play, or their relationship. Increasing response rates, while at the same time ensuring samples' representativeness, remains a challenge for future studies in this field.

Our results are also on a broad level—possibly explaining the small effect sizes we observed. When exploring effects of technology use on well-being, researchers can operate on several levels. As Meier & Reinecke [75] explain, we can choose to test effects on the device level (e.g. time playing on a console, regardless of game), the application level (e.g. time playing a specific game), or the feature level (e.g. using gestures in a multiplayer game). Here, we operated on the application level, which subsumes all possible effects on the feature level. In other words, when measuring time with a game, some features of the game will have positive effects; others will have negative effects. Measuring on the application level will thus only give us a view of ‘net’ video game effects. Assessing game behaviour on a more granular level will be necessary to gain more comprehensive insights and make specific recommendations to policymakers. For that to happen, games companies will need to have transparent, accessible APIs and access points for researchers to investigate in-game behaviour and its effects on people's mental health. Such in-game behaviours also carry much promise for studying the therapeutic effects of games, for example, as markers of symptom strength in disorders [76]. In rare cases, researchers were able to make use of such APIs [47,49], but the majority of games data are still not accessible. For PvZ, EA provided a variety of in-game behaviours that we did not analyse here. We invite readers to explore those data on the OSF project of this manuscript.

We relied on objective measures of video game behaviour. These measures are superior to self-reported behaviour because they directly capture the variable of interest. However, capturing game sessions on the video game companies' side comes with its own measurement error. Video game companies cannot perfectly measure each game session. For example, in our data processing, some game sessions had duplicate start and end times (for PvZ) or inaccurate start and end times, but accurate session durations (for AC:NH). Measurement error in logging technology use is a common issue (e.g. [12,77]), and researchers collaborating with industry partners need to understand how these partners collect telemetry. The field needs to embrace these challenges in measurement rather than defaulting to self-reports.
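As an illustration of the kind of cleaning step such duplicate timestamps imply, here is a minimal sketch of deduplicating session records that share identical start and end times. The session format is hypothetical, not EA's or Nintendo's actual telemetry schema:

```python
# Minimal sketch: drop telemetry sessions with duplicate (start, end)
# timestamps and sum play time. Hypothetical record format, not a
# real games-company schema.
from datetime import datetime

def dedupe_sessions(sessions):
    """Keep the first record for each unique (start, end) pair and
    return the unique sessions plus total play time in hours."""
    seen = set()
    unique = []
    for s in sessions:
        key = (s["start"], s["end"])
        if key not in seen:
            seen.add(key)
            unique.append(s)
    total = sum((s["end"] - s["start"]).total_seconds() for s in unique) / 3600
    return unique, total

sessions = [
    {"start": datetime(2020, 8, 1, 18, 0), "end": datetime(2020, 8, 1, 19, 30)},
    {"start": datetime(2020, 8, 1, 18, 0), "end": datetime(2020, 8, 1, 19, 30)},  # duplicate log entry
    {"start": datetime(2020, 8, 2, 20, 0), "end": datetime(2020, 8, 2, 21, 0)},
]
unique, hours = dedupe_sessions(sessions)
print(len(unique), hours)  # 2 sessions, 2.5 hours of play
```

Real pipelines would also need rules for overlapping or clock-skewed sessions, which is exactly why understanding how a partner collects telemetry matters.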

Last, this study was exploratory and we made decisions about data processing and analysis without specifying them a priori [78]. Such researcher degrees of freedom can yield different results, especially in the field of technology use and well-being [65,79]. In our process, we were as transparent as possible to enable others to examine and build upon our work [31]. To move beyond this initial exploration of objective game behaviour and well-being to a more confirmatory approach, researchers should follow current best practices: they should preregister their research before collecting data in collaboration with industry partners [80,81], before accessing secondary data sources [82], and consider the registered report format [83,84]. Following these steps will result in a more reliable knowledge base for policymakers.