Thursday, February 18, 2021

What do prime-age ‘NILF’ men do all day? A cautionary on universal basic income

What do prime-age ‘NILF’ men do all day? Nicholas Eberstadt, Evan Abramsky. AEI, Feb 8 2021. https://www.aei.org/articles/what-do-prime-age-nilf-men-do-all-day

To date, most of the debate about [the Universal Basic Income] has centered on its affordability—i.e., its staggering expense. But a scarcely less important question concerns the implications of such largesse for the recipients themselves and civil society. What would a guaranteed income mean for the quality of citizenship in our country, given that a UBI would allow some—perhaps many—adult beneficiaries to opt for a life that does not include gainful employment or other comparable work?

As it happens, an experiment of sorts is already underway to help us answer this very question. Thanks to the American Time Use Survey (ATUS) from the Bureau of Labor Statistics, we have detailed, self-reported information each year on how roughly 10,000 adult respondents spend their days—from the moment they wake until they sleep. These surveyed Americans include prime-age men who are not in the labor force (“NILF” to social scientists): men ordinarily in their peak employment years who are neither working nor looking for work. By examining the self-reported patterns of daily life of these grown men who do not have and are not seeking jobs, we may gain insights into the work-free existence that some UBI advocates hold to be a positive end in its own right.

---

The portrait of daily life that emerges from time-use surveys for grown men who are more or less entirely disconnected from the world of work is sobering. So far as can be divined statistically, their independence from obligations of the workforce does not translate into any obvious enhancement in their own quality of life or improvement in the well-being of others.

To go by the information they themselves report, quite the contrary seems to be true. Though they have nothing but time on their hands, they are not terribly involved in care for their home or for others in it. They are increasingly disinclined to embark on activities that take them outside the house. The central focus of their waking day is the television or computer screen, to which they commit as much time as many men and women devote to a full-time job. So far as we can tell, moreover, screen time is sucking up a still-increasing portion of their waking hours.

There would seem to be no shortage of anomie, alienation, or even despair in the daily lives of men entirely free from work in America today. Why, then, would we not expect a UBI—which would surely result in a detachment of more men from paid employment—to result in even more of the same?

Arguments can be made, of course, that a UBI would attract a different sort of “unworking” man from those who predominate in the prime-age male NEET population today. But the patterns we have presented on the daily routines of existing work-free men should make proponents of the UBI think long and hard. Instead of producing new community activists, composers, and philosophers, more paid worklessness in America might only further deplete our nation’s social capital at a time when good citizenship is already in painfully short supply.

Both men and women were more committed to their relationships if they perceived their partners as attractive; however, people tended to feel less committed the more attractive their partners perceived themselves

Committing to a romantic partner: Does attractiveness matter? A dyadic approach. Tita Gonzalez Aviles et al. Personality and Individual Differences, Volume 176, July 2021, 110765, February 16 2021. https://doi.org/10.1016/j.paid.2021.110765

Abstract: Physical attractiveness is a highly valued trait in prospective romantic partners. However, it is unclear whether romantic partners' attractiveness is associated with commitment to the relationship. We report the results of a study of 565 male-female couples residing in Austria, Germany, or Switzerland. Employing dyadic analytical methods, we show that both men and women were more committed to their relationships if they perceived their partners as attractive. However, attractiveness also had a negative effect on commitment: People tended to feel less committed the more attractive their partners perceived themselves. Furthermore, although partners perceived themselves as similar in attractiveness to their partners, analyses revealed that similarity was not associated with commitment. Together, the findings demonstrate that attractiveness does matter for commitment to existing romantic relationships and emphasize the value of dyadic approaches to studying romantic relationships.

Keywords: Actor-partner interdependence model; Attraction; Attractiveness; Commitment; Dyadic response surface analysis
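
For readers unfamiliar with the dyadic methods named above: an actor-partner interdependence model (APIM) estimates, for each member of a couple, an "actor" path (my rating of my partner's attractiveness predicting my commitment) and a "partner" path (my partner's self-rated attractiveness predicting my commitment). Below is a minimal sketch on simulated data; the variable names, effect sizes, and two-regression shortcut are illustrative assumptions, not the study's actual model (which used dyadic response surface analysis).

```python
# Minimal APIM-style sketch on simulated couples. Nothing here reproduces
# the study's data; effect sizes only mirror the direction of its findings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 565  # couples, matching the study's sample size

# Simulated ratings: column 0 = men, column 1 = women.
perceived_partner_attr = rng.normal(size=(n, 2))  # my rating of my partner
self_perceived_attr = rng.normal(size=(n, 2))     # my rating of myself

# Commitment with a positive actor path and a negative partner path
# (the partner's self-perceived attractiveness lowers my commitment).
commitment = (0.3 * perceived_partner_attr
              - 0.15 * self_perceived_attr[:, ::-1]
              + rng.normal(scale=1.0, size=(n, 2)))

# One regression per dyad member; full APIMs are usually fit as a single
# model with correlated residuals, which this sketch omits for brevity.
for member, label in ((0, "men"), (1, "women")):
    X = sm.add_constant(np.column_stack([
        perceived_partner_attr[:, member],   # actor effect
        self_perceived_attr[:, 1 - member],  # partner effect
    ]))
    fit = sm.OLS(commitment[:, member], X).fit()
    print(label, fit.params.round(2))  # [intercept, actor, partner]
```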


Even very subtle interactions with strangers yield short-term happiness

van Lange, Paul, and Simon Columbus. 2021. “Vitamin S: Why Is Social Contact, Even with Strangers, so Important to Well-being?.” PsyArXiv. February 18. doi:10.31234/osf.io/jaxck

Abstract: Even before COVID-19, it was well-known in psychological science that our well-being is strongly served by the quality of our close relationships. But is our well-being also served by social contact with people we know less well? In this article, we discuss three propositions to support the conclusion that the benefits of social contact also derive from interactions with acquaintances and even strangers. The propositions state that most interaction situations with strangers are benign (Proposition 1), that most strangers are benign (Proposition 2), and that most interactions with strangers enhance well-being (Proposition 3). These propositions are supported, first, by recent research designed to illuminate the primary features of interaction situations, showing that situations with strangers often represent low conflict of interest. Second, in our interactions with strangers, most people exhibit high levels of low-cost cooperation (social mindfulness) and high-cost helping if help to strangers is urgent. We close by sharing research examples which show that even very subtle interactions with strangers yield short-term happiness. Broader implications for COVID-19 and urbanization are discussed.


From 2009... Voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending; we estimate that $1 spent on preparedness is worth about $15 of future damage mitigated

Healy, A., & Malhotra, N. (2009). Myopic Voters and Natural Disaster Policy. American Political Science Review, 103(3), 387-406, Aug 2009. https://doi.org/10.1017/S0003055409990104

Abstract: Do voters effectively hold elected officials accountable for policy decisions? Using data on natural disasters, government spending, and election returns, we show that voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending. These inconsistencies distort the incentives of public officials, leading the government to underinvest in disaster preparedness, thereby causing substantial public welfare losses. We estimate that $1 spent on preparedness is worth about $15 in terms of the future damage it mitigates. By estimating both the determinants of policy decisions and the consequences of those policies, we provide more complete evidence about citizen competence and government accountability.


DISCUSSION

A government responding to the incentives implied by our results will underinvest in natural disaster preparedness. The inability of voters to effectively hold government accountable thus appears to contribute to significant inefficiencies in government spending because the results show that preparedness spending substantially reduces future disaster damage. Voters are, in a word, myopic. They are not, as we have shown, myopic in the sense that they respond more to spending just before an election than to spending a year or two earlier; rather, they are myopic in the sense that they are unwilling to spend on natural disasters before the disasters have occurred. An ounce of prevention would be far more efficient than a pound of cure, but voters seem interested only in the cure. The resulting inconsistencies in democratic accountability reduce public welfare by discouraging reelection-minded politicians from investing in protection, while encouraging them to provide assistance after harm has already occurred.
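
The incentive wedge described here is easy to see in a back-of-the-envelope calculation. The sketch below contrasts a welfare-maximizing split of a disaster budget with a vote-maximizing one, using the paper's 15:1 mitigation ratio; the electoral payoff parameters are hypothetical placeholders standing in for the paper's finding that relief spending is rewarded at the polls while preparedness spending is not.

```python
# Back-of-the-envelope sketch of the incentive distortion described above.
# The 15:1 ratio is from Healy & Malhotra (2009); the electoral-return
# parameters are purely illustrative placeholders.

PREPAREDNESS_RETURN = 15.0  # dollars of future damage averted per $1 (paper)
RELIEF_RETURN = 1.0         # relief transfers roughly $1 of value per $1

VOTE_GAIN_PER_RELIEF_DOLLAR = 1e-9        # hypothetical positive reward
VOTE_GAIN_PER_PREPAREDNESS_DOLLAR = 0.0   # near-zero reward found in the paper

budget = 1_000_000.0

def social_value(prep_share: float) -> float:
    """Damage averted plus relief value for a given budget split."""
    prep = prep_share * budget
    relief = budget - prep
    return prep * PREPAREDNESS_RETURN + relief * RELIEF_RETURN

def electoral_value(prep_share: float) -> float:
    prep = prep_share * budget
    relief = budget - prep
    return (prep * VOTE_GAIN_PER_PREPAREDNESS_DOLLAR
            + relief * VOTE_GAIN_PER_RELIEF_DOLLAR)

# A welfare-maximizing planner puts everything into preparedness...
print(max((social_value(s), s) for s in (0.0, 0.5, 1.0)))     # best at s = 1.0
# ...while a vote-maximizing incumbent puts everything into relief.
print(max((electoral_value(s), s) for s in (0.0, 0.5, 1.0)))  # best at s = 0.0
```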

Although we consider our findings to be relevant to potential underinvestments in preparedness in areas beyond natural disasters such as preventive medicine, the government almost certainly does not underinvest in all kinds of preparedness. For example, after the attacks on September 11, large investments were made in preventing future attacks on passenger jets. One clear difference between airport security and most natural disaster preparedness measures is that airport security is highly observable and salient. Moreover, this example may be the exception that proves the rule we have demonstrated in this article. When voters provide their elected officials with incentives to make mistakes—ranging from insufficient investment in natural disaster preparedness to perhaps excessive attention to airline security—elected officials are likely to provide the inefficient policies that voters implicitly reward. Moreover, it is possible that major events such as Hurricane Katrina can heighten the value of natural disaster preparedness, but this effect may be temporary. For example, California passed Proposition 1E in 2006, a measure that provided bond financing for $4.1 billion in flood control measures, with $3 billion for upgrades to levees in the Central Valley, an area considered by experts to be exposed to catastrophic flooding due to insufficient protection from the existing levee network. Experts characterized the situation as a “ticking time bomb” in January 2005. The bond proceeds were to be used to obtain federal matching funds for the projects, in addition to financial and technical assistance from federal agencies such as the Army Corps of Engineers. Despite repeated warnings about the risk of severe flooding in the Central Valley, large-scale action was implemented only after Hurricane Katrina made the danger salient. The importance of Hurricane Katrina in ensuring support for Proposition 1E is suggested by the short argument that supporters of the measure included on the ballot. The argument read, “Our nation learned a tragic lesson from Hurricane Katrina—we cannot continue to neglect our unsafe levees and flood control systems” (California Attorney General 2006). The measure passed easily, winning 64% of the vote, including 67% of the vote in Los Angeles County and 56% of the vote in relatively conservative Orange County, despite the fact that neither would be affected directly by the bulk of the proposed spending. For voters in these areas, it appears to be the case that levee repair became a public good that voters were willing to support after Hurricane Katrina made clear the potential costs of inaction.

A similar phenomenon appears to have occurred at the federal level. Following Hurricane Katrina, Congress passed and President Bush signed the Post-Katrina Emergency Reform Act of 2006, which reorganized FEMA and appropriated $3.6 billion for levees and other flood control measures. In the immediate aftermath of Katrina, voters in New Orleans also appear to have placed greater value on these preparedness projects. In late 2006, 30% of New Orleans residents said that “repairing the levees, pumps, and floodwalls” should be one of the top two priorities in the rebuilding efforts, ranking this item and crime control as their top two concerns (Kaiser Family Foundation 2007, 55). The increased voter concern for disaster protection appears to have faded significantly since then. By mid-2008, only 2% of New Orleans voters ranked “hurricane protection/rebuilding floodwalls, levees” as the top rebuilding concern (Kaiser Family Foundation 2008, 52). This apparent change in priorities for New Orleans residents suggests that even an event like Hurricane Katrina is likely to increase the salience of preparedness issues only temporarily. Interestingly, the case of Hurricane Katrina may be anomalous with respect to the electoral benefits of relief spending. The federal government provided more than $94.8 billion in relief payments to the Gulf Coast following Katrina (Congressional Budget Office 2007), and the Republican Party suffered heavy losses in the 2006 and 2008 elections. Unlike most disaster events, Hurricane Katrina was unusual in the substantial amount of media coverage it received. In an Associated Press poll of U.S. news editors and in the Pew Research Center U.S. News Interest Index, Hurricane Katrina was the top world story of 2005 (Kohut, Allen, and Keeter 2005), and most of this coverage focused on the mishandled immediate logistical response to the disaster as opposed to the generous financial response that came later. Hence, voters may have been substantially affected by the early negative media coverage and carried those initially formed attitudes about the administration’s competence with them into the voting booth. Nevertheless, the case of Katrina offers two potential extensions to this research. Subsequent studies can explore how the salience of a disaster changes the political effectiveness of relief spending, in addition to more closely examining how logistical response differs from financial response.

Due to the transience of the effect that disasters have on the visibility of preparedness, it is important to note that there is some suggestive evidence that governments may be able to take action to make preparedness salient to voters in a more permanent fashion. In the late 1990s, FEMA introduced Project Impact, a grassroots disaster preparedness initiative that emphasized collaboration between government, businesses, and local community leaders, bypassing state governments (Birkland and Waterman 2008; Wachtendorf and Tierney 2001; Witt 1998). Under Project Impact, FEMA selected a group of 57 communities from all 50 states (as well as Puerto Rico and the District of Columbia) to receive grants of either $500,000 or $1 million to pursue disaster preparedness and mitigation initiatives (Government Accounting Office 2002). The program targeted areas of varying size and disaster risk. Interviews with participants in the program indicate that people valued the program. It was also credited with helping limit damage from the February 2001 Nisqually earthquake in the Puget Sound, ironically on the very day that the program was cancelled by the Bush Administration (Holdeman 2005). Compared to other counties, the change in the Democrats’ vote share from 1996 to 2000 was 1.9% higher in Project Impact counties, a significant difference (p = .006) (Healy and Malhotra 2009). This estimate is only suggestive of the possibility that voters may have responded to Project Impact because it is not possible to control for the omitted variables that could be driving this difference. Future scholarship could use surveys, as well as lab and field experiments, to determine the extent to which voter decisions can be influenced by government efforts at increasing the salience of issues and policies in areas such as disaster preparedness.

Although our results indicate that the incumbent presidential party has not been rewarded for investing in disaster preparedness, it is possible that voters could credit members of Congress for those initiatives. A natural extension to this analysis is to explore whether similar effects are observed in House and Senate elections. We conducted a preliminary exploration of this question by estimating analogous models predicting the vote share for the incumbent Senate party in the county as the dependent variable. For a variety of potential reasons, we did not obtain precise coefficient estimates from which to draw firm conclusions. Across all specifications that we considered, though, preparedness spending entered with a near-zero coefficient. We anticipate that future research more closely examining Congressional elections will find that members of Congress, like presidents, are not rewarded for preparedness spending.

Subsequent research could also apply our empirical strategy of simultaneously examining voting decisions, government policy, and associated outcomes to issues such as education or health care, as well as explore potential ingredients for improved retrospection. A more complete understanding of how citizens value preparedness and relief across a variety of domains could both advance our theoretical understanding of retrospective voting and help inform policy making. Through an analysis of voter responses to disaster relief and preparedness spending, we have addressed outstanding questions in the long-standing and extensive literature on citizen competence in democratic societies. Examining actual decisions by the electorate, we found heterogeneity with respect to the public’s responsiveness to various government policies. However, we have also shown that the mere presence of responsiveness does not necessarily indicate citizen competence and that failures in accountability can lead to substantial welfare losses.

Many sex differences in humans are largest under optimal conditions and shrink as conditions deteriorate; sex differences in growth, social behavior, and cognition illustrate the approach

Now You See Them, and Now You Don’t: An Evolutionarily Informed Model of Environmental Influences on Human Sex Differences. David C. Geary. Neuroscience & Biobehavioral Reviews, February 17 2021. https://doi.org/10.1016/j.neubiorev.2021.02.020

Highlights

• The magnitude of human sex differences varies across contexts

• An evolutionarily informed model of these environmental influences is discussed

• Many sex differences are largest under optimal conditions and shrink as conditions deteriorate

• Human sex differences in growth, social behavior, and cognition illustrate the approach

• The approach has implications for better understanding sex-specific vulnerabilities

Abstract: The contributions of evolutionary processes to human sex differences are vigorously debated. One counterargument is that the magnitude of many sex differences fluctuates from one context to the next, implying an environmental origin. Sexual selection provides a framework for integrating evolutionary processes and environmental influences on the origin and magnitude of sex differences. The dynamics of sexual selection involve competition for mates and discriminative mate choices. The associated traits are typically exaggerated and condition-dependent, that is, their development and expression are very sensitive to social and ecological conditions. The magnitude of sex differences in sexually selected traits should then be largest under optimal social and ecological conditions and shrink as conditions deteriorate. The basics of this framework are described, and its utility is illustrated with discussion of fluctuations in the magnitude of human physical, behavioral, and cognitive sex differences.

Keywords: Sex differences; sexual selection; cognition; condition-dependent; stressor


Precarious Manhood Beliefs in 62 Nations: Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action

Precarious Manhood Beliefs in 62 Nations. Bosson, Jennifer K. et al. Accepted for publication in the Journal of Cross-Cultural Psychology, Feb 2021. https://dial.uclouvain.be/pr/boreal/object/boreal:243234

Abstract: Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action. Here, we present cross-cultural data on a brief measure of precarious manhood beliefs (the Precarious Manhood Beliefs scale [PMB]) that covaries meaningfully with other cross-culturally validated gender ideologies and with country-level indices of gender equality and human development. Using data from university samples in 62 countries across 13 world regions (N = 33,417), we demonstrate: (1) the psychometric isomorphism of the PMB (i.e., its comparability in meaning and statistical properties across the individual and country levels); (2) the PMB’s distinctness from, and associations with, ambivalent sexism and ambivalence toward men; and (3) associations of the PMB with nation-level gender equality and human development. Findings are discussed in terms of their statistical and theoretical implications for understanding widely-held beliefs about the precariousness of the male gender role.


Working outside the home did nothing to help people feel socially connected, nor did video calls with friends and family; people living with a romantic partner were most likely to improve in social connection after social distancing measures

Okabe-Miyamoto K, Folk D, Lyubomirsky S, Dunn EW (2021) Changes in social connection during COVID-19 social distancing: It’s not (household) size that matters, it’s who you’re with. PLoS ONE 16(1): e0245009. https://doi.org/10.1371/journal.pone.0245009

Popular version: “Partners help us stay connected during pandemic,” UCR News (ucr.edu)

Abstract: To slow the transmission of COVID-19, countries around the world have implemented social distancing and stay-at-home policies—potentially leading people to rely more on household members for their sense of closeness and belonging. To understand the conditions under which people felt the most connected, we examined whether changes in overall feelings of social connection varied by household size and composition. In two pre-registered studies, undergraduates in Canada (N = 548, Study 1) and adults primarily from the U.S. and U.K. (N = 336, Study 2) reported their perceived social connection once before and once during the pandemic. In both studies, living with a partner robustly and uniquely buffered shifts in social connection during the first phases of the pandemic (β = .22 in Study 1, β = .16 in Study 2). In contrast, neither household size nor other aspects of household composition predicted changes in connection. We discuss implications for future social distancing policies that aim to balance physical health with psychological health.

Discussion

Across two pre-registered studies that followed the same participants from before the COVID-19 pandemic into its early stages, we found that living with a partner was the strongest predictor of shifts in social connection across time. This finding replicated across two different samples—a sample of undergraduates at a Canadian university and a sample of adults from mostly the U.S. and the U.K. Both of our studies revealed robust positive regression coefficients indicating that people living with a partner were more likely to improve in social connection after social distancing guidelines were in place than those not living with a partner. This finding is consistent with past research demonstrating that being in a relationship is one of the strongest predictors of connection and well-being [11,45], in part because happier people are more likely to find partners [46,47]. Additionally, during times of worry and uncertainty, partners have been found to be more valuable for coping than other types of household members [26]. Moreover, recent research has shown that, on average, romantic relationships have not deteriorated over the course of the pandemic; indeed, people are relatively more willing to forgive their partners during COVID-19 [48]. In light of this evidence, it is not surprising that partners showed the strongest effect, especially during a pandemic.

Contrary to our pre-registered hypotheses, changes in loneliness were not predicted by any other aspects of household composition. Furthermore, we found only nonsignificant trends for the impact of household size, including living alone, on social connection during COVID-19, perhaps because both our studies included small samples of those living in large households and households of one. It is important to keep in mind that the pandemic has forced people to spend unusually large amounts of time confined to home. Given that interpersonal interactions must be positive to contribute to one’s overall sense of connectedness [10], those who live in larger households—relative to those who live alone or in smaller households—may have had more interactions that were negative (e.g., due to bickering or lack of privacy and alone time) and, as a result, failed to experience benefits in terms of social connection. Moreover, our studies measured experiences fairly early in the pandemic (April 2020); thus, as people continue to distance over long periods of time, their feelings of social connection may suffer. Going beyond household size and structure, future studies should examine the effects of relationship quality on social connection over time.

When examining how other features of household composition were associated with shifts in social connection during the pandemic, we obtained mixed findings regarding living with pets and null findings for all other household variables. However, because households are multifaceted, larger sample sizes will be needed to fully dissect the household composition findings, as well as to reveal interactions (such as with household size, gender, or country of residence). For example, studies with larger sample sizes may uncover differences in connection between those in households of four (with a partner and two children) versus households of five (with a partner and three children), and so on. Importantly, future investigators may wish to further unpack the role of household dynamics, as some households include unhealthy relationships that may be exacerbated by social distancing measures and others include housemates that minimally interact. As such, the quality and frequency of interaction among household members—perhaps with experience sampling or daily diary measures—is an important factor to explore in future work.

Implications and conclusions

Directed by social distancing interventions in the spring of 2020, millions of people were no longer commuting to work, attending school, or leaving their homes to spend time with friends and family. These extraordinary conditions likely led people to rely more on their household members to fulfill their needs for closeness, belonging, and connection [10]. The results from our two studies revealed that living with a partner—but not how many people or who else one lives with—appeared to confer unique benefits during these uncertain and unprecedented times. Indeed, demonstrating its robustness, this finding replicated across our two studies, despite weak and opposite correlations between household size and living with a partner (r = -.06 in Study 1 and .11 in Study 2).

In light of these results, policy makers might consider developing guidelines for social/physical distancing that protect people’s physical health while ensuring they retain a sense of closeness and connection by spending time in close proximity with partners, even outside their households. Some areas of the world, such as New Zealand, have implemented a strategy known as the “social bubble,” which eases social distancing to allow close contact with another household [49]. Such approaches might be especially helpful for individuals who have been unintentionally and disproportionately socially isolated by social distancing measures, such as those who are cut off, separated from their partners, or generally struggling with staying at home. However, social bubbles pose a risk of increased infection rates [49]. Hence, just as safe sex education aims to reduce the rate of sexually transmitted diseases and unintended pregnancy, education on safe social distancing (or social bubbling) strategies might teach individuals across the globe how to connect with others safely while simultaneously curtailing COVID-19 rates. In sum, recommendations that reduce the risk of transmission while prioritizing social connection can ensure that people’s physical and psychological health are optimally balanced.

Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike & aversion

“I hate this part right here”: Embodied, subjective experiences of listening to aversive music. Henna-Riikka Peltola, Jonna Katariina Vuoskoski. Psychology of Music, February 17, 2021. https://doi.org/10.1177/0305735620988596

Abstract: Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike and aversion as well. To date, only limited research has been carried out to understand the subjective experience of listening to aversive music. This qualitative study explored people’s negative experiences associated with music listening, with the aim to understand what kinds of emotions, affective states, and physical responses are associated with listening to aversive music. One hundred and two participants provided free descriptions of (1) musical features of aversive music; (2) subjective physical sensations, thoughts and mental imagery evoked by aversive music; (3) typical contexts where aversive music is heard; and (4) the similarities and/or differences between music-related aversive experiences and experiences of dislike in other contexts. We found that responses to aversive music are characterized by embodied experiences, perceived loss of agency, and violation of musical identity, as well as social or moral attitudes and values. Furthermore, two “experiencer types” were identified: One reflecting a strong negative attitude toward unpleasant music, and the other reflecting a more neutral attitude. Finally, we discuss the theoretical implications of our findings in the broader context of music and emotion research.

Keywords: negative emotions, embodiment, emotion, listening, qualitative, valence

Although the main focus of previous research has been on the paradoxical enjoyment of negative emotions, some work on the unpleasant aspects of music and sounds has been carried out. McDermott (2012) summarized neuroscientific findings relating to auditory preferences, and presented typical aversive features of non-musical sounds. In general, loud and distorted sounds are usually considered unpleasant, and certain frequencies are likely to trigger aversive responses: Sharpness (high-frequency energy of a sound) and roughness (rapid amplitude modulation of a sound) are major determinants of unpleasantness, but they can be less aversive at low volume. However, in the context of music, aversion to sounds is at least partially context-dependent and a matter of exposure and familiarization. For instance, the development of music technology and the introduction of distortion in rock music has challenged the traditional Western concepts of music aesthetics (McDermott, 2012). Cunningham et al. (2005) investigated aversive musical features, and discovered certain features explaining why a piece of music was hated: Bad or clichéd lyrics, catchiness (the “earworm effect”), voice quality of a singer, over-exposure, perceptions of pretentiousness, and extramusical associations (such as the influence of music videos or unpleasant personal experiences) were identified as the main factors making music unpleasant.
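
The roughness finding mentioned above (rapid amplitude modulation sounding unpleasant) is easy to hear for oneself. The sketch below synthesizes a smooth 1 kHz tone and the same tone amplitude-modulated at 70 Hz, inside the modulation range commonly described as rough in the psychoacoustics literature; the specific frequencies and filenames are illustrative choices, not parameters from the studies cited.

```python
# Illustrative synthesis of "roughness": a carrier tone whose amplitude is
# modulated at a rate listeners typically report as rough. All parameters
# are generic psychoacoustics, not values from the cited studies.
import numpy as np
from scipy.io import wavfile

sr = 44_100                                     # sample rate (Hz)
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)

carrier = np.sin(2 * np.pi * 1000 * t)          # smooth 1 kHz tone
rough = (1 + np.sin(2 * np.pi * 70 * t)) / 2    # 70 Hz amplitude modulation

wavfile.write("smooth.wav", sr, (0.5 * carrier).astype(np.float32))
wavfile.write("rough.wav", sr, (0.5 * rough * carrier).astype(np.float32))
```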

Furthermore, listeners’ psychological strategies in relation to musical taste have been preliminarily investigated. Ackermann (2019) used interviews to explore negative attitudes toward disliked music, and synthesized four themes of “legitimization strategies” that are used to justify these attitudes. The themes cover (1) music-specific legitimization strategies, where the focus is on the compositional aspects of music, the interpretation of the musician or composer, the lyrics and semantic content, and other aesthetic criteria; (2) listener-specific legitimization strategies, where the focus is on the emotional or mood-related responses to music, physical reactions, and other aspects relating to the self and identity; (3) social legitimation strategies, where the focus is on in-group and out-group relations; and finally (4) cross-category subject areas, consisting of aspects such as the exaggerated emotionalization (Kitsch) of music, the authenticity and commerciality of music, and differing definitions between music and noise. The first three strategies seem to be applicable for disliking singing voices in popular music as well. Merrill and Ackermann (2020) found that emotional reasons, factual reasons, bodily reactions and urges, and social reasons were rationales for the negative evaluation of pop-singers’ voices (see also Merrill, 2019). The preliminary work of these two scholars shows that, in addition to socio-cultural perspectives and aspects relating to social identity, psychological, emotional, and physical responses play a crucial role in aversive musical experiences.

Krueger (2019) has proposed that music’s materiality is the key reason behind its power over listeners. The fact that we resonate (physically) with sounds explains why humans react to high volume and certain frequencies, but particularly musical sounds “seem to penetrate consciousness in a qualitatively deeper way than input from other perceptual modalities,” as Krueger (2019) states. Thus, music and soundscapes that are not made or chosen by the listener can strongly affect them, and potentially even negate individual agency and consent by “hacking” their self-regulatory system. These mechanisms have been previously investigated in studies focusing on music and affect regulation, highlighting the positive effects of intentional music listening for self-regulative purposes (for a review on different approaches to affective self-regulation through music, see Baltazar & Saarikallio, 2017). According to Krueger (2019), it is possible to weaponize these processes, and thus use music as a technology for “affective mind invasion” and, in the worst case, torture, as was done by the United States military in the so-called “global war on terror.” Recorded cases of the military playing loud rock music from speakers during operations, as well as looping offensive unfamiliar heavy metal music or endless repetitions of Western children’s songs to “soften up detainees prior to questioning” instead of weaponizing sheer noise, suggest that symbolic musical “messages” combined with high-volume sounds are effective and subtle ways of affecting one’s mind compared to more apparent forms of violence (Garratt, 2018, pp. 42–44).

The aim of the present study is to explore people’s negative experiences associated with music listening. We aim to understand what kinds of emotions, affective states, and physical responses are associated with aversive music, identify commonalities in the verbal descriptions, and reflect on the theoretical implications of these aversive musical experiences for the wider music and emotion research community.

Substantial heritability of neighborhood disadvantage: Individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults

Understanding neighborhood disadvantage: A behavior genetic analysis. Albert J. Ksinan, Alexander T. Vazsonyi. Journal of Criminal Justice, Volume 73, March–April 2021, 101782. https://doi.org/10.1016/j.jcrimjus.2021.101782

Abstract

Purpose: Studies have shown that disadvantaged neighborhoods are associated with higher levels of crime and delinquent behaviors. Existing explanations do not adequately address how individuals select neighborhoods. Thus, the current study employed a genetically-informed design to test whether living in a disadvantaged neighborhood might be partly explained by individual characteristics, including self-control and cognitive ability.

Method: A sibling subsample of N = 1573 Add Health siblings living away from their parents at Wave 4 was used in twin analyses to assess genetic and environmental effects on neighborhood disadvantage. To evaluate which individual-level variables might longitudinally predict neighborhood disadvantage, a sample of N = 12,405 individuals was used.

Results: Findings provided evidence of significant heritability (32%) of neighborhood disadvantage. In addition, a significant negative effect of adolescent cognitive ability on neighborhood disadvantage 14 years later was observed (β = −0.04, p = .002). Follow-up analyses showed a genetic effect on the association between cognitive ability and neighborhood disadvantage.

Conclusions: Study findings indicate substantial heritability of neighborhood disadvantage, showing that individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults.


Introduction

Criminologists have extensively focused on the impact of neighborhood social disorganization on crime and deviance since the first half of the 20th century (Shaw & McKay, 1942). Research has provided evidence that neighborhoods with disorganized structural characteristics, including high levels of mobility, high rates of poverty, or high numbers of single-parent families, were associated with higher levels of criminal behavior (Bursik & Grasmick, 1999; Morenoff, Sampson, & Raudenbush, 2001; Sampson, 1985; Sampson, Raudenbush, & Earls, 1997; Wilson, 1987).

These hypothesized neighborhood effects have generally been considered to flow in one direction, namely from neighborhoods to individuals. However, a small number of studies have hypothesized and tested the opposite, namely that individuals select into their neighborhoods. Given that neighborhood variables reflect the aggregation of the qualities and characteristics of individual members, it seems likely that certain individual traits might predict neighborhood characteristics (Hedman & van Ham, 2012). If individual traits do in fact predict neighborhood characteristics and all psychological traits are to a certain extent heritable (Turkheimer, 2000), then it stands to reason that neighborhood characteristics will show some heritable effect as well. The current study used a genetically-informed design to test for both genetic and environmental effects on selecting into certain neighborhoods and to test whether individual characteristics (self-control and cognitive ability) have developmental effects on this selection process.

A neighborhood is defined as a geographically unique subsection or area, part of a larger community. Typically, neighborhoods are operationalized using geographic boundaries defined by an administrative agency (such as the Census Bureau), which partitions neighborhoods into tracts or blocks (Sampson, Morenoff, & Gannon-Rowley, 2002).

The traditional framework for studying neighborhood effects is rooted in social disorganization theory. According to this theory, every individual is prone to engage in some deviant or criminal behaviors. Bonds to society make these behaviors too costly and thus effectively prevent crime from happening. The process through which a neighborhood controls the behavior of its members is termed collective efficacy (Morenoff et al., 2001), or the ability of individuals sharing a neighborhood to work together and to solve issues related to their neighborhood. In this way, individuals engage in effective indirect social control in order to prevent neighborhoods from deteriorating. A typical example of such indirect social control is when adults monitor youth loitering in the neighborhood and are willing to confront them when they disturb or disrupt a public space (Sampson et al., 1997). A well-functioning neighborhood is a complex and cohesive system of social networks, rooted in both the family as well as the community (Sampson, Morenoff, & Earls, 1999).

Neighborhood structural factors such as high poverty, single-parent families, residential instability, high unemployment, or a high number of minority inhabitants, are associated with lower levels of neighborhood organization or an inability of the community to maintain effective social control, according to social disorganization theory (Sampson, 1997; Sampson & Groves, 1989). The impact of these structural factors might lead to alienation of neighborhood members and low levels of investment in the community, which in turn leads to greater social disorder and thus higher proneness to disorder and crime (Leventhal & Brooks-Gunn, 2000; Leventhal, Dupéré, & Brooks-Gunn, 2009; Molnar, Miller, Azrael, & Buka, 2004; Sampson & Groves, 1989).

Empirical support for social disorganization theory and the concept of collective efficacy in predicting crime and delinquency has been provided by a number of studies that have used hierarchical or multi-level modeling. For example, Sampson et al. (1997) found that concentrated disadvantage, immigration concentration, and residential (in)stability significantly predicted collective efficacy, which in turn mediated the effects of disadvantage and residential (in)stability on several measures of violence. Similarly, Sampson and Raudenbush (1999) found that collective efficacy of a neighborhood predicted lower levels of disorder and crime (see also Molnar et al., 2004; Sampson, 1997; Valasik & Barton, 2017).

In contrast, a more recent approach to studying neighborhood effects has focused on how neighborhood characteristics predict individual-level outcomes (as opposed to predicting neighborhood-level rates). Based on Leventhal and Brooks-Gunn's review (2000), neighborhoods affect a plethora of individual adjustment measures. Among them, neighborhood SES was found to positively predict educational attainment and mental health, as well as negatively predict individual delinquency and criminal behavior (Leventhal et al., 2009).

Individuals do not randomly allocate into neighborhoods, but rather, they actively seek out and select their neighborhoods. If neighborhoods consist of individual members, it stands to reason that the likelihood of living in a particular place is, to some extent, affected by individual characteristics, and thus, that neighborhood characteristics are also affected by individual differences. This is referred to as ‘self-selection’. In the current definition, self-selection refers to a broader concept than simply ‘individuals making deliberate choices when deciding where to live.’ Such a view would be imprecise and potentially harmful, as it might put too much emphasis on personal responsibility for potentially detrimental living conditions. Rather, self-selection refers to a more impersonal process where individuals with different life histories occupy different life trajectories that lead them to different places of residence, and, in many cases, living in a particular neighborhood is not so much a volitional process or act, but rather a situation that cannot be easily changed.

The idea that a self-selection process might be taking place related to an association between an individual (or a family) and neighborhood characteristics is certainly not new. In fact, the issue of non-independence of neighborhood sorting and individual characteristics has been mentioned by several authors (Sampson & Sharkey, 2008). However, individual characteristics that were identified to influence self-sorting into particular neighborhoods were of a social nature, such as being a renter versus a homeowner, being single, or being an immigrant, just to name a few (Hedman & van Ham, 2012). At present, however, there does not appear to be a clear understanding about the potential effect of self-selection on neighborhood effects. Some research did not find support for neighborhood effects once self-selection was accounted for (Oreopoulos, 2003), while other studies found that neighborhood effects remained significant after accounting for self-selection (Aaronson, 1998; Dawkins, Shen, & Sanchez, 2005; Galster, Marcotte, Mandell, Wolman, & Augustine, 2007). Thus, the evidence is quite mixed.

Behavior genetic studies partition phenotypic variance into three sources: heritability, shared environment, and nonshared environment. Over the past three decades, studies have consistently shown both environmental and genetic influences on the vast majority of individual traits (Plomin, DeFries, Knopik, & Neiderhiser, 2013; Polderman et al., 2015). However, genetic effects are not limited to individual characteristics. In fact, some presumably environmental effects have also been found to be correlated with genetic predispositions. There are three types of gene-environment correlations: passive rGE, evocative rGE, and active rGE (Plomin, DeFries, & Loehlin, 1977). Particularly relevant to the concept of neighborhood self-selection is active rGE, which refers to individuals actively selecting environments based on their inherent preferences (Moffitt, 2005).
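
As a concrete illustration of that variance partitioning, the classical Falconer approximation derives the three components from identical-twin (MZ) and fraternal-twin (DZ) correlations. The sketch below is a textbook shortcut, not the model-based twin analysis the study actually ran on Add Health sibling data, and the input correlations are hypothetical values chosen to land near the 32% heritability reported in the Results.

```python
# Textbook Falconer approximation to the ACE decomposition sketched above.
# The correlations below are hypothetical inputs, not values from the study.

def ace_falconer(r_mz: float, r_dz: float) -> dict:
    """Estimate additive genetic (A), shared-environment (C), and
    nonshared-environment (E) variance from twin correlations."""
    a2 = 2 * (r_mz - r_dz)  # heritability
    c2 = 2 * r_dz - r_mz    # shared environment
    e2 = 1 - r_mz           # nonshared environment (plus measurement error)
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical twin correlations chosen so that A comes out near 32%.
print(ace_falconer(r_mz=0.48, r_dz=0.32))  # {'A': 0.32, 'C': 0.16, 'E': 0.52}
```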

Because individuals are not randomly selected for certain environments as much as they are active agents in selecting, modifying, and adapting to the environments, this process is affected by their individual characteristics, which themselves are substantially affected by heritable materials. A review of 55 studies by Kendler and Baker (2007) showed that there are substantial genetic effects (average h² = 0.27) on measures of the environment, including parenting behaviors, stressful life events, social support, or peer interactions. Nevertheless, there has not been a study that has directly tested the heritability of neighborhood characteristics. Most genetically-informed studies on more distal environmental effects (such as schools or neighborhoods) focused on their moderating effects only (Cleveland, 2003; Rowe, Almeida, & Jacobson, 1999). For example, a study by Connolly (2014) found that neighborhood disadvantage moderated the genetic effect on adolescent delinquency between the ages of 6 and 13 years, and between 14 and 17 years, where greater heritable effects were observed at higher levels of neighborhood disadvantage.

How might individual characteristics be genetically related to the neighborhoods that individuals live in? The key to understanding potential genetic effects on neighborhoods lies in the process of active rGE, according to which individuals actively ‘select’ their environments. In the case of neighborhoods, this selection process involves both the choice of a particular neighborhood to live in and the range of neighborhoods that are available, each determined to a certain extent by individual traits.

Neighborhood socioeconomic status is defined as the socioeconomic status of individual households or their inhabitants, and, in the context of the United States, socioeconomic status is strongly affected by the level of education, which in turn has been found to be positively associated with cognitive ability or intelligence (L. Gottfredson, 1997a; Neisser et al., 1996; Strenze, 2007). Differences in intelligence have a large heritable component which has been found to increase with age (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990; Devlin & Daniels, 1997; Haworth et al., 2010). Moreover, a more direct link between cognitive ability or intelligence and career success, as well as intelligence and more positive developmental adjustment outcomes in general, was also established by numerous studies (Caspi, Wright, Moffitt, & Silva, 1998; L. Gottfredson, 2004; Judge, Higgins, Thoresen, & Barrick, 1999; Schmidt & Hunter, 2004). Thus, it stands to reason that neighborhood socioeconomic status should have a heritable or genetic component, and individual cognitive ability might partially explain this variance.

Another candidate personality trait, which might play a significant role in affecting neighborhood characteristics, is self-control, or the ability to exercise restraint in delaying immediate gratification and subduing one’s impulses. Perhaps the most prominent theory emphasizing the role of self-control is self-control theory by Gottfredson and Hirschi (1990). According to Gottfredson and Hirschi, all deviant and criminal behaviors are to some extent related to a lack of self-control. A great number of studies have provided consistent empirical support that (low) self-control is perhaps the single best predictor of deviant and criminal behaviors (Hay, 2001; Vazsonyi, Mikuška, & Kelley, 2017; Wright, Caspi, Moffitt, & Silva, 1999), as well as better health, better career prospects, or less substance use (Casey et al., 2011; Mischel et al., 2011; Moffitt et al., 2011). In this view, the association between neighborhood disorganization and low self-control would consider low self-control as the cause rather than the outcome, as individuals with low self-control would be more likely to self-select into neighborhoods with higher levels of social disorganization (Caspi, Taylor, Moffitt, & Plomin, 2000; Evans, Cullen, Burton Jr., & Dunaway, 1997). Both cognitive ability and (low) self-control have in fact been tested in a longitudinal study by Savolainen, Mason, Lyyra, Pulkkinen, and Kokko (2017); findings showed that childhood differences in cognitive skills as well as childhood antisocial propensity (both measured at age 8) were traits that significantly foretold the developmental cascade which led to higher socioeconomic exclusion in midlife.

Wednesday, February 17, 2021

Video game play is positively correlated with well-being

Video game play is positively correlated with well-being. Niklas Johannes, Matti Vuorre and Andrew K. Przybylski. Royal Society Open Science, February 17 2021. https://doi.org/10.1098/rsos.202049

Abstract: People have never played more video games, and many stakeholders are worried that this activity might be bad for players. So far, research has not had adequate data to test whether these worries are justified and if policymakers should act to regulate video game play time. We attempt to provide much-needed evidence with adequate data. Whereas previous research had to rely on self-reported play behaviour, we collaborated with two games companies, Electronic Arts and Nintendo of America, to obtain players' actual play behaviour. We surveyed players of Plants vs. Zombies: Battle for Neighborville and Animal Crossing: New Horizons for their well-being, motivations and need satisfaction during play, and merged their responses with telemetry data (i.e. logged game play). Contrary to many fears that excessive play time will lead to addiction and poor mental health, we found a small positive relation between game play and affective well-being. Need satisfaction and motivations during play did not interact with play time but were instead independently related to well-being. Our results advance the field in two important ways. First, we show that collaborations with industry partners can be done to high academic standards in an ethical and transparent fashion. Second, we deliver much-needed evidence to policymakers on the link between play and mental health.

4. Discussion

How is video game play related to the mental health of players? This question is at the heart of the debate on how policymakers will act to promote or to restrict games’ place in our lives [7]. Research investigating that question has almost exclusively relied on self-reports of play behaviour, which are known to be inaccurate (e.g. [8]). Consequently, we lack evidence on the relation between play time and mental health that is needed to inform policy decisions. To obtain reliable and accurate play data, researchers must collaborate with industry partners. Here, we aimed to address these shortcomings in measurement and report a collaboration with two games companies, Electronic Arts and Nintendo of America, combining objective measures of game behaviour (i.e. telemetry) with self-reports (i.e. survey) for two games: Plants vs. Zombies: Battle for Neighborville and Animal Crossing: New Horizons. We also explored whether the relation between play time and well-being varies with players' need satisfaction and motivations. We found a small positive relation between play time and well-being for both games. We did not find evidence that this relation was moderated by need satisfactions and motivations, but that need satisfaction and motivations were related to well-being in their own right. Overall, our findings suggest that regulating video games, on the basis of time, might not bring the benefits many might expect, though the correlational nature of the data limits that conclusion.

Our goal was to investigate the relation between play time, as a measure of actual play behaviour, and subjective well-being. We found that relying on objective measures is necessary to assess play time: although there was overlap between the amount of time participants estimated to have played and their actual play time as logged by the game companies, that relation was far from perfect. On average, players overestimated their play time by 0.5 to 1.6 hours. The size of that relation and the general trend to overestimate such technology use are in line with the literature, which shows similar trends for internet use [24] and smartphone use [8,23]. Therefore, when researchers rely on self-reports of play behaviour to test relations with mental health, measurement error and potential bias will necessarily lead to inaccurate estimates of true relationships. Previous work has shown that using self-reports instead of objective measures of technology use can either inflate [45,46] or deflate [44] effects. In our study, associations between objective play time and well-being were larger than those between self-reported play time and well-being. Had we relied on self-reports only, we could have missed a potentially meaningful association.
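
The attenuating effect of self-report error that this paragraph describes can be reproduced in a few lines of simulation: adding noise (and a systematic overestimate) to a "true" play-time variable shrinks its observed correlation with well-being. All numbers below are synthetic and purely illustrative.

```python
# Simulation of the measurement-error problem described above. Nothing here
# reproduces the study's data; all quantities are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

true_play = rng.normal(10, 3, n)                   # hours over two weeks
wellbeing = 0.1 * true_play + rng.normal(0, 1, n)  # weak true association

# Self-reports = truth + systematic overestimation + random error,
# echoing the 0.5-1.6 h overestimation the study observed.
self_report = true_play + 1.0 + rng.normal(0, 3, n)

print(np.corrcoef(true_play, wellbeing)[0, 1])    # ~0.29 with objective logs
print(np.corrcoef(self_report, wellbeing)[0, 1])  # ~0.20, attenuated by noise
```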

Players who objectively played more in the past two weeks also reported to experience higher well-being. This association aligns well with literature that emphasizes the benefits of video games as a leisure activity that contributes to people's mental health [42]. Because our study was cross-sectional, there might also be a self-selection effect: People who feel good might be more inclined to pick up their controller. Such a view aligns well with research that shows reciprocal relations between media use and well-being [64,65]. Equally plausible, there might be factors that affect both game play time and well-being [66,67]. For example, people with high incomes are likely to be healthier and more likely to be able to afford a console/PC and the game.

Even if we were to assume that play time directly predicts well-being, it remains an open question whether that effect is large enough to matter for people's subjective experience. From a clinical perspective, the effect is probably too small to be relevant for clinical treatment. Our effect size estimates were below the smallest effect size of interest for media effects research that Ferguson [68] proposes. For health outcomes, Norman and colleagues [69] argue that we need to observe a large effect size of around half a standard deviation for participants to feel an improvement. In the AC:NH model, 10 h of game play were associated with a 0.06 standard deviation increase in well-being. Therefore, a half standard deviation change would require approximately 80 h of play over the two weeks (translating to about 6 h per day). However, Anvari and Lakens demonstrated that people might subjectively perceive differences of about a third of a standard deviation on a measure of well-being similar to ours [70], suggesting that approximately three and a half hours of play per day might be associated with subjectively felt changes in well-being. Nevertheless, it is unclear whether typical increases in play go hand in hand with perceivable changes in well-being. However, even small relations might accumulate to larger effects over time, and finding boundary conditions, such as time frames under which effects are meaningful, is a necessary next step for research [71]. Moreover, we only studied one facet of positive mental health, namely affective well-being. Future research will need to consider other facets, such as negative mental health.
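
For readers who want to check the arithmetic in this paragraph: assuming a linear 0.06 SD of well-being per 10 hours of play over two weeks, and reading "a third of a standard deviation" as 0.30, the thresholds work out as follows.

```python
# Arithmetic behind the effect-size discussion above, assuming a linear
# 0.06 SD of well-being per 10 h of AC:NH play over two weeks.
SD_PER_HOUR = 0.06 / 10
DAYS = 14

for target_sd in (0.5, 0.30):
    hours = target_sd / SD_PER_HOUR
    print(f"{target_sd} SD -> {hours:.0f} h over two weeks "
          f"(~{hours / DAYS:.1f} h/day)")
# 0.5 SD -> 83 h over two weeks (~6.0 h/day)
# 0.3 SD -> 50 h over two weeks (~3.6 h/day)
```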

Although our data do not allow causal claims, they do speak to the broader conversation surrounding the idea of video game addiction (e.g. [15]). The discussion about video games has centred on fears that a large share of players will become addicted [14,21]. Given the medium's widespread popularity, many policymakers are concerned about negative effects of play time on well-being [7]. Our results challenge that view: the relation between play time and well-being was positive in two large samples. Therefore, our study speaks against an immediate need to regulate video games as a preventive measure against addiction. If anything, our results suggest that play can be an activity that relates positively to people's mental health, and that regulating games could withhold those benefits from players.

We also explored the role of players' perceptions in the relation between play time and well-being. Previous work has shown that gamers' experience probably influences how playing affects mental health [51,52]. We explored a possible moderation through the lens of self-determination theory [50]: we investigated whether need satisfaction, enjoyment and motivation during play changed the association between play time and well-being. We found no evidence for moderation: neither need satisfaction nor enjoyment nor extrinsic motivation significantly interacted with play time in predicting well-being. However, conditional on play time, satisfaction of the autonomy and relatedness needs, as well as enjoyment, were positively associated with well-being; extrinsic motivation, by contrast, was negatively associated with well-being. These associations line up with research demonstrating that experiencing need satisfaction and enjoyment during play can contribute to user well-being, whereas an extrinsic motivation for playing probably does the opposite (e.g. [56]).
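
As a concrete illustration of this kind of moderation test, the sketch below (synthetic stand-in data and hypothetical variable names such as play_time and autonomy; not the study's actual code or scales) fits a linear model in which each player-experience measure gets both a main effect and an interaction with play time. Non-significant interaction terms correspond to the absence of moderation; the main effects capture the independent associations described above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the study's real variables and scales differ
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "play_time": rng.gamma(2.0, 5.0, n),   # hours, company-logged
    "autonomy": rng.normal(size=n),        # player-experience scales
    "relatedness": rng.normal(size=n),
    "enjoyment": rng.normal(size=n),
    "extrinsic": rng.normal(size=n),
})
df["wellbeing"] = 0.006 * df["play_time"] + 0.2 * df["relatedness"] + rng.normal(size=n)

# Centre predictors so main effects stay interpretable next to interactions
for col in ["play_time", "autonomy", "relatedness", "enjoyment", "extrinsic"]:
    df[col] -= df[col].mean()

# The play_time:experience interaction terms are the moderation test
model = smf.ols(
    "wellbeing ~ play_time * (autonomy + relatedness + enjoyment + extrinsic)",
    data=df,
).fit()
print(model.summary())
```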

Although we cannot rule out that these player experiences had a moderating role, the effect size estimates suggest that any moderation is likely too small to be practically meaningful. In other words, our results suggest that player experience does not modulate the relation between play time and well-being, but rather contributes to well-being independently. For example, players who experience a high degree of relatedness during play will probably report higher well-being, but a high degree of relatedness is unlikely to strengthen the relation between play time and well-being. Future research focused on granular in-game behaviours, such as competition, collaboration and advancement, will be able to speak more meaningfully to the psychological affordances of these virtual contexts.

Conditional on those needs and motivations, play time was no longer significantly related to well-being. We are cautious not to put too much stock in this pattern: a predictor losing significance when other predictors are controlled for can have many explanations. Need satisfaction and motivations might mediate the relation between play time and well-being, and conditioning on a mediator can mask the effect of the predictor [67]. Alternatively, if play time and player experiences are themselves related, including them all as predictors will lead some relations to be overshadowed by others. Theory-driven empirical research grounded in clear causal models and longitudinal data is needed to dissect these patterns.
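
A toy simulation makes the mediator-masking point concrete (made-up parameters, not our data): if play time affected well-being only through need satisfaction, conditioning on need satisfaction would drive the play-time coefficient towards zero even though a genuine total effect exists:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000

# Toy causal chain with full mediation: play -> need satisfaction -> well-being
play = rng.normal(size=n)
needs = 0.5 * play + rng.normal(size=n)        # mediator
wellbeing = 0.4 * needs + rng.normal(size=n)   # outcome depends only on the mediator

# Total effect: play alone predicts well-being (coefficient ~ 0.5 * 0.4 = 0.2)
print(sm.OLS(wellbeing, sm.add_constant(play)).fit().params)

# Conditioning on the mediator masks it (play coefficient ~ 0)
X = sm.add_constant(np.column_stack([play, needs]))
print(sm.OLS(wellbeing, X).fit().params)
```

The same null coefficient is equally consistent with confounding or with correlated predictors, which is why cross-sectional regressions alone cannot arbitrate between these causal stories.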

4.1. Limitations

We emphasize that we cannot claim that play time causally affects well-being. The goal of this study was to explore whether and how objective game behaviour relates to mental health. We succeeded in capturing a snapshot of that relation and gaining initial insight into the links between video games and mental health. But policymakers and public stakeholders require evidence that speaks to the trajectory of play and its effect on well-being over time. Video games are not a static medium; both how we play them and how we discuss them are in constant flux [72]. To build on the work we present here, there is an urgent need for collaborations with games companies to obtain longitudinal data that allow investigating the many facets of human play and their effects on well-being over time.

Longitudinal work would also address the question of how generalizable our findings are. We collected data during a pandemic; it is possible that the positive association between play time and well-being we observed only holds during a period when people are naturally playing more and have fewer opportunities to pursue other hobbies. Selecting two titles out of a wide range of games further limits how generalizable our results are. Animal Crossing: New Horizons, in particular, is considered a casual game with little competition. Therefore, although the two titles were drawn from different genres, we cannot generalize to players across all types of games [73]; the results might differ for more competitive games. Different games have different affordances [74] and therefore likely different associations with well-being. To be able to make recommendations to policymakers across the diverse range of video games, we urge video game companies to share game play data from more titles, from different genres and for different audiences. Making such large-scale data available would enable researchers to match game play with existing cohort studies; linking these two data sources would enable generalizable, causal tests of the effect of video games on mental health.

Another factor limiting confidence in our results is the low response rate observed in both of our surveys. Various selection effects might have led to unrepresentative estimates of well-being, game play, or their relationship. Increasing response rates while ensuring sample representativeness remains a challenge for future studies in this field.

Our results also operate at a broad level of analysis, which may partly explain the small effect sizes we observed. When exploring effects of technology use on well-being, researchers can operate on several levels. As Meier & Reinecke [75] explain, we can test effects at the device level (e.g. time playing on a console, regardless of game), the application level (e.g. time playing a specific game), or the feature level (e.g. using gestures in a multiplayer game). Here, we operated at the application level, which subsumes all possible effects at the feature level. In other words, when measuring time with a game, some features of the game will have positive effects and others negative effects; measuring at the application level therefore only gives us a view of 'net' video game effects. Assessing game behaviour at a more granular level will be necessary to gain more comprehensive insights and to make specific recommendations to policymakers. For that to happen, games companies will need to provide transparent, accessible APIs and access points for researchers to investigate in-game behaviour and its effects on people's mental health. Such in-game behaviours also carry much promise for studying the therapeutic effects of games, for example as markers of symptom severity in disorders [76]. In rare cases, researchers have been able to make use of such APIs [47,49], but the majority of games data remain inaccessible. For PvZ, EA provided a variety of in-game behaviours that we did not analyse here; we invite readers to explore those data on the OSF project of this manuscript.

We relied on objective measures of video game behaviour. These measures are superior to self-reported behaviour because they directly capture the variable of interest. However, capturing game sessions on the companies' side comes with its own measurement error: video game companies cannot perfectly log each game session. For example, in our data processing, some game sessions had duplicate start and end times (for PvZ) or inaccurate start and end times but accurate session durations (for AC:NH). Measurement error in logging technology use is a common issue (e.g. [12,77]), and researchers collaborating with industry partners need to understand how these partners collect telemetry. The field needs to embrace these measurement challenges rather than defaulting to self-reports.
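
A minimal sketch of the kind of cleaning this requires (hypothetical column names and toy records; actual telemetry formats differ between companies): drop duplicate session records and impossible timestamps before aggregating play time per player:

```python
import pandas as pd

# Hypothetical telemetry: one row per session, player id plus start/end timestamps
sessions = pd.DataFrame({
    "player_id": [1, 1, 1, 2],
    "start": pd.to_datetime(["2020-04-01 10:00", "2020-04-01 10:00",
                             "2020-04-02 18:30", "2020-04-01 09:15"]),
    "end":   pd.to_datetime(["2020-04-01 11:20", "2020-04-01 11:20",
                             "2020-04-02 19:00", "2020-04-01 09:45"]),
})

# Drop exact duplicate start/end records and impossible sessions
sessions = sessions.drop_duplicates(subset=["player_id", "start", "end"])
sessions = sessions[sessions["end"] > sessions["start"]].copy()

# Aggregate to total play time in hours per player
sessions["hours"] = (sessions["end"] - sessions["start"]).dt.total_seconds() / 3600
print(sessions.groupby("player_id")["hours"].sum())
```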

Last, this study was exploratory, and we made decisions about data processing and analysis without specifying them a priori [78]. Such researcher degrees of freedom can yield different results, especially in the field of technology use and well-being [65,79]. Throughout, we were as transparent as possible to enable others to examine and build upon our work [31]. To move beyond this initial exploration of objective game behaviour and well-being towards a more confirmatory approach, researchers should follow current best practices: preregister their research before collecting data in collaboration with industry partners [80,81] or before accessing secondary data sources [82], and consider the registered report format [83,84]. Following these steps will result in a more reliable knowledge base for policymakers.