Thursday, February 18, 2021

What do prime-age ‘NILF’ men do all day? A cautionary note on universal basic income

What do prime-age ‘NILF’ men do all day? Nicholas Eberstadt, Evan Abramsky. AEI, Feb 8 2021. https://www.aei.org/articles/what-do-prime-age-nilf-men-do-all-day

To date, most of the debate about [the Universal Basic Income] has centered on its affordability—i.e., its staggering expense. But a scarcely less important question concerns the implications of such largesse for the recipients themselves and civil society. What would a guaranteed income mean for the quality of citizenship in our country, given that a UBI would allow some—perhaps many—adult beneficiaries to opt for a life that does not include gainful employment or other comparable work?

As it happens, an experiment of sorts is already underway to help us answer this very question. Thanks to the American Time Use Survey (ATUS) from the Bureau of Labor Statistics, we have detailed, self-reported information each year on how roughly 10,000 adult respondents spend their days—from the moment they wake until they sleep. These surveyed Americans include prime-age men who are not in the labor force (or “NILF” to social scientists), men ordinarily in their peak employment years who are neither working nor looking for work. By examining the self-reported patterns of daily life of these grown men who do not have and are not seeking jobs, we may gain insights into the work-free existence that some UBI advocates hold to be a positive end in its own right.

---

The portrait of daily life that emerges from time-use surveys for grown men who are more or less entirely disconnected from the world of work is sobering. So far as can be divined statistically, their independence from obligations of the workforce does not translate into any obvious enhancement in their own quality of life or improvement in the well-being of others.

To go by the information they themselves report, quite the contrary seems to be true. Though they have nothing but time on their hands, they are not terribly involved in care for their home or for others in it. They are increasingly disinclined to embark on activities that take them outside the house. The central focus of their waking day is the television or computer screen, to which they commit as much time as many men and women devote to a full-time job. So far as we can tell, moreover, screen time is sucking up a still-increasing portion of their waking hours.

There would seem to be no shortage of anomie, alienation, or even despair in the daily lives of men entirely free from work in America today. Why, then, would we not expect a UBI—which would surely result in a detachment of more men from paid employment—to result in even more of the same?

Arguments can be made, of course, that UBI would attract a different sort of “unworking” man from those who predominate in the prime-age male NEET population today. But the patterns we have presented on the daily routines of existing work-free men should make proponents of the UBI think long and hard. Instead of producing new community activists, composers, and philosophers, more paid worklessness in America might only further deplete our nation’s social capital at a time when good citizenship is already in painfully short supply.

Both men and women were more committed to their relationships if they perceived their partners as attractive; however, people tended to feel less committed the more attractive their partners perceived themselves

Committing to a romantic partner: Does attractiveness matter? A dyadic approach. Tita Gonzalez Aviles et al. Personality and Individual Differences, Volume 176, July 2021, 110765; online February 16, 2021. https://doi.org/10.1016/j.paid.2021.110765

Abstract: Physical attractiveness is a highly valued trait in prospective romantic partners. However, it is unclear whether romantic partners' attractiveness is associated with commitment to the relationship. We report the results of a study of 565 male-female couples residing in Austria, Germany, or Switzerland. Employing dyadic analytical methods, we show that both men and women were more committed to their relationships if they perceived their partners as attractive. However, attractiveness also had a negative effect on commitment: People tended to feel less committed the more attractive their partners perceived themselves. Furthermore, although partners perceived themselves as similar in attractiveness to their partners, analyses revealed that similarity was not associated with commitment. Together, the findings demonstrate that attractiveness does matter for commitment to existing romantic relationships and emphasize the value of dyadic approaches to studying romantic relationships.

Keywords: Actor-partner interdependence model, Attraction, Attractiveness, Commitment, Dyadic response surface analysis
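For readers unfamiliar with the dyadic approach named in the keywords, the sketch below shows the core of an actor-partner interdependence model (APIM) in regression form. It is a minimal illustration, not the authors' analysis; the file and column names are hypothetical, and the published study additionally uses structural equation modeling and dyadic response surface analysis.

```python
# Minimal APIM sketch: each person's commitment is predicted by their own
# rating of the partner's attractiveness (actor-side predictor) and by the
# partner's self-rated attractiveness (partner-side predictor). Data are
# stacked "long": two rows per couple, one per member. The file and column
# names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("couples_long.csv")
# expected columns: couple_id, commitment, rating_of_partner_attr,
#                   partners_self_rated_attr

# Cluster standard errors by couple, since the two members of a dyad
# are not independent observations.
model = smf.ols(
    "commitment ~ rating_of_partner_attr + partners_self_rated_attr",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["couple_id"]})
print(model.summary())
# Per the abstract, one would expect a positive coefficient on
# rating_of_partner_attr and a negative one on partners_self_rated_attr.
```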


Even very subtle interactions with strangers yield short-term happiness

van Lange, Paul, and Simon Columbus. 2021. “Vitamin S: Why Is Social Contact, Even with Strangers, so Important to Well-being?” PsyArXiv. February 18. doi:10.31234/osf.io/jaxck

Abstract: Even before COVID-19, it was well-known in psychological science that our well-being is strongly served by the quality of our close relationships. But is our well-being also served by social contact with people we know less well? In this article, we discuss three propositions to support the conclusion that the benefits of social contact also derive from interactions with acquaintances and even strangers. The propositions state that most interaction situations with strangers are benign (Proposition 1), that most strangers are benign (Proposition 2), and that most interactions with strangers enhance well-being (Proposition 3). These propositions are supported, first, by recent research designed to illuminate the primary features of interaction situations, showing that situations with strangers often represent low conflict of interest. Second, in our interactions with strangers, most people exhibit high levels of low-cost cooperation (social mindfulness) and high-cost helping if help to strangers is urgent. We close by sharing research examples which show that even very subtle interactions with strangers yield short-term happiness. Broader implications for COVID-19 and urbanization are discussed.


From 2009... Voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending; we estimate that $1 spent on preparedness is worth about $15 of future damage mitigated

Healy, A., & Malhotra, N. (2009). Myopic Voters and Natural Disaster Policy. American Political Science Review, 103(3), 387-406, Aug 2009. https://doi.org/10.1017/S0003055409990104

Abstract: Do voters effectively hold elected officials accountable for policy decisions? Using data on natural disasters, government spending, and election returns, we show that voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending. These inconsistencies distort the incentives of public officials, leading the government to underinvest in disaster preparedness, thereby causing substantial public welfare losses. We estimate that $1 spent on preparedness is worth about $15 in terms of the future damage it mitigates. By estimating both the determinants of policy decisions and the consequences of those policies, we provide more complete evidence about citizen competence and government accountability.


DISCUSSION

A government responding to the incentives implied by our results will underinvest in natural disaster preparedness. The inability of voters to effectively hold government accountable thus appears to contribute to significant inefficiencies in government spending because the results show that preparedness spending substantially reduces future disaster damage. Voters are, in a word, myopic. They are not, as we have shown, myopic in the sense that they respond more to spending just before an election than to spending a year or two earlier; rather, they are myopic in the sense that they are unwilling to spend on natural disasters before the disasters have occurred. An ounce of prevention would be far more efficient than a pound of cure, but voters seem interested only in the cure. The resulting inconsistencies in democratic accountability reduce public welfare by discouraging reelection-minded politicians from investing in protection, while encouraging them to provide assistance after harm has already occurred.
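To put the headline estimate in concrete terms, here is a back-of-the-envelope calculation under the paper's 15:1 benefit-cost ratio; the $1 billion input is a made-up round number for illustration, not a figure from the study:

```latex
% Illustrative arithmetic under the estimated 15:1 benefit-cost ratio.
\[
  \text{future damage mitigated} \;\approx\; 15 \times \text{preparedness spending}
  \qquad\Longrightarrow\qquad
  \$1\ \text{billion} \;\mapsto\; \$15\ \text{billion}.
\]
```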

Although we consider our findings to be relevant to potential underinvestments in preparedness in areas beyond natural disasters such as preventive medicine, the government almost certainly does not underinvest in all kinds of preparedness. For example, after the attacks on September 11, large investments were made in preventing future attacks on passenger jets. One clear difference between airport security and most natural disaster preparedness measures is that airport security is highly observable and salient. Moreover, this example may be the exception that proves the rule we have demonstrated in this article. When voters provide their elected officials with incentives to make mistakes—ranging from insufficient investment in natural disaster preparedness to perhaps excessive attention to airline security—elected officials are likely to provide the inefficient policies that voters implicitly reward. Moreover, it is possible that major events such as Hurricane Katrina can heighten the value of natural disaster preparedness, but this effect may be temporary. For example, California passed Proposition 1E in 2006, a measure that provided bond financing for $4.1 billion in flood control measures, with $3 billion for upgrades to levees in the Central Valley, an area considered by experts to be exposed to catastrophic flooding due to insufficient protection from the existing levee network. Experts characterized the situation as a “ticking time bomb” in January 2005. The bond proceeds were to be used to obtain federal matching funds for the projects, in addition to financial and technical assistance from federal agencies such as the Army Corps of Engineers. Despite repeated warnings about the risk of severe flooding in the Central Valley, large-scale action was implemented only after Hurricane Katrina made the danger salient. The importance of Hurricane Katrina in ensuring support for Proposition 1E is suggested by the short argument that supporters of the measure included on the ballot. The argument read, “Our nation learned a tragic lesson from Hurricane Katrina—we cannot continue to neglect our unsafe levees and flood control systems” (California Attorney General 2006). The measure passed easily, winning 64% of the vote, including 67% of the vote in Los Angeles County and 56% of the vote in relatively conservative Orange County, despite the fact that neither would be affected directly by the bulk of the proposed spending. For voters in these areas, it appears to be the case that levee repair became a public good that voters were willing to support after Hurricane Katrina made clear the potential costs of inaction.

A similar phenomenon appears to have occurred at the federal level. Following Hurricane Katrina, Congress passed and President Bush signed the Post-Katrina Emergency Reform Act of 2006, which reorganized FEMA and appropriated $3.6 billion for levees and other flood control measures. In the immediate aftermath of Katrina, voters in New Orleans also appear to have placed greater value on these preparedness projects. In late 2006, 30% of New Orleans residents said that “repairing the levees, pumps, and floodwalls” should be one of the top two priorities in the rebuilding efforts, ranking this item and crime control as their top two concerns (Kaiser Family Foundation 2007, 55). The increased voter concern for disaster protection appears to have faded significantly since then. By mid-2008, only 2% of New Orleans voters ranked “hurricane protection/rebuilding floodwalls, levees” as the top rebuilding concern (Kaiser Family Foundation 2008, 52). This apparent change in priorities for New Orleans residents suggests that even an event like Hurricane Katrina is likely to increase the salience of preparedness issues only temporarily. Interestingly, the case of Hurricane Katrina may be anomalous with respect to the electoral benefits of relief spending: the federal government provided more than $94.8 billion in relief payments to the Gulf Coast following Katrina (Congressional Budget Office 2007), and the Republican Party suffered heavy losses in the 2006 and 2008 elections. Unlike most disaster events, Hurricane Katrina received a substantial amount of media coverage. In an Associated Press poll of U.S. news editors and in the Pew Research Center U.S. News Interest Index, Hurricane Katrina was the top world story of 2005 (Kohut, Allen, and Keeter 2005), and most of this coverage focused on the mishandled immediate logistical response to the disaster as opposed to the generous financial response that came later. Hence, voters may have been substantially affected by the early negative media coverage and carried those initially formed attitudes about the administration’s competence with them into the voting booth. Nevertheless, the case of Katrina offers two potential extensions to this research. Subsequent studies can explore how the salience of a disaster changes the political effectiveness of relief spending, in addition to more closely examining how logistical response differs from financial response.

Due to the transience of the effect that disasters have on the visibility of preparedness, it is important to note that there is some suggestive evidence that governments may be able to take action to make preparedness salient to voters in a more permanent fashion. In the late 1990s, FEMA introduced Project Impact, a grassroots disaster preparedness initiative that emphasized collaboration between government, businesses, and local community leaders, bypassing state governments (Birkland and Waterman 2008; Wachtendorf and Tierney 2001; Witt 1998). Under Project Impact, FEMA selected a group of 57 communities from all 50 states (as well as Puerto Rico and the District of Columbia) to receive either $500,000 or $1 million grants to pursue disaster preparedness and mitigation initiatives (Government Accounting Office 2002). The program targeted areas of varying size and disaster risk. Interviews with participants in the program indicate that people valued the program. It was also credited with helping limit damage from the February 2001 Nisqually earthquake in the Puget Sound, ironically on the very day that the program was cancelled by the Bush Administration (Holdeman 2005). Compared to other counties, the change in the Democrats’ vote share from 1996 to 2000 was 1.9% higher in Project Impact counties, a significant difference (p = .006) (Healy and Malhotra 2009). This estimate is only suggestive of the possibility that voters may have responded to Project Impact because it is not possible to control for the omitted variables that could be driving this difference. Future scholarship could use surveys, as well as lab and field experiments, to determine the extent to which voter decisions can be influenced by government efforts at increasing the salience of issues and policies in areas such as disaster preparedness.

Although our results indicate that the incumbent presidential party has not been rewarded for investing in disaster preparedness, it is possible that voters could credit members of Congress for those initiatives. A natural extension to this analysis is to explore whether similar effects are observed in House and Senate elections. We conducted a preliminary exploration of this question by estimating analogous models predicting the vote share for the incumbent Senate party in the county as the dependent variable. For a variety of potential reasons, we did not obtain precise coefficient estimates from which to draw firm conclusions. Across all specifications that we considered, though, preparedness spending entered with a near-zero coefficient. We anticipate that future research more closely examining Congressional elections will find that members of Congress, like presidents, are not rewarded for preparedness spending.

Subsequent research could also apply our empirical strategy of simultaneously examining voting decisions, government policy, and associated outcomes to issues such as education or health care, as well as explore potential ingredients for improved retrospection. A more complete understanding of how citizens value preparedness and relief across a variety of domains could both advance our theoretical understanding of retrospective voting and help inform policy making. Through an analysis of voter responses to disaster relief and preparedness spending, we have addressed outstanding questions in the long-standing and extensive literature on citizen competence in democratic societies. Examining actual decisions by the electorate, we found heterogeneity with respect to the public’s responsiveness to various government policies. However, we have also shown that the mere presence of responsiveness does not necessarily indicate citizen competence and that failures in accountability can lead to substantial welfare losses.

Many sex differences in humans are largest under optimal conditions and shrink as conditions deteriorate; sex differences in growth, social behavior, and cognition illustrate the approach

Now You See Them, and Now You Don’t: An Evolutionarily Informed Model of Environmental Influences on Human Sex Differences. David C. Geary. Neuroscience & Biobehavioral Reviews, February 17 2021. https://doi.org/10.1016/j.neubiorev.2021.02.020

Highlights

• The magnitude of human sex differences varies across contexts

• An evolutionarily informed model of these environmental influences is discussed

• Many sex differences are largest under optimal conditions and shrink as conditions deteriorate

• Human sex differences in growth, social behavior, and cognition illustrate the approach

• The approach has implications for better understanding sex-specific vulnerabilities

Abstract: The contributions of evolutionary processes to human sex differences are vigorously debated. One counterargument is that the magnitude of many sex differences fluctuates from one context to the next, implying an environmental origin. Sexual selection provides a framework for integrating evolutionary processes and environmental influences on the origin and magnitude of sex differences. The dynamics of sexual selection involve competition for mates and discriminative mate choices. The associated traits are typically exaggerated and condition-dependent, that is, their development and expression are very sensitive to social and ecological conditions. The magnitude of sex differences in sexually selected traits should then be largest under optimal social and ecological conditions and shrink as conditions deteriorate. The basics of this framework are described, and its utility is illustrated with discussion of fluctuations in the magnitude of human physical, behavioral, and cognitive sex differences.

Keywords: Sex differences, sexual selection, cognition, condition-dependent, stressor


Precarious Manhood Beliefs in 62 Nations: Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action

Precarious Manhood Beliefs in 62 Nations. Bosson, Jennifer K. et al. Journal of Cross-Cultural Psychology, accepted Feb 2021. https://dial.uclouvain.be/pr/boreal/object/boreal:243234

Precarious manhood beliefs portray manhood, relative to womanhood, as a social status that is hard to earn, easy to lose, and proven via public action. Here, we present cross-cultural data on a brief measure of precarious manhood beliefs (the Precarious Manhood Beliefs scale [PMB]) that covaries meaningfully with other cross-culturally validated gender ideologies and with country-level indices of gender equality and human development. Using data from university samples in 62 countries across 13 world regions (N = 33,417), we demonstrate: (1) the psychometric isomorphism of the PMB (i.e., its comparability in meaning and statistical properties across the individual and country levels); (2) the PMB’s distinctness from, and associations with, ambivalent sexism and ambivalence toward men; and (3) associations of the PMB with nation-level gender equality and human development. Findings are discussed in terms of their statistical and theoretical implications for understanding widely-held beliefs about the precariousness of the male gender role.


Working outside the home did nothing to help people feel socially connected, nor did video calls with friends and family; people living with a romantic partner were most likely to improve in social connection after social distancing measures

Okabe-Miyamoto K, Folk D, Lyubomirsky S, Dunn EW (2021) Changes in social connection during COVID-19 social distancing: It’s not (household) size that matters, it’s who you’re with. PLoS ONE 16(1): e0245009. https://doi.org/10.1371/journal.pone.0245009

Popular version: Partners help us stay connected during pandemic | News (ucr.edu)

Abstract: To slow the transmission of COVID-19, countries around the world have implemented social distancing and stay-at-home policies—potentially leading people to rely more on household members for their sense of closeness and belonging. To understand the conditions under which people felt the most connected, we examined whether changes in overall feelings of social connection varied by household size and composition. In two pre-registered studies, undergraduates in Canada (Study 1, N = 548) and adults primarily from the U.S. and U.K. (Study 2, N = 336) reported their perceived social connection once before and once during the pandemic. In both studies, living with a partner robustly and uniquely buffered shifts in social connection during the first phases of the pandemic (β = .22 in Study 1, β = .16 in Study 2). In contrast, neither household size nor other aspects of household composition predicted changes in connection. We discuss implications for future social distancing policies that aim to balance physical health with psychological health.
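As a reading aid, the design described in the abstract amounts to a residualized-change regression: the during-pandemic connection score is regressed on its pre-pandemic baseline plus household predictors. The sketch below is a minimal, hypothetical rendering (file and column names invented; the pre-registered models include further covariates):

```python
# Residualized-change sketch of the pre/post design: controlling for the
# Time-1 score makes the other coefficients predictors of *change* in
# social connection. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("connection_panel.csv")
# expected columns: connection_t1 (pre-pandemic), connection_t2 (during),
#                   lives_with_partner (0/1), household_size

model = smf.ols(
    "connection_t2 ~ connection_t1 + lives_with_partner + household_size",
    data=df,
).fit()
print(model.summary())
# Per the abstract, lives_with_partner should carry a positive coefficient
# (standardized betas of .22 and .16 in Studies 1 and 2), while
# household_size should be near zero.
```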

Discussion

Across two pre-registered studies that followed the same participants from before the COVID-19 pandemic into its early stages, we found that living with a partner was the strongest predictor of shifts in social connection across time. This finding replicated across two different samples—a sample of undergraduates at a Canadian university and a sample of adults from mostly the U.S. and the U.K. Both of our studies revealed robust positive regression coefficients indicating that people living with a partner were more likely to improve in social connection after social distancing guidelines were in place than those not living with a partner. This finding is consistent with past research demonstrating that being in a relationship is one of the strongest predictors of connection and well-being [11, 45], in part because happier people are more likely to find partners [46, 47]. Additionally, during times of worry and uncertainty, partners have been found to be more valuable for coping than other types of household members [26]. Moreover, recent research has shown that, on average, romantic relationships have not deteriorated over the course of the pandemic; indeed, people are relatively more willing to forgive their partners during COVID-19 [48]. In light of this evidence, it is not surprising that partners showed the strongest effect, especially during a pandemic.

Contrary to our pre-registered hypotheses, changes in loneliness were not predicted by any other aspects of household composition. Furthermore, we found only nonsignificant trends for the impact of household size, including living alone, on social connection during COVID-19, perhaps because both our studies included small samples of those living in large households and households of one. It is important to keep in mind that the pandemic has forced people to spend unusually large amounts of time confined to home. Given that interpersonal interactions must be positive to contribute to one’s overall sense of connectedness [10], those who live in larger households—relative to those who live alone or in smaller households—may have had more interactions that were negative (e.g., due to bickering or lack of privacy and alone time) and, as a result, failed to experience benefits in terms of social connection. Moreover, our studies measured experiences fairly early in the pandemic (April 2020); thus, as people continue to distance over long periods of time, their feelings of social connection may suffer. Going beyond household size and structure, future studies should examine the effects of relationship quality on social connection over time.

When examining how other features of household composition were associated with shifts in social connection during the pandemic, we obtained mixed findings regarding living with pets and null findings for all other household variables. However, because households are multifaceted, larger sample sizes will be needed to fully dissect the household composition findings, as well as to reveal interactions (such as with household size, gender, or country of residence). For example, studies with larger sample sizes may uncover differences in connection between those in households of four (with a partner and two children) versus households of five (with a partner and three children), and so on. Importantly, future investigators may wish to further unpack the role of household dynamics, as some households include unhealthy relationships that may be exacerbated by social distancing measures and others include housemates that minimally interact. As such, the quality and frequency of interaction among household members—perhaps with experience sampling or daily diary measures—is an important factor to explore in future work.

Implications and conclusions

Directed by social distancing interventions in the spring of 2020, millions of people were no longer commuting to work, attending school, or leaving their homes to spend time with friends and family. These extraordinary conditions likely led people to rely more on their household members to fulfill their needs for closeness, belonging, and connection [10]. The results from our two studies revealed that living with a partner—but not how many people or who else one lives with—appeared to confer unique benefits during these uncertain and unprecedented times. Indeed, demonstrating its robustness, this finding replicated across our two studies, despite weak and opposite correlations between household size and living with a partner (r = -.06 in Study 1 and .11 in Study 2).

In light of these results, policy makers might consider developing guidelines for social/physical distancing that protect people’s physical health while ensuring they retain a sense of closeness and connection by spending time in close proximity with partners, even outside their households. Some areas in the world, such as New Zealand, have implemented a strategy known as the “social bubble,” which is the easing of social distancing to allow close contact with another household [49]. Such approaches might be especially helpful for individuals who have been unintentionally and disproportionately socially isolated by social distancing measures, such as those who are cut off, separated from their partners, or generally struggling with staying at home. However, social bubbles pose a risk of increased infection rates [49]. Hence, just as safe sex education aims to reduce the rate of sexually transmitted diseases and unintended pregnancy, education on safe social distancing (or social bubbling) strategies might guide individuals across the globe in how to connect with others safely while simultaneously curtailing COVID-19 rates. In sum, recommendations that reduce the risk of transmission while prioritizing social connection can ensure that people’s physical and psychological health are optimally balanced.

Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike & aversion

“I hate this part right here”: Embodied, subjective experiences of listening to aversive music. Henna-Riikka Peltola, Jonna Katariina Vuoskoski. Psychology of Music, February 17, 2021. https://doi.org/10.1177/0305735620988596

Abstract: Although the majority of previous research on music-induced responses has focused on pleasurable experiences and preferences, it is undeniable that music is capable of eliciting strong dislike and aversion as well. To date, only limited research has been carried out to understand the subjective experience of listening to aversive music. This qualitative study explored people’s negative experiences associated with music listening, with the aim to understand what kinds of emotions, affective states, and physical responses are associated with listening to aversive music. One hundred and two participants provided free descriptions of (1) musical features of aversive music; (2) subjective physical sensations, thoughts and mental imagery evoked by aversive music; (3) typical contexts where aversive music is heard; and (4) the similarities and/or differences between music-related aversive experiences and experiences of dislike in other contexts. We found that responses to aversive music are characterized by embodied experiences, perceived loss of agency, and violation of musical identity, as well as social or moral attitudes and values. Furthermore, two “experiencer types” were identified: One reflecting a strong negative attitude toward unpleasant music, and the other reflecting a more neutral attitude. Finally, we discuss the theoretical implications of our findings in the broader context of music and emotion research.

Keywords: negative emotions, embodiment, emotion, listening, qualitative, valence

Although the main focus of previous research has been on the paradoxical enjoyment of negative emotions, some work on the unpleasant aspects of music and sounds has been carried out. McDermott (2012) summarized neuroscientific findings relating to auditory preferences, and presented typical aversive features of non-musical sounds. In general, loud and distorted sounds are usually considered unpleasant, and certain frequencies are likely to trigger aversive responses: Sharpness (high-frequency energy of a sound) and roughness (rapid amplitude modulation of a sound) are major determinants of unpleasantness, but they can be less aversive at low volume. However, in the context of music, aversion to sounds is at least partially context-dependent and a matter of exposure and familiarization. For instance, the development of music technology and the introduction of distortion in rock music has challenged the traditional Western concepts of music aesthetics (McDermott, 2012). Cunningham et al. (2005) investigated aversive musical features, and discovered certain features explaining why a piece of music was hated: Bad or clichéd lyrics, catchiness (the “earworm effect”), voice quality of a singer, over-exposure, perceptions of pretentiousness, and extramusical associations (such as the influence of music videos or unpleasant personal experiences) were identified as the main factors making music unpleasant.

Furthermore, listeners’ psychological strategies in relation to musical taste have been preliminarily investigated. Ackermann (2019) used interviews to explore negative attitudes toward disliked music, and synthesized four themes of “legitimization strategies” that are used to justify these attitudes. The themes cover (1) music-specific legitimization strategies, where the focus is on the compositional aspects of music, the interpretation of the musician or composer, the lyrics and semantic content, and other aesthetic criteria; (2) listener-specific legitimization strategies, where the focus is on the emotional or mood-related responses to music, physical reactions, and other aspects relating to the self and identity; (3) social legitimization strategies, where the focus is on in-group and out-group relations; and finally (4) cross-category subject areas, consisting of aspects such as the exaggerated emotionalization (Kitsch) of music, the authenticity and commerciality of music, and differing definitions between music and noise. The first three strategies seem to be applicable to disliking singing voices in popular music as well. Merrill and Ackermann (2020) found that emotional reasons, factual reasons, bodily reactions and urges, and social reasons were rationales for the negative evaluation of pop singers’ voices (see also Merrill, 2019). The preliminary work of these two scholars shows that, in addition to socio-cultural perspectives and aspects relating to social identity, psychological, emotional, and physical responses play a crucial role in aversive musical experiences.

Krueger (2019) has proposed that music’s materiality is the key reason behind its power over listeners. The fact that we resonate (physically) with sounds explains why humans react to high volume and certain frequencies, but musical sounds in particular “seem to penetrate consciousness in a qualitatively deeper way than input from other perceptual modalities,” as Krueger (2019) states. Thus, music and soundscapes that are not made or chosen by the listener can strongly affect them, and potentially even negate individual agency and consent by “hacking” their self-regulatory system. These mechanisms have been previously investigated in studies focusing on music and affect regulation, highlighting the positive effects of intentional music listening for self-regulative purposes (for a review of different approaches to affective self-regulation through music, see Baltazar & Saarikallio, 2017). According to Krueger (2019), it is possible to weaponize these processes, and thus use music as a technology for “affective mind invasion” and, in the worst case, torture, as was done by the United States military in the so-called “global war on terror.” Recorded cases of the military playing loud rock music from speakers during operations, as well as looping offensive unfamiliar heavy metal music or endless repetitions of Western children’s songs to “soften up detainees prior to questioning” instead of weaponizing sheer noise, suggest that symbolic musical “messages” combined with high-volume sounds are effective and subtle ways of affecting one’s mind compared to more apparent forms of violence (Garratt, 2018, pp. 42–44).

The aim of the present study is to explore people’s negative experiences associated with music listening. We aim to understand what kinds of emotions, affective states, and physical responses are associated with aversive music, identify commonalities in the verbal descriptions, and reflect on the theoretical implications of these aversive musical experiences for the wider music and emotion research community.

Substantial heritability of neighborhood disadvantage: Individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults

Understanding neighborhood disadvantage: A behavior genetic analysis. Albert J. Ksinan, Alexander T. Vazsonyi. Journal of Criminal Justice, Volume 73, March–April 2021, 101782. https://doi.org/10.1016/j.jcrimjus.2021.101782

Abstract

Purpose Studies have shown that disadvantaged neighborhoods are associated with higher levels of crime and delinquent behaviors. Existing explanations do not adequately address how individuals select neighborhoods. Thus, the current study employed a genetically-informed design to test whether living in a disadvantaged neighborhood might be partly explained by individual characteristics, including self-control and cognitive ability.

Method A sibling subsample of N = 1573 Add Health siblings living away from their parents at Wave 4 was used in twin analyses to assess genetic and environmental effects on neighborhood disadvantage. To evaluate which individual-level variables might longitudinally predict neighborhood disadvantage, a sample of N = 12,405 individuals was used.

Results Findings provided evidence of significant heritability (32%) of neighborhood disadvantage. In addition, a significant negative effect by adolescent cognitive ability on neighborhood disadvantage 14 years later was observed (β = −0.04, p = .002). Follow-up analyses showed a genetic effect on the association between cognitive ability and neighborhood disadvantage.

Conclusions Study findings indicate substantial heritability of neighborhood disadvantage, showing that individuals themselves might potentially contribute to a self-selection process that explains which neighborhoods they occupy as adults.
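For readers unfamiliar with how a "heritability of a neighborhood" can be estimated at all, the sketch below shows the classical Falconer shorthand that underlies twin designs. It is purely illustrative: the paper fits formal biometric models to Add Health sibling pairs, and the input correlations here are invented so that the genetic share comes out at the 32% scale reported in the abstract.

```python
# Back-of-the-envelope ACE decomposition via Falconer's formulas.
# Inputs are hypothetical MZ/DZ twin correlations of a neighborhood-
# disadvantage score; the published analysis uses formal biometric models.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Split phenotypic variance into additive-genetic (A), shared-
    environment (C), and nonshared-environment (E) components."""
    a2 = 2 * (r_mz - r_dz)   # heritability estimate
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # nonshared environment (incl. measurement error)
    return {"A": round(a2, 2), "C": round(c2, 2), "E": round(e2, 2)}

print(falconer_ace(r_mz=0.48, r_dz=0.32))
# -> {'A': 0.32, 'C': 0.16, 'E': 0.52}: with these made-up correlations,
#    the A component matches the 32% heritability reported in the abstract.
```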


Introduction

Criminologists have extensively focused on the impact of neighborhood social disorganization on crime and deviance since the first half of the 20th century (Shaw & McKay, 1942). Research has provided evidence that neighborhoods with disorganized structural characteristics, including high levels of mobility, high rates of poverty, or high numbers of single-parent families, were associated with higher levels of criminal behavior (Bursik & Grasmick, 1999; Morenoff, Sampson, & Raudenbush, 2001; Sampson, 1985; Sampson, Raudenbush, & Earls, 1997; Wilson, 1987).

These hypothesized neighborhood effects have generally been considered to flow in one direction, namely from neighborhoods to individuals. However, a small number of studies have hypothesized and tested the opposite, namely that individuals select into their neighborhoods. Given that neighborhood variables reflect the aggregation of the qualities and characteristics of individual members, it seems likely that certain individual traits might predict neighborhood characteristics (Hedman & van Ham, 2012). If individual traits do in fact predict neighborhood characteristics and all psychological traits are to a certain extent heritable (Turkheimer, 2000), then it stands to reason that neighborhood characteristics will show some heritable effect as well. The current study used a genetically-informed design to test for both genetic and environmental effects on selecting into certain neighborhoods and to test whether individual characteristics (self-control and cognitive ability) have developmental effects on this selection process.

A neighborhood is defined as a geographically unique subsection or area, part of a larger community. Typically, neighborhoods are operationalized using geographic boundaries defined by an administrative agency (such as the Census Bureau), which partitions neighborhoods into tracts or blocks (Sampson, Morenoff, & Gannon-Rowley, 2002).

The traditional framework for studying neighborhood effects is rooted in social disorganization theory. According to this theory, every individual is prone to engage in some deviant or criminal behaviors; bonds to society make these behaviors too costly and thus effectively prevent crime from happening. The process through which a neighborhood controls the behaviors of its members is termed collective efficacy (Morenoff et al., 2001): the ability of individuals sharing a neighborhood to work together and to solve issues related to their neighborhood. In this way, individuals engage in effective indirect social control in order to prevent neighborhoods from deteriorating. A typical example of such indirect social control is when adults monitor youth loitering in the neighborhood and are willing to confront them when they disturb or disrupt a public space (Sampson et al., 1997). A well-functioning neighborhood is a complex and cohesive system of social networks, rooted in both the family as well as the community (Sampson, Morenoff, & Earls, 1999).

Neighborhood structural factors such as high poverty, single-parent families, residential instability, high unemployment, or a high number of minority inhabitants, are associated with lower levels of neighborhood organization or an inability of the community to maintain effective social control, according to social disorganization theory (Sampson, 1997; Sampson & Groves, 1989). The impact of these structural factors might lead to alienation of neighborhood members and low levels of investment in the community, which in turn leads to greater social disorder and thus higher proneness to disorder and crime (Leventhal & Brooks-Gunn, 2000; Leventhal, Dupéré, & Brooks-Gunn, 2009; Molnar, Miller, Azrael, & Buka, 2004; Sampson & Groves, 1989).

Empirical support for social disorganization theory and the concept of collective efficacy in predicting crime and delinquency has been provided by a number of studies that have used hierarchical or multi-level modeling. For example, Sampson et al. (1997) found that concentrated disadvantage, immigration concentration, and residential (in)stability significantly predicted collective efficacy, which in turn mediated the effects of disadvantage and residential (in)stability on several measures of violence. Similarly, Sampson and Raudenbush (1999) found that collective efficacy of a neighborhood predicted lower levels of disorder and crime (see also Molnar et al., 2004; Sampson, 1997; Valasik & Barton, 2017).

In contrast, a more recent approach to studying neighborhood effects has focused on how neighborhood characteristics predict individual-level outcomes (as opposed to predicting neighborhood-level rates). Based on Leventhal and Brooks-Gunn's review (2000), neighborhoods affect a plethora of individual adjustment measures. Among them, neighborhood SES was found to positively predict educational attainment and mental health, as well as negatively predict individual delinquency and criminal behavior (Leventhal et al., 2009).

Individuals do not randomly allocate into neighborhoods; rather, they actively seek out and select their neighborhoods. If neighborhoods consist of individual members, it stands to reason that the likelihood of living in a particular place is, to a certain extent, affected by individual characteristics, and thus, that neighborhood characteristics are also affected by individual differences. This is referred to as ‘self-selection’. In the current definition, self-selection refers to a broader concept than simply ‘individuals making deliberate choices when deciding where to live.’ Such a view would be imprecise and potentially harmful, as it might put too much emphasis on personal responsibility for potentially detrimental living conditions. Rather, self-selection refers to a more impersonal process whereby individuals with different life histories occupy different life trajectories that lead them to different places of residence; in many cases, living in a particular neighborhood is not so much a volitional process or act, but rather a situation that cannot be easily changed.

The idea that a self-selection process might be taking place in the association between individual (or family) and neighborhood characteristics is certainly not new. In fact, the issue of non-independence of neighborhood sorting and individual characteristics has been mentioned by several authors (Sampson & Sharkey, 2008). However, the individual characteristics identified as influencing self-sorting into particular neighborhoods were of a social nature, such as being a renter versus a homeowner, being single, or being an immigrant, just to name a few (Hedman & van Ham, 2012). At present, however, there does not appear to be a clear understanding of the potential effect of self-selection on neighborhood effects. Some research did not find support for neighborhood effects once self-selection was accounted for (Oreopoulos, 2003), while other studies found that neighborhood effects remained significant after accounting for self-selection (Aaronson, 1998; Dawkins, Shen, & Sanchez, 2005; Galster, Marcotte, Mandell, Wolman, & Augustine, 2007). Thus, the evidence is quite mixed.

Behavior genetic studies partition phenotypic variance into three sources: heritability, shared environment, and nonshared environment. Over the past three decades, studies have consistently shown both environmental and genetic influences on the vast majority of individual traits (Plomin, DeFries, Knopik, & Neiderhiser, 2013; Polderman et al., 2015). However, genetic effects are not limited to individual characteristics. In fact, some presumably environmental effects have also been found to be correlated with genetic predispositions. There are three types of gene-environment correlations: passive rGE, evocative rGE, and active rGE (Plomin, DeFries, & Loehlin, 1977). Particularly relevant to the concept of neighborhood self-selection is active rGE, which refers to individuals actively selecting environments based on their inherent preferences (Moffitt, 2005).
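In standard notation, the partition described above, and the twin-correlation logic that identifies it, can be written as follows; this is textbook biometric shorthand, not an equation from the paper:

```latex
% ACE decomposition of phenotypic variance and its identification
% from monozygotic (MZ) and dizygotic (DZ) twin correlations.
\[
  V_P = A + C + E, \qquad
  h^2 = \frac{A}{V_P}, \quad c^2 = \frac{C}{V_P}, \quad e^2 = \frac{E}{V_P},
\]
\[
  r_{MZ} = h^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}\,h^2 + c^2
  \quad\Longrightarrow\quad h^2 = 2\,(r_{MZ} - r_{DZ}).
\]
```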

Because individuals are not randomly assigned to environments but are active agents in selecting, modifying, and adapting to them, this process is shaped by their individual characteristics, which are themselves substantially heritable. A review of 55 studies by Kendler and Baker (2007) showed that there are substantial genetic effects (average h2 = 0.27) on measures of the environment, including parenting behaviors, stressful life events, social support, or peer interactions. Nevertheless, there has not been a study that has directly tested the heritability of neighborhood characteristics. Most genetically-informed studies on more distal environmental effects (such as schools or neighborhoods) have focused on their moderating effects only (Cleveland, 2003; Rowe, Almeida, & Jacobson, 1999). For example, a study by Connolly (2014) found that neighborhood disadvantage moderated the genetic effect on adolescent delinquency between the ages of 6 and 13 years, and between 14 and 17 years, with greater heritable effects observed at higher levels of neighborhood disadvantage.

How might individual characteristics be genetically related to the neighborhoods that individuals live in? The key to understanding potential genetic effects on neighborhoods lies in the process of active rGE, according to which individuals actively ‘select’ their environments. In the case of neighborhoods, this process involves both the selection of a particular neighborhood to live in and the range of neighborhoods that are available in the first place, which is itself determined to a certain extent by individual traits.

Neighborhood socioeconomic status is defined by the socioeconomic status of individual households or their inhabitants, and, in the context of the United States, socioeconomic status is strongly affected by the level of education, which in turn has been found to be positively associated with cognitive ability or intelligence (L. Gottfredson, 1997a; Neisser et al., 1996; Strenze, 2007). Differences in intelligence have a large heritable component, which has been found to increase with age (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990; Devlin & Daniels, 1997; Haworth et al., 2010). Moreover, a more direct link between cognitive ability or intelligence and career success, as well as between intelligence and more positive developmental adjustment outcomes in general, has been established by numerous studies (Caspi, Wright, Moffitt, & Silva, 1998; L. Gottfredson, 2004; Judge, Higgins, Thoresen, & Barrick, 1999; Schmidt & Hunter, 2004). Thus, it stands to reason that neighborhood socioeconomic status should have a heritable or genetic component, and individual cognitive ability might partially explain this variance.

Another candidate personality trait that might play a significant role in affecting neighborhood characteristics is self-control, or the ability to exercise restraint by delaying immediate gratification and subduing one's impulses. Perhaps the most prominent theory emphasizing the role of self-control is the self-control theory of Gottfredson and Hirschi (1990). According to Gottfredson and Hirschi, all deviant and criminal behaviors are to some extent related to a lack of self-control. A great number of studies have provided consistent empirical support for (low) self-control as perhaps the single best predictor of deviant and criminal behaviors (Hay, 2001; Vazsonyi, Mikuška, & Kelley, 2017; Wright, Caspi, Moffitt, & Silva, 1999), as well as of better health, better career prospects, and less substance use (Casey et al., 2011; Mischel et al., 2011; Moffitt et al., 2011). In this view, the association between neighborhood disorganization and low self-control would treat low self-control as the cause rather than the outcome, as individuals with low self-control would be more likely to self-select into neighborhoods with higher levels of social disorganization (Caspi, Taylor, Moffitt, & Plomin, 2000; Evans, Cullen, Burton Jr., & Dunaway, 1997). Both cognitive ability and (low) self-control have in fact been tested in a longitudinal study by Savolainen, Mason, Lyyra, Pulkkinen, and Kokko (2017); findings showed that childhood differences in cognitive skills as well as childhood antisocial propensity (both measured at age 8) significantly predicted a developmental cascade leading to greater socioeconomic exclusion in midlife.