Tuesday, August 4, 2020

Highly feminist women who desire sexist men experienced more cognitive dissonance (operationalized as negative affect) than women lower in feminist attitudes

Feminism and mate preference: A study on relational cognitive dissonance. Aslı Yurtsever, Arın Korkmaz, Zeynep Cemalcilar. Personality and Individual Differences, Volume 168, January 1 2021, 110297. https://doi.org/10.1016/j.paid.2020.110297

Abstract: Evolution proposes differences in mate preferences between the two sexes. Females prefer mates who can invest in them and their offspring. In the contemporary era, gender ideologies are not always in line with these premises, but desires still could be. The conflict between ideology and desire could trigger cognitive dissonance in contemporary feminist women. We recruited 246 women online to investigate the occurrence of dissonance based on feminist attitudes, and whether dissonance reduction strategies (i.e., behavior change, cognition change) differed based on their preference for consistency. Results showed that highly feminist women who desire sexist men experienced more cognitive dissonance (operationalized as negative affect) than women lower in feminist attitudes. Preference for consistency moderated cognitive dissonance's association with behavior, but not cognition change.

Keywords: Cognitive dissonance; Mate preference; Feminism; Preference for consistency

4. Discussion

The current study showed that desire toward evolutionarily preferable mate behaviors conflicted with feminist attitudes, creating cognitive dissonance. We predicted that when attraction was held constant, such behaviors, which feminist women would deem sexist, would trigger cognitive dissonance in heterosexual feminist women. Indeed, our pilot study supported this finding, and its association with high negative affect as indicative of cognitive dissonance. In the experiment, in line with our hypothesis, feminist women attributed to the vignette protagonist similar dissonance regardless of the type of sexist behaviors, be it overt or subtle. We found support for findings on within-sex variation in mate preferences; desiring resource display was challenged by those who had strong endorsement of feminism, and their desire toward any sexist-deemed behavior proved problematic. Hughes and Aung (2017) found several individual differences that moderated women's mate preferences. We expanded their list with feminism; feminist women were put off by resource display, unlike their non-feminist counterparts. Less feminist women did not experience dissonance in the subtle condition because the man's manner of displaying resources was deemed attractive (as evolutionary trends and traditional gender roles suggest) and was not misaligned with any prior attitudes. They still experienced higher NA in the overt condition compared to the control. Although Harmon-Jones (2000) found that NA measures dissonance irrespective of aversive situations, this could still be due to the overall unpleasantness of the interaction, and not a reflection of feminist attitudes.
Our findings showed that once women experienced cognitive dissonance, they employed dissonance reduction strategies to relieve the emerging negative arousal. This supports previous research on the validity of assessing affect as an indicator of cognitive dissonance. Overall, we found that women who felt high negative affect were more likely to use behavior change (i.e., terminate the interaction). Furthermore, individuals' preference for consistency moderated this effect. Previous research had examined preference for consistency as an individual difference that predicted cognitive dissonance (Nolan & Nail, 2014). We, in turn, investigated the moderating role of preference for consistency on dissonance reduction strategies. We showed, contrary to our hypothesis, that the association of negative affect with behavior change was stronger for women who were low (vs. high) on preference for consistency. This may be because high PFC participants sought consistency with the “going on the date” decision, and not with their feminist attitudes. That is, once people engage in attitude-deviating behaviors, seeking consistency is fixed on the deviation and not on the attitudes, demonstrating the foot-in-the-door effect (Guadagno & Cialdini, 2010).
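To make the moderation claim concrete, here is a minimal illustrative sketch (not the authors' actual analysis; all data below are simulated placeholders) of how moderation is typically tested: regress behavior change on negative affect (NA), preference for consistency (PFC), and their interaction, and inspect the interaction coefficient.

```python
# Illustrative moderation sketch with simulated data. A non-zero
# coefficient on the NA x PFC interaction term indicates moderation.
import numpy as np

rng = np.random.default_rng(0)
n = 246  # matches the study's sample size
na = rng.normal(0, 1, n)    # negative affect (standardized, simulated)
pfc = rng.normal(0, 1, n)   # preference for consistency (simulated)
# Simulated outcome: NA predicts behavior change, less so at high PFC
behavior = 0.5 * na - 0.2 * na * pfc + rng.normal(0, 1, n)

# Design matrix: intercept, NA, PFC, and the NA x PFC interaction
X = np.column_stack([np.ones(n), na, pfc, na * pfc])
beta, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print(beta)  # beta[3] is the moderation (interaction) coefficient
```

In the study's terms, a negative interaction coefficient would mean the NA-to-behavior-change link weakens as PFC rises, which is the direction the authors report (stronger association for low-PFC women).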
Interestingly, there was no systematic explanation for employing cognition change after experiencing dissonance. All participants employed it for overt and subtle sexist men, independent of their negative affect and level of feminism. However, in the control condition, only women low on feminism used it. We are not surprised, as without attitude violation, there is no need for cognition change. Less feminist women, conversely, may have needed to change cognitions to adapt to the feminism-aligned treatment of the control man. As for the unexpected findings on NA and PFC, Vaidis and Bran (2018) differentiated between “inconsistency resolution” and “arousal reduction” in dissonance reduction processes. Following that, we argue that our model was based on negative arousal; therefore, PFC was not a suitable variable in explaining cognitions that seek to resolve the inconsistency.
In the current study, it is evident that various cognitive dissonance reduction strategies, such as behavior and/or cognition change, can be employed depending on the individual's dispositions and the context. This study allowed participants to choose several strategies, thus enabling us to approach real life and see that strategies are not necessarily concomitant. People may change cognitions while they terminate the relationship to modify their narrative about the date and feel better, or they may use it to keep dating without feeling dissonance. McGrath (2017) argued that which strategy to use depends on its likelihood of success and effortfulness. In a dating context, termination is a conclusive strategy to end dissonance, whereas cognition change requires effortful restructuring and has the potential to recur. Therefore, the data revealed higher use of behavior change overall, even though both strategies were used.

4.1. Limitations and suggestions for further research

We treated behavior change and cognition change as concurrent independent dimensions; further research should explore reduction strategies with forced-choice paradigms. Our vignettes read from the point of view of a fictional protagonist, inducing vicarious dissonance, to avoid the attraction constant being met with resistance, as was found in our pilot study. However, making termination decisions for another person might be more straightforward, and this might be why our model did not explain the process of cognition change. We manipulated various behaviors of male gender roles and found an additive effect; different contexts and behaviors should be examined to parse this effect. Measures other than negative affect could be implemented to assess dissonance. Additionally, recruiting participants online, which resulted in a self-selected sample, might limit the generalizability of our findings. Future research can investigate how cognitive change strategies influence long-term attitudes and behaviors. Finally, our findings should be taken into consideration within the cultural characteristics of the Turkish population. In a collectivist culture enforcing traditional gender roles that reinforce evolutionary mate preferences, the dissonance feminist participants felt might be indicative of tension between their ideologies and cultural values rather than between their ideologies and sexual desire. These findings should be replicated in cultures with higher gender equality and lower traditional values.

Overall, their results demonstrate that increased contact opportunities with forced migrants contribute to increases in prejudice

The dynamic relationship between contact opportunities, positive and negative intergroup contact, and prejudice: A longitudinal investigation. Patrick Kotzur, Ulrich Wagner. Journal of Personality and Social Psychology, Jul 2020. DOI: 10.1037/pspi0000258

Abstract: We investigated the dynamics of naturally increasing contact opportunities, frequencies of positive and negative intergroup contact experiences, and prejudice toward forced migrants, in 2 three-wave longitudinal studies (Study 1, N = 183, adult community sample; Study 2, N = 758, nation-wide adult probability sample) in Germany using latent growth curve and parallel process analyses. We examined (research question 1) whether prejudice increases or decreases with increased contact opportunities; (research question 2) whether the rate of change in prejudice is related to the rate of change of positive/negative contact; (research question 3) whether the trajectories of change in prejudice shift as a function of the histories of prior positive/negative contact; and (research question 4) whether the rate of change in positive/negative contact frequencies depends on prior prejudice levels. Across both studies, prejudice increased with increased contact opportunities, as did positive and negative contact frequencies (ad research question 1). Whereas changes in negative contact were significantly related to changes in prejudice in both studies, no such relationships emerged as significant for positive contact (ad research question 2). We did not find any supportive evidence for our research questions 3 and 4. Overall, our results demonstrate that increased contact opportunities can contribute to increases in prejudice. Moreover, they indicate that the trajectories of negative contact and prejudice may be more substantially intertwined than the trajectories of positive contact and prejudice.

Monday, August 3, 2020

In India, Don’t Hate the Matchmaker: A Netflix hit about arranged marriages reflects Indian society a lot more than critics want to admit

In India, Don’t Hate the Matchmaker: A Netflix hit about arranged marriages reflects Indian society a lot more than critics want to admit. Shruti Rajagopalan. Bloomberg, August 2, 2020. https://www.bloomberg.com/opinion/articles/2020-08-02/netflix-s-indian-matchmaking-is-only-too-accurate

Even as the Netflix show “Indian Matchmaking” has grown into a global hit, it’s incensed many Indians. The issue isn’t that most couples don’t go for goat yoga on their first date. Critics accuse the show of stereotyping and commodifying women, lacking diversity and promoting a backwards vision of marriage where astrologers and meddling parents are more influential than the preferences of brides and grooms.

They complain that the series, which follows matchmaker Sima Taparia as she jets between Mumbai and the U.S. to arrange marriages, perpetuates an outdated, offensive and regressive marriage market. In fact, the real problem may be their discomfort with the way marriage works in India, with social stability prized over individual happiness.

It’s true that India’s 1.35 billion citizens occupy different centuries simultaneously. A small fraction still practices child marriage, with some communities holding betrothal ceremonies as soon as a girl is born. At the other end of the spectrum, there is growing acceptance of queer relationships, divorce and even avoiding marriage altogether.

But most Indian marriages are still arranged. That’s because, for the most part, the purpose of marriage in Indian society is not love but family, children and social stability expressed by confining marriage within caste boundaries. According to the 2011-12 India Human Development Survey, only about 5% of Indians marry outside their caste. The share has remained remarkably stable over the decades since independence, even though India’s economy and society have progressed in many other ways. [1]

Studies show that the education levels of the prospective bride or groom don’t make marriages across castes more likely. Even college-educated, urban, middle-class Indians show a strong preference to marry within caste.

This isn’t only a matter for Hindus either. Muslims in South Asia marry within their biradari or jaat — a stand-in for Hindu caste. Indian Christians differentiate between those who converted and those who came to India centuries ago; they marry based on whatever one’s caste was before conversion.

The reason Guyanese-born Nadia faces a limited set of options in the show is not because of her South American birth, but because Indians who were shipped as indentured laborers to the New World were mostly lower castes, or so perceived. American-born Indians are almost always upper caste and are highly valued in the Indian marriage market, despite, or maybe because of, their “foreign” status.

The fact that “Indian Matchmaking” packages women as slim, tall, fair, presentable, likable, flexible and so on is, once again, a consequence of using marriage to preserve caste lines. When the purpose of marriage is to find love, companionship and compatibility, then the focus is on the characteristics of the individual. The marriage market is akin to a matching market, similar to Tinder or Uber.

But, in a world where marriage exists to maintain caste lines, the nature of the marriage market more closely resembles a commodity market, where goods are graded into batches. Within every batch, the commodity is substitutable — as in wheat or coffee exchanges.

This is why reading matrimonial ads or listening to Sima going over biodatas — a kind of matrimonial resume — is triggering for many Indian women. Once caste, family, economic strata, looks, height, etc., are graded, all women within a particular grade are considered substitutes for one another, primarily to continue the family line.

[...]

[1] To understand the consequences for the few who dare go against their caste or family’s choice, a more representative film on Netflix is Nagraj Manjule’s “Sairat.”

Social weaver birds create nests that can weigh 1 ton & house 200 birds in individual chambers; their cooperative behaviors include chick rearing & defense against snakes & falcons

Not even scientists can tell these birds apart. But now, computers can. Erik Stokstad. Science Magazine, Jul 28 2020. https://www.sciencemag.org/news/2020/07/not-even-scientists-can-tell-these-birds-apart-now-computers-can
Project: Cooperation and population dynamics in the Sociable Weaver. Cape Town University, 2019. http://www.fitzpatrick.uct.ac.za/fitz/research/programmes/understanding/sociable_weavers 
The aptly named Sociable Weaver Philetairus socius is a highly social species that is endemic to the Kalahari region of southern Africa. As the common name suggests, these weavers work together to accomplish diverse tasks, from building their highly distinctive thatched nests to helping raise the chicks and defending the nest and colony mates from predators. Their fascinating social structure and different types of cooperative behaviour make them an ideal study model to investigate the benefits and costs of sociality and the evolutionary mechanisms that allow cooperation to evolve and be maintained.
Cooperation represents an evolutionary puzzle because natural selection is thought to favour selfish individuals over co-operators. However, theory and studies in humans suggest that co-operators are preferred as social and sexual partners. Partner choice may therefore provide a powerful explanation for the evolution and stability of cooperation, alongside kin selection and self-serving benefits, but we lack an understanding of its importance in natural systems.

These results suggest that rather than being simply covert partisans, nonpartisans process the world differently, in different brain areas, from their partisan counterparts

Neural nonpartisans. Darren Schreiber, Greg Fronzo, Alan Simmons, Chris Dawes, Taru Flagan & Martin Paulus. Journal of Elections, Public Opinion and Parties, Aug 3 2020. https://doi.org/10.1080/17457289.2020.1801695

ABSTRACT: While affective conflict between partisans is driving much of modern politics, it is also driving increasing numbers to eschew partisan labels. A dominant theory is that these self-proclaimed independents are merely covert partisans. In the largest functional brain imaging study of neuropolitics to date, we find differences between partisans and nonpartisans in the right medial temporal pole, orbitofrontal/medial prefrontal cortex, and right ventrolateral prefrontal cortex, three regions often engaged during social cognition. These results suggest that rather than being simply covert partisans, nonpartisans process the world in a way different from their partisan counterparts.

The Correlation-Causation Taboo: Making explicit causal inference taboo does not stop people from doing it; they just do it in a less transparent, regulated, sophisticated & informed way

The Correlation-Causation Taboo. Neuroskeptic. Discover Magazine, July 31, 2020. https://www.discovermagazine.com/the-sciences/the-correlation-causation-taboo

Should psychologists be more comfortable discussing causality?

"Correlation does not imply causation" is a basic motto of science. Every scientist knows that observing a correlation between two things doesn't necessarily mean that one of them causes the other.

But according to a provocative new paper, many researchers in psychology are drawing the wrong lessons from this motto. The paper is called The Taboo Against Explicit Causal Inference in Nonexperimental Psychology and it comes from Michael P. Grosz et al. The article makes a lot of points, but to me the main insight of the piece was this: Many studies in psychology are implicitly about causality, without openly saying as much.

Consider, for example, this highly cited 2011 study which showed that children with better self-control have better health and social outcomes years later as adults.

This 2011 paper never claimed to have shown causality. It was, after all, an observational, correlational design, and correlation is not causation. But Grosz et al. say that the study only makes sense in the context of an implicit belief that self-control does (or probably does) causally influence outcomes.

The title of the 2011 paper suggests that it was a study about predicting the outcomes. Prediction can be an important goal, but Grosz et al. point out that if the study had really been about prediction, it would make sense to consider a whole range of possible predictors. A purely predictive study wouldn't focus on a single factor. The paper also probably wouldn't be so highly cited, if readers really thought it said nothing about causality.

Grosz et al. analyze three other influential "observational" psychology papers and in all cases, they find evidence of unstated causal claims and assumptions, swept under a correlational rug.

As they put it, "Similar to when sex or drugs are made taboo, making explicit causal inference taboo does not stop people from doing it; they just do it in a less transparent, regulated, sophisticated and informed way."

The authors go on to argue that there's actually nothing wrong with talking about causality in the context of observational research — but the causal assumptions and claims need to be made explicit, so that they can be critically evaluated.

To be clear, the authors are not saying that correlation implies causation. They argue that it is sometimes possible to draw inferences about causation from correlational evidence, if we have enough evidence to rule out non-causal alternative explanations. This kind of inference is "very difficult. However, this is not a good reason to render explicit causal inference taboo."

Sunday, August 2, 2020

Dates of birth and death for more than 1,600 CEOs of large, publicly listed U.S. firms: We estimate that CEOs' lifespan increases by around two years when insulated from market discipline via anti-takeover laws

Borgschulte, Mark and Guenzel, Marius and Liu, Canyao and Malmendier, Ulrike, CEO Stress, Aging, and Death (June 1, 2020). CEPR Discussion Paper No. DP14933, Available at SSRN: https://ssrn.com/abstract=3638037

Abstract: We show that increased job demands due to takeover threats and industry crises have significant adverse consequences for managers' long-term health. Using hand-collected data on the dates of birth and death for more than 1,600 CEOs of large, publicly listed U.S. firms, we estimate that CEOs' lifespan increases by around two years when insulated from market discipline via anti-takeover laws. CEOs also stay on the job longer, with no evidence of a compensating differential in the form of lower pay. In a second analysis, we find diminished longevity arising from increases in job demands caused by industry-wide downturns during a CEO's tenure. Finally, we utilize machine-learning age-estimation methods to detect visible signs of aging in pictures of CEOs. We estimate that exposure to a distress shock during the Great Recession increases CEOs' apparent age by roughly one year over the next decade.


Rolf Degen summarizing... Contrary to an influential psychological finding, most laughs in everyday conversations were responses to something comical, and not just instances of social smoothing

What's your laughter doing there? A taxonomy of the pragmatic functions of laughter. Chiara Mazzocconi, Ye Tian, Jonathan Ginzburg. IEEE Transactions on Affective Computing, May 2020. https://ieeexplore.ieee.org/abstract/document/9093177

Abstract: Laughter is a crucial signal for communication and managing interactions. Until now no consensual approach has emerged for classifying laughter. We propose a new framework for laughter analysis and classification, based on the pivotal assumption that laughter has propositional content. We propose an annotation scheme to classify the pragmatic functions of laughter taking into account the form, the laughable, the social, situational, and linguistic context. We apply the framework and taxonomy proposed in a multilingual corpus study (French, Mandarin Chinese and English), involving a variety of situational contexts. Our results give rise to novel generalizations about the range of meanings laughter exhibits, the placement of the laughable, and how placement and arousal relate to the functions of laughter. We have tested and refuted the validity of the commonly accepted assumption that laughter directly follows its laughable. In the concluding section, we discuss the implications our work has for spoken dialogue systems. We stress that laughter integration in spoken dialogue systems is not only crucial for emotional and affective computing aspects, but also for aspects related to natural language understanding and pragmatic reasoning. We formulate the emergent computational challenges for incorporating laughter in spoken dialogue systems.


The Great Stagnation (the fifty-year decline in growth for the U.S. and other advanced economies) – Causes and Cures

Carr, Douglas, The Great Stagnation – Causes and Cures (July 28, 2020). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3662638

Abstract
This paper addresses the fifty-year decline in growth for the U.S. and other advanced economies.
The paper develops a growth model based upon an economy’s capital accounts and illustrates how customary growth factors such as labor and total factor productivity are embedded within investment ratios, permitting estimation of investment, which largely determines the growth rate as well as the natural rate of interest, the capital factor share of growth. The model explains declines of these measures and finds convergence among natural interest, total factor productivity, and labor growth.

The paper identifies two investment regimes which crossed paths in the U.S. in the early 1970’s, one based upon depreciation and the other determined by the capital factor share of the private market sector. Constrictions on the private market sector from growing government spending limit the potential for higher levels of private investment necessary to offset greater depreciation from rapid obsolescence of increasingly high-tech investments.

Present trends worsen stagnation, but lifting constriction of private investment would allow full realization of benefits from technology investment’s high productivity, boosting U.S. growth to over 7% annually, and would benefit other advanced economies as well.
Keywords: Growth model, economic growth, investment, interest rate, natural rate of interest

JEL Classification: E22, E43, E44, F43, O16, O41, O47


Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries

Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries. Samantha J. Heintzelman et al. Journal of Research in Personality, August 1, 2020, 104003. https://doi.org/10.1016/j.jrp.2020.104003

Highlights
• Meaning in life was perceived to be both created and discovered, and to be common.
• Beliefs about meaning related to experiences of meaning in life.
• Technology was perceived as both providing supports and challenges to meaning.
• There were national differences in perceptions and experiences of meaning.
• Relationships and happiness were rated as top sources of meaning across 8 nations.

Abstract: We examined how lay beliefs about meaning in life relate to experiences of personal meaning. In Study 1 (N=406) meaning in life was perceived to be a common experience, but one that requires effort to attain, and these beliefs related to levels of meaning in life. Participants viewed their own lives as more meaningful than the average person’s, and technology as both creating challenges and providing supports for meaning. Study 2 (N=1,719) showed cross-country variation in levels of and beliefs about meaning across eight countries. However, social relationships and happiness were identified as the strongest sources of meaning in life consistently across countries. We discuss the value of lay beliefs for understanding meaning in life both within and across cultures.

Keywords: meaning in life; psychological well-being; lay beliefs; cross-cultural

Check also Meaning and Evolution: Why Nature Selected Human Minds to Use Meaning. Roy F. Baumeister and William von Hippel. Evolutionary Studies in Imaginative Culture, Vol. 4, No. 1, Symposium on Meaning and Evolution (Spring 2020), pp. 1-18. https://www.bipartisanalliance.com/2020/05/the-scientific-worldview-suggested-that.html

Also Happiness, Meaning, and Psychological Richness. Shigehiro Oishi, Hyewon Choi, Minkyung Koo, Iolanda Galinha, Keiko Ishii, Asuka Komiya, Maike Luhmann, Christie Scollon, Ji-eun Shin, Hwaryung Lee, Eunkook M. Suh, Joar Vittersø, Samantha J. Heintzelman, Kostadin Kushlev, Erin C. Westgate, Nicholas Buttrick, Jane Tucker, Charles R. Ebersole, Jordan Axt, Elizabeth Gilbert, Brandon W. Ng, Jaime Kurtz & Lorraine L. Besser. Affective Science, volume 1, pages 107–115, Jun 23 2020. https://www.bipartisanalliance.com/2020/06/investigating-whether-some-people.html

Saturday, August 1, 2020

Driverless dilemmas (the need for autonomous vehicles to make high-stakes ethical decisions): Those arguments are too contrived to be of practical use and are an inappropriate method for making decisions on issues of safety

Doubting Driverless Dilemmas. Julian De Freitas et al. Perspectives on Psychological Science, July 31, 2020. https://doi.org/10.1177/1745691620922201

Abstract: The alarm has been raised on so-called driverless dilemmas, in which autonomous vehicles will need to make high-stakes ethical decisions on the road. We argue that these arguments are too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy.

Keywords: moral judgment, autonomous vehicles, driverless policy



Trolley dilemmas are incredibly unlikely to occur on real roads

The point of the two-alternative forced-choice in the thought experiments is to simplify real-world complexity and expose people’s intuitions clearly. But such situations are vanishingly unlikely on real roads. This is because they require that the vehicle will certainly kill one individual or another, with no other location to steer the vehicle, no way to buy more time, and no steering maneuver other than driving head-on to a death. Some variants of the dilemmas also assume that AVs can gather information about the social characteristics of people, e.g., whether they are criminals, or contributors to society. Yet many of these social characteristics are inherently unobservable. You can’t ethically choose whom to kill if you don’t know whom you are choosing between.

Lacking in these discussions are realistic examples or evidence of situations where human drivers have had to make such choices. This makes it premature to consider them as part of any practical engineering endeavor (Dewitt, Fischhoff et al., 2019). The authors of these papers acknowledge this point, saying, for example, that “it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations” yet they nevertheless say, “Regardless of how rare these cases are, we need to agree beforehand how they should be solved” (p. 59) (Awad et al., 2018). We disagree. Without evidence that (i) such situations occur, and (ii) the social alternatives in the thought experiments can be identified in reality, it is unhelpful to consider them when making AV policies or regulations.

Trolley dilemmas cannot be reliably detected by any real-world perception system

For the purposes of a thought experiment, it is simplifying to assume that one is already in a trolley dilemma. But on real roads, the AV would have to detect this fact, which means that it would first need to be trained how to do this perfectly. After all, since the overwhelming majority of driving is not a trolley dilemma, a driver should only choose to hit someone if they’re definitely in a trolley dilemma. The problem is that it is nearly impossible for a driver to robustly differentiate when they are in a true dilemma that forces them to choose between whom to hit (and possibly kill), versus an ordinary emergency that doesn’t require such a drastic action. Accurately detecting this distinction would require unrealistic capabilities for technology in the present or near future, including (i) knowing all relevant physical details about the environment that could influence whether less deadly options are viable, e.g., the speed of each car’s braking system, and slipperiness of the road, (ii) accurately simulating all the ways the world could unfold, so as to confirm that one is in a true dilemma no matter what happens next, and (iii) anticipating the reactions and actions of pedestrians and drivers, so that their choices can be taken into account. Trying to teach AVs to solve trolley dilemmas is thus a risky safety strategy, because the AV must optimize toward solving a dilemma whose very existence is incredibly challenging to detect. Finally, if we take a learning approach to this problem, then these algorithms need to be exposed to a large number of dilemmas. Yet the conspicuous absence of such dilemmas from real roads means that they would need to be simulated and multiplied within any dataset, potentially introducing unnatural behavioral biases when AVs are deployed on real roads, e.g., ‘hallucinating’ dilemmas where there aren’t any.


Trolley dilemmas cannot be reliably acted upon by any real-world control system

Driverless dilemmas also assume a fundamental paradox: An AV has the freedom to make a considered decision about which of two people to harm, yet does not have enough control to instead take some simple action, like swerving or slowing down, to avoid harming anyone altogether (Himmelreich, 2018). In reality, if an AV is in such a bad emergency that it only has two options left, it’s unlikely that these options will neatly map onto two options that require a moral rule to arbitrate between. Similarly, even if an AV does have a particular moral choice planned, the more constrained its options are the less likely it is to have the control to successfully execute a choice, and if it can’t execute a choice, then there’s no real dilemma.

Friday, July 31, 2020

Does indoctrination of youngsters work? Teaching the ethics of eating meat shows robust decreases in meat consumption

Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat. Eric Schwitzgebel, Bradford Cokelet, Peter Singer. Cognition, Volume 203, October 2020, 104397. https://doi.org/10.1016/j.cognition.2020.104397

Abstract: Do university ethics classes influence students' real-world moral choices? We aimed to conduct the first controlled study of the effects of ordinary philosophical ethics classes on real-world moral choices, using non-self-report, non-laboratory behavior as the dependent measure. We assigned 1332 students in four large philosophy classes to either an experimental group on the ethics of eating meat or a control group on the ethics of charitable giving. Students in each group read a philosophy article on their assigned topic and optionally viewed a related video, then met with teaching assistants for 50-minute group discussion sections. They expressed their opinions about meat ethics and charitable giving in a follow-up questionnaire (1032 respondents after exclusions). We obtained 13,642 food purchase receipts from campus restaurants for 495 of the students, before and after the intervention. Purchase of meat products declined in the experimental group (52% of purchases of at least $4.99 contained meat before the intervention, compared to 45% after) but remained the same in the control group (52% both before and after). Ethical opinion also differed, with 43% of students in the experimental group agreeing that eating the meat of factory farmed animals is unethical compared to 29% in the control group. We also attempted to measure food choice using vouchers, but voucher redemption rates were low and no effect was statistically detectable. It remains unclear what aspect of instruction influenced behavior.
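The headline contrast (52% of experimental-group purchases containing meat before vs. 45% after, with the control flat at 52%) can be sanity-checked with a simple two-proportion z-test. The counts below are hypothetical stand-ins for the reported percentages, and a naive test ignores that the 13,642 receipts cluster within 495 students, so it overstates precision; this is illustrative only:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test; returns (z, two-sided p) using the
    pooled-proportion standard error and a normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tails
    return z, p_two

# Hypothetical purchase counts matching the reported 52% -> 45% drop
# (the paper reports percentages, not these exact counts):
z, p = two_prop_z(1040, 2000, 900, 2000)
print(round(z, 2), p)
```

With counts of this magnitude a 7-point drop is far beyond chance under the naive model; a proper analysis would model the within-student clustering, as the paper does.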

Keywords: Consumer choice; Ethics instruction; Experimental philosophy; Moral psychology; Moral reasoning; Vegetarianism

Check also Chapter 15. The Behavior of Ethicists. Eric Schwitzgebel and Joshua Rust. In A Companion to Experimental Philosophy, edited by Justin Sytsma and Wesley Buckwalter. Aug 17 2017. https://www.bipartisanalliance.com/2017/08/the-behavior-of-ethicists-ch-15-of.html

Scientists shocked! Modeling of rainfall, drought, flooding, and extreme storms is poor: “It could mean we’re not getting future climate projections right.”

Missed wind patterns are throwing off climate forecasts of rain and storms. Paul Voosen. Science Magazine, Jul 29, 2020, doi:10.1126/science.abe0713

Climate scientists can confidently tie global warming to impacts such as sea-level rise and extreme heat. But ask how rising temperatures will affect rainfall and storms, and the answers get a lot shakier. For a long time, researchers chalked the problem up to natural variability in wind patterns—the inherently unpredictable fluctuations of a chaotic atmosphere.

Now, however, a new analysis has found that the problem is not with the climate, it’s with the massive computer models designed to forecast its behavior. “The climate is much more predictable than we previously thought,” says Doug Smith, a climate scientist at the United Kingdom’s Met Office who led the 39-person effort published this week in Nature. But models don’t capture that predictability, which means they are unlikely to correctly predict the long-term changes that are most influenced by large-scale wind patterns: rainfall, drought, flooding, and extreme storms. “Obviously we need to solve it,” Smith says.

The study, which includes authors from several leading modeling centers, casts doubt on many forecasts of regional climate change, which are crucial for policymaking. It also means efforts to attribute specific weather events to global warming, now much in vogue, are rife with errors. “The whole thing is concerning,” says Isla Simpson, an atmospheric dynamicist and modeler at the National Center for Atmospheric Research, who was not involved in the study. “It could mean we’re not getting future climate projections right.”

The study does not cast doubt on forecasts of overall global warming, which is driven by human emissions of greenhouse gases. And it has a hopeful side: If models could be refined to capture the newfound predictability of winds and rains, they could be a boon for farming, flood management, and much else, says Laura Baker, a meteorologist at the University of Reading who was not involved in the study. “If you have reliable seasonal forecasts, that could make a big difference.”

The study stems from efforts at the Met Office to predict changes in the North Atlantic Oscillation (NAO), a large-scale wind pattern driven by the air pressure difference between Iceland and the Azores. The pressure difference reverses every few years, shunting the jet stream north or south; a more northerly jet stream drives warm, wet winters in northern Europe while drying out the continent’s south, and vice versa. In previous attempts to project the pattern decades into the future, a single model might yield opposite forecasts in different runs. The uncertainty seemed “huge and irreducible,” Smith says.

At first, the Met Office model did no better. But when the team ran the same model multiple times, with slightly different initial conditions, to forecast the NAO a season or a year into the future, a weak signal appeared in the ensemble average. Although it did not match the strength of the real NAO, it did match the overall pattern of its gyrations. But on individual model runs, the signal was drowning in noise.

The new work uses an ensemble of 169 model runs to find the same weak but predictable NAO pattern persisting for up to a decade. For each year since 1960, the team forecast the NAO pattern 2 to 9 years into the future. When compared with weather records, the ensemble results showed the same pattern, ultimately explaining four-fifths of the NAO’s behavior. The massive computational effort suggests changes in the NAO are an order of magnitude more predictable than models capture, Smith says. It also suggests individual models aren’t properly accounting for the ocean or atmospheric forces shaping the NAO.
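The ensemble logic, a weak signal buried in any single run but emerging in the average of many, can be illustrated with a toy sketch (synthetic data, not the Met Office model; all names and numbers here are made up):

```python
import random
from math import sqrt

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

rng = random.Random(42)
years = 55                                         # ~1960 onward, one value per year
signal = [rng.gauss(0, 1) for _ in range(years)]   # stand-in "true NAO"

def ensemble_mean(n_runs, noise_sd=3.0):
    """Each run = weak shared signal + large run-specific noise;
    averaging n_runs shrinks the noise by sqrt(n_runs)."""
    runs = [[s + rng.gauss(0, noise_sd) for s in signal] for _ in range(n_runs)]
    return [sum(col) / n_runs for col in zip(*runs)]

print(round(corr(signal, ensemble_mean(1)), 2),    # single run: signal buried
      round(corr(signal, ensemble_mean(169)), 2))  # 169 runs: signal recovered
```

With run noise three times the signal, a single run correlates only weakly with the truth, while the 169-member mean tracks it closely; this is the same arithmetic that lets a weak NAO signal survive in the ensemble average.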

The missed predictability appears to be universal. “This is being pursued everywhere,” says Yochanan Kushnir, a climate scientist at Columbia University, whose team reported last week in Scientific Reports that rainfall in the Sahel zone is more predictable than models indicate. In forthcoming work, a group led by Benjamin Kirtman, an atmospheric scientist and model developer at the University of Miami, will flag similar missed predictability in wind patterns above many of the world’s oceans.

Kirtman thinks something fundamental is wrong with the models’ code. For the time being, he says, “You’re probably making pretty profound mistakes in your climate change assessment” by relying on regional forecasts. For example, models predicted that the Horn of Africa, which is heavily influenced by Indian Ocean winds, would get wetter with climate change. But since the early 1990s, rains have plummeted and the region has dried.

The missing predictability also undermines so-called event attribution, which attempts to link extreme weather to climate change by using models to predict how sea surface warming is altering wind patterns. The changes in winds, in turn, affect the odds of extreme weather events, like hurricanes or floods. But the new work suggests “the probabilities they derive will probably not be correct,” Smith says.

What’s not clear yet is why climate models get circulation changes so wrong. One leading hypothesis is that the models fail to capture feedbacks into overall wind patterns from individual weather systems, called eddies. “Part of that eddy spectrum may simply be missing,” Smith says. Models do try to approximate the effects of eddies, but at just kilometers across, they are too small to simulate directly. The problem could also reflect poor rendering of the stratosphere, or of interactions between the ocean and atmosphere. “It’s fascinating,” says Jennifer Kay, a climate scientist at the University of Colorado, Boulder. “But there’s also a lot left unanswered.”

While researchers around the globe hunt down the missing predictability, Smith and his colleagues will take advantage of the weak NAO signal they have in hand. The Met Office and its partners announced this month they will produce temperature and precipitation forecasts looking 5 years ahead, and will use the NAO signal to help calibrate regional climate forecasts for Europe and elsewhere.

But until modelers figure out how to confidently forecast changes in the winds, Smith says, “We can’t take the models at face value.”

Population studies suggest that increased availability of pornography is associated with reduced sexual aggression at the population level

Pornography and Sexual Aggression: Can Meta-Analysis Find a Link? Christopher J. Ferguson, Richard D. Hartley. Trauma, Violence, & Abuse, July 21, 2020. https://doi.org/10.1177/1524838020942754

Abstract: Whether pornography contributes to sexual aggression in real life has been the subject of dozens of studies over multiple decades. Nevertheless, scholars have not come to a consensus about whether effects are real. The current meta-analysis examined experimental, correlational, and population studies of the pornography/sexual aggression link dating back from the 1970s to the current time. Methodological weaknesses were very common in this field of research. Nonetheless, evidence did not suggest that nonviolent pornography was associated with sexual aggression. Evidence was particularly weak for longitudinal studies, suggesting an absence of long-term effects. Violent pornography was weakly correlated with sexual aggression, although the current evidence was unable to distinguish between a selection effect as compared to a socialization effect. Studies that employed more best practices tended to provide less evidence for relationships whereas studies with citation bias, an indication of researcher expectancy effects, tended to have higher effect sizes. Population studies suggested that increased availability of pornography is associated with reduced sexual aggression at the population level. More studies with improved practices and preregistration would be welcome.

Keywords: pornography, sexual aggression, rape, domestic violence


It seems that the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present doesn't require episodic memory

Getting Better Without Memory. Julia G Halilova, Donna Rose Addis, R Shayna Rosenbaum. Social Cognitive and Affective Neuroscience, nsaa105, July 30 2020. https://doi.org/10.1093/scan/nsaa105

Abstract: Does the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present require episodic memory? A developmental amnesic person with impaired episodic memory (H.C.) was compared with two groups of age-matched controls on tasks assessing the Big Five personality traits and social competence in relation to the past, present, and future. Consistent with previous research, controls believed that their personality had changed more in the past five years than it will change in the next five years (i.e. the end-of-history illusion), and rated their present and future selves as more socially competent than their past selves (i.e. social improvement illusion), although this was moderated by self-esteem. Despite her lifelong episodic memory impairment, H.C. also showed these biases of temporal self-appraisal. Together, these findings do not support the theory that the temporal extension of the self-concept requires the ability to recollect richly detailed memories of the self in the past and future.

Keywords: episodic memory, self-appraisal, developmental amnesia, case study, end-of-history illusion, social improvement illusion


Effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy: Those who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’

van Allen, Zack, Deanna Walker, Tamir Streiner, and John M. Zelenski. 2020. “Enacted Extraversion as a Well-being Enhancing Strategy in Everyday Life.” PsyArXiv. July 30. doi:10.31234/osf.io/349yh

Abstract: Lab-based experiments and observational data have consistently shown that extraverted behavior is associated with elevated levels of positive affect. This association typically holds regardless of one’s dispositional level of trait extraversion, and individuals who enact extraverted behaviors in laboratory settings do not demonstrate costs associated with acting counter-dispositionally. Inspired by these findings, we sought to test the efficacy of week-long ‘enacted extraversion’ interventions. In three studies, participants engaged in fifteen minutes of assigned behaviors in their daily life for five consecutive days. Studies 1 and 2 compared the effect of adding more introverted or extraverted behavior (or a control task). Study 3 compared the effect of adding social extraverted behavior or non-social extraverted behavior (or a control task). We assessed positive affect and several indicators of well-being during pretest (day 1) and post-test (day 7), as well as ‘in-the-moment’ (days 2-6). Participants who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’ when compared to introverted and control behaviors. We did not observe strong evidence to suggest that this effect was more pronounced for dispositional extraverts. The current research explores the effects of extraverted behavior on other indicators of well-being and examines the effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy.




Women report feeling bad about themselves after breakup sex, maybe due to women’s sexual regret when participating in a one-time sexual encounter

The psychology of breakup sex: Exploring the motivational factors and affective consequences of post-breakup sexual activity. James B. Moran, T. Joel Wade, Damian R. Murray. Evolutionary Psychology, July 30, 2020. https://doi.org/10.1177/1474704920936916

Abstract: Popular culture has recently publicized a seemingly new postbreakup behavior called breakup sex. While the media expresses the benefits of participating in breakup sex, there is no research to support these claimed benefits. The current research was designed to begin to better understand this postbreakup behavior. In the first study, we examined how past breakup sex experiences made the individuals feel and how people predict they would feel in the future (n = 212). Results suggested that men are more likely than women to have felt better about themselves, while women tend to state they felt better about the relationship after breakup sex. The second study (n = 585) investigated why men and women engage in breakup sex. Results revealed that most breakup sex appears to be motivated by three factors: relationship maintenance, hedonism, and ambivalence. Men tended to support hedonistic and ambivalent reasons for having breakup sex more often than women. The two studies revealed that breakup sex may be differentially motivated (and may have different psychological consequences) for men and women and may not be as beneficial as the media suggests.

Keywords: breakup sex, sexual strategy theory, fiery limbo, postbreakup behavior, ex-sex, gender differences


Study 1: Discussion
Study 1 was conducted to understand how individuals feel when they have engaged in breakup sex and how they might feel about it in the future. The 11 items were further used to assess whether men and women differed. Results revealed that men, more than women, reported greater receptivity to breakup sex regardless of extraneous factors in the relationship (e.g., differences in mate value, who initiated the breakup).

There was no gender difference regarding whether individuals would have breakup sex if they loved their partner. However, unexpectedly, men more than women reported that they would participate in sexual behaviors they normally would not engage in. This engagement in atypical/less frequent sexual behavior may reflect a mate retention tactic since research indicates that men perform oral sex as a benefit-provisioning mate retention tactic (Pham & Shackelford, 2013). Thus, performing sexual behaviors they normally would not do could be an indicator of mate retentive behaviors.

The hypothesis that women would report feeling bad about themselves was supported. This finding could be due to women’s sexual regret when participating in a one-time sexual encounter (Eshbaugh & Gute, 2008; Galperin et al., 2013). These findings are contrary to the popular media idea that breakup sex is good for both men and women. These results suggest that men feel better than women after breakup sex and would have it for somewhat different reasons than women would.



Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Relationships: Those who identified as male or non-binary reported more such fantasies than those who identified as female

Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Romantic Relationships. Justin J. Lehmiller. Archives of Sexual Behavior, Jul 29 2020. https://rd.springer.com/article/10.1007/s10508-020-01788-7

Abstract: The present research explored fantasies about consensual nonmonogamous relationships (CNMRs) and the factors that predict such fantasies in a large and diverse online sample (N = 822) of persons currently involved in monogamous relationships. Nearly one-third (32.6%) of participants reported that being in some type of sexually open relationship was part of their favorite sexual fantasy of all time, of whom most (80.0%) said that they want to act on this fantasy in the future. Those who had shared and/or acted on CNMR fantasies previously generally reported positive outcomes (i.e., meeting or exceeding their expectations and improving their relationships). In addition, a majority of participants reported having fantasized about being in a CNMR at least once before, with open relationships being the most popular variety. Those who identified as male or non-binary reported more CNMR fantasies than those who identified as female. CNMR fantasies were also more common among persons who identified as anything other than heterosexual and among older adults. Erotophilia and sociosexual orientation were uniquely and positively associated with CNMR fantasies of all types; however, other individual difference factors (e.g., Big Five personality traits, attachment style) had less consistent associations. Unique predictors of infidelity fantasies differed from CNMR fantasies, suggesting that they are propelled by different psychological factors. Overall, these results suggest that CNMRs are a popular fantasy and desire among persons in monogamous romantic relationships. Clinical implications and implications for sexual fantasy research more broadly are discussed.



Thursday, July 30, 2020

Strongly unified belief in the linear non-threshold model among panel members and their refusal to acknowledge that a low dose of radiation could exhibit a threshold, & an excessive degree of self-interest

The Muller-Neel dispute and the fate of cancer risk assessment. Edward J. Calabrese. Environmental Research, July 23 2020, 109961. https://www.sciencedirect.com/science/article/abs/pii/S0013935120308562

ABSTRACT: The National Academy of Sciences (NAS) Atomic Bomb Casualty Commission (ABCC) human genetic study (i.e., The Neel and Schull, 1956a report) showed an absence of genetic damage in offspring of atomic bomb survivors in support of a threshold model, but was not considered for evaluation by the NAS Biological Effects of Atomic Radiation (BEAR) I Genetics Panel. The study therefore could not impact the Panel's decision to recommend the linear non-threshold (LNT) dose-response model for risk assessment.1 Summaries and transcripts of the Panel meetings failed to reveal an evaluation of this study, despite its human relevance and ready availability, relying instead on data from Drosophila and mice. This paper explores correspondence among and between BEAR Genetics Panel members, including James Neel, the study director, and other contemporaries to assess why the Panel failed to use these data and how the decision to recommend the LNT model affected future cancer risk assessment policies and practices. This failure of the Genetics Panel was due to: (1) a strongly unified belief in the LNT model among panel members and their refusal to acknowledge that a low dose of radiation could exhibit a threshold, a conclusion that the Neel/Schull atomic bomb study could support, and (2) an excessive degree of self-interest among panel members who experimented with animal models, such as Hermann J. Muller, and feared that human genetic studies would expose the limitations of extrapolating from animal (especially Drosophila) to human responses and would strongly shift research investments/academic grants from animal to human studies. Thus, the failure to consider the Neel/Schull atomic bomb study served both the purposes of preserving the LNT policy goal and ensuring the continued dominance of Muller and his similarly research-oriented colleagues.


6. Conclusion

Human genetic data from over 25 years of the ABCC study (i.e., 1946–1972) demonstrated support for a threshold model for radiation-induced genetic damage in humans, but that information was first ignored and then rejected by the BEAR I and BEIR II Genetics Committees, respectively. The findings, now nearly 50 years later (Grant et al., 2015), have consistently continued to contradict a linear dose response, supporting a threshold response for a complex array of endpoints of genetic damage in humans. Furthermore, the decision to base the LNT recommendation on the male mouse data of Russell is now seen as flawed (Calabrese, 2017a,b), providing no support for the BEIR (1972) decision in favor of LNT.

The failure to assess the human genetic study of Neel and Schull (1956a) at this most crucial time in risk-assessment history represents a profound abrogation of responsibility by the NAS leadership and the BEAR Genetics Panels. This affirmative “failure of responsibility” appears to have been a goal of Muller as it would ensure the adoption of LNT and the continued professional dominance of Muller and his like-thinking and similar research-oriented colleagues. The adoption of LNT occurred during a “perfect storm” consisting of: heightened societal fear of nuclear confrontation; continuing nuclear fallout from atmospheric testing; ideologically based policy and scientific leadership of the Rockefeller Foundation and the US NAS; and a handpicked, highly LNT-biased Genetics Panel that was dominated by an even more-determined Hermann Muller to ensure adoption of the LNT. This history should represent a profound embarrassment to the US NAS, regulatory agencies worldwide, and especially the US EPA, and the risk-assessment community whose founding principles were so ideologically determined and accepted with little if any critical reflection.


Novel psychological construct characterised by high empathy and dark traits, the Dark Empath, is identified and described relative to personality, aggression, dark triad (DT) facets and wellbeing

The Dark Empath: Characterising dark traits in the presence of empathy. Nadja Heym et al. Personality and Individual Differences, July 29 2020, 110172. https://doi.org/10.1016/j.paid.2020.110172

Highlights
• Latent profile analysis identifies 4 groups based on empathy and dark traits.
• Dark empath (DE, high empathy, dark traits) partly maintains an antagonistic core.
• DE and DT (low empathy, dark traits) are similar in vulnerable dark triad facets.
• DE and DT differ in extraversion, agreeableness, indirect aggression & wellbeing.
• Outside of the dark triad (empaths, typicals), empathy is unrelated to aggression.

Abstract: A novel psychological construct characterised by high empathy and dark traits, the Dark Empath (DE), is identified and described relative to personality, aggression, dark triad (DT) facets and wellbeing. Participants (n = 991) were assessed for narcissism, Machiavellianism, psychopathy, cognitive empathy and affective empathy. Sub-cohorts also completed measures of (i) personality (BIG5), indirect interpersonal aggression (n = 301); (ii) DT facets of vulnerable and grandiose narcissism, primary and secondary psychopathy and Machiavellianism (n = 285); and (iii) wellbeing (depression, anxiety, stress, anhedonia, self-compassion; n = 240). Latent profile analysis identified a four-class solution comprising the traditional DT (n = 128; high DT, low empathy), DE (n = 175; high DT, high empathy), Empaths (n = 357; low DT, high empathy) and Typicals (n = 331; low DT, average empathy). DT and DE were higher in aggression and DT facets, and lower in agreeableness than Typicals and Empaths. DE had higher extraversion and agreeableness, and lower aggression than DT. DE and DT did not differ in grandiose and vulnerable DT facets, but DT showed lower wellbeing. The DE is less aggressive and shows better wellbeing than DT, but partially maintains an antagonistic core, despite having high extraversion. The presence of empathy did not increase risk of vulnerability in the DE.


Music training is ineffective regardless of outcome measure (verbal, non-verbal, speed-related, etc.), participants’ age, & duration of training; & has no impact on people’s non-music cognitive skills or academic achievement

Cognitive and academic benefits of music training with children: A multilevel meta-analysis. Giovanni Sala & Fernand Gobet. Memory & Cognition, Jul 29 2020. https://rd.springer.com/article/10.3758/s13421-020-01060-2

Abstract: Music training has repeatedly been claimed to positively impact children’s cognitive skills and academic achievement (literacy and mathematics). This claim relies on the assumption that engaging in intellectually demanding activities fosters particular domain-general cognitive skills, or even general intelligence. The present meta-analytic review (N = 6,984, k = 254, m = 54) shows that this belief is incorrect. Once the quality of study design is controlled for, the overall effect of music training programs is null (ḡ ≈ 0) and highly consistent across studies (τ² ≈ 0). Results of Bayesian analyses employing distributional assumptions (informative priors) derived from previous research in cognitive training corroborate these conclusions. Small statistically significant overall effects are obtained only in those studies implementing no random allocation of participants and employing non-active controls (ḡ ≈ 0.200, p < .001). Interestingly, music training is ineffective regardless of the type of outcome measure (e.g., verbal, non-verbal, speed-related, etc.), participants’ age, and duration of training. Furthermore, we note that, beyond meta-analysis of experimental studies, a considerable amount of cross-sectional evidence indicates that engagement in music has no impact on people’s non-music cognitive skills or academic achievement. We conclude that researchers’ optimism about the benefits of music training is empirically unjustified and stems from misinterpretation of the empirical data and, possibly, confirmation bias.
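The headline statistics (pooled effect ≈ 0 and between-study variance τ² ≈ 0) can be illustrated with the classic DerSimonian-Laird random-effects estimator. The paper fits a more elaborate multilevel model; the sketch below is the simpler textbook version, run on made-up study effects that, like the paper's, scatter tightly around zero:

```python
from math import sqrt

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    Returns (pooled effect, tau^2 between-study variance estimate)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                        # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Hypothetical study effects (Hedges' g) and sampling variances:
gs = [0.05, -0.03, 0.02, 0.00, -0.04, 0.03]
vs = [0.02, 0.03, 0.025, 0.02, 0.03, 0.025]
pooled, tau2 = random_effects(gs, vs)
print(round(pooled, 3), round(tau2, 3))
```

Because the effects vary far less than their sampling error would allow, Q falls below its degrees of freedom and τ² is truncated to zero: a pattern of small, homogeneous effects consistent with a true null.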


Wednesday, July 29, 2020

Political affiliation of prospective partners: Those in the political out-group are seen as less attractive, less dateable, and less worthy of matchmaking efforts; these effects are modest in size

The Democracy of Dating: How Political Affiliations Shape Relationship Formation. Matthew J. Easton and John B. Holbein. Journal of Experimental Political Science, Jul 29 2020. https://doi.org/10.1017/XPS.2020.21

Abstract: How much does politics affect relationship building? Previous experimental studies have come to vastly different conclusions – ranging from null to truly transformative effects. To explore these differences, this study replicates and extends previous research by conducting five survey experiments meant to expand our understanding of how politics does/does not shape the formation of romantic relationships. We find that people, indeed, are influenced by the politics of prospective partners; respondents evaluate those in the political out-group as being less attractive, less dateable, and less worthy of matchmaking efforts. However, these effects are modest in size – falling almost exactly in between previous study estimates. Our results shine light on a literature that has, up until this point, produced a chasm in study results – a vital task given concerns over growing levels of partisan animus in the USA and the rapidly expanding body of research on affective polarization.


Dementia Incidence Among US Adults Born 1893-1949: Incidence is lower for those born after the mid-1920s, & this lower incidence is not associated with early-life environment as measured in this study

Association of Demographic and Early-Life Socioeconomic Factors by Birth Cohort With Dementia Incidence Among US Adults Born Between 1893 and 1949. Sarah E. Tom et al. JAMA Netw Open. 2020;3(7):e2011094, July 27 2020, doi:10.1001/jamanetworkopen.2020.11094

Key Points
Question  Are dementia incidence trends by birth cohort associated with early-life environment?

Findings  In this cohort study of 4277 participants in the Adult Changes in Thought study who were born between 1893 and 1949 and were followed up for up to 20 years (1994-2015), the age- and sex-adjusted dementia incidence was lower among those born during the Great Depression (1929-1939) and the period during World War II and postwar (1940-1949) compared with those born in the period before the Great Depression (1921-1928). The association between birth cohort and dementia incidence remained when accounting for early-life socioeconomic environment, educational level, and late-life vascular risk factors.

Meaning  The study’s findings indicate that dementia incidence is lower for individuals born after the mid-1920s compared with those born earlier, and this lower incidence is not associated with early-life environment as measured in this study.


Abstract
Importance  Early-life factors may be important for later dementia risk. The association between a more advantaged early-life environment, as reflected through an individual’s height and socioeconomic status indicators, and decreases in dementia incidence by birth cohort is unknown.

Objectives  To examine the association of birth cohort and early-life environment with dementia incidence among participants in the Adult Changes in Thought study from 1994 to 2015.

Design, Setting, and Participants  This prospective cohort study included 4277 participants from the Adult Changes in Thought study, an ongoing longitudinal population-based study of incident dementia in a random sample of adults 65 years and older who were born between 1893 and 1949 and are members of Kaiser Permanente Washington in the Seattle region. Participants in the present analysis were followed up from 1994 to 2015. At enrollment, all participants were dementia-free and completed a baseline evaluation. Subsequent study visits were held every 2 years until a diagnosis of dementia, death, or withdrawal from the study. Participants were categorized by birth period (defined by historically meaningful events) into 5 cohorts: pre–World War I (1893-1913), World War I and Spanish influenza (1914-1920), pre–Great Depression (1921-1928), Great Depression (1929-1939), and World War II and postwar (1940-1949). Participants’ height, educational level, childhood financial stability, and childhood household density were examined as indicators of early-life environment, and later-life vascular risk factors for dementia were assessed. Cox proportional hazards regression models, adjusted for competing survival risk, were used to analyze data. Data were analyzed from June 1, 2018, to April 29, 2020.

Main Outcomes and Measures  Participants completed the Cognitive Abilities Screening Instrument every 2 years to assess global cognition. Those with scores indicative of cognitive impairment completed an evaluation for dementia, with dementia diagnoses determined during consensus conferences using criteria from the Diagnostic and Statistical Manual of Mental Disorders, 4th edition.

Results  Among 4277 participants, the mean (SD) age was 74.5 (6.4) years, and 2519 participants (58.9%) were women. The median follow-up was 8 years (interquartile range, 4-12 years), with 730 participants developing dementia over 24 378 person-years. The age-specific dementia incidence was lower for those born in 1929 and later compared with those born earlier. Compared with participants born in the pre–Great Depression years (1921-1928), the age- and sex-adjusted hazard ratio was 0.67 (95% CI, 0.53-0.85) for those born in the Great Depression period (1929-1939) and 0.62 (95% CI, 0.29-1.31) for those born in the World War II and postwar period (1940-1949). Although indicators of a more advantaged early-life environment and higher educational level (college or higher) were associated with a lower incidence of dementia, these variables did not explain the association between birth cohort and dementia incidence, which remained when vascular risk factors were included and were similar by sex.
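The headline counts in the Results imply a crude incidence rate that is easy to check. A minimal sketch using only the reported numbers (the per-1,000 person-year unit is my choice of scale; the paper's age-specific rates come from Cox survival models, not this pooled crude ratio):

```python
# Crude dementia incidence implied by the reported counts:
# 730 incident cases over 24,378 person-years of follow-up.
cases = 730
person_years = 24_378

# Rate per 1,000 person-years. This is a simplification: the study
# reports age-specific incidence from Cox models adjusted for the
# competing risk of death, not this single pooled ratio.
rate_per_1000 = cases / person_years * 1000
print(f"crude incidence: {rate_per_1000:.1f} per 1,000 person-years")
# prints "crude incidence: 29.9 per 1,000 person-years"
```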

Conclusions and Relevance  Age-specific dementia incidence was lower in participants born after the mid-1920s compared with those born earlier. In this population, the decrease in dementia incidence may reflect societal-level changes or individual differences over the life course rather than early-life environment, as reflected through recalled childhood socioeconomic status and measured height, educational level, and later-life vascular risk.




Discussion
Among those born at the turn of the 20th century through the mid-20th century who participated in the ACT study, the age-specific dementia incidence was lower for participants born in 1929 and later compared with those born earlier. This trend was not explained by recalled childhood socioeconomic status and measured height, which reflect early-life environment, nor was it explained by educational level and vascular risk as an older adult. The literature on secular dementia trends reports a decrease in dementia incidence starting in the 1990s.1-5 This timing is consistent with participants in the 1929 to 1939 birth cohorts who are entering the eighth decade of life, when dementia risk increases.2,4,31 Political and economic changes during the first half of the 20th century may have had different implications for dementia risk based on the participant’s age during those experiences.32 Analysis by birth cohort captures this intersection of age and calendar time. Our results suggest that societal-level changes in the first half of the 20th century that were not captured by the individual early-life measures or the educational levels used in this study may have been associated with decreases in dementia incidence.
The 40% decrease in the US mortality rate from 1900 to 1940 was likely owing to the decrease in infectious diseases,33 which disproportionately occur in the young. The decrease in dementia incidence observed in the ACT study began with birth cohorts who were born in the middle of this period. These early-life health gains may be factors in the decreased dementia incidence. Although we accounted for family-level socioeconomic status variables and height, these variables may not have captured all changes, such as economic innovation13 and nutritional improvement,12 that may have been associated with decreases in mortality. In addition, variables included in this study may not have captured public health improvements during this period.33 It is possible that unmeasured differences were more important for assessing dementia risk by birth cohort than the socioeconomic factors we measured.
Across birth cohorts, participants with lower financial status and greater household density in childhood had a lower risk of developing dementia, which is inconsistent with our hypothesis and the results of previous studies.34,35 While the Great Depression was a time of financial hardship, those in the pre–Great Depression and the World War I and Spanish influenza birth cohorts were the least likely to report the ability to afford both basic needs and small luxuries, and they had the smallest proportion of participants reporting the most stable childhood financial quartile. This pattern may reflect problems with measurement or sample selection. Participant responses may reflect experiences in later childhood and early adolescence, as recall of early-life experiences may be difficult. In contrast, parental educational levels, which were constant throughout childhood and adolescence for most of the birth cohorts, were higher for the World War I and Spanish influenza cohort and the pre–Great Depression cohort compared with cohorts born earlier. This pattern suggests a higher early-life standard of living in the more recent birth cohorts. Another possibility is that because these 2 birth cohorts were the oldest, those who survived to participate in the study were able to compensate for adverse early-life environments or had less accurate recall than younger participants.
Our study considered death as a competing risk, while previous case-control studies did not.34,35 Most ACT participants were members of Kaiser Permanente Washington (formerly Group Health) when they were younger than 65 years, during which time they primarily received health insurance through large employers. It is likely that those with lower financial status and higher household density during childhood survived those adverse experiences and were thus able to participate in the study.
Together with height, an individual’s parental educational level, childhood financial stability, and childhood household density are likely to reflect their early-life environment. These variables did not explain the decrease in dementia incidence among the more recent birth cohorts. In a minimally adjusted model, the decrease in dementia incidence began with the Great Depression birth cohort, suggesting that societal-level experiences during later childhood to adolescence may have been more important than those during the in-utero through early childhood phase. If this earliest stage of life were important for dementia incidence, we would expect those born in the Great Depression cohort to have the greatest dementia risk. The largest difference in college completion was found between the pre–Great Depression and Great Depression birth cohorts. This disruption to economic opportunity for those born in the pre–Great Depression years may have had implications for dementia risk. The inclusion of late-life vascular risk factors did not appreciably alter the association between a more recent birth cohort and a lower incidence of dementia, which is consistent with analyses of the Einstein Aging Cohort10 and the Framingham Heart Study, which considered the cohort of study entry.1
Our finding of an association between birth cohort and decreased dementia incidence is consistent with 2 previous studies. An analysis of the English Longitudinal Study of Aging examined 2 birth cohorts split at the median birth year (1902-1925 and 1926-1943),9 and an analysis of the Einstein Aging Study used a data-driven approach to detect a change point in continuous birth years.10 Our birth cohort categories were based on historically meaningful events. Because the ACT study is larger than the Einstein Aging Study, we were able to separate participants born after 1928 into 2 groups. In the ACT study, the most recent birth cohort (1940-1949) had higher educational levels and childhood financial stability compared with cohorts born earlier. Such categorization also allowed for the separation of worldwide economic disruption from family-level financial stability.
Our analysis may not have captured differences in adult social experiences. Educational level is associated with subsequent occupation and employment patterns. However, birth cohort may reflect experience of events during the 20th century that had broad implications, regardless of educational level. For example, men born in the first 2 decades of the 20th century are likely to have served in the armed forces during World War II and to have benefitted from the GI bill. Men and women from those birth cohorts would also have benefitted from the postwar economic expansion. Our analysis did not capture such adult experiences.
Limitations
Our study has several limitations. Participants in older cohorts necessarily had to survive longer to be included in the study. Because the greatest risk factor for dementia is age, the survival required of the pre–World War I and the World War I and Spanish influenza birth cohorts to enter the ACT study may create differences in dementia risk that are difficult to detect in these groups. Our results suggest that the most recent birth cohorts may continue to experience lower age-specific dementia incidence. However, the follow-up period is shorter in these birth cohorts. The ACT study participants are from 1 health system in the Pacific Northwest, and their educational level is high. The cohort is a random sample of age-eligible members of Kaiser Permanente Washington; results therefore reflect this specific population but may not be generalizable to the US population. Our results are consistent with a sample from the Bronx, New York,10 and a nationally representative sample from the United Kingdom,9 suggesting that the decrease in dementia incidence by birth cohort may be a widespread phenomenon. Because ACT study participants may be socioeconomically advantaged, the measures of early-life environment included in this study may not be sensitive enough to detect meaningful differences that have implications for dementia incidence trends by birth cohort.
The study did not include key health variables from later in the life course that are associated with dementia risk, notably midlife hypertension, hearing loss, late-life depression, diabetes, physical inactivity, and social isolation.6 As the ACT study is currently collecting data on most of these variables, future studies will be able to more fully capture life-course dementia risk factors. Because the ACT study is long-standing, its follow-up included substantial age overlap among multiple birth cohorts, which had been a limitation of previous studies.9 Dementia diagnosis procedures have been consistent throughout the study. The large size of the ACT study and the theoretical basis of the cohort groups allowed for the inclusion of 2 cohort groups born after 1928 that aligned with historically meaningful events, whereas previous studies have considered only 1 group born after the mid-1920s.9,10

Dementia incidence has decreased in more recent birth cohorts. Our measures of early-life socioeconomic status and educational level do not account for these differences in this study population. Birth cohort may reflect other historical and social changes that occurred during childhood or adulthood.

Self-control is associated with numerous positive outcomes, such as well-being; we argue that hedonic goal pursuit is equally important, & conflicting long-term goals can undermine it in the form of intrusive thoughts

Beyond Self-Control: Mechanisms of Hedonic Goal Pursuit and Its Relevance for Well-Being. Katharina Bernecker, Daniela Becker. Personality and Social Psychology Bulletin, July 26, 2020. https://doi.org/10.1177/0146167220941998

Abstract: Self-control helps to align behavior with long-term goals (e.g., exercising to stay fit) and shield it from conflicting hedonic goals (e.g., relaxing). Decades of research have shown that self-control is associated with numerous positive outcomes, such as well-being. In the present article, we argue that hedonic goal pursuit is equally important for well-being, and that conflicting long-term goals can undermine it in the form of intrusive thoughts. In Study 1, we developed a measure of trait hedonic capacity, which captures people’s success in hedonic goal pursuit and the occurrence of intrusive thoughts. In Studies 2A and 2B, people’s trait hedonic capacity relates positively to well-being. Study 3 confirms intrusive thoughts as a major impeding mechanism of hedonic success. Studies 4 and 5 demonstrate that trait hedonic capacity predicts successful hedonic goal pursuit in everyday life. We conclude that hedonic goal pursuit represents a largely neglected but adaptive aspect of self-regulation.

Keywords: hedonic goals, self-control, self-regulation, well-being

Popular version: Hedonism Leads to Happiness. Zurich Univ. Press Release, Jul 27 2020. https://www.media.uzh.ch/en/Press-Releases/2020/Hedonism.html




The inherent difficulty in accurately appreciating the engaging aspect of thinking activity could explain why people prefer keeping themselves busy, rather than taking a moment for reflection & imagination

Hatano, Aya, Cansu Ogulmus, Hiroaki Shigemasu, and Kou Murayama. 2020. “Thinking About Thinking: People Underestimate Intrinsically Motivating Experiences of Waiting.” PsyArXiv. July 29. doi:10.31234/osf.io/n2ctk

Abstract: The ability to engage in internal thoughts without external stimulation is one of the hallmark characteristics unique to humans. The current research tested the hypothesis that people metacognitively underestimate their capability to positively engage in just thinking. Participants were asked to sit and wait in a quiet room without doing anything for a certain amount of time (e.g., 20 min). Before the waiting task, they made a prediction about how intrinsically motivating the task would be at the end of the task; they also rated their experienced intrinsic motivation after the task. Across six experiments we consistently found that participants’ predicted intrinsic motivation for the waiting task was significantly less than their experienced intrinsic motivation. This underestimation effect was robustly observed regardless of the independence of predictive rating, the amount of sensory input, the duration of the waiting task, the timing of assessment, and the cultural contexts of participants. This underappreciation of just thinking also led participants to proactively avoid the waiting task when there was an alternative task (i.e., internet news checking), even though their experienced intrinsic motivation was not statistically different across tasks. These results suggest an inherent difficulty in accurately appreciating how engaging thinking can be, which could explain why people in daily life prefer keeping themselves busy rather than taking a moment for reflection and imagination.




Gender differences in the trade-off between objective equality and efficiency: The results show that females prefer objective equality over efficiency to a greater extent than males do

Gender differences in the trade-off between objective equality and efficiency. Valerio Capraro. Judgment and Decision Making, Vol. 15, No. 4, July 2020, pp. 534–544. http://journal.sjdm.org/19/190510/jdm190510.pdf

Abstract: Generations of social scientists have explored whether males and females act differently in domains involving competition, risk taking, cooperation, altruism, honesty, as well as many others. Yet, little is known about gender differences in the trade-off between objective equality (i.e., equality of outcomes) and efficiency. It has been suggested that females are more egalitarian than males, but the empirical evidence is relatively weak. This gap is particularly important, because people with the power to redistribute resources often face a conflict between equality and efficiency. The recently introduced Trade-Off Game (TOG) – in which a decision-maker has to unilaterally choose between being equal or being efficient – offers a unique opportunity to fill this gap. To this end, I analyse gender differences on a large dataset including N=6,955 TOG decisions. The results show that females prefer objective equality over efficiency to a greater extent than males do. The effect turns out to be particularly strong when the available TOG options are “morally” framed in such a way as to suggest that choosing the equal option is the right thing to do.

Keywords: trade-off game, gender, equality, efficiency


Some charities are much more cost-effective than others, which means that they can do more with the same amount of money; yet most donations do not go to the most effective charities. Why is that?

Donors vastly underestimate differences in charities’ effectiveness. Lucius Caviola et al. Judgment and Decision Making, Vol. 15, No. 4, July 2020, pp. 509–516. http://journal.sjdm.org/20/200504/jdm200504.pdf

Abstract: Some charities are much more cost-effective than other charities, which means that they can save many more lives with the same amount of money. Yet most donations do not go to the most effective charities. Why is that? We hypothesized that part of the reason is that people underestimate how much more effective the most effective charities are compared with the average charity. Thus, they do not know how much more good they could do if they donated to the most effective charities. We studied this hypothesis using samples of the general population, students, experts, and effective altruists in six studies. We found that lay people estimated that among charities helping the global poor, the most effective charities are 1.5 times more effective than the average charity (Studies 1 and 2). Effective altruists, in contrast, estimated the difference to be a factor of 30 (Study 3), and experts estimated the factor to be 100 (Study 4). We found that participants donated more to the most effective charity, and less to an average charity, when informed about the large difference in cost-effectiveness (Study 5). In conclusion, misconceptions about the difference in effectiveness between charities are likely one reason, among many, why people donate ineffectively.
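The gap between these estimates is easier to see as plain arithmetic. A small illustration using the multipliers reported in the abstract; the baseline of 1 unit of good per $1,000 donated to an average charity is an assumed value for illustration, not a figure from the paper:

```python
# Perceived effectiveness of the most effective charity, relative to
# an assumed baseline of 1.0 unit of good per $1,000 given to an
# average charity. The multipliers come from the abstract.
baseline_impact_per_1000_usd = 1.0

estimates = {
    "lay public (Studies 1-2)": 1.5,
    "effective altruists (Study 3)": 30,
    "experts (Study 4)": 100,
}

for group, factor in estimates.items():
    top_charity_impact = baseline_impact_per_1000_usd * factor
    print(f"{group}: top charity ≈ {top_charity_impact:g}x baseline")
```

On the expert estimate, redirecting a $1,000 donation from an average to a top charity would multiply its impact roughly a hundredfold, while a lay donor would expect only a 50% gain, which is the misconception the paper points to.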

Keywords: cost-effectiveness, charitable giving, effective altruism, prosocial behavior, helping



Action and inaction are perceived and evaluated differently; these asymmetries have been shown to have real impact on choice behavior in both personal & interpersonal contexts

Omission and commission in judgment and decision making: Understanding and linking action‐inaction effects using the concept of normality. Gilad Feldman  Lucas Kutscher  Tijen Yay. Social and Personality Psychology Compass, July 27 2020. https://doi.org/10.1111/spc3.12557

Abstract: Research on action and inaction in judgment and decision making now spans over 35 years, with ever‐growing interest. Accumulating evidence suggests that action and inaction are perceived and evaluated differently, affecting a wide array of psychological factors from emotions to morality. These asymmetries have been shown to have real impact on choice behavior in both personal and interpersonal contexts, with implications for individuals and society. We review impactful action‐inaction related phenomena, with a summary and comparison of key findings and insights, reinterpreting these effects and mapping links between effects using norm theory's (Kahneman & Miller, 1986) concept of normality. Together, these aim to contribute towards an integrated understanding of the human psyche regarding action and inaction.