Thursday, April 1, 2021

Cues associated with food trigger release of homeostasis regulating hormones, which regulate hunger, prevent hypoglycemia, & improve metabolism; those anticipatory hormonal responses are largely learned phenomena

Food anticipatory hormonal responses: a systematic review of animal and human studies. Aleksandrina Skvortsova et al. Neuroscience & Biobehavioral Reviews, April 1 2021. https://doi.org/10.1016/j.neubiorev.2021.03.030

Highlights

• Cues associated with food trigger release of homeostasis regulating hormones.

• Food anticipatory hormonal responses are consistently found in animals and humans.

• These responses regulate hunger, prevent hypoglycemia, and improve metabolism.

• Food anticipatory hormonal responses are largely learned phenomena.

• Food anticipatory hormonal activity is impaired in eating and metabolic disorders.

Abstract: Food anticipatory hormonal responses (cephalic responses) are proactive physiological processes that allow animals to prepare for food ingestion by modulating their hormonal levels in response to food cues. This process is important for digesting food, metabolizing nutrients, and maintaining glucose levels within homeostasis. In this systematic review, we summarize the evidence from animal and human research on cephalic responses. Thirty-six animal and fifty-three human studies were included. The majority (88%) of studies demonstrated that hormonal levels are changed in response to cues previously associated with food intake, such as feeding time, smell, and sight of food. Most evidence comes from studies on insulin, ghrelin, pancreatic polypeptide, glucagon, and c-peptide. Moreover, impaired cephalic responses were found in disorders related to metabolism and food intake such as diabetes, pancreatic insufficiency, obesity, and eating disorders, which opens discussions about the etiological mechanisms of these disorders as well as about potential therapeutic opportunities.

Keywords: anticipatory hormone release, cephalic responses, food

4. Discussion

There is a large body of research demonstrating that cephalized organisms (ranging from insects to mammals) anticipate food intake via environmental cues, with the aim of maintaining homeostasis by adjusting their hormonal levels. Anticipatory hormonal changes, so-called cephalic responses, were found for a wide range of hormones, but most evidence exists for insulin, ghrelin, pancreatic polypeptide, glucagon, and c-peptide. Animal research is very consistent in finding anticipatory hormonal changes, with almost all studies demonstrating significant results, while the majority of human research also finds anticipatory hormonal changes. There is also some evidence for impaired cephalic responses in several metabolic and eating disorders in comparison to healthy participants, although more research is needed. Taken together, the current systematic review shows that the release of a wide range of hormones occurs prior to food consumption in both animals and humans, and that it plays an important role in preparing the organism for food ingestion.

The direction of the hormonal changes in response to food anticipation mirrors the hormonal changes in response to food digestion: insulin, ghrelin, glucagon, pancreatic polypeptide, gastrin, and c-peptide levels increase. These processes indicate early adaptive preparation of the organism for food digestion. The only hormone that does not have a direct relation to metabolism, but was repeatedly investigated in the context of food anticipation, is cortisol (corticosterone in rodents). Increases in cortisol and corticosterone were found during food anticipation (Ott et al., 2012; Coover et al., 1984; Moberg et al., 1975). Moreover, levels of cortisol and corticosterone dropped rapidly after food consumption (Moberg et al., 1975). Speculatively, food anticipation triggers a stress response in the organism and, therefore, leads to cortisol release. Possibly, an increase in stress hormones is necessary to increase alertness in animals in anticipation of food (Feillet, 2010).

It is still not entirely known to what extent cephalic responses are triggered by classical conditioning and whether some of them may be inborn. The only study included in this review that investigated this question directly (Bernstein and Woods, 1980) demonstrated that cephalic insulin release in response to sweet taste is absent in newborn rat pups but already present in 21-22 day-old rats. Also, multiple experiments in both animals and humans showed that cephalic responses are present in subjects who followed a fixed eating pattern, in contrast to subjects who were fed ad libitum or without a fixed pattern (Holmes et al., 1989; Moberg et al., 1975; Woods et al., 1977). Therefore, the evidence suggests that cephalic responses are to a large extent dependent on classical conditioning. That is, organisms learn that certain stimuli predict the availability of food, and respond to these stimuli with cephalic hormone release to prepare the body for food consumption. Cephalic responses have been shown to be elicited not only by cues that naturalistically predict food (such as time of eating/feeding or the smell of food) but also by conditioning to neutral stimuli such as the sound of a door opening (Strubbe, 1992) or a mixed stimulus of a sound and a light (Storlien, 1985).

In addition to memory processes, an innate component, circadian modulation, seems to affect food anticipatory hormonal responses. One study included in this review investigated the role of the circadian clock in cephalic responses (Patton et al., 2014). Patton and colleagues (2014) demonstrated that food anticipatory corticosterone and ghrelin release was more pronounced in mice fed during the dark phase than in mice fed during the light phase. Mice are nocturnal animals, and freely fed mice tend to exhibit food anticipatory activity at night. Therefore, food anticipatory activity seems to be enhanced when the feeding schedule corresponds to the light-dark rhythm. When there is a mismatch between the dark-light cycle and the feeding pattern, for example, if food is given only during the usual sleep phase, food anticipatory hormonal responses still appear (Feillet, 2010; Mistlberger, 1994) but may be of a smaller magnitude than when there is no such mismatch (Patton et al., 2014).

Another not well understood question about cephalic responses is which stimuli trigger them and in what cases. There is a discrepancy between animal and human research regarding whether taste alone elicits anticipatory hormone release or whether the full organoleptic stimulation of whole foods is needed. A large number of human studies failed to find anticipatory insulin release in response to the sweet taste of a non-nutritive or low-caloric substance alone (Abdallah et al., 1997; Bruce et al., 1987; Cedernaes et al., 2016; Härtel et al., 1993; Morricone et al., 2000; Smeets et al., 2005; Teff et al., 1995). At the same time, the response was found in a large number of studies that used sham feeding with whole foods (Buysschaert et al., 1985; Glasbrenner et al., 1995; Goldschmiedt et al., 1990; Teff et al., 1991, 1993, 1995). Moreover, a number of studies found that there are responders and non-responders to taste stimulation (Bellisle et al., 1985; Dhillon et al., 2017; Teff et al., 1991). This might indicate that the combination of tactile, olfactory, and taste stimulation provided by whole foods is needed to elicit a reliable cephalic response in humans. At the same time, this seems not to be the case in animals: all animal studies included in this review found cephalic hormone release in response to sweet taste alone. Various reasons could explain this discrepancy between animal and human research. For example, cognitive factors play an important role in food anticipation in humans. Also, most people have previous experience with tasting various sweeteners, which might affect their cephalic responses, while laboratory animals usually follow standard diets and are naïve to low-caloric sweeteners.

The underlying neural mechanisms of cephalic hormonal responses were investigated in only a few animal and human studies. It is proposed that, in response to food cues, the brain initiates insulin secretion by directing a signal through the vagus nerve to the pancreas (Woods, 1991). Animal research demonstrated that vagotomy, the surgical removal of part of the vagus nerve, leads to the disappearance of cephalic responses (Bernstein and Woods, 1980; Herath et al., 1999; Storlien, 1985). Human research confirms these results: administration of atropine, a drug that opposes the actions of the vagus nerve by blocking acetylcholine receptors, was shown to abolish cephalic insulin release (Sjöström et al., 1980) and pancreatic polypeptide release (Veedfald et al., 2016). Another substrate that has been proposed to underlie cephalic responses is the ventromedial nucleus of the hypothalamus, a brain area linked to satiety (Kurrasch et al., 2007). Animals with lesions of this area exhibit no cephalic responses to sweet taste or to a complex stimulus previously associated with food (Berthoud et al., 1980a, 1980b; Storlien, 1985). A human study demonstrated that the upper hypothalamus might play a role in cephalic hormone release, but only when stimulated by both sweet taste and high energy content: it showed that glucose, but not aspartame (a sweet, non-caloric taste) or maltodextrin (a non-sweet carbohydrate), leads to significant decreases in activity in the upper hypothalamus (Smeets et al., 2005).

Interestingly, human research shows an important role of cognition in food anticipatory hormone release. For example, a mere discussion about food triggered insulin release (Feldman and Richardson, 1986), and expected food palatability influenced cephalic responses (Rigamonti et al., 2015). Moreover, Crum and colleagues (Crum et al., 2011) demonstrated that the decrease in ghrelin levels after food consumption was larger in magnitude in participants who thought that they had consumed a high-caloric shake than in participants who thought that the shake was low-caloric (in reality it was the same shake). Cognition might also explain the discrepancy in results between studies that measured cephalic responses in humans to food consumption and to sham feeding. While animal research found that both of these methods are very successful in eliciting anticipatory hormone release, a number of human studies that involved sham feeding found no such responses. Participants in the sham feeding studies knew that they would have to discharge the consumed food, and possibly this knowledge affected their anticipatory hormonal responses. These studies point to the importance of conscious expectations in this physiological process: the mere thoughts about food that people have might affect their hormonal responses. Additionally, the role of cognitive capacities in food anticipatory responses has never been studied: for instance, no research on this topic has been done in infants or in people with cognitive disabilities. Future research should look at how cognition about food and cognitive capacities influence learned food anticipatory responses.

Studies that investigated cephalic responses in clinical populations demonstrate that anticipatory hormonal responses were affected in patients with diabetes with cardiac autonomic neuropathy (Buysschaert et al., 1985; Glasbrenner et al., 1995), obesity (Brede et al., 2017; Johnson and Wildman, 1983; Osuna et al., 1986; Sjöström et al., 1980), eating disorders (Broberg and Bernstein, 1989; Monteleone et al., 2008, 2010; Moyer et al., 1993), pancreatic insufficiency (Wøjdemann et al., 2000), and kidney and pancreas transplantation (Secchi et al., 1995). However, the number of studies that included clinical populations is limited. It remains unknown whether disturbed cephalic responses play a causal role in the development of some of these disorders or, alternatively, are consequences of them. Future studies should investigate the role of cephalic responses in the development of metabolic disorders and the possibility of using cephalic responses as a diagnostic tool for some of these disorders.

Several limitations of the studies included in the current review should be mentioned. First of all, the risk of bias assessment demonstrated that the majority of the studies included in this review did not report enough information to make it possible to objectively assess bias. This applies particularly to the animal research, which did not provide information regarding the method of assigning animals to different conditions or the blinding of personnel. Similar problems, though to a lesser extent, are present in human studies. Only one study preregistration was available in open access, and the majority of the studies did not report whether the analysis of outcomes was done by blinded personnel. Moreover, none of the studies described whether a statistical power calculation had been done prior to the study, which makes it difficult to interpret null findings. Furthermore, we found that while almost all animal studies found cephalic responses, human research varied more with respect to the results. This phenomenon can be explained by several reasons. Firstly, a larger publication bias might exist in animal research. It has recently been demonstrated that animal studies with null findings are often not published, creating a large bias in the animal literature (ter Riet et al., 2012). Secondly, additional factors might play a role in humans that are presumably less important in animals, for example, cognition. Most of the human studies, however, did not take into consideration factors such as the expectations of participants, even though cognitive factors have been shown to affect cephalic responses (Crum et al., 2011; Feldman and Richardson, 1986). Ignoring these potential confounding factors might have led to the occasional null findings in human studies. Most of the animal and human research has been done either in males or in mixed-sex samples.
Only one study included in this review looked at sex differences: it demonstrated that anticipatory ghrelin release peaks at different times in male, female, and orchidectomized mice. Sex differences have also been found in glucose and lipid metabolism (Gur et al., 1995), which may potentially also affect anticipatory hormonal responses. Therefore, it is essential that future research focus on potential sex differences in food anticipatory hormone release. Furthermore, many different methods and protocols were used for measuring anticipatory hormonal responses, and almost no study compared different methods. These differences complicate interpreting the null findings of some of the studies. For example, several studies that involved sham feeding found no cephalic insulin release (Crystal and Teff, 2006; Teff et al., 1995); however, as every study used a different sham feeding procedure (different foods, various chewing times, various moments of sample collection), it remains unknown whether these null findings can be explained by the protocol used or whether cephalic insulin release simply does not occur in all cases.

A better understanding of cephalic hormonal responses brings several clinical possibilities. First, consumption of artificial low-caloric or non-nutritive sweeteners is increasing in modern society, as these sweeteners are often added to common beverages and foods. It is still poorly understood how such a discrepancy between sweet taste and low nutritional content can affect cephalic responses and whether it plays a role in the development of obesity and metabolic disorders. Moreover, the evidence from the current review indicates that cephalic responses might be affected in patients with metabolic disorders; however, the number of studies on this topic is limited. For example, it is unknown how the impairment of cephalic responses progresses from obesity to metabolic syndrome and type 2 diabetes. Possibly, measuring cephalic responses could be used as a predictive tool for the development of metabolic disorders.

The present review confirmed that there is a large body of literature supporting the existence of food anticipatory hormonal release. Moreover, there is some preliminary evidence of impairment of anticipatory hormonal responses in a range of disorders related to metabolism and food intake. More research is needed to understand the role of such impairments in cephalic responses and the possibility of using cephalic responses as a predictor of the development of metabolic disorders.

Only-children prioritized benevolence values less and power values more than those with siblings; only-children’s value priorities became relatively less self-centered with age

The Values of Only-Children: Power and Benevolence in the Spotlight. Neil L Griffiths et al. Journal of Research in Personality, April 1 2021, 104096. https://doi.org/10.1016/j.jrp.2021.104096

Highlights

• Personality differences between only-children and siblings were revealed through their values.

• In their value priorities, only-children tended to be more self-centered than individuals with siblings.

• Only-children prioritized benevolence values less and power values more than those with siblings.

• Only-children’s value priorities became relatively less self-centered with age.

• Effects on values complement effects on traits that were previously considered unnoticeable.

Abstract: The stereotype that only-children are more self-centered than others has gained little support from studies on personality traits but had not been previously tested with respect to personal values, which are also an important part of personality. Data from 3085 Australian adults revealed that only-children give more importance to power values and less importance to benevolence values than individuals with siblings. These differences, which are consistent with the stereotype, were strongest in young people but diminished gradually with age and disappeared in those over 62 years old. The results challenge the view that personality is largely unaffected by shared life-experiences associated with family structure, at least regarding the values aspect of personality.

Keywords: Only-children, Personal values, Personality, Personality development


Latent assets: Despite the past centuries’ economic setbacks and challenges, are there reasons for optimism about Africa's economic prospects?

Africa's Latent Assets. Soren J. Henn & James A. Robinson. NBER Working Paper 28603. March 2021. DOI 10.3386/w28603

Abstract: Despite the past centuries’ economic setbacks and challenges, are there reasons for optimism about Africa's economic prospects? We provide a conceptual framework and empirical evidence that show how the nature of African society has led to three sets of unrecognized “latent assets.” First, success in African society is talent driven and Africa has experienced high levels of perceived and actual social mobility. A society where talented individuals rise to the top and optimism prevails is an excellent basis for entrepreneurship and innovation. Second, Africans, like westerners who built the world's most successful effective states, are highly skeptical of authority and attuned to the abuse of power. We argue that these attitudes can be a critical basis for building better institutions. Third, Africa is “cosmopolitan.” Africans are the most multilingual people in the world, have high levels of religious tolerance, and are welcoming to strangers. The experience of navigating cultural and linguistic diversity sets Africans up for success in a globalized world.


Increasing taxes leads to lower GDP & personal consumption; consumption taxes are likely to have the smallest effects on saving & work decisions and hence the smallest negative consequences on future economic growth

The Economic Effects of Financing a Large and Permanent Increase in Government Spending. Jaeger Nelson, Kerk Phillips. Congressional Budget Office, Working Paper 2021-03, March 2021. https://www.cbo.gov/system/files/2021-03/57021-Financing.pdf

Abstract

In this working paper, we analyze the long-term economic effects of financing a large and permanent increase in government expenditures of 5 percent to 10 percent of gross domestic product (GDP) annually. This paper does not assess the economic effects of the increased government spending itself and focuses solely on the effects of its financing.

The first part of the paper reviews the channels through which different financing mechanisms affect the economy. Specifically, the review focuses on how taxes on labor income, capital income, and consumption affect how much people work and save. The general finding is that increasing taxes leads to lower GDP and personal consumption. Of the different tax policies examined, consumption taxes are likely to have the smallest effects on saving and work decisions and hence the smallest negative consequences on future economic growth. Finally, deficit financing leads to higher interest rates, a lower capital stock, lower GDP, and a greater risk of a fiscal crisis.

In practice, the Congressional Budget Office uses a suite of models to assess the economic effects of fiscal policy. The second part of the paper uses one of CBO’s modeling frameworks—the life-cycle growth model—to illustrate the economic and distributional implications of raising revenues to finance a targeted amount of government spending (either 5 percent or 10 percent of GDP) through three different tax policies: a flat labor tax, a flat income tax, and a progressive income tax. To maintain deficit neutrality, tax rates for all three tax policies must rise over time to offset behavioral responses that result in smaller tax bases. After 10 years, in 2030, the level of GDP is between 3 percent and 10 percent lower than it would be without the increase in expenditures and revenues. In those scenarios, younger households experience a greater loss in lifetime consumption and hours worked than older households. Additionally, the fall in lifetime consumption and hours worked is largest for higher-income households and smallest for lower-income households when a progressive income tax is used. A progressive income tax generates the largest decline in total output. It also generates the smallest decline in consumption among the bottom two-thirds of the income distribution.

Keywords: government spending, financing, taxes

JEL Classification: E62, H2, H31, H62


Intergenerational Standard of Living

The percentage change in lifetime consumption across different birth cohorts is a useful metric for understanding the relative trade-offs across generations, but it does not capture underlying trends in economic growth and the rising standard of living (that is, the rise in real per capita consumption over time). Although the financing mechanisms analyzed in this paper generate the largest reductions in lifetime consumption among younger generations, those same generations also experience higher levels of real consumption over their lifetime than their older counterparts because of the rise in labor productivity over time.

In the model’s benchmark economy, the average household born in 2020 experiences 2.4 times as much real consumption over its lifetime as the average household born 80 years earlier, in 1940 (see Figure 9). Although financing large government spending programs reduces real lifetime consumption, on average, among households born in 2020, they are still projected to have real lifetime consumption equal to between 1.7 and 2.1 times that of their 1940 counterparts, on average.


Spread of SARS-CoV-2 in Germany: The evidence suggests that “good” weather attracts individuals to outdoor (safer) environments, thus, deterring people from indoor (less safe) environments

Home alone? Effect of weather-induced behaviour on spread of SARS-CoV-2 in Germany. Slava Yakubenko. Economics & Human Biology, April 1 2021, 100998, https://doi.org/10.1016/j.ehb.2021.100998

Highlights

• Weather had a significant effect on the spread of SARS-CoV-2 in Germany during the “first wave”.

• Regions reported lower growth rates of the number of new cases after days with high temperatures, no rain and low humidity.

• The empirical evidence shows that “good” weather attracted people to parks, while “bad” weather caused more visits to residential areas.

• The weather effects on the growth rate of infections are significantly stronger during the full contact ban imposed in some regions.

• Jointly this evidence suggests that weather had not only biological, but also behavioural influence on the epidemic of COVID-19 in Germany.

Abstract: In early 2020 the world was struck by the epidemic of the novel SARS-CoV-2 virus. Like many others, the German government introduced severe contact restrictions to limit the spread of infection. This paper analyses the effects of weather on the spread of the disease under these circumstances. We demonstrate that regions reported lower growth rates of the number of infection cases after days with higher temperatures, no rain, and low humidity. We argue that this effect is channelled through human behaviour. The evidence suggests that “good” weather attracts individuals to outdoor (safer) environments, thus deterring people from indoor (less safe) environments. Understanding this relationship is important for improving the measures aimed at combating the spread of the virus.

Keywords: COVID-19, Weather, Behaviour, Germany


Wednesday, March 31, 2021

We examined whether violent media exposure would be associated with increased aggression, which would then spread within social networks like a contagious disease

Violent media use and aggression: Two longitudinal network studies. Martin Delhove & Tobias Greitemeyer. The Journal of Social Psychology, Mar 30 2021. https://doi.org/10.1080/00224545.2021.1896465

Abstract: Exposure to violent media has been widely linked to increased aggression. In the present research, we examined whether violent media exposure would be associated with increased aggression, which would then spread within social networks like a contagious disease. Two groups of first-year psychology students completed a questionnaire three times over the course of a year, measuring their media exposure, aggression, personality, and social relations within the group. Cross-sectional analysis provided mixed results with regard to the link between violent media and aggression. Siena analysis found no evidence of homophily (i.e., participants were not more likely to be friends with others similar to themselves) nor of social influence (i.e., participants’ behavior did not predict a change in their friends’ behavior). However, given the relatively small sample sizes and the weak ties between participants, more work is needed to assess the spread of violent media effects.

KEYWORDS: Video games, violent media, aggression, social network, longitudinal, Siena

General discussion

The present work had three main goals. First, we aimed at replicating the link between violent media exposure and increased aggression, bringing new evidence to the current debate within media psychology. Second, we tested the case for homophily, the preference of individuals for befriending others similar to themselves, in the context of aggression and media consumption. Third, and most importantly, we aimed to test the recent claim that violent-media-related aggression could spread from consumers to their close ones (e.g., Greitemeyer, 2018).

Cross-sectional results provided mixed support for the link between violent media use and aggressive outcomes. In Study 1, the relationship between violent media use and aggressive behavior and trait aggression was not consistent. In Study 2, higher consumption of aggressive media was linked to the perception of aggressive behavior as more socially normative, but anger and aggressive behavior did not relate to violent media use. Across all six time points of the two studies, neutral media use related frequently to the different measures of aggressive outcomes, suggesting that frequency of media use, rather than the actual violent content one is exposed to, relates to some aspects of aggression. The longitudinal social network analyses did not support our hypotheses either. Homophily did not appear to influence the creation and continuation of relationships when looking at media use and aggression. Moreover, we could not find signs of social influence, be it on aggressive behaviors or norms about aggression. Overall, we did not find violent media to have a longitudinal effect on aggression.

Limitations

Like any work, the present research has several limitations, one of which concerns our samples. We may have lacked a sufficient sample size to uncover some of the effects. After exclusions, we had at most 137 participants filling in our questionnaire at any one time point. Assuming p < .05, the effect size we could detect with a power of .8 was r = .24. Recent meta-analyses in the context of violent video games (Anderson et al., 2010; Greitemeyer & Mügge, 2014) have estimated the effect size at r = .19, which may even be an over-estimation (Hilgard et al., 2017). Hence, even in our best cross-sectional analysis, we lacked sufficient power.
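The minimal detectable effect size quoted above can be reproduced with a standard power calculation for a Pearson correlation based on the Fisher z-approximation. The authors do not say which tool they used, so this is only a sketch of the arithmetic behind the r = .24 figure:

```python
from math import sqrt, tanh
from scipy.stats import norm

def min_detectable_r(n, alpha=0.05, power=0.8):
    """Smallest correlation detectable at two-sided alpha with the
    given power, using the Fisher z-approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to the power
    # Under the Fisher transform, atanh(r) * sqrt(n - 3) is approximately
    # standard normal, so solve atanh(r) * sqrt(n - 3) = z_alpha + z_beta.
    return tanh((z_alpha + z_beta) / sqrt(n - 3))

print(round(min_detectable_r(137), 2))  # 0.24, matching the figure above
```

With n = 137 this gives r ≈ .24, above the meta-analytic estimate of r = .19, which is exactly the underpowering the authors describe.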

Unfortunately, because we conducted the study over the course of two consecutive years, there is some potential overlap between the participants in Studies 1 and 2, meaning that we could not conduct an integrated data analysis to improve our statistical power. Adding to this, we had a high turnover, with only slightly over half of our participants completing all three data collection phases in both studies, hindering the capacity of our Siena analysis to establish the actual effects present in our networks.

Moreover, the use of students as participants in research has been criticized (Peterson, 2001; but see Druckman & Kam, 2009). Most importantly for the present context, participants in our study indicated that their ties to fellow students were not very close. In fact, our participants did not know each other before data collection started; that is, all of the ties consisted of newly acquainted individuals. Our hope was that by selecting a recently formed network, we would be able to observe the creation of stronger ties as the year went by, giving us an opportunity to evaluate the many changes that would appear as students got to know each other. Although we observed stronger ties at later time points (see Tables 2 and 5), this may not have been sufficient, or we may have needed to continue data collection at a later time, once our participants had had the opportunity to form more significant friendships. Given that individuals are most strongly affected by their relatives and close friends (Christakis & Fowler, 2009), future work examining the impact of stronger relationships on the spread of violent-media-related aggression might support the social influence hypothesis.

Adding to this, the specific use of psychology students may be problematic when studying aggression. Indeed, research on differences in personality among study majors found, among other things, that psychology students tend to score higher on agreeableness (Vedel, 2016) and lower on dark triad traits (Vedel & Thomsen, 2017) than some of the other majors, especially economics or business students. This could partly explain the low scores we found on reported aggressive behaviors and norms. Replications using different samples would be beneficial in the future.

Finally, although it is a great tool for social network analysis, RSiena cannot make use of all the data we collected. As we have previously stated, behavioral dynamics of continuous variables cannot be implemented yet. In combination with the aforementioned low variance in our aggressive behavior measurement, we could have failed to find some patterns that are actually present in our sample. Another piece of information that would have been beneficial for a more precise model is the relationship type. As stated above, one is more likely to be influenced by those who are closer to oneself; that is, acquaintances typically have a much smaller impact than one’s best friend. Hopefully, future improvements of Siena will allow the testing of more complex models, which could shed new light on work like ours.

Future perspectives

The present work provides some insight into how media use and aggression interact in human networks. However, there is still much to explore in this field. In the following, we present some ideas that we deem interesting for future work.

First of all, aggression is manifold, and studies on the effects of violent media have examined many different outcomes. Potential measures of interest include hostile expectation bias (Bushman & Anderson, 2002) and desensitization to violence (Bartholow et al., 2006). Similarly, in light of Vachon et al.’s (2014) finding that empathy and aggression are only weakly related, one may want to use more sound measures of empathy (i.e., measures that also include dissonant responses such as sadism or schadenfreude) to explore its role in the context of violent media use in society. As a matter of fact, everyday sadism (Greitemeyer, 2015) and dark personality (Delhove & Greitemeyer, 2020) have been found to relate to violent video game use. Furthermore, based on the correlations between neutral media use and some of our aggression measures, it would be pertinent to compare the effects of violent content and of overall frequency of media consumption on aggression.

Another promising direction could emerge from exploring the effect of prosocial games on prosocial behaviors. An example of such an effect stems from the work of Greitemeyer and Osswald (2009, 2010). These authors asked their participants to play either a prosocial or a neutral video game and found that prosocial games decreased hostile expectation biases and the accessibility of antisocial thoughts, as well as increased prosocial behaviors and the accessibility of prosocial thoughts. Outside of a laboratory setting, Prot et al. (2014) conducted a large-scale, cross-cultural correlational study and a two-year longitudinal study. They found that prosocial media use was linked to higher levels of helping behaviors, a link mediated by increased empathy. A meta-analysis of prosocial video game use found that this effect was of a similar magnitude to that of violent video game use (Greitemeyer & Mügge, 2014). We would suggest that future work also delve into the possible spread of positive effects of prosocial media.

In both a sample of dating, engaged, and married individuals & a dyadic sample of married couples, the strongest predictors of overall desire for touch were sex (being female) and high relationship quality (actor & partner)

Individual and relational differences in desire for touch in romantic relationships. Brittany K. Jakubiak, Julian D. Fuentes, Brooke C. Feeney. Journal of Social and Personal Relationships, March 25, 2021. https://doi.org/10.1177/02654075211003331

Abstract: Although touch is common in romantic relationships and is generally beneficial, people differ in the extent to which they desire to give and receive touch. The current research identified individual and relationship characteristics that predict overall desire for touch and unique desire for overtly affectionate versus indirectly affectionate forms of touch. In both a sample of dating, engaged, and married individuals (Study 1) and a dyadic sample of married couples (Study 2), the strongest predictors of overall desire for touch were sex (being female) and high relationship quality (actor and partner). Attachment avoidance also predicted lower desire for touch overall (Study 1), and actor and partner attachment avoidance predicted lower desire for indirectly affectionate touch, in particular (Study 2). Finally, greater psychological distress predicted greater desire for indirectly affectionate touch in both studies. This novel descriptive information about desire for touch provides a foundation for future intervention work.

Keywords: Affection, attachment, individual differences, romantic relationships, touch


The results suggest that people low in moral character are likely to eventually dominate cheating-enabling environments, where they then cheat extensively

Selection effects on dishonest behavior. Petr Houdek, Štěpán Bahník, Marek Hudík, Marek Vranka. Judgment and Decision Making, Vol. 16, No. 2, March 2021, pp. 238-266. http://journal.sjdm.org/20/200824b/jdm200824b.html

Abstract: In many situations people behave ethically, while elsewhere dishonesty reigns. Studies of the determinants of unethical behavior often use random assignment of participants in various conditions to identify contextual or psychological factors influencing dishonesty. However, in many real-world contexts, people deliberately choose or avoid specific environments. In three experiments (total N = 2,124) enabling self-selection of participants in two similar tasks, one of which allowed for cheating, we found that participants who chose the task where they could lie for financial gain reported a higher number of correct predictions than those who were assigned it at random. Introduction of financial costs for entering the cheating-allowing task led to a decrease in interest in the task; however, it also led to more intense cheating. An intervention aimed to discourage participants from choosing the cheating-enabling environment based on social norm information did not have the expected effect; on the contrary, it backfired. In summary, the results suggest that people low in moral character are likely to eventually dominate cheating-enabling environments, where they then cheat extensively. Interventions trying to limit the preference of this environment may not have the expected effect as they could lead to the selection of the worst fraudsters.

Keywords: cheating, self-selection, behavioral ethics, honesty-humility

5  General discussion

People choose different situations based on their personality and preferences. In the case of cheating, individuals’ moral character affects situation selection (Cohen & Morse, 2014). Moral or guilt-prone people are ready to stop behavior that could harm others, and even to sacrifice financial reward to do so. On the other hand, unscrupulous people seek out such situations (Wiltermuth & Cohen, 2014). Accordingly, we found that participants low in honesty-humility tend to prefer cheating-enabling environments, where their rate of cheating can further escalate.

Based on our results, we recommend enriching experimental methodology by including the possibility of participants selecting their conditions. Experimental designs typically involve measurement of behavior in assigned conditions, and even participants who would not prefer or encounter these conditions in real life are forced to deal with them in an experiment. While the ability to choose one’s circumstances may sometimes be limited, including the possibility of self-selection of participants into different conditions would allow experimental findings to be generalized even to situations where people can select their environment, and would thus improve the external validity of experiments.

From a practical perspective, our results show the importance of influencing the self-selection of people into companies, departments, and other groups. If individuals motivated only by self-interest perceive public office as an opportunity to enrich themselves, people with low moral character will seek to become civil servants and politicians. Indeed, studies conducted in India show that people who cheat in a laboratory task are more likely to prefer public sector jobs (Banerjee, Baul & Rosenblat, 2015; Hanna & Wang, 2017). Likewise, Ukrainian law students who cheat and bribe in experimental games are more likely to aspire to careers such as judge, prosecutor, and government lawyer (Gans-Morse, 2019). On the other hand, self-selection of honest people exists in the Danish public sector (Barfort et al., 2019; for a cross-country analysis, see Olsen et al., 2018). Selecting honest people into occupations in which dishonesty may have high societal costs could often be more effective than efforts to reduce the dishonesty of people who have already chosen those occupations.

The reported studies tested many effects, especially those related to moderation of the studied effects by personality characteristics. While some of the tested effects were supported by strong evidence or replicated in a subsequent study, others were supported by weaker evidence, or the pattern of results between studies was more ambiguous. We did not control the experiment-wise error rate because we were not primarily interested in whether there is any significant association. However, the number of tested effects means that there is a higher chance that some of them are false positives or false negatives, and the positive results supported by weaker evidence should be interpreted with caution and subjected to future replications.

The design used in this article can be further extended in various ways. In particular, it is possible that determinants of cheating that have been observed in experiments without taking self-selection into account may not influence the people who are actually present in real-world cheating-enabling environments (Houdek, 2017, 2019). Such environments may include only individuals with low levels of the honesty-humility personality trait who are prepared to cheat regardless of any intervention. Moreover, if an intervention makes cheating more reprehensible or costly to these individuals, they may simply move to a similar environment without the intervention. Self-selection may eventually negate any positive effects of the intervention on the overall level of cheating (e.g., Nettle, Nott & Bateson, 2012). Future studies may directly test this potential implication of our findings.

Another topic for future research is the reasons for self-selection into groups. These reasons might vary and result in a specific composition of a group, which can further influence the behavior of its present or future members. For example, in certain professions (investment banker, salesperson, advertiser), dishonesty or deception could be perceived as a signal of a person’s skills, and honest people may therefore avoid these professions. Such adverse selection could eventually lead to persistent dishonesty in these professions (Gunia & Levine, 2016). Yet another possibility for extension is to examine whether selection affects enforcement and punishment. With more cheaters, enforcement and punishment may be more diffused, which may attract additional cheaters to the group (Conley & Wang, 2006). While we have considered a monetary fee associated with the choice of the cheating-enabling environment, another possibility is to include non-monetary costs of choosing it, such as reputational costs. Finally, all the cheating behavior in our experiments might have been perceived as essentially victimless. Future research may examine self-selection in cases where cheating has identifiable victims.

Results suggest that economic factors primarily were related to homicide and suicide cross‐nationally; per capita gun ownership was not an indicative factor cross‐nationally

Examining homicides and suicides cross‐nationally: Economic factors, guns and video games. Christopher J. Ferguson, Sven Smith. International Journal of Psychology, March 30 2021. https://doi.org/10.1002/ijop.12760

Abstract: Understanding why different nations have different homicide and suicide rates has been of interest to scholars, policy makers and the general public for years. Multiple theories have been offered, related to the economy, presence of guns and even exposure to violence in video games. In the current study, several factors were considered in combination across a sample of 92 countries. These included income inequality (Gini index), Human Capital Index (education and employment), per capita gun ownership and per capita expenditure on video games. Results suggest that economic factors primarily were related to homicide and suicide cross‐nationally. Video game consumption was not a major indicative factor (other than a small negative relationship with homicides). More surprisingly, per capita gun ownership was not an indicative factor cross‐nationally. The results suggest that a focus on economic factors and income inequality are most likely to bear fruit regarding reduction of violence and suicide.


Memorable meals: Remembered eating happiness was predicted by the worst eating experience, but not by the best or final eating experience

Villinger K, Wahl DR, Schupp HT, Renner B (2021) Memorable meals: The memory-experience gap in day-to-day experiences. PLoS ONE 16(3): e0249190, Mar 30 2021. https://doi.org/10.1371/journal.pone.0249190

Abstract: Research shows that retrospective memory is often more extreme than in-the-moment experiences. While investigations into this phenomenon have mostly focused on distinct, one-time experiences, we examined it with respect to recurring day-to-day experiences in the eating domain, focusing on variables of the snapshot model—i.e., the most intense and the final experience. We used a smartphone-based Ecological Momentary Assessment to assess the food intake and eating happiness of 103 participants (82.52% female, Mage = 21.97 years) over eight days, and then calculated their best (positive peak), worst (negative peak) and final experiences. Remembered eating happiness was assessed immediately after the study (immediate recall) and after four weeks (delayed recall). A significant memory-experience gap was revealed at immediate recall (d = .53). Remembered eating happiness was predicted by the worst eating experience (β = .41, p < .001), but not by the best or final eating experience. Analyzing changes over time did not show a significant memory-experience gap at delayed recall, but did reveal a similar influence of the worst eating experience (β = .39, p < .001). Findings indicate that, in the domain of eating, retrospective memory is mainly influenced by negative experiences. Overall, the results indicate that the snapshot model is a valid conceptualization to explain recall of both outstanding and day-to-day experiences.
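As a toy illustration of the snapshot-model variables described in the abstract, the best (positive peak), worst (negative peak), and final experiences can be extracted from a participant's time-ordered momentary ratings roughly as follows. This is a minimal sketch, not the authors' analysis code; the function name and the rating scale are illustrative assumptions.

```python
def snapshot_predictors(ratings):
    """Given a time-ordered list of momentary eating-happiness ratings
    (hypothetical 0-100 scale), return the snapshot-model predictors:
    the best (positive peak), worst (negative peak), and final experience."""
    if not ratings:
        raise ValueError("need at least one momentary rating")
    return {
        "best": max(ratings),    # positive peak
        "worst": min(ratings),   # negative peak
        "final": ratings[-1],    # last recorded experience
    }

# Example: one participant's (invented) series of momentary ratings
example = snapshot_predictors([62, 80, 45, 71, 55])
# example == {"best": 80, "worst": 45, "final": 55}
```

In the study, predictors of this kind were entered into regressions of remembered eating happiness, with only the worst experience emerging as a significant predictor.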


Tuesday, March 30, 2021

Provocative & biased perspective: The neuromodulator dopamine may be a crucial link between neural circuits performing Bayesian inference and the perceptual idiosyncrasies of people with schizophrenia

Illusions, Delusions, and Your Backwards Bayesian Brain: A Biased Visual Perspective. Born R.T., Bencomo G.M. Brain Behav Evol,  Mar 2021. https://doi.org/10.1159/000514859

Abstract: The retinal image is insufficient for determining what is “out there,” because many different real-world geometries could produce any given retinal image. Thus, the visual system must infer which external cause is most likely, given both the sensory data and prior knowledge that is either innate or learned via interactions with the environment. We will describe a general framework of “hierarchical Bayesian inference” that we and others have used to explore the role of cortico-cortical feedback in the visual system, and we will further argue that this approach to “seeing” makes our visual systems prone to perceptual errors in a variety of different ways. In this deliberately provocative and biased perspective, we argue that the neuromodulator, dopamine, may be a crucial link between neural circuits performing Bayesian inference and the perceptual idiosyncrasies of people with schizophrenia.

Keywords: Cerebral cortex, Dopamine, Neuromodulators, Schizophrenia, Sensory systems, Vision


Closing Remarks and Future Directions

This has been an admittedly biased review of several different bodies of literature, perhaps illustrating the pitfalls inherent in overly strong priors. Our aim from the start was to be provocative, and whether the ideas presented here are absolutely correct in their details is less important than the dialogue and future studies that we hope to inspire. Besides, it takes only one additional inhibitory interneuron intercalated into a circuit to completely invert the sign of a predicted effect or influence! And, as noted above, simply changing the subtype of a neuromodulator’s receptor can lead to very different effects on the same circuit.

In this spirit, we close with a few thoughts on several specific areas that we think merit deeper investigation. First, at the circuit level, the mechanisms by which top-down information interacts with local circuits remain largely unknown, exacerbated by the fact that many of these interactions take place in layer 1. While modern approaches using serial-section electron microscopy (EM) have begun to flesh out the details of local circuits [Morgan and Lichtman, 2013], layer 1 has not been amenable to traditional EM-based connectomics, because, as previously noted, the vast majority of the inputs are from distant sources. However, such distant sources might soon be identifiable in serial EM reconstructions by using recently developed methods that allow neural tracing with viral vectors carrying different genetically encoded labels that are distinguishable with EM [Cruz-Lopez et al., 2018; Zhang et al., 2019]. New, nondestructive imaging methods also promise to extend the distances over which circuits can be reconstructed at the ultrastructural level [Kuan et al., 2020].

Second, most studies on the influence of top-down information on perception and cognition have been done in humans and NHPs, where tools to study circuit mechanisms are lagging compared to those in rodent models. In the future, this border zone needs to be more thoroughly investigated, both by improving our toolkit for circuit-level manipulations in NHP [Dai et al., 2015; El-Shamayleh et al., 2016; Galvan et al., 2017] and seeking out meaningful touchpoints between studies on NHPs and rodents, in the spirit of Figure 4.

Third, the mechanisms by which neuromodulators influence specific cortical circuits are poorly understood; a myriad of cellular and synaptic effects have been described but understanding the overall effects will require sophisticated computational models [Seamans and Yang, 2004].

Fourth, how circuit-level influences of neuromodulators lead to changes in perception and behavior remains deeply mysterious. This is true, not only for dopamine, but other neuromodulators as well. Chief among those that seem ripe for investigation is serotonin (5-HT), given the powerful perceptual distortions that are produced by hallucinogenic drugs, most of which are believed to act through 5-HT2A receptors [Nichols, 2004; González-Maeso et al., 2007; Halberstadt, 2015]. The historical events [Pollan, 2019] that led to these drugs being classified as “schedule 1” made them virtually inaccessible to the scientific community for many years. Thankfully, this historical influence appears to be on the wane, and we hope that perceptual scientists will make use of this powerful set of tools for future studies on perception.

Finally, the body of literature showing a reduced susceptibility to contextual visual illusions and abnormal corollary discharge in patients with schizophrenia, while suggestive, remains difficult to interpret for a variety of reasons, including the fact that most of these patients are on a variety of psychoactive medications, are often condemned by their illness to extremely difficult socioeconomic situations, and frequently have other neuropsychiatric diagnoses. In this regard, several studies showing diminished top-down perceptual effects in the normal population that correlate with “cognitive-perceptual schizotypal traits” [Teufel et al., 2010; Bressan and Kramer, 2013] seem especially promising, particularly given the possibility of conducting large-scale psychophysical studies online, using tools such as Amazon’s “Mechanical Turk” [Rajalingham et al., 2015; de Leeuw and Motz, 2016].

There is a tremendous gap between the conceptual simplicity of Bayesian inference and our understanding of the neural mechanisms that might implement it. Even such seemingly basic questions as how neural systems represent probability remain unsettled [Beck et al., 2008; Ma and Jazayeri, 2014; Haefner et al., 2016; Walker et al., 2020]. The situation might seem hopeless. Connectomics has revealed seemingly Byzantine cortical circuitry [Bock et al., 2011] which can adopt a variety of different functional modes under the influence of multiple systems of neuromodulators [Bargmann and Marder, 2013], each having scores of effects at different levels of the circuit [Seamans and Yang, 2004; Tritsch and Sabatini, 2012]. While new experimental tools to probe circuit function are surely part of the solution, ultimately, what is most needed are synthetic computational models, i.e., models that themselves represent the consensus of an entire modeling community [Bower, 2015], which can integrate results across different levels of investigation into (hopefully) simpler explanations at the level of circuit motifs that perform canonical computations [Douglas and Martin, 2007; Kouh and Poggio, 2008; Carandini and Heeger, 2011; Miller, 2016] in the service of behavioral goals [Krakauer et al., 2017]. 

People perceive themselves to adhere more strictly to COVID-19 guidelines than others

People perceive themselves to adhere more strictly to COVID-19 guidelines than others. Andreas Mojzisch, Christian Elster & Markus Germar. Psychology, Health & Medicine, Mar 29 2021. https://doi.org/10.1080/13548506.2021.1906435

Abstract: People have a fair idea of how they are supposed to behave to slow down the spread of COVID-19. But what about people’s perception of their own compared to others’ adherence to the guidelines? Building on prior research on self-enhancement biases, we predicted that people perceive themselves to adhere more strictly to the COVID-19 guidelines than others. To test this hypothesis, we conducted a large-scale online experiment (N = 1,102), using a sample from four countries (UK, US, Germany, Sweden). As predicted, people perceived themselves to adhere to the COVID-19 guidelines more strictly than both the average citizen of their country and their close friends. These findings were robust across countries. Furthermore, findings were not moderated by whether people first thought about themselves or about others. In conclusion, our study provides a robust demonstration of how a long-standing psychological effect perseveres, even during a once-in-a-lifetime health crisis.

Keywords: COVID-19, better-than-average effect, holier-than-thou effect, self-enhancement, social comparison


Favorable socioeconomic environment in childhood appears to have a positive effect on offspring’s compassion in their middle adulthood; this effect may attenuate by middle age

Saarinen AI, Keltner D, Dobewall H, Lehtimäki T, Keltikangas-Järvinen L, Hintsanen M (2021) The relationship of socioeconomic status in childhood and adulthood with compassion: A study with a prospective 32-year follow-up. PLoS ONE 16(3): e0248226, Mar 24, 2021. https://doi.org/10.1371/journal.pone.0248226

Abstract: The objective of this study was to investigate (i) whether childhood family SES predicts offspring’s compassion between ages 20–50 years and (ii) whether adulthood SES predicts compassion or vice versa. We used the prospective population-based Young Finns data (N = 637–2300). Childhood family SES was evaluated in 1980; participants’ adulthood SES in 2001 and 2011; and compassion for others in 1997, 2001, and 2012. Compassion for others was evaluated with the Compassion scale of the Temperament and Character Inventory. The results showed that high childhood family SES (a composite score of educational level, occupational status, unemployment status, and level of income) predicted offspring’s higher compassion between ages 30–40 years but not in early adulthood or middle age. These results were obtained independently of a variety of potential confounders (disruptive behavior in childhood; parental mental disorder; frequency of parental alcohol use and alcohol intoxication). Moreover, high compassion for others in adulthood predicted higher adulthood SES (a composite score of educational level, occupational status, and unemployment status) later in life (after a 10-year follow-up), but not vice versa. In conclusion, a favorable socioeconomic environment in childhood appears to have a positive effect on offspring’s compassion in their middle adulthood. This effect may attenuate by middle age. High compassion for others seems to promote the achievement of higher SES in adulthood.


4 Discussion

This study showed that high childhood family SES predicts offspring’s higher compassion in middle adulthood (approximately at the age of 30–40 years), but not in early adulthood or middle age. Hence, living in economically advantaged circumstances seemed to have a positive influence on one’s disposition to feel compassion for others in adulthood. Moreover, we found that high compassion in adulthood predicted higher adulthood SES, but not vice versa.

The positive relationship of childhood family SES with offspring’s compassion is in line with previous literature. Previous studies suggest that high childhood family SES is strongly linked to favorable psychosocial qualities of the home environment that may promote compassion development. Specifically, high family SES is proposed to be linked to lower parental stress levels [12–14], higher maternal support for the offspring [12–14], higher quality of parent-child communication [16], and a better family climate [12]. High quality of the parent-child relationship, in turn, predicts offspring’s higher compassion in adulthood [39].

Importantly, the influence of childhood family SES on offspring’s compassion was not significant in early adulthood (approximately ages 20–25 years) or in middle age (approximately ages 45–50 years). This study did not investigate potential mechanisms between SES and compassion, but there are some potential explanations. Firstly, it may be that compassion-related qualities need to be a comparatively stable feature of one’s identity before one is able to engage in prosocial actions toward outgroup individuals and to forgive others who have behaved aggressively [40]. In order to form a stable identity, one needs to mentally work through past events in childhood and adolescence, including parenting practices and childhood family circumstances. The age of 30 years has previously been found to be a critical period for personality traits to become more stabilized [41]. We speculate that this may provide one explanation for why parental SES predicted compassion beginning at the age of 30 years. Secondly, compassion increased with age in all the SES groups. Hence, in middle age, there may not have been enough variance in compassion to obtain statistically significant differences between SES groups. Thirdly, there were fewer participants in the extreme age ranges, which likely resulted in broader confidence intervals and weaker statistical significance of the associations.

The results showed that high compassion predicts higher SES in adulthood. This may be explained by the motivational component of compassion leading to greater willingness to engage in prosocial behavior [3] that, in turn, is related to higher social connectedness [42]. Further, experiencing compassion is related to more frequent actions to promote common goals in one’s social communities [43]. These social benefits of compassion may promote compassionate individuals’ higher status in occupational environments. In addition, compassion may protect against work stress and burnout because high compassion is related to better coping with stress [44, 45] and more favorable health behavior [46].

Previous studies have suggested that high adulthood SES is related to weaker compassion-related qualities, such as a lower ability to recognize others’ emotional states [19] and less frequent altruistic and helpful behavior toward others [20, 21]. Those studies, however, did not control for childhood family SES. In this study, we took childhood family SES into account and found no association between offspring’s adulthood SES and compassion.

The current study had some methodological limitations that must be taken into consideration. In Finland, there is a comprehensive social welfare system with quite strongly progressive taxation. Further, unemployed individuals are typically provided with satisfactory unemployment benefits. Additionally, there is a 9-year comprehensive school for the whole age group, so that all citizens are likely to have basic educational knowledge. Hence, even “low SES” likely refers to satisfactory socioeconomic circumstances (i.e., having housing, food, and health care) and, conversely, there are very few individuals with extreme wealth in Finland. Consequently, our results cannot be generalized to populations with extremely low and high levels of SES, where the link between compassion and SES might be different. In addition, there may be cultural differences in the SES-compassion relationship that restrict the generalizability of our findings and that could be addressed in upcoming studies. Overall, our findings suggest that even comparatively small increases in childhood family SES (within a reasonable SES range) have a beneficial influence on offspring’s compassion in adulthood.

This study also had a variety of strengths. Firstly, to our knowledge, it was the first to investigate the relationship of childhood family SES with compassion over a long-term prospective follow-up (32 years) into adulthood. Further, we investigated the relationship between adulthood SES and compassion over an 11-year follow-up. Secondly, we could take into consideration a variety of other covariates (child’s disruptive behavior, parental mental disorder, parents’ frequency of alcohol use and intoxication). Thirdly, we used SES composite scores consisting of several SES indicators (level of income, occupational status, educational level, employment status) in order to capture the multidimensional aspects of socioeconomic circumstances. Fourthly, we had a large population-based sample with an intergenerational design and three respondents from each family (mother, father, and child). Finally, as academic-level education is provided free of charge to Finnish citizens, childhood family SES may not largely determine offspring’s SES development. Hence, the effects of one’s own characteristics (such as compassion) on later SES development can be observed more clearly in our Finnish sample than in some other countries.

Commonly, unemployment or other socioeconomic troubles are addressed through public employment services such as vocational training [47]. There is evidence, however, that compassion may have favorable influences on socioeconomic status: for example, compassionate practices in the workplace have been found to predict higher work engagement and to protect against burnout in stressful circumstances over a 6-month follow-up [48]. Our study showed that compassion is related to higher socioeconomic status over an 11-year follow-up. Further, there is evidence that compassion may be enhanced even with a few-week-long compassion intervention [49], including practices to, e.g., increase tolerance of others’ suffering and to shift attention from self-monitoring to recognizing others’ emotional states [50]. Finally, there is evidence that women coming from low-SES childhood families may be unwilling to contact health-care professionals and may need to be contacted as many as ten times in order to get them to participate in psychotherapy [51]. Our study suggests that individuals with low childhood SES may have a lower level of compassion that, in turn, may potentially be manifested as distrust toward health-care professionals. Consequently, individuals coming from socioeconomically harsh environments could be treated with particular warmth and trust, as has also been suggested previously [52].

Decreases in desires for a relationship are significantly associated with greater life satisfaction; the results are used to suggest how many singles may be able to maintain high levels of life satisfaction in the face of social stigmata

Reduced relationship desire is associated with better life satisfaction for singles in Germany: An analysis of pairfam data. Elyakim Kislev. Journal of Social and Personal Relationships, March 30, 2021. https://doi.org/10.1177/02654075211005024

Abstract: This research estimates the extent to which life satisfaction of singles is influenced by their desire to be single. Regression analyses on data from the Panel Analysis of Intimate Relationships and Family Dynamics (pairfam) studies are used to investigate this question, paying particular attention to longitudinal differences between never-married and divorced/separated men and women. Panel data analyses between different waves of the pairfam data indicate that decreases in desires for a relationship are significantly associated with greater life satisfaction. These patterns hold for all but one of the demographic groups investigated (divorced/separated men). The results are used to suggest how many singles may be able to maintain high levels of life satisfaction in the face of social stigmata.

Keywords: Divorce, life satisfaction, marriage, singlehood