Friday, July 29, 2022

Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be

Does hazing actually increase group solidarity? Re-examining a classic theory with a modern fraternity. Aldo Cimino, Benjamin J. Thomas. Evolution and Human Behavior, July 29, 2022. https://doi.org/10.1016/j.evolhumbehav.2022.07.001

Abstract: Anthropologists and other social scientists have long suggested that severe initiations (hazing) increase group solidarity. Because hazing groups tend to be highly secretive, direct and on-site tests of this hypothesis in the real world are nearly non-existent. Using an American social fraternity, we report a longitudinal test of the relationship between hazing severity and group solidarity. We tracked six sets of fraternity inductees as they underwent the fraternity's months-long induction process. Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be.


Keywords: Hazing, Newcomers, Rites of passage, Fraternities


Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge

Ward, Adrian F. and Zheng, Frank and Broniarczyk, Susan M., I Share, Therefore I Know? Sharing Online Content — Even Without Reading It — Inflates Subjective Knowledge (June 9, 2022). SSRN: http://dx.doi.org/10.2139/ssrn.4132814

Abstract: Billions of people across the globe use social media to acquire and share information. A large and growing body of research examines how consuming online content affects what people know. The present research investigates a complementary, yet previously unstudied question: how might sharing online content affect what people think they know? We posit that sharing may inflate subjective knowledge through a process of internalized social behavior. Sharing signals expertise; thus, sharers can avoid conflict between their public and private personas by coming to believe that they are as knowledgeable as their posts make them appear. We examine this possibility in the context of “sharing without reading,” a phenomenon that allows us to isolate the effect of sharing on subjective knowledge from any influence of reading or objective knowledge. Six studies provide correlational (study 1) and causal (studies 2, 2a) evidence that sharing—even without reading—increases subjective knowledge, and test the internalization mechanism by varying the degree to which sharing publicly commits the sharer to an expert identity (studies 3-5). A seventh study investigates potential consequences of sharing-inflated subjective knowledge on downstream behavior.

Keywords: subjective knowledge, word of mouth, social media, self-perception


Introduction of Sharia law in northern Nigeria: Decreases in infant mortality through increased vaccination rates, duration of breastfeeding and prenatal health care; there were also increases in primary school enrollment

Islamic Law and Investments in Children: Evidence from the Sharia Introduction in Nigeria. Marco Alfano. Journal of Health Economics, July 21 2022, 102660. https://doi.org/10.1016/j.jhealeco.2022.102660

Abstract: Islamic law lays down detailed rules regulating children’s upbringing. This study examines the effect of such rules on investments in children by analysing the introduction of Sharia law in northern Nigeria. Triple-differences estimates using temporal, geographical and religious variation together with large, representative survey data show decreases in infant mortality. Official government statistics further confirm improvements in survival. Findings also show that Sharia increased vaccination rates, duration of breastfeeding and prenatal health care. Evidence suggests that Sharia improved survival by specifying strict child protection laws and by formalising children’s duty to maintain their parents in old age or in sickness.


JEL: O15 J12 J13

Keywords: Breastfeeding, Infant Survival, Islam, Nigeria

5.3 Primary school enrolment
Panel C of table 2 reports the results pertaining to primary school enrolment. I use information contained in the household questionnaire to merge children to their mothers and select children born 1989 to 1998 (aged between 4 and 13 at the time of interview). In Nigeria, the school year starts in September. Accordingly, I redefine the year of birth and recode children born after September as being born in the following year. The sample consists of 6,125 children, who enrolled between the (school) years 1993/94 and 2002/03.
To calculate the age at which each child started school, I combine information on the years of education a child completed with his or her age at interview. Only 4% of children aged 6 to 24 repeat a year of school and less than 0.1% of children in the same age bracket drop out (DHS Final Report, 2003). Since their school starting age cannot be precisely calculated, I omit these individuals from the analysis. In Nigeria, children should enrol in school at the age of 6. For the whole country in 2003, school enrolment was relatively low: 46% of girls and 41% of boys aged 6 to 9 had never attended school (DHS Final Report, 2003).
Despite official regulations, children in Nigeria enrol in school at a variety of ages. To illustrate this phenomenon, I select children in school born between 1989 and 1994 (i.e. children who were due to start school before the introduction of the Sharia) and plot the distribution of the ages at which they started school in figure 5. The solid graph relates to children residing in Sharia states, the dashed to children in the rest of the country. In both samples, less than a quarter of children who enrol in school do so at the age of six. Almost 40% start school before that age and around a third begin school aged 7 or older. To account for this variation in school starting age, together with the legal requirement to start school at the age of six, I define the dependent variable as taking the value 1 if a child entered school between the ages of 4 and 6. Among children due to enter school before the introduction of the Sharia, 43% entered school between 4 and 6 years old.
The difference-in-differences estimates in panel C of table 2 indicate that in states that introduced the Sharia, the probability of school enrolment (aged 6 or younger) increased after the Sharia by 8 to 10 percentage points. As before, the effect is robust to various specifications (columns 1 to 3). In contrast, the probability of school enrolment before the age of 6 hardly changed in the rest of the country after the introduction of the Sharia. The triple-differences estimates in column 5 suggest that the Sharia increased the probability of children enrolling in school between the ages of 4 and 6 by around 15 percentage points. For the partitioned ethnicities sample, the parameter estimate is slightly larger, at 22 percentage points.
Finally, I use information on the exact year of birth of children (as always adjusted for the September cut-off) to investigate whether changes in school enrolment occurred for children due to enter in the school year 2000/01. As before, I estimate the event study framework outlined in equation 6. The baseline sample in this case consists of children born in the school year 1989/90, i.e. children due to start school between 1993/94 and 1995/96, depending on whether they started school aged 4, 5 or 6. The results in panel a of figure 6 report the estimates for states that introduced the Sharia. For this sample, conditional differences between Muslims and Christians for children due to enter school before the introduction of the Sharia are similar to the base year. The estimates for γθ fluctuate around 0 and are not statistically significant. By contrast, for children due to enter school after the school year 2000/01, the point estimates increase in size and become statistically significant. Panel b shows that for the remainder of the country, the conditional differences between Muslims and Christians remain similar to the baseline year throughout the time period under consideration.
Columns 3 and 4 of table 3 show that the impact of the Sharia on primary school enrolment was slightly larger for girls than for boys. The parameter estimate for boys is around 12 percentage points (column 3); the corresponding figure for girls is around 22 percentage points (column 4). A possible explanation for this heterogeneity is connected with the pretreatment means reported towards the top of table 3. For children due to enter school before the introduction of the Sharia, the proportion of boys entering school aged 4 to 6 was slightly higher than that of girls (0.46 for the former and 0.39 for the latter). The Sharia explicitly states that young boys and girls should be treated equally. Parents following these rules should enrol boys and girls at the same rates. Combined with pre-existing disadvantages for girls, this change in behaviour would lead to a stronger effect for girls than for boys.
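The triple-differences logic used here (Muslim-vs-Christian trends inside Sharia states, netted against the same contrast in the rest of the country) can be sketched in a few lines. The cell means below are made up for illustration only; they are not figures from the paper, though they are chosen to reproduce an estimate of about 15 percentage points:

```python
def triple_difference(m):
    """Triple-differences estimate from 8 cell means, keyed by
    (sharia_state, muslim, post): each value is the share of children
    entering school aged 4-6 in that cell."""
    def did(sharia, muslim):
        # post-minus-pre change within one state-religion cell
        return m[(sharia, muslim, 1)] - m[(sharia, muslim, 0)]
    dd_sharia = did(1, 1) - did(1, 0)  # Muslim vs Christian trend, Sharia states
    dd_rest = did(0, 1) - did(0, 0)    # same contrast, rest of the country
    return dd_sharia - dd_rest

# Hypothetical cell means (illustrative only)
means = {
    (1, 1, 0): 0.40, (1, 1, 1): 0.57,  # Sharia state, Muslim: pre, post
    (1, 0, 0): 0.45, (1, 0, 1): 0.47,  # Sharia state, Christian
    (0, 1, 0): 0.42, (0, 1, 1): 0.44,  # other state, Muslim
    (0, 0, 0): 0.46, (0, 0, 1): 0.48,  # other state, Christian
}
print(round(triple_difference(means), 2))  # 0.15 with these made-up means
```

In the paper this contrast is of course estimated in a regression with controls rather than from raw cell means, but the identifying comparison is the same.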

Instead of religious skepticism and a related increase in progressivism...: UFO sightings promote a more conservative worldview

Kitamura, Shuhei. 2022. “UFOs: The Political Economy of Unidentified Threats.” OSF Preprints. July 29. doi:10.31219/osf.io/tme8f

Abstract: In this paper, I study the effect of Unidentified Flying Objects (UFOs) on political outcomes in the United States. Exploiting a random variation in the visibility of UFOs in the sky, I find that UFO sightings before general elections between 2000 and 2016 increased the vote share of the Republican presidential candidates. I also find that UFO sightings led voters to believe that the government should increase federal spending on military defense and on technology and science, although the latter effect was marginal. The results indicate that voters regard UFOs as unidentified threats to national security that warrant further defense enhancements and scientific research.


Political candidates: More differentiation between positive than negative options; after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity

Is good more alike than bad? Positive-negative asymmetry in the differentiation between options. A study on the evaluation of fictitious political profiles. Magdalena Jablonska, Andrzej Falkowski and Robert Mackiewicz. Front. Psychol., July 28 2022. https://doi.org/10.3389/fpsyg.2022.923027


Abstract: Our research focuses on the perception of difference in the evaluations of positive and negative options. The literature provides evidence for two opposite effects: on the one hand, negative objects are said to be more differentiated (e.g., density hypothesis), on the other, people are shown to see greater differences between positive options (e.g., liking-breeds-differentiation principle). In our study, we investigated the perception of difference between fictitious political candidates, hypothesizing greater differences among the evaluations of favorable candidates. Additionally, we analyzed how positive and negative information affect candidate evaluation, predicting further asymmetries. In three experiments, participants evaluated various candidate profiles presented in a numeric and narrative manner. The evaluation tasks were designed as individual or joint assessments. In all three studies, we found more differentiation between positive than negative options. Our research suggests that after exceeding a certain, relatively small level of negativity, people do not see any further increase in negativity. The increase in positivity, on the other hand, is more gradual, with greater differentiation among positive options. Our findings are discussed in light of cognitive-experiential self-theory and density hypothesis.


General discussion

In our research we analyzed the perceived differences among sets of favorable and unfavorable options. More specifically, the aim of our studies was to investigate how people see the difference between good and bad political candidates. Certainly, they would vote for the good ones and not for the bad, but how do they compare a good candidate to a better one, and a bad one to a worse? We looked for answers to these questions in three experiments. In Study 1, participants compared the similarity of fictitious candidates to the best possible candidate or the worst possible one. We did not provide descriptions of the best and the worst possible candidates and instead asked the participants to imagine such political figures. On the basis of some preliminary research, we chose some positive and some negative features and used them to prepare descriptions of five different candidates: the very bad, the bad, the neutral, the good and the very good one. We presented their descriptions in the form of scales with negative and positive anchors. We used the same five descriptions and the same form of presentation in Study 2. This time, however, the participants not only assessed candidates' similarities to the best and the worst possible politicians but also estimated the probability of voting for and likeability of the candidates, and were asked to compare two profiles and decide how similar they were. We slightly changed the design in Study 3, in which we used narrative descriptions of the candidates. We conducted our research in the political setting because candidate evaluation and selection is a process that many people at least occasionally undertake and which has important social, political and economic implications.

Our focus was on the differences between the evaluations of positive and negative candidates. The literature on differentiation provides evidence for two contradictory effects. On the one hand, negative information has been found to have more complex conceptual representations and lead to a wider response repertoire (Rozin and Royzman, 2001). Linguistic research and studies using spatial arrangement methods have also shown negative categories to be more diverse, with more words used to describe negative events and states (Rozin et al., 2010). Likewise, the proponents of density hypothesis (Unkelbach et al., 2008a) found that positive entities are more related (and thus denser) compared to their negative counterparts. On the other hand, the literature provides convincing evidence for an opposite effect, that is, better differentiation between positive entities. For instance, Denrell (2005) found that people have more knowledge and more differentiated representations of liked than disliked social stimuli. In a similar vein, Smallman and others (Smallman et al., 2014; Smallman and Becker, 2017) have shown that people make finer evaluative distinctions when rating appealing than unappealing options.

Following this line of research, we assume better differentiation between positive rather than negative options to be the norm, especially when making evaluations of social objects or deciding which option to select. Thus, in our research we predicted that participants would be more likely to see the difference between favorable than between unfavorable candidates. In our setting, that should result in different evaluations of the good and the best candidates, while the evaluations of the bad and the worst ones should not differ (Hypothesis 1). We also predicted that additional information about a candidate would be more likely to change the candidate's image if the valence of the extra information is opposite to the current image. That is, if a candidate is already favorable, new positive information might help him or her only to some degree, while negative information would significantly harm his or her image. Conversely, when a candidate is presented in a negative manner, a new piece of negative information would not hurt him or her much, whereas an additional piece of positive information might be quite beneficial for the candidate's image (Hypothesis 2). Finally, drawing on these two predictions (better differentiation of positive options and an asymmetrical effect of additional positive and negative features), we formulated a hypothesis that joined them together, assuming that additional positive information would improve the evaluation of an already good candidate, whereas additional negative information would not harm a bad candidate profile (Hypothesis 3).

The results supported our hypotheses. In Study 1 and Study 2 we found no differences in the evaluations of negative candidates: a candidate with an overall score of –24 and a candidate with an overall score of –48 (the numbers refer to the balance of evaluations on six different dimensions) were perceived as equally bad. Still, the participants perceived candidates with overall scores of +24 and +48 as significantly different. The effect was replicated in Study 3, in which candidates were described in a narrative form. This result supports our Hypothesis 1. Importantly, whereas the results of Studies 1 and 3 provided only an indirect test of the hypothesized effect, Study 2 gave a direct test, as the participants saw both profiles together and were asked to assess their perceived similarity.

Our second research interest was to test how additional positive and negative pieces of information change candidate perception depending on candidate valence. As expected, positive features increased candidate evaluation, whereas negative ones decreased it, but these effects were not symmetrical, undermining the normative predictions of, for instance, the contrast model of similarity. This confirms our Hypothesis 2. Furthermore, we obtained mixed support for Hypothesis 3. The results of Study 1 and Study 2 showed that whereas adding negative features to a candidate's profile did not change his or her evaluation when that profile was already negative, additional positive features strengthened the image of an unfavorable candidate. However, we did not observe any effect of additional positive features in the evaluations of candidates whose images were presented in a narrative form in Study 3. One possible explanation is that the two additional positive characteristics carried less information (i.e., were less diagnostic) than their negative counterparts.

Overall, our findings suggest that people do not see much of a difference between political candidates with many negative features, regardless of the extent to which they are presented as bad. It seems that, at least in the political domain, once an overall evaluation falls below some standard, people do not differentiate between bad options. The effect may be attributed to different motivations in the processing of positive and negative options. If all available alternatives are unappealing, it does not really matter which of them is worse. After all, they all seem equally bad, and indeed, why would anyone support a bad candidate? This was the case for assessing similarity to an ideal or a bad politician (Studies 1, 2, and 3) as well as for liking and voting intention (Studies 2 and 3). Thus, regardless of their initial expectations, people would not vote for a politician whose features fall below a certain standard. One possible explanation for this effect is that they would not be able to justify their decision (Shafir et al., 1993).

Importantly, even the standards of “good” and “bad” are not symmetrical: it is relatively easy to be deemed inadequate for the post but rather difficult to be perceived as a good candidate. The effect was especially visible in Studies 1 and 2, where there was a dramatic drop in the evaluation of unfavorable candidates, with extremely low, bottom values for candidates' similarity to an ideal politician and very high similarity to a bad politician. This extremity effect can partially account for the lack of differentiation between negative options. Still, the absence of differences between unfavorable candidate profiles, as predicted in Hypothesis 1, was also found in Study 3, where candidates were presented in a narrative manner and where evaluations were less extreme. Overall, the results of the three studies follow our Hypothesis 1, in which we predicted that the evaluations of negative candidates should not differ significantly. However, if the judgment pertains to attractive options, then the decision as to which of them is better gains importance. As visible in our studies, there were significant differences between favorable candidates. Importantly, no ceiling effect was observed. Thus, the bottom effects observed for negative candidate profiles were not paralleled by a symmetrical ceiling effect for positive candidates, suggesting that the participants differentiated their answers when they thought such differentiation was appropriate, providing evidence for better differentiation between positive options.

The results may be explained with reference to the two independent information processing systems proposed by Epstein in his cognitive-experiential self-theory (Epstein, 1990; Kirkpatrick and Epstein, 1992). The evolutionarily older experiential system operates in an automatic and holistic manner, whereas the rational system is “a deliberative, verbally mediated, primarily conscious analytical system that functions by a person's understanding of conventionally established rules of logic and evidence” (Denes-Raj and Epstein, 1994, p. 819). It seems that whereas an intense dislike toward negative options is an outcome of the experiential system, a better and more discriminative analysis of positive options is governed by the rational system. The finding can also be interpreted through the distinction between sufficient and necessary conditions, where a necessary condition is one that must be present for an event to occur but does not guarantee it, while a sufficient condition is one that will produce the event. Thus, it seems that the list of necessary conditions for being deemed inadequate for the post is much shorter than the one for being an ideal politician. Consequently, the standards for what it means to be good and bad are not symmetrical.

Our findings have important implications for density hypothesis (Unkelbach et al., 2008a; Alves et al., 2016), according to which the distribution range of positivity is much narrower than the range of negativity. It seems reasonable to assume that the positive spectrum is narrower than the negative one and, as shown in many empirical studies on density hypothesis, that the inner structure of positive information is denser than the structure of negative entities. Still, in our opinion this does not imply better differentiation between negative options. As our studies suggest, the structure of positive categories may be denser, but this density is accompanied by (or may even be a reason for) better discrimination between favorable options. After all, having rejected all negative alternatives, people put in much effort to decide which of the remaining options is the best or at least acceptable, although the extent of this effort is moderated by decision importance and individual differences (e.g., the distinction between maximizers and satisficers; Schwartz et al., 2002). Thus, if the structure of positive entities is denser, it is likely that people use finer combs to disentangle it.

We are aware of some important drawbacks of our study. First, we did not investigate how people evaluate real candidates and, consequently, we did not take into account the importance of political views or the attachments that some voters feel to different political parties. This research direction should be taken up by other scholars. For instance, it would be interesting to analyze how well people differentiate between candidates from their own party compared to members of the opposing party. Furthermore, the way we constructed our candidate profiles may pose certain limitations on the ecological validity of the study. Although the use of such profiles was justified by our intention to have maximal control over the analyzed stimuli, further studies should investigate more complex stimuli. It would also be interesting to analyze how well people differentiate between options depending on the modality in which they are presented. For instance, in our studies we found that numerical candidate profiles were evaluated more extremely than candidates presented descriptively. Thus, presentation modality, as well as the range of the positive and negative spectrum, are further areas of research. Overall, our research provides valuable insight into positive-negative asymmetry with regard to a less-explored area: the differentiation between positive and negative options in the political setting. Contrary to findings on better differentiation between negative options, we find evidence for the opposite effect, showing that the evaluations of favorable objects are actually more nuanced.

Thursday, July 28, 2022

Over the past 14 years, Americans have become less explicitly and implicitly biased against people of different races, skin tones, or sexual preferences

Patterns of Implicit and Explicit Attitudes: IV. Change and Stability From 2007 to 2020. Tessa E. S. Charlesworth, Mahzarin R. Banaji. Psychological Science, July 27, 2022. https://doi.org/10.1177/09567976221084257

Abstract: Using more than 7.1 million implicit and explicit attitude tests drawn from U.S. participants to the Project Implicit website, we examined long-term trends across 14 years (2007–2020). Despite tumultuous sociopolitical events, trends from 2017 to 2020 persisted largely as forecasted from past data (2007–2016). Since 2007, all explicit attitudes decreased in bias between 22% (age attitudes) and 98% (race attitudes). Implicit sexuality, race, and skin-tone attitudes also continued to decrease in bias, by 65%, 26%, and 25%, respectively. Implicit age, disability, and body-weight attitudes, however, continued to show little to no long-term change. Patterns of change and stability were generally consistent across demographic groups (e.g., men and women), indicating widespread, macrolevel change. Ultimately, the data magnify evidence that (some) implicit attitudes reveal persistent, long-term change toward neutrality. The data also newly reveal the potential for short-term influence from sociopolitical events that temporarily disrupt progress toward neutrality, although attitudes eventually return to long-term homeostasis in trends.

Keywords: implicit attitude change, explicit attitude change, Implicit Association Test (IAT), long-term change, time-series analysis, autoregressive-integrated-moving-average (ARIMA) model, open data, open materials, preregistered.

Small effect, yet significant: The intergenerational transmission of sexual frequency

The intergenerational transmission of sexual frequency. Scott T. Yabiku & Lauren Newmyer. Biodemography and Social Biology, Jul 27 2022. https://www.tandfonline.com/doi/abs/10.1080/19485565.2022.2104691

Abstract: Intergenerational relationships are one of the most frequently studied topics in the social sciences. Within the area of family, researchers find intergenerational similarity in family behaviors such as marriage, divorce, and fertility. Yet less research has examined the intergenerational aspects of a key proximate determinant of fertility: sexual frequency. We use the National Survey of Families and Households to examine the relationship between sexual frequency of parents and the sexual frequency of children when adults. We link parental sexual frequency in 1987/1988, when children were ages 5–18, to the sexual frequency of the children in 2001–2003 when these grown children were ages 18–34. We find a modest, yet significant association, between parental and adult children sexual frequency. A mechanism behind this association appears to be the higher likelihood of being in a union among children of parents with high sexual frequency.


Wednesday, July 27, 2022

The impact of time spent playing video games on well-being is probably too small to be subjectively noticeable and not credibly different from zero

Time spent playing video games is unlikely to impact well-being. Matti Vuorre et al. Royal Society Open Science. July 27 2022. https://doi.org/10.1098/rsos.220411

Abstract: Video games are a massively popular form of entertainment, socializing, cooperation and competition. Games' ubiquity fuels fears that they cause poor mental health, and major health bodies and national governments have made far-reaching policy decisions to address games’ potential risks, despite lacking adequate supporting data. The concern–evidence mismatch underscores that we know too little about games' impacts on well-being. We addressed this disconnect by linking six weeks of 38 935 players’ objective game-behaviour data, provided by seven global game publishers, with three waves of their self-reported well-being that we collected. We found little to no evidence for a causal connection between game play and well-being. However, results suggested that motivations play a role in players' well-being. For good or ill, the average effects of time spent playing video games on players’ well-being are probably very small, and further industry data are required to determine potential risks and supportive factors to health.

3.1. Effects between play and well-being over time

We then focused on our first research objective: determining the extent to which game play affects well-being. Scatterplots describing the associations between (lagged) hours played and well-being are shown in figure 3. The meta-analysis of play time and affect indicated that, on average, video game play had little to no effect on affect, with 68% posterior probability of a positive effect (figure 4, top left). The 95% most likely effect sizes of a one-hour daily increase in play on the 13-point SPANE scale ([−0.09, 0.16]) indicated that the effect was not credibly different from zero: the magnitude and associated uncertainty of this effect suggests that there is little to no practical causal connection (given our assumptions described above) between game play in the preceding two weeks and current affect.

4. Discussion

Evidence about video games' potential impacts so far has suffered from several limitations, most notably inaccurate measurement and a lack of explicit, testable causal models. We aimed to remedy these shortcomings by pairing objective behavioural data with self-reports of psychological states. Across six weeks, seven games and 38 935 players, our results suggest that the most pronounced hopes and fears surrounding video games may be unfounded: time spent playing video games had limited if any impact on well-being. Similarly, well-being had little to no effect on time spent playing.

We conclude that the effects of playing are negligible because they are very unlikely to be large enough to be subjectively noticed. Anvari & Lakens [55] demonstrated that the smallest perceptible difference on PANAS, a scale similar to SPANE, was 0.20 (2%) on a 5-point Likert scale. In our study, a 1 h day⁻¹ increase in play resulted in a 0.03 unit increase in well-being: assuming linearity and equidistant response categories, the average player would have to play 10 h more per day than typical to notice changes (i.e. 2% [0.26 units]) in well-being. Moreover, our model indicated 99% probability that the effect of increasing daily play time by one hour on well-being is too small to be subjectively noticeable. Even if effects steadily accumulated over time—an unrealistic assumption—players would notice a difference only after 17 weeks.
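The arithmetic behind this noticeability argument can be checked directly; the inputs below are the figures quoted above (the 13-point SPANE range, the 2% perceptibility threshold, and the 0.03-unit-per-hour estimate), and the computation is our own back-of-envelope restatement:

```python
# Back-of-envelope check of the noticeability argument (illustrative only)
span_points = 13         # SPANE response range used in the study
noticeable_frac = 0.02   # smallest perceptible change, ~2% of the scale
effect_per_hour = 0.03   # estimated well-being change per extra daily hour of play

threshold = noticeable_frac * span_points         # 0.26 scale units
extra_hours_needed = threshold / effect_per_hour  # ~8.7 extra hours per day
print(round(threshold, 2), round(extra_hours_needed, 1))  # 0.26 8.7
```

The exact quotient is about 8.7 hours, which the authors round to roughly 10 extra hours of daily play before the predicted change in well-being would cross the perceptibility threshold.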

We also studied the roles of motivational experiences during play. Conceptually replicating previous cross-sectional findings [21], our results suggested that intrinsic motivation positively, and extrinsic motivation negatively, affects well-being. Motivations' suggested effects were larger than those of play time, and we can be more confident in them. However, the effect of a 1-point deviation from a player's typical intrinsic motivation on affect did not reach the threshold of being subjectively noticeable (0.10 estimate versus 0.26 threshold). Similarly, we cannot be certain that a 1-point increase is a large or a small shift: participants' average range on the 7-point intrinsic motivation scale was 0.36. Until future work determines what constitutes an adequate ‘treatment’, these conclusions remain open to future investigation and interpretation. Our findings, therefore, suggest that amount of play does not, on balance, undermine well-being. Instead, our results align with the perspective that motivational experiences during play may influence well-being [23]. Simply put, the subjective qualities of play may be more important than its quantity. The extent to which this effect generalizes or is practically significant remains an open question.

4.1. Limitations

Although we studied the play and well-being of thousands of people across diverse games, our study barely scratched the surface of video game play more broadly. Hundreds of millions of players play tens of thousands of games on online platforms. We were only able to study seven games, and thus the generalizability of our findings is limited [29]. To truly understand why people play and to what effect, we need to study a broader variety of games, genres and players. Moreover, we analysed total game time, which is the broadest possible measure of play. Although it is necessary to begin at a broad level [12,56], future work must account for the situations, motivations and contexts in which people play [57]. Additionally, play time is a skewed variable because a minority of players spend a great amount of time playing. This means that the Gaussian assumptions of the RICLPM might be threatened, and future simulation work should investigate how the RICLPM deals with skewed data. We also emphasize that our conclusions regarding the causal nature of the observed associations are tentative: without theoretical and empirical identification of confounds, our and future studies will probably produce biased estimates. Finally, industry-provided behavioural data have their own measurement error and there are differences between publishers. Independent researchers must continue working with industry to better understand behavioural data and their limitations.

Is the role of sleep in memory consolidation overrated? It seems so.

Is the role of sleep in memory consolidation overrated? Mohammad Dastgheib et al. Neuroscience & Biobehavioral Reviews, July 26 2022, 104799. https://doi.org/10.1016/j.neubiorev.2022.104799

Highlights

• Evidence of sleep-independent memory consolidation is reviewed

• Plasticity mechanisms are active during sleep and wakefulness

• Quiet waking is particularly conducive to plasticity induction and consolidation

• Sleep is one among several behavioral states that allow for effective memory formation


Abstract: Substantial empirical evidence suggests that sleep benefits the consolidation and reorganization of learned information. Consequently, the concept of “sleep-dependent memory consolidation” is now widely accepted by the scientific community, in addition to influencing public perceptions regarding the functions of sleep. There are, however, numerous studies that have presented findings inconsistent with the sleep-memory hypothesis. Here, we challenge the notion of “sleep-dependency” by summarizing evidence for effective memory consolidation independent of sleep. Plasticity mechanisms thought to mediate or facilitate consolidation during sleep (e.g., neuronal replay, reactivation, slow oscillations, neurochemical milieu) also operate during non-sleep states, particularly quiet wakefulness, thus allowing for the stabilization of new memories. We propose that it is not sleep per se, but the engagement of plasticity mechanisms, active during both sleep and (at least some) waking states, that constitutes the critical factor determining memory formation. Thus, rather than playing a "critical" role, sleep falls along a continuum of behavioral states that vary in their effectiveness to support memory consolidation at the neural and behavioral level.


Keywords: SleepMemory consolidationElectroencephalogram (EEG)ReactivationReplayWakefulnessSynaptic plasticity


Tuesday, July 26, 2022

People discriminate against each other more for their political leanings than for other facets of their identity

Separated by Politics? Disentangling the Dimensions of Discrimination. Alexander G. Theodoridis, Stephen N. Goggin & Maggie Deichert. Political Behavior, Jul 23 2022. https://rd.springer.com/article/10.1007/s11109-022-09809-y

Abstract: How rampant is political discrimination in the United States, and how does it compare to other sources of bias in apolitical interactions? We employ a conjoint experiment to juxtapose the discriminatory effects of salient social categories across a range of contexts. The conjoint framework enables identification of social groups’ distinct causal effects, ceteris paribus, and minimizes ‘cheap talk,’ social desirability bias, and spurious conclusions from statistical discrimination. We find pronounced discrimination along the lines of party and ideology, as well as politicized identities such as religion and sexual orientation. We also find desire for homophily along more dimensions, as well as specific out-group negativity. We also find important differences between Democrats and Republicans, with discrimination by partisans often focusing on other groups with political relevance of their own. Perhaps most striking, though, is how much discrimination emerges along political lines – both partisan and ideological. Yet, counter-stereotypic ideological labels can counter, and even erase, the discriminatory consequences of party.


Have beliefs in conspiracy theories increased over time? It seems not.

Have beliefs in conspiracy theories increased over time? Joseph Uscinski et al. PLoS ONE, July 20, 2022. https://doi.org/10.1371/journal.pone.0270429

Abstract: The public is convinced that beliefs in conspiracy theories are increasing, and many scholars, journalists, and policymakers agree. Given the associations between conspiracy theories and many non-normative tendencies, lawmakers have called for policies to address these increases. However, little evidence has been provided to demonstrate that beliefs in conspiracy theories have, in fact, increased over time. We address this evidentiary gap. Study 1 investigates change in the proportion of Americans believing 46 conspiracy theories; our observations in some instances span half a century. Study 2 examines change in the proportion of individuals across six European countries believing six conspiracy theories. Study 3 traces beliefs about which groups are conspiring against “us,” while Study 4 tracks generalized conspiracy thinking in the U.S. from 2012 to 2021. In no instance do we observe systematic evidence for an increase in conspiracism, however operationalized. We discuss the theoretical and policy implications of our findings.

Conclusion

Numerous cross-sectional polls show that large numbers of people believe conspiracy theories, and online conspiracy theory content is plentiful. Perhaps because of this, many scholars, journalists, and policymakers are concerned that conspiracism is increasing. However, little systematic evidence demonstrating such increases has been produced. As one journalist at Vox put it, “there’s no hard evidence that conspiracy theories are circulating more widely today than ever before. But…it has certainly seemed like average Americans have bought into them more and more” [40].

The lack of systematic evidence owes to the fact that conspiracy theories became the subject of a sustained research program only around 2010. Regardless, claims about increases in conspiracy theory beliefs must be both testable and falsifiable if they are to be taken seriously. Minimally, hypothesized increases should be detectable using standard methods (such as, but not limited to, polling). If such hypotheses cannot be substantiated with supportive evidence, they should be appropriately qualified, refined to match the available evidence, or abandoned.

Across four studies––including four distinct operationalizations of conspiracism, temporal comparisons spanning between seven months and 55 years, and tens of thousands of observations from seven nations––we find only scant evidence that conspiracism, however operationalized, has increased. Although beliefs in 13 out of 52 conspiracy theories significantly increased over time (including those in both Study 1 and Study 2), these increases do not constitute sufficient evidence against the null hypothesis. In fact, we identified more decreases than increases, and the decreases were larger in magnitude than the increases. That only a quarter of the conspiracy theories we examined found more support over time––none of which involve the COVID-19 pandemic or QAnon––contradicts common wisdom.

The baseline levels of conspiracism we observe are concerning and social scientists should continue efforts at correcting them [e.g., 41]. By the same token, our finding that conspiracy theory beliefs are generally not increasing has implications for public discourse. Claims that beliefs in conspiracy theories are on the rise suggest that a new factor is to blame, or that a meaningful change in an old factor has occurred. In this vein, social media has––perhaps erroneously––taken much of the blame for supposed increases in conspiracy theory beliefs [CBS 7], which has implications for policies regarding content moderation and access.

However, we do not observe supporting evidence that beliefs in conspiracy theories or generalized conspiracy thinking have increased during the Internet/social media era. Instead, our findings comport with arguments that the Internet may be less hospitable to conspiracy theories than is often assumed [42]. Our findings also comport with studies demonstrating that online conspiracy theories, “infodemics,” and echo chambers may not be as widespread [43–45] or influential as sometimes claimed [46], and are reflective of studies arguing that people are not engaging with or sharing conspiracy theories online as much as sometimes assumed [47–49]. Finally, the patterns we observe align with a broad literature on conspiracy theory beliefs showing that people are unlikely to believe a conspiracy theory unless they are both 1) already disposed to believe conspiracy theories generally, and 2) inclined towards the content of that particular conspiracy theory or the source from which it emanates [39,50,51]. In other words, online conspiracy theories might not so much persuade as reinforce existing views; our findings are more congruent with the latter process than the former.

That said, our investigation is not without limitations. We are limited to the conspiracy theories polled on previously, and we cannot make claims about conspiracy theories we did not investigate. Still, we expect that we are more likely to observe growth in the types of ideas that researchers thought worthwhile to ask the public about than those they chose to ignore. We acknowledge that the many claims about increases in conspiracism are often vague and could mean numerous things. We have therefore tested several operationalizations of conspiracism in our four studies, but future research should continue testing for increases in other ways as well. We further acknowledge that no single study can poll in all political contexts. Some beliefs not included in Study 2 could be increasing in the six European countries polled; moreover, conspiracy theory beliefs could be increasing in some countries not accounted for here. We note that most polling of conspiracy theory beliefs has taken place in the U.S. during the last decade––efforts to comprehensively measure conspiracy theory beliefs with national polls across the globe are only slowly emerging [e.g., 52]. More work outside the U.S. is needed to test our central hypothesis more comprehensively.

We urge caution in making sweeping inferences from our findings. Our study should not be used to make claims about, or to excuse the behavior of, political elites who weaponize conspiracy theories. Moreover, trends in the coverage of conspiracy theories by news outlets, or in the rhetorical use of conspiracy theories by political elites, fall outside the purview of our investigation, as does the use of conspiracy theories by fake news purveyors, though we recommend that researchers continue to consider these topics.

Questions regarding the growth in conspiracy theory beliefs are important, with far-reaching normative and empirical implications for our understanding of political culture, free speech, Internet regulation, and radicalization. That we observe little supportive evidence for such growth, however operationalized, should give scholars, journalists, and policymakers pause. This is not to dismiss the availability of conspiracy theories online, the large numbers of people who believe in some conspiracy theories, or the potential consequences of those beliefs; nor is it to preclude the possibility of increases in the future, in ways not tested here, or in other socio-political contexts. It may be that conspiracy theories have been a constant, but that scholars, policymakers, and journalists are only recently beginning to pay appropriate attention to them. Thus, our findings offer both good and bad news: good, in that conspiracy theory beliefs are not increasing across the board; bad, in that conspiracy theories may be a more persistent and ubiquitous feature of human society than is desirable. Scholars still have much to discover about the psychology of conspiracy theory beliefs, as well as the role that elite communication and the information environment play in promoting those beliefs. In the meantime, we recommend caution in sounding alarms regarding the “golden age” of conspiracy theories and the degeneration of society into a “post-truth” era.


The ideomotor principle holds that anticipating the sensory consequences of a movement triggers an associated motor response; paper thinks it is time to generalize this principle

Ideomotor learning: Time to generalize a longstanding principle. Birte Moeller, Roland Pfister. Neuroscience & Biobehavioral Reviews, July 22 2022, 104782. https://doi.org/10.1016/j.neubiorev.2022.104782

Highlights

• The ideomotor principle is a bedrock of contemporary approaches to action control

• Ideomotor (IM)-learning traditionally focuses on action-effect associations

• We apply the concept of common coding of action and perception to IM-learning

• The same mechanism should result in action-action and stimulus-stimulus learning

• This extension connects and integrates various approaches to human action control

Abstract: The ideomotor principle holds that anticipating the sensory consequences of a movement triggers an associated motor response. Even though this framework dates back to the 19th century, it continues to lie at the heart of many contemporary approaches to human action control. Here we specifically focus on the ideomotor learning mechanism that has to precede action initiation via effect anticipation. Traditional approaches to this learning mechanism focused on establishing novel action-effect (or response-effect) associations. Here we apply the theoretical concept of common coding for action and perception to argue that the same learning principle should result in response-response and stimulus-stimulus associations just as well. Generalizing ideomotor learning in such a way results in a powerful and general framework of ideomotor action control, and it allows for integrating the two seemingly separate fields of ideomotor approaches and hierarchical learning.


Keywords: ideomotor learningaction controlresponse-response associationstimulus-stimulus association


Big discounting... On their fertile days, women are more likely to opt for smaller-now than for larger-later rewards, be it money, food, or sex

Discounting for Money, Food, and Sex, over the Menstrual Cycle. Benjamin T. Vincent, Mariola Sztwiertnia, Rebecca Koomen & Jasmine G. Warren. Evolutionary Psychological Science, Jul 25 2022. https://rd.springer.com/article/10.1007/s40806-022-00334-z

Abstract: Sexual desire, physical activity, economic choices and other behaviours fluctuate over the menstrual cycle. However, we have an incomplete understanding of how preferences for smaller sooner or larger later rewards (known as delay discounting) change over the menstrual cycle. In this pre-registered, cross-sectional study, Bayesian linear and quadratic binomial regression analyses provide compelling evidence that delay discounting does change over the menstrual cycle. Data from 203 naturally cycling women show increased discounting (preference for more immediate rewards) mid-cycle, which is at least partially driven by changes in fertility. This study provides evidence for a robust and broad-spectrum increase in delay discounting (Cohen’s h ranging from 0.1 to 0.4) around the fertile point in the menstrual cycle across multiple commodities (money, food, and sex). We also show, for the first time, that discounting changes over the menstrual cycle in a pseudo-control group of 99 women on hormonal contraception. Interestingly, such women increase their discounting of sex toward the end of the menstrual phase — possibly reflecting a prioritisation of bonding-related sexual activity before menstrual onset.

Discussion

The present study aimed to address three questions. Firstly, we explored whether cycle phase (EF, LF, LP) influences delay discounting across the food, sex, and money commodities in NC women (see Fig. 1). Secondly, by assessing non-linear trends over the cycle for both NC and HC women, we found evidence for a quadratic trend for all commodities and both groups (see Fig. 2). For the NC group, immediate choices increased from the EF to the LF (when most fertile) and then decreased to the LP across all commodities. The same pattern occurred for the HC group for money and food; however, the opposite pattern was shown for sex. Thirdly, we explored whether change in discounting across the cycle could (at least partially) be explained by changes in fertility. This does seem to be the case, as NC women showed a linear increase in immediate choices across all commodities as a function of fertility (see Fig. 3).

The results summarised in Fig. 1 are in line with those of Lucas and Koff (2017). As in their study, the probability of choosing immediate rewards increased from the EF to the LF, followed by a drop in the LP; in contrast to their findings, however, immediate choices remained somewhat elevated in the LP compared to the EF phase. Our results go further in that we show this pattern of behaviour generalises across all the commodities tested: money, food, and sex. The present findings did, however, contradict the finding of a lower preference for immediate monetary rewards near ovulation (Smith et al., 2014). Although Smith et al. used hormonal assays to determine cycle phase, which is more robust than the count-back method used in the present study, they took measures on only 2 days. The findings of the present study match those of Lucas and Koff (2017), who measured day of cycle at multiple timepoints.

The finding that discounting for money, food, and sex changes over the menstrual cycle for NC women, with more immediate choices being made on average around the most fertile LF phase (see Fig. 2), is in line with the ovulatory shift hypothesis. NC individuals would theoretically be expected to optimise resources and favour immediate choices when the chance of conception is higher (da Matta et al., 2013; Gildersleeve et al., 2014). Additionally, there was support for the idea that some of the variation in discounting behaviour in NC women over the menstrual cycle can be attributed to fertility (see Fig. 3). NC women are, on average, more likely to choose immediate rewards at more fertile points in their cycle. NC women should choose immediate rewards at peak fertility, especially sex, as they would be anticipating offspring. Biologically, we know that hormone levels at this point lead to increased impulsivity, which is consistent with this result (Diekhof, 2015). The finding that preference for food increases with fertility contradicts Fessler (2003), who reported a decrease in calorie consumption with fertility. However, this could be because the present study utilised discounting measures, which represent a preference and do not necessarily translate into behaviour. At medium-to-high fertility levels, NC women are more likely than HC women to choose immediate lower-quality sex. This inspires the hypothesis that, because HC bodies are hormonally more similar to pregnancy than NC bodies, higher-quality sex may be preferred as a method of partner bonding over an immediate but low-quality opportunity for intercourse and conception.

Interestingly, HC women discount more for money and food, only matched by NC women at peak fertility. The rise and fall of immediate choices for money and food were even more exaggerated compared to NC women. Most hormonal contraceptives function through simulating pregnancy. Therefore, HC individuals would need resources for offspring. Again, biologically this is plausible as HC hormone levels are most similar to the fertile point of the cycle. The exception to the above was discounting of sex by HC women. The probability of choosing immediate low-quality sex over delayed high-quality sex seemed to be approximately stable but with an increase toward the end of the menstrual cycle. This could be noise that may disappear in a study with more participants or using a longitudinal design. If not, a speculative hypothesis (to be tested in a future study) would be that it represents a desire to have sex before the next menstrual period begins. Given that there is no risk of conception to HC individuals, they may be opportunistically choosing intercourse later in the cycle (LP) to avoid having intercourse during menstruation. Vaginal intercourse during menstruation has medical implications, including an increased risk of sexually transmitted infections (STIs) and endometriosis (Mazokopakis & Samonis, 2018). Another plausible explanation for this result could be that HC can lead to increased libido as it improves PMS and studies have shown increased female-initiated activity later in the pill cycle (LP; Guillebaud, 2017). The quadratic trends seen within the HC group were unexpected, and although hypotheses can be speculated, there is a need for future research to consider the mechanisms underlying these findings, and whether HC type has a role.

Additionally, not comparing different types of HC is problematic because the different types have varying hormonal effects (Hampson, 2020). Further studies should include an HC sample when investigating the menstrual cycle and account for the type and duration of contraceptive use. Both the present study and Lucas and Koff (2017) used cross-sectional designs, which is problematic because menstrual cycles vary between females; a within-subject design would therefore be beneficial for future research. While the Cohen's h effect sizes are “small”, this should be interpreted with extreme caution: unlike Cohen's d, they take no account of the difference normalised by the degree of variance, nor of the sample size. As such, in the context of delay discounting, a change from ~7.5% to ~20% immediate low-quality over delayed high-quality sex choices (see Fig. 1, top right) could be considered a behaviourally meaningful change. We do not make strong claims based on our data about the exact shape of this change over the menstrual cycle, nor about the location of the peak of immediate choices. The present study used quadratic regression, which has been used before to track how variables change over the menstrual cycle (Kuukasjärvi et al., 2004). Future research should continue with this non-linear form of analysis over days (not categorical phases), as other analyses may fail to capture the cycle's cyclical nature.
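Cohen's h is the difference between arcsine-square-root-transformed proportions. A minimal sketch, using the ~7.5% and ~20% immediate-choice proportions discussed above, shows how that behavioural shift maps onto the reported effect-size range:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: difference between arcsine-square-root-transformed proportions."""
    return 2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1))

# ~7.5% vs ~20% immediate low-quality sex choices (Fig. 1, top right)
h = cohens_h(0.075, 0.20)
print(f"Cohen's h = {h:.2f}")  # ~0.37, near the top of the reported 0.1-0.4 range
```

Because the transform ignores sample size and variance, the same h can correspond to very different practical importance, which is the caution raised in the paragraph above.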

Monday, July 25, 2022

World War II Blues: The Long-lasting Mental Health Effect of Childhood Trauma

World War II Blues: The Long-lasting Mental Health Effect of Childhood Trauma. Mevlude Akbulut-Yuksel, Erdal Tekin & Belgi Turan. NBER Working Paper 30284. Jul 2022. DOI 10.3386/w30284

Abstract: There has been a revival of warfare and threats of interstate war in recent years, as the number of countries engaged in armed conflict surged dramatically, reaching levels unprecedented since the end of the Cold War. This is happening at a time when the global burden of mental illness is also on the rise. We examine the causal impact of early-life exposure to warfare on long-term mental health, using novel data on the amount of bombs dropped on German cities by Allied Air Forces during World War II (WWII) and the German Socioeconomic Panel. Our identification strategy leverages a generalized difference-in-differences design, exploiting the plausibly exogenous variation in the bombing intensity suffered by German cities during the war as a quasi-experiment. We find that cohorts younger than age five at the onset of WWII, or those born during the war, are in significantly worse mental health later in life, when they are between their late 50s and 70s. Specifically, a one-standard-deviation increase in the bombing intensity experienced during WWII is associated with about a 10 percent decline in an individual's long-term standardized mental health score. This effect is equivalent to a 16.8 percent increase in the likelihood of being diagnosed with clinical depression. Our analysis also reveals that this impact is most pronounced among the youngest children, including those who might have been in utero at some point during the war. Our investigation further suggests that measures capturing the extent of destruction of healthcare infrastructure, the increase in the capacity burden of the healthcare system, and wealth loss during WWII exacerbate the adverse impact of bombing exposure on long-term mental health, while the size of war relief funds transferred to municipalities following the war has a mitigating impact. Our findings are robust across a variety of empirical checks and specifications.
With the mental health impact of childhood exposure to warfare persisting well into the late stages of life, the global burden of mental illness may be aggravated for many years to come. Our findings imply that prioritizing children and a long-term horizon in public health planning and response may be critical to mitigating the adverse mental health consequences of exposure to armed conflict.


Belgian lottery players & the number 19: Another victim of the COVID-19 pandemic?

Number 19: Another Victim of the COVID-19 Pandemic? Patrick Roger, Catherine D’Hondt, Daria Plotkina & Arvid Hoffmann. Journal of Gambling Studies, Jul 19 2022. https://rd.springer.com/article/10.1007/s10899-022-10145-3

Abstract: Conscious selection is the mental process by which lottery players select numbers nonrandomly. In this paper, we show that the number 19, which has been heard, read, seen, and googled countless times since March 2020, has become significantly less popular among Belgian lottery players after the World Health Organization named the disease caused by the coronavirus SARS-CoV-2 “COVID-19”. We argue that the reduced popularity of the number 19 is due to its negative association with the COVID-19 pandemic. Our study triangulates evidence from field data from the Belgian National Lottery and survey data from a nationally representative sample of 500 Belgian individuals. The field data indicate that the number 19 has been played significantly less frequently since March 2020. However, a potential limitation of the field data is that an unknown proportion of players selects numbers randomly through the “Quick Pick” computer system. The survey data do not suffer from this limitation and reinforce our previous findings by showing that priming an increase in the salience of COVID-19 prior to the players’ selection of lottery numbers reduces their preference for the number 19. The effect of priming is concentrated amongst those with high superstitious beliefs, further supporting our explanation for the reduced popularity of the number 19 during the COVID-19 pandemic.


Conclusion

The number 19 has been heard/read/seen/googled countless times since the beginning of the COVID-19 pandemic in March 2020. This study used Belgium as the testing ground for a natural experiment investigating whether the global pandemic has influenced the popularity of the number 19 among lottery players. To obtain a comprehensive understanding of the impact of the COVID-19 pandemic on the popularity of the number 19 in the context of lotteries, we triangulated evidence from field and survey data. Specifically, using Belgian National Lottery data, we analyzed a sample of 836 draws over the period March 2017 to February 2021. This sample included data on both Euromillions and the Lotto and allowed us to define the first three years as the benchmark period and the last year as the COVID period. To accurately measure the popularity of a given number in the lottery data, we built a popularity index that was inferred from the actual proportion of winners among a subset of ranks. The main advantage of this methodology was that it enabled us to identify any potential shift in conscious selection between the two periods.

Relying on nonparametric permutation tests, our univariate results led us to reject the null hypothesis of unchanged popularity of the number 19 between the two periods. Specifically, we found that 19 is the only number with a significant decline in popularity during the COVID period. Next, we ran regression models to control for potentially confounding effects that could impact the proportion of winners at a given draw. These multivariate findings confirmed a significant decline in the popularity of the number 19 since the start of the COVID-19 pandemic.

To complement our results from the field study that used data obtained from the Belgian lottery games, we performed a survey study in the same country. The advantage of this approach was twofold. First, the survey study allowed us to vary the extent to which participants were exposed to information about COVID-19. Second, the survey study allowed us to measure the strength of participants’ superstitious beliefs, which is recognized as an important factor explaining lottery players’ choice of numbers (e.g., Simon (1998), Farrell et al. (2000), Wang et al. (2016), Polin et al. (2021)). The survey data thus allowed us to check for an interaction effect between the strength of players’ superstitious beliefs and the extent to which our experimental manipulation affected their number preferences.

Using ordinal logistic regression to determine the extent to which priming with the salience of COVID-19 impacts the survey participants’ selection of the number 19 in the two consecutive Lotto tickets, we found evidence that COVID-19 priming significantly reduces the probability that players will select the number 19, which fully supports the results from the field study. Importantly, the effect of COVID-19 priming is still significant when controlling for participants’ socio-demographic variables. Of particular interest, the findings reveal that the effect of COVID-19 priming is moderated by participants’ superstitious beliefs, being concentrated amongst individuals with high levels of superstitious beliefs.

Overall, although there could be unobserved variables affecting the results, the results of both studies provide converging evidence that is consistent with our theoretical explanation that an increased salience of the number 19 due to the COVID-19 pandemic is associated with an increase in the availability of negative feelings or memories associated with this number, which is associated with a decline in the popularity of the number 19 in games of chance. The strong consistency between both studies is also evident from the substantial correlation between the overall preferences for specific numbers in the survey data and the field data. Furthermore, building upon the findings from the survey study showing that the difference in choice frequency for the number 19 between primed and unprimed players is mainly driven by the choices of superstitious players, we hypothesize that the actual decline in the popularity of the number 19 during the Covid period as observed in the field study is likewise most likely driven by the preferences of superstitious people, who are more likely to select numbers themselves when playing Euromillions/Lotto (as opposed to using the Quick Pick system to randomly select numbers for them). Finally, based on a textual analysis of the responses provided by the survey participants regarding their associations with the number 19 which only incidentally mentioned COVID-19, we conclude that the COVID-19 pandemic might mainly generate a subconscious aversion to the number 19, which could lead people to select it less frequently when playing games of chance.

Our results have various implications for public policy, raising several questions. First, the existence of conscious selection in lottery games indicates that people can make decisions that are not based on economic objectives. In particular, playing popular numbers means paying an overvalued effective price. However, the distribution of numbers actually chosen by lottery players remains unknown to the general public. Researchers have used various methods to estimate this distribution (e.g., Farrell et al. 2000; Roger and Broihanne 2007) or obtained real data for some draws (e.g., Simon 1998; Polin et al. 2021). At a time when open data policies are developing in various countries, an important policy question is transparency around games of chance. Should players be able to know, after each draw, the aggregate distribution of choices (like Fig. 7)? Should players have free access to the entire set of combinations chosen by players at the preceding draw (like Fig. 8)? Such measures would enhance the well-being of players who unconsciously bet on popular numbers (for reasons unrelated to religious considerations, superstitious beliefs, etc.) if the information led them to change their number selection.
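The claim that popular numbers carry an overvalued effective price can be made concrete with a small calculation. The sketch below is a minimal, purely illustrative model: it assumes a pari-mutuel jackpot shared equally among winners, approximates the number of co-winners on a given combination by a Poisson distribution, and uses hypothetical ticket counts and popularity ratios (none of these figures come from our data).

```python
import math

def expected_jackpot_share(jackpot, n_other_tickets, pick_prob):
    """Expected payout of a winning ticket when the jackpot is shared.

    Co-winners K ~ Poisson(lam) with lam = n_other_tickets * pick_prob,
    and E[1 / (1 + K)] = (1 - exp(-lam)) / lam.
    """
    lam = n_other_tickets * pick_prob
    if lam == 0:
        return jackpot
    return jackpot * (1.0 - math.exp(-lam)) / lam

# Hypothetical illustration: 10 million other tickets, a 5M jackpot.
# A "popular" combination is assumed to be picked 10x more often than
# under uniform choice (1 in 13,983,816 for a 6/49 game); an "unpopular"
# one 10x less often.
uniform_q = 1 / 13_983_816
popular = expected_jackpot_share(5_000_000, 10_000_000, 10 * uniform_q)
unpopular = expected_jackpot_share(5_000_000, 10_000_000, 0.1 * uniform_q)
print(f"popular combination:   {popular:,.0f}")
print(f"unpopular combination: {unpopular:,.0f}")
```

With these assumed figures, a combination picked ten times more often than average returns only a small fraction of what an equally likely but unpopular combination does, even though both win with the same probability.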

Second, our results raise another, and even more important, question linked to education about probability theory. Economic decisions under risk and uncertainty require estimating probability distributions. It has been well known for at least four decades that people have a distorted understanding of probabilities (Kahneman and Tversky, 1979), and this distortion seems to be related to intelligence/cognitive ability (Choi et al., 2022). Moreover, since probability distortion is already present at a young age (Steelandt et al., 2013), learning elementary probability theory through games of chance in the first years of school might be beneficial for improving people's future decision processes. In extreme cases, understanding what a uniform distribution is and what independent events are could potentially save lives. For example, in 2005 the number 53 had not shown up in the Italian lottery for almost two years. As a result, some people reacted to this number in completely unreasonable ways, falling prey to the gambler's fallacy (e.g., Suetens et al. 2016). In an article published just after the number 53 was finally drawn, the newspaper The Guardian reported that "Four died in 53-related incidents. A woman drowned herself in the sea off Tuscany leaving a note admitting that she had spent her family's savings on the number. A man from Signa near Florence shot his wife and son before killing himself. A man was arrested in Sicily this week for beating his wife out of frustration at debts incurred by his 53 habit."
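The gambler's fallacy in the 53 episode can be illustrated with a few lines of arithmetic. The sketch below assumes the standard Italian Lotto format of 5 numbers drawn from 90 on each wheel, and a hypothetical streak length of 150 draws; the point is simply that a long absence, however improbable in advance, does not change the probability of the next draw.

```python
# In the Italian Lotto, 5 numbers out of 90 are drawn per wheel, so a
# given number appears with probability 5/90 in each draw, independently
# of all past draws.
p_appear = 5 / 90

# Probability that a fixed number is absent for n consecutive draws
# (n = 150 is a hypothetical stand-in for the famous streak).
n = 150
p_streak = (1 - p_appear) ** n
print(f"P(absent for {n} draws)  = {p_streak:.6f}")

# By independence, the chance that 53 shows up in the *next* draw is
# still 5/90, regardless of how long the streak has lasted.
print(f"P(53 in the next draw)   = {p_appear:.4f}")
```

The streak probability looks tiny for any one number in advance, but with 90 numbers, multiple wheels, and decades of draws, long streaks are bound to occur somewhere; none of them makes the overdue number any more likely to be drawn.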

Another insightful illustration of the need for a better understanding of probability theory in the general population is the case of saliva COVID tests for children. An analysis of a number of different tests by Kivelä et al. (2021) shows that their average specificity is 99%. In France, these tests were used systematically in schools at a time when the prevalence of the disease among children was close to 0.5%. Bayes' theorem tells us that, under these conditions, a positive test implies a probability of only about one third of truly being infected. In other words, two thirds of positive tests were false positives, which may have contributed to potentially unjustified school closures.
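The one-third figure follows directly from Bayes' theorem. The sketch below plugs in the specificity (99%) and prevalence (0.5%) from the text; the sensitivity is assumed here to be 100% for simplicity, since at such a low prevalence the result is driven almost entirely by the false-positive rate.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(infected | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Specificity and prevalence as in the text; sensitivity of 100% is an
# assumption made here to keep the sketch minimal.
ppv = positive_predictive_value(prevalence=0.005,
                                sensitivity=1.0,
                                specificity=0.99)
print(f"P(really infected | positive test) = {ppv:.3f}")  # about one third
```

Even a test that is 99% specific produces mostly false positives when the condition it screens for is rare, which is exactly the counter-intuitive point that probability education would address.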

More broadly, illustrating conscious selection to children would be easy and might also help them develop critical thinking, a crucial skill for real-world problem solving that is not measured by standard IQ tests (Halpern and Dunn, 2021). For example, following Tversky and Kahneman (1974), a classroom could be divided into two groups, each group being exposed to a draw from a wheel of fortune (numbered from 1 to 10). The two groups are then asked the same question (e.g., estimating the weight of a pet shown in a picture). Comparing the resulting estimates of the two groups would show that they are related to the number drawn on the wheel of fortune, information that is completely irrelevant to the task at hand.
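The classroom demonstration above can be sketched as a small simulation. Everything below is hypothetical: the anchoring strength, the noise level, and the "true" weight are assumed parameters chosen only to show the design (random assignment to a low or high wheel number, then a comparison of group means), not empirical estimates.

```python
import random

random.seed(1)

ANCHOR_WEIGHT = 0.5   # assumed pull of the irrelevant wheel number
TRUE_WEIGHT_KG = 8    # assumed "true" weight of the pictured pet

def estimate(anchor):
    # Each pupil's guess is a noisy baseline pulled toward the anchor.
    baseline = random.gauss(TRUE_WEIGHT_KG, 2)
    return (1 - ANCHOR_WEIGHT) * baseline + ANCHOR_WEIGHT * anchor

group_low = [estimate(anchor=1) for _ in range(30)]    # wheel showed 1
group_high = [estimate(anchor=10) for _ in range(30)]  # wheel showed 10

mean_low = sum(group_low) / len(group_low)
mean_high = sum(group_high) / len(group_high)
print(f"mean estimate, anchor=1:  {mean_low:.1f} kg")
print(f"mean estimate, anchor=10: {mean_high:.1f} kg")
```

Because the anchoring effect is built into the simulated estimates, the high-anchor group's mean is reliably larger; in a real classroom, the size of that gap is precisely the empirical question the demonstration makes visible.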

Our results add to the literature on number preferences and on the interaction between emotions, preferences, and decision-making. One limitation of this study is that it does not establish whether the observed change in the popularity of the number 19 is permanent or transitory. Several arguments support the hypothesis that the effect may be transitory. First, the COVID-19 pandemic should eventually come to an end and/or be replaced by another worldwide topic of concern, and Chun and Turk-Browne (2007) show that, because human memory capacity is limited, attention determines what is encoded into memory. Moreover, Roy et al. (2005) and Kress and Aue (2017) report that biased memories result from an optimism bias, making it likely that the negative association between the number 19 and bad memories will weaken over time. At the time of this writing (September 2021), there is also anecdotal evidence that the media and people in everyday life have begun to shorten "COVID-19" to "COVID". In the future, this change in terminology in the popular press could lead to a (partial) disconnect in the public's mind between the number 19 and any bad memories or feelings associated with COVID-19. Google searches for the number 19 already provide tangible evidence of such an evolution (https://trends.google.com): they halved from May to October 2021, after a sharp increase in March 2020 (followed by a decrease until June 2020 and a period of stability for almost one year, until April 2021).

This study paves the way for further research on the impact of the COVID-19 pandemic on number preferences, emotions, and associated decision-making. First, future research should address the aforementioned question of whether the decline in the popularity of the number 19 is temporary or permanent. Second, it would be of interest to investigate whether other games of chance (e.g., sports betting) or other types of decision-making involving a selection of numbers have been similarly affected by the COVID-19 pandemic. Finally, research using data from other countries that were impacted by the COVID pandemic in different ways than Belgium (e.g., because they are more geographically isolated or have less developed health systems) would provide additional insights into the interaction between emotions and decision-making.