Saturday, February 4, 2023

Unlike a machine, in which dedicated components are entrusted with fixed functions, the brain operates more like a complex dynamic system in which changing coalitions of neurons can perform varying tasks depending on the context.

Improving the study of brain-behavior relationships by revisiting basic assumptions. Christiana Westlin et al. Trends in Cognitive Sciences, February 2, 2023.


The study of brain-behavior relationships has been guided by several foundational assumptions that are called into question by empirical evidence from human brain imaging and neuroscience research on non-human animals.

Mental events may arise from neural ensembles distributed across the whole brain rather than from localized neural populations. A variety of neural ensembles may contribute to a single mental event rather than mapping to it one-to-one. And mental events may emerge from a complex ensemble of interdependent signals from the brain, body, and world rather than from context-independent neural ensembles.

A more robust science of brain-behavior relationships awaits if research efforts are grounded in alternative assumptions that are supported by empirical evidence and which provide new opportunities for discovery.

Abstract: Neuroimaging research has been at the forefront of concerns regarding the failure of experimental findings to replicate. In the study of brain-behavior relationships, past failures to find replicable and robust effects have been attributed to methodological shortcomings. Methodological rigor is important, but there are other overlooked possibilities: most published studies share three foundational assumptions, often implicitly, that may be faulty. In this paper, we consider the empirical evidence from human brain imaging and the study of non-human animals that calls each foundational assumption into question. We then consider the opportunities for a robust science of brain-behavior relationships that await if scientists ground their research efforts in revised assumptions supported by current empirical evidence.

Keywords: brain-behavior relationships; whole-brain modeling; degeneracy; complexity; variation

Concluding remarks

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods (see Outstanding questions). Our suggestions require more than merely improved methodological rigor for traditional experimental design (Box 1). Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context [51,56]). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology (e.g., [123,124]), as well as cybernetics and systems theory (e.g., [125,126]). At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

Outstanding questions

Well-powered brain-wide analyses imply that meaningful signals exist in brain regions that are considered nonsignificant in studies with low within-subject power, but is all of the observed brain activity necessarily supporting a particular behavior? By thresholding out weak yet consistent effects, are we removing part of the complex ensemble of causation? What kinds of technical innovations or novel experimental methods would allow us to make progress in answering this question?

How might we incorporate theoretical frameworks, such as a predictive processing framework, to better understand the involvement of the whole brain in producing a mental event? Such an approach hypothesizes the involvement of the whole brain as a general computing system, without implying equipotentiality (i.e., that all areas of the brain are equally able to perform the same function).

Why are some reported effects (e.g., the Stroop effect) seemingly robust and replicable if psychological phenomena are necessarily degenerate? These effects should be explored to determine if they remain replicable outside of constrained laboratory contexts and to understand what makes them robust.

Measuring every signal in a complex system is unrealistic given the time and cost constraints of a standard neuroimaging experiment. How, then, can we balance the measurement of meaningful signals in the brain, body, and world with the practical realities of experimental constraints?

Is the study of brain-behavior relationships actually in a replication crisis? And if so, is it merely a crisis of method? Traditional assumptions suggest that scientists should replicate sample summary statistics and tightly control variation in an effort to estimate a population summary statistic, but perhaps this goal should be reconsidered.

Friday, February 3, 2023

On the internet there exists the 90-9-1 principle (also called the 1% rule), which holds that the vast majority of user-generated content in any specific community comes from the top 1% of active users, with most people only listening in

Vuorio, Valtteri, and Zachary Horne. 2023. “A Lurking Bias: Representativeness of Users Across Social Media and Its Implications for Sampling Bias in Cognitive Science.” PsyArXiv. February 2. doi:10.31234/

Abstract: On the internet there exists the 90-9-1 principle (also called the 1% rule), which holds that the vast majority of user-generated content in any specific community comes from the top 1% of active users, with most people only listening in. When combined with other demographic biases among social media users, this casts doubt on how well these users represent the wider world, which may be problematic considering how user-generated content is used in psychological research and in the wider media. We conduct three computational studies using pre-existing datasets from Reddit and Twitter; we examine the accuracy of the 1% rule and what effect it might have on how user-generated content is perceived by performing and comparing sentiment analyses between user groups. Our findings support the accuracy of the 1% rule, and we report a bias in sentiments between low- and high-frequency users. Limitations of our analyses are discussed.
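The paper's two analyses (checking the 1% rule and comparing sentiment across activity levels) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the column names "user" and "sentiment" and the activity threshold are invented for the example.

```python
# Hypothetical sketch of the two analyses, assuming a DataFrame of posts
# with invented columns "user" and "sentiment" (a score, e.g. in [-1, 1]).
import pandas as pd

def one_percent_share(posts: pd.DataFrame) -> float:
    """Fraction of all posts contributed by the most active 1% of users."""
    counts = posts["user"].value_counts()          # posts per user, descending
    n_top = max(1, int(len(counts) * 0.01))        # at least one user
    return counts.iloc[:n_top].sum() / len(posts)

def sentiment_by_activity(posts: pd.DataFrame, threshold: int = 10) -> pd.Series:
    """Mean of per-user mean sentiment for low- vs. high-frequency posters."""
    per_user = posts.groupby("user").agg(n=("sentiment", "size"),
                                         mean_sent=("sentiment", "mean"))
    group = per_user["n"].ge(threshold).map({True: "high", False: "low"})
    return per_user.groupby(group)["mean_sent"].mean()
```

On data consistent with the 1% rule, `one_percent_share` would return a value near 0.9, and a gap between the "low" and "high" rows of `sentiment_by_activity` would correspond to the reported sentiment bias.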

Contrary to this ideal, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success: deciding a paper’s merit based on its media coverage is unwise

A discipline-wide investigation of the replicability of Psychology papers over the past two decades. Wu Youyou, Yang Yang, and Brian Uzzi. Proceedings of the National Academy of Sciences, January 30, 2023, 120 (6) e2208863120.

Significance: The number of manually replicated studies falls well below the abundance of important studies that the scientific community would like to see replicated. We created a text-based machine learning model to estimate the replication likelihood for more than 14,000 published articles in six subfields of Psychology since 2000. Additionally, we investigated how replicability varies with respect to different research methods, authors’ productivity, citation impact, and institutional prestige, and a paper’s citation growth and social media coverage. Our findings help establish large-scale empirical patterns on which to prioritize manual replications and advance replication research.

Abstract: Conjecture about the weak replicability in social sciences has made scholars eager to quantify the scale and scope of replication failure for a discipline. Yet small-scale manual replication methods alone are ill-suited to deal with this big data problem. Here, we conduct a discipline-wide replication census in science. Our sample (N = 14,126 papers) covers nearly all papers published in the six top-tier Psychology journals over the past 20 y. Using a validated machine learning model that estimates a paper’s likelihood of replication, we found evidence that both supports and refutes speculations drawn from a relatively small sample of manual replications. First, we find that a single overall replication rate of Psychology poorly captures the varying degree of replicability among subfields. Second, we find that replication rates are strongly correlated with research methods in all subfields. Experiments replicate at a significantly lower rate than do non-experimental studies. Third, we find that authors’ cumulative publication number and citation impact are positively related to the likelihood of replication, while other proxies of research quality and rigor, such as an author’s university prestige and a paper’s citations, are unrelated to replicability. Finally, contrary to the ideal that media attention should cover replicable research, we find that media attention is positively related to the likelihood of replication failure. Our assessments of the scale and scope of replicability are important next steps toward broadly resolving issues of replicability.


This research uses a machine learning model that quantifies the text in a scientific manuscript to predict its replication likelihood. The model enables us to conduct the first replication census of nearly all of the papers published in Psychology’s top six subfield journals over a 20-y period. The analysis focused on estimating replicability for an entire discipline, with an interest in how replication rates vary by subfield, by experimental and non-experimental methods, and by other characteristics of research papers. To remain grounded in human expertise, we verified the results with available manual replication data whenever possible. Together, the results provide insights that can advance replication theories and practices.
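The idea of scoring replication likelihood from manuscript text can be sketched with a generic text classifier. This is an illustration only and stands in for, rather than reproduces, the authors' validated model; the word-based log-odds scorer below is an assumption chosen for self-containedness.

```python
# Illustrative stand-in for a text-based replication scorer: bag-of-words
# log-odds learned from papers with known manual-replication outcomes,
# squashed to a pseudo-probability. Not the authors' actual model.
from collections import Counter
import math

def train_scorer(texts, replicated):
    """texts: manuscript texts; replicated: 1 if the manual replication succeeded."""
    pos, neg = Counter(), Counter()
    for text, label in zip(texts, replicated):
        (pos if label else neg).update(text.lower().split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    # Per-word log-odds with add-one smoothing.
    weights = {w: math.log((pos[w] + 1) / (n_pos + len(vocab)))
                  - math.log((neg[w] + 1) / (n_neg + len(vocab)))
               for w in vocab}

    def score(text):
        """Sum word weights and map to (0, 1) with a logistic function."""
        s = sum(weights.get(w, 0.0) for w in text.lower().split())
        return 1.0 / (1.0 + math.exp(-s))

    return score
```

Trained on a labeled corpus of manual replications, such a scorer returns a number in (0, 1) per paper, which is the kind of output the census-style analysis aggregates by subfield and method.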
A central advantage of our approach is its scale and scope. Prior speculations about the extent of replication failure are based on relatively small, selective samples of manual replications (21). Analyzing more than 14,000 papers in multiple subfields, we showed that replication success rates differ widely by subfields. Hence, not one replication failure rate estimated from a single replication project is likely to characterize all branches of a diverse discipline like Psychology. Furthermore, our results showed that subfield rates of replication success are associated with research methods. We found that experimental work replicates at significantly lower rates than non-experimental methods for all subfields, and subfields with less experimental work replicate relatively better. This finding is worrisome, given that Psychology’s strong scientific reputation is built, in part, on its proficiency with experiments.
Analyzing replicability alongside other metrics of a paper, we found that while replicability is positively correlated with researchers’ experience and competence, other proxies of research quality, such as an author’s university prestige and the paper’s citations, showed no association with replicability in Psychology. The findings highlight the need for both academics and the public to be cautious when evaluating research and scholars using pre- and post-publication metrics as proxies for research quality.
We also correlated media attention with a paper’s replicability. The media plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results. Ideally, the media would have a positive relationship (or a null relationship) with replication success rates in Psychology. Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success. Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.
We envision two possible applications of our approach. First, the machine learning model could be used to estimate replicability for studies that are difficult or impossible to manually replicate, such as longitudinal investigations and special or difficult-to-access populations. Second, predicted replication scores could begin to help prioritize manual replications of certain studies over others in the face of limited resources. Every year, individual scholars and organizations like Psychological Science Accelerator (67) and Collaborative Replication and Education Project (68) encounter the problem of choosing from an abundance of Psychology studies which ones to replicate. Isager and colleagues (69) proposed that to maximize gain in replication, the community should prioritize replicating studies that are valuable and uncertain in their outcomes. The value of studies could be readily approximated by citation impact or media attention, but the uncertainty part is yet to be adequately measured for a large literature base. We suggest that our machine learning model could provide a quantitative measure of replication uncertainty.
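The prioritization idea attributed to Isager and colleagues (replicate studies that are both valuable and uncertain) can be made concrete with a simple scoring rule. The specific choices below are assumptions for illustration: citations as the value proxy, and the Bernoulli variance p(1 − p) of a predicted replication probability as the uncertainty measure.

```python
# Hedged sketch of value-times-uncertainty replication prioritization.
# Scoring choices (citations as value, Bernoulli variance as uncertainty)
# are illustrative assumptions, not the paper's exact rule.
def replication_priority(citations: float, p_replicate: float) -> float:
    value = citations                              # proxy for a study's value
    uncertainty = p_replicate * (1 - p_replicate)  # maximal at p = 0.5
    return value * uncertainty
```

Under this rule, a highly cited study with an uncertain predicted outcome (p ≈ 0.5) outranks an equally cited study whose outcome is nearly certain (p ≈ 0.95), which is exactly the gap a machine-predicted replication score could fill.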
We note that our findings were limited in several ways. First, all papers we made predictions about came from top-tier journal publications. Future research could examine papers from lower-rank journals and how their replicability associates with pre- and post-publication metrics (70). Second, the estimates of replicability are only approximate. At the subfield level, five out of six subfields in our analysis were represented by only one top journal. A single journal does not capture the scope of the entire subfield. Future research could expand the coverage to multiple journals for one subfield or cross-check the subfield pattern derived using other methods (e.g., prediction markets). Third, the training sample used to develop the model used nearly all the manual replication data available, yet still lacked direct manual replication for certain psychology subfields. While we conducted a series of transfer learning analyses to ensure the model’s applicability beyond the scope of the training sample, implementation of the model in the subfields of Clinical Psychology and Developmental Psychology, where actual manual replication studies are scarce, should be done judiciously. For example, when estimating a paper’s replicability, we advise users to review a paper’s other indicators of replicability, like original study statistics, aggregated expert forecasts, or prediction markets. Nevertheless, our model can continue to be improved as more manual replication results become available.
Future research could go in several directions: 1) our replication scores could be combined with other methods like prediction markets (16) or non-text-based machine learning models (27, 28) to further refine estimates for Psychology studies; 2) the design of the study could be repeated to conduct replication censuses in other disciplines; and 3) the replication scores could be further correlated with other metrics of interest.
The replicability of science, which is particularly constrained in social science by variability, is ultimately a collective enterprise improved by an ensemble of methods. In his book The Logic of Scientific Discovery, Popper argued that “we do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them” (1). However, as true as Popper’s insight about repetition and repeatability is, it must be recognized that tests come with a cost of exploration. Machine learning methods paired with human acumen present an effective approach for developing a better understanding of replicability. The combination balances the costs of testing with the rewards of exploration in scientific discovery.

Thursday, February 2, 2023

Do unbiased people act more rationally?—The more unbiased people assessed their own risk of COVID-19 compared to that of others, the less willing they were to be vaccinated

Do unbiased people act more rationally?—The case of comparative realism and vaccine intention. Kamil Izydorczak, Dariusz Dolinski, Oliver Genschow, Wojciech Kulesza, Pawel Muniak, Bruno Gabriel Salvador Casara and Caterina Suitner. Royal Society Open Science, February 1 2023.

Abstract: Within different populations and at various stages of the pandemic, it has been demonstrated that individuals believe they are less likely to become infected than their average peer. This is known as comparative optimism and it has been one of the reproducible effects in social psychology. However, in previous and even the most recent studies, researchers often neglected to consider unbiased individuals and inspect the differences between biased and unbiased individuals. In a mini meta-analysis of six studies (Study 1), we discovered that unbiased individuals have lower vaccine intention than biased ones. In two pre-registered, follow-up studies, we aimed at testing the reproducibility of this phenomenon and its explanations. In Study 2 we replicated the main effect and found no evidence for differences in psychological control between biased and unbiased groups. In Study 3 we also replicated the effect and found that realists hold more centric views on the trade-offs between threats from getting vaccinated and getting ill. We discuss the interpretation and implication of our results in the context of the academic and lay-persons' views on rationality. We also put forward empirical and theoretical arguments for considering unbiased individuals as a separate phenomenon in the domain of self–others comparisons.

5. General discussion

Comparative optimism is a robust phenomenon. The bias proved to be present inter-contextually [46], and since the first theoretical works in the 1980s, it is still considered a replicable and practically significant effect. Furthermore, the bias has been successfully discovered by multiple research teams in many settings during the COVID-19 pandemic [49–51]. But do social psychologists have a firm understanding of why this bias occurs and its consequences?

As with many other collective irrationalities, we can too often be taken in by the ‘rational = desirable’ narrative. In such a narrative we implicitly or explicitly assume that the most desirable state would be ‘unbiased’, and, if the examined population fails to adhere to this pattern, we conclude that the cognitive processes we examine are somewhat ‘flawed’. In the presented studies, we concluded that those who are ‘unbiased’ more often abstain from taking one of the most (if not the most) effective, evidence-based and affordable actions that could protect them from a deadly threat. A seemingly ‘rational’ mental approach to the issue of COVID-19 contraction is related to a more irrational response to that threat, namely not getting vaccinated.

In the mini meta-analysis and two pre-registered studies, we discovered that those who express either comparative pessimism or optimism have a higher intention to get vaccinated for COVID-19 than those who are unbiased. The relationship of comparative pessimism to pro-health behaviour seems more intuitive, and the positive relationship of comparative optimism comes as a surprise, but our discovery is not isolated in that regard [52].

In Study 2, we found no evidence of a relationship between psychological control and comparative optimism with vaccine intention.

In Study 3 we found a common denominator of people who are realists and who have a lower vaccine intention. It turned out that both phenomena are related to lower COVID-19 ThreatDifference (ThreatDisease − ThreatVaccine). Furthermore, in line with the extended protection motivation theory (PMT [47,48]), the trade-off between risks of the disease and risks of the vaccine proved to predict being unbiased, and this relationship is partly mediated by vaccine intention.
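The ThreatDifference construct in Study 3 is a simple difference score, computed per participant from two threat ratings. A minimal illustration (the rating scale and the numeric values below are invented):

```python
# ThreatDifference = ThreatDisease - ThreatVaccine, per Study 3.
# Lower values were associated with being unbiased (a 'realist') and
# with lower vaccine intention. Example values are invented.
def threat_difference(threat_disease: float, threat_vaccine: float) -> float:
    """Positive when the disease is rated as the greater threat."""
    return threat_disease - threat_vaccine
```

A participant rating the disease 6.0 and the vaccine 2.0 on the same scale would have a ThreatDifference of 4.0, i.e., a strongly disease-weighted trade-off of the kind associated with higher vaccine intention.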

Our studies present evidence that counters the ‘rational = desirable’ narrative, but that could lead into another trap: assuming that it is irrationalities and biases that help us cope more effectively. We think that such a narrative can be an equally false over-simplification and our studies offer more compelling explanations.

Collective irrationalities, such as comparative optimism, may neither enhance nor hamper our coping abilities. They may instead be a by-product of ongoing coping processes, possibly leading to greater protection (in the case of our studies, vaccination against COVID-19). From the perspective of our studies, it is clear that we might wrongfully ascribe a causal role to these biases.

While one might think that comparative optimism may cause reckless behaviour, such as refusal to vaccinate, Study 3 suggests another plausible alternative mechanism: ThreatDifference might be the reason for stronger or weaker vaccine intention (along with many other factors; see [43,53]) and comparative optimism might be a result of knowing one's own efforts, such as vaccination. In fact, a recent experimental study [52] provides evidence that being more aware of one's own self-protective effort enhances comparative optimism.

It is also noteworthy that comparative biases may arise in part from a lack of information about the comparative target, and that providing people with information about the comparative target diminishes the bias [54]. Accordingly, the comparative optimists in our study may have lacked information about the preventive behaviour of others.

The case of the relationship between comparative optimism and constructive pro-health behaviour is complex. On the one hand, we have evidence for both the benefits and drawbacks of CO [55]. On the other hand, CO may be the result rather than the cause of pro-health behaviour. Clearly there are many contextual factors involved and we should discard the overly simplistic view of an inherently beneficial or inherently harmful nature of comparative optimism (which also might be the case for many other collective irrationalities).

Our paper presents a pre-registered and high-powered line of research, which addresses differences between comparative optimists and the ‘unbiased’—a category of individuals that has most often been either left undiscussed or barely mentioned in previous studies regarding CO. Examining the bias from the perspective of the unbiased and using a mixed method approach that combined theory-driven hypotheses with a bottom-up strategy, thus giving a voice to participants, offered the opportunity to enrich theoretical knowledge on comparative bias and led to the surprising discovery that being unbiased can be related to a less pro-health attitude.

5.1. Limitations and future directions

The main limitation of our study is the lack of behavioural measures. This was a result of an early stage of our research project, which took place before COVID-19 vaccines were available. For that reason, we gathered data only about vaccine intention. In follow-up studies the vaccines were available but we decided to examine the intention of the yet unvaccinated to ensure the direct comparability of follow-up studies with the studies from a mini meta-analysis. This limitation leads to another one—at the time of Study 2 and especially Study 3, the number of unvaccinated was shrinking and we can expect that they might differ from the general population in many ways (for example, from study to study, we observed the diminishing share of ‘realists’). This constitutes a limit for the generalization of our conclusions.

The future direction of research regarding the differences between unbiased and comparative optimists should concentrate on actual behaviours rather than intentions or declarations. Moreover, future studies should enhance the scope of generalization by investigating more representative samples.

Another limitation is the possibility of an alternative explanation of our results. We interpret the results of Study 3 in the light of the extended PMT theory, assuming that the relationship between predicted outcomes of falling ill and getting vaccinated leads to engagement or disengagement with vaccination, which in turn results in them feeling superior (comparatively optimistic) or similar (comparatively realistic) to others.

But an alternative is probable. Following Gigerenzer's theory of ‘fast and frugal heuristics' [56], people can often make more ecologically valid decisions when they follow heuristics, without engaging in deep, analytical processes.

Perhaps people who chose the ecologically rational option to take the vaccine did so because they followed their intuition/shortcuts when making the decision. By doing so, they estimated the trade-offs between the disease and vaccine in line with the mainstream message (media, experts and authorities). If these individuals followed intuition in this respect, they may also be more prone to the default bias, namely optimistic bias. On the other hand, people who engage in processing the information more reflectively might end up being more sceptical towards vaccination and also less prone to the optimistic bias.

These alternative explanations could be empirically tested—if pro-vaccine attitudes could be ascribed to using more ‘fast and frugal heuristics’, people more sceptical of the vaccines should be able to recall more information about vaccines (regardless of their epistemic status) and provide more elaborate explanations for their stance.

As a general direction for future research on comparative biases, we advocate for considering a categorical approach to measuring biases—individuals who do not exhibit a bias should be treated as a separate category, especially when empirical results would indicate a substantial inflation of scores signalling a lack of bias (a similar inflation has been identified in the case of dehumanization—see [57], p. 12). Alternatively, if one decides to treat comparative bias as a continuous scale, a nonlinear relationship should be investigated. If comparative biases can have two directions, it is reasonable to expect that different directions might have different correlations.

The stated goal of the app is to produce a list of courses that would be easy for engineering majors to excel in effortlessly, where the majority of the class is young women that would not necessarily find the class easy, putting engineering majors in a position to help a pool of potential "mates"

Need help with students who've turned my class into a dating service. Jan 2023.


I'm a professor at a local university. I'm passionate about teaching, and am proud to teach 100-level science and mathematics courses to young and aspiring students.

Some senior engineering students created a sort of dating service/app, "How I Met My Future Wife" (not the actual name, but close enough). It advertises itself as a way for smart young guys to meet "potential marriage material", by helping them socialize with "young, cultured, educated women". It works by aggregating diversity data my university publishes. This data is intended to help make a case for having more women and minorities in STEM courses so that post-university, we have more diverse representation in the worlds of science, business, and engineering. These senior engineering students used it to create a database of courses that are statistically likely to have a large proportion of young women from certain cultural backgrounds.

The stated goal of the app is to produce a list of courses that would be easy for engineering majors to excel in effortlessly, where the majority of the class is young women that would not necessarily find the class easy. It basically puts engineering majors in a position to ingratiate themselves with a large pool of potential "mates", and even guides users through getting reduced tuition or even taking the course for free (i.e. "auditing" a course; take it for free, but it doesn't affect your GPA, so as to prevent students from gaming the system and boosting their GPAs with easy courses).

A number of 100-level science courses are having record levels of senior-level STEM students auditing these courses, and a number of female students have approached me, noting they are disgusted and uncomfortable with the amount of "leching" taking place (edit: there are no unwanted advances, but it's painfully obvious to some students what's taking place). It's also demoralizing several of them, since we routinely have cases where a young man is leading open labs as if they're a teacher themselves (in order to "wow" their female classmates, offer "private free tutoring sessions", etc). Some of the young students in my class take up these offers, and this further demoralizes other female students seeing this happen (i.e. only attractive women being offered tutoring sessions). This is further compounded by the condescension involved (i.e. one self-admitted user of the app told me "this material that others struggle with is so easy for me, and I'm doing it for laughs and phone numbers.").

How can I stop this?

People auditing the course don't have to take the exams, or attend regularly. They can showboat in a course that's easy for them at zero risk or cost to themselves. I have no means to kick people from the course, despite this obvious behavior, and the people abusing the course can basically come and go as they please.

The university administration refuses to even acknowledge the problem exists (mostly, to my knowledge, because they don't want to admit fault or harm being caused by publishing such granular diversity reports), a few fellow profs either find it comical, or are happy that open labs are so full of volunteer tutors (perk to them, I guess). It seems that all parties are ignoring the young students I teach. I don't know if there are any legal routes, and there's no way I could do a public name-and-shame without jeopardizing my career. I'm at a total loss here.


I scheduled a morning meeting with a senior colleague who has helped me with hard problems in the past (sort of the "go to guy" when things get rough). My husband and I had a long serious talk with him, and it's been made clear the university won't help me with this, as it would mean a "black left eye" for them, and I'd be tossed to the wolves on the left and right. If I want to pursue this further, I have to be prepared to forfeit my career, credibility (i.e. be black-balled in industry), and face lawsuits and SLAPP attacks from the university. With our combined salaries, my husband and I are barely making ends meet. My only real recourse is to counsel my students, while hoping that the app eventually gets more unwanted attention. In short, the problem will have to "solve itself", while numerous female students endure even more adversity in STEM by a program intended to help them.

Wednesday, February 1, 2023

Check Rolf Degen's Twitter page to get great summaries of Psychology papers

 Check Rolf Degen's takes! @DegenRolf

Exploring the impact of money on men’s self-reported markers of masculinity: Men thought that their erect penis size was at least 21.1% above the population mean, but those rewarded with money were more realistic

Smaller prize, bigger size? Exploring the impact of money on men’s self-reported markers of masculinity. Jacob Dalgaard Christensen, Tobias Otterbring and Carl-Johan Lagerkvist. Front. Psychol., February 1 2023, Volume 14 - 2023.

Abstract: Bodily markers, often self-reported, are frequently used in research to predict a variety of outcomes. The present study examined whether men, at the aggregate level, would overestimate certain bodily markers linked to masculinity, and if so, to what extent. Furthermore, the study explored whether the amount of monetary rewards distributed to male participants would influence the obtained data quality. Men from two participant pools were asked to self-report a series of bodily measures. All self-report measures except weight were consistently found to be above the population mean (height and penis size) or the scale midpoint (athleticism). Additionally, the participant pool that received the lower (vs. higher) monetary reward showed a particularly powerful deviation from the population mean in penis size and were significantly more likely to report their erect and flaccid penis size to be larger than the claimed but not verified world record of 34 cm. These findings indicate that studies relying on men’s self-reported measures of certain body parts should be interpreted with great caution, but that higher monetary rewards seem to improve data quality slightly for such measures.

4. Discussion

The present study shows that men seem to self-report their physical attributes in a self-view-bolstering way, although not for weight, consistent with earlier findings (Neermark et al., 2019). Specifically, at the aggregate level, men reported being marginally more athletic compared to the scale midpoint, claimed to be significantly taller compared to the Danish mean for individuals of similar ages, and stated that their erect penis size was several centimeters longer than the available Danish population mean. The finding that participants do not seem to have over-reported their weight but likely exaggerated their height slightly also implies that they sought to present themselves as more physically fit. Together, these results indicate that bodily variables important to men’s self-view and identity should not be measured through self-report; especially not if they concern private bodily measures linked to masculinity (i.e., penis size). Indeed, men deviated substantially more in their reporting of private (vs. publicly visible) body measures, as the overall sample mean in erect penis size was at least 21.1% above the Danish population mean, while only 1% above the Danish mean in height among men of similar ages and roughly equal to the population mean in weight.

Interestingly, giving participants a higher (vs. lower) monetary reward reduced the average self-reported estimate of both erect and flaccid penis size, but had no impact on the more publicly visible measures. To underscore the point that participants in the low monetary reward group provided less accurate self-report estimates, we further found participants in this group to be significantly more likely to report that their erect and flaccid penis size was larger than the claimed world record of 34 cm (Kimmel et al., 2014; Kim, 2016; Zane, 2021). However, the means of erect penis size were still significantly above the available Danish population mean for both the low and high payment groups. As such, even with the higher monetary reward, our results regarding private self-report data do not appear to be trustworthy.

While our results indicate that men may have exaggerated their penis size and, to a lesser extent, their height and athleticism in a self-view-bolstering way, it is important to note that extreme values based on self-report can be the result not only of deliberate exaggerations but also of measurement error. We find a measurement error account unlikely to be the main driver of our results for several reasons. First, regarding penis size, the deviation of more than 20% (upward) from the stated Danish population mean is too extreme to realistically have occurred simply due to measurement error, and a measurement error account should arguably stipulate both under- and over-reporting, which is not congruent with the current results. Second, self-reported penis size has previously been found to correlate positively with social desirability scores (King et al., 2019), suggesting that some men deliberately exaggerate their penis size. Still, our study would have been strengthened by asking participants to also measure other body parts with the ruler that are not commonly connected to masculinity (e.g., their forearms). Such instructions would have allowed us to more explicitly test whether, as we believe, men strategically exaggerate only those bodily cues that are linked to masculinity or, alternatively, whether they over-report all bodily measures, irrespective of their “macho” meaning. It is possible that men, on average, are more inclined to lie about their penis size than their height, weight, or athleticism, considering that the penis is typically concealed and hence easier to lie about without getting caught in everyday interactions, whereas people cannot easily hide their height, weight, and body shape.

In conclusion, our results suggest that private data related to bodily cues of masculinity can only be reliably collected in the lab, where conditions can be fully controlled. Given our findings, scientific studies with self-report data concerning penis size should be interpreted with great caution. However, one remedy to reduce exaggerated response patterns seems to be higher monetary rewards given to participants. Indeed, one study found monetary incentives to be the top priority for online panel participants, and further revealed that data quality can be positively related to monetary compensation (Litman et al., 2015), supporting our argument that increased payments may be important for accessing high-quality data on the private (penis) measures investigated herein. It is possible that participants who received the larger monetary payment, on average, were less inclined to exaggerate the size of their penis because they felt a stronger need to reply (more) honestly. In contrast, those who received the smaller monetary payment may have been more motivated to exaggerate their penis size due to anger over the low payment coupled with the activation of self-threat when receiving questions about male markers of masculinity. Indeed, self-threat has been shown to magnify the self-serving bias (Campbell and Sedikides, 1999) and participants receiving the low monetary reward might have been more prone to engage in (extreme) protest responses—as our Chi-square analyses indicate—due to psychological reactance following the low payment (MacKenzie and Podsakoff, 2012).

Future research could examine, for instance, whether oath scripts or the implementation of interactive survey techniques, with direct feedback to participants when their responses exceed certain probability thresholds, may reduce exaggerated response patterns in studies with self-report measures (Kemper et al., 2020). Before such studies are conducted, the most telling take-away message based on the current results—regarding the aggregate “believability” in men’s self-reported penis size—is perhaps best captured by a quote from the New York Times bestselling author Darynda Jones: “Never trust a man with a penis.”

What features make teddy bears comforting? Mostly the emotional bonds, which outweigh the bear’s physical characteristics.

What makes a teddy bear comforting? A participatory study reveals the prevalence of sensory characteristics and emotional bonds in the perception of comforting teddy bears. Anne-Sophie Tribot, Nathalie Blanc, Thierry Brassac, François Guilhaumon, Nicolas Casajus & Nicolas Mouquet. The Journal of Positive Psychology, Jan 30 2023.

Abstract: Considered as a transitional object, the comforting power of the teddy bear has often been asserted in many past studies without knowing its underlying determinants. Through a participatory study conducted during the European Researchers’ Night, this study aims to identify characteristics of teddy bears that influence their comforting power, including visual, olfactory and kinesthetic properties. We also tested the effect of ownership on comforting power. Our study revealed that the emotional bond shared with a teddy bear is a predominant factor. However, we identified characteristics that play a significant role in the perception of comfort, which lies in a combination of visual, olfactory, and especially kinesthetic characteristics. Through these results, our study identifies the determinants spontaneously taken into account in the attribution of teddy bears’ capacity to provide comfort. These results were independent of participants’ age, reminiscent of the teddy bear’s ability to provide comfort at all stages of life.

Tuesday, January 31, 2023

A growing literature points to children’s influence on parents’ behavior, including parental investments in children; this study finds an earlier predictor of investment, offspring genotype

Child-Driven Parenting: Differential Early Childhood Investment by Offspring Genotype. Asta Breinholt, Dalton Conley. Social Forces, soac155, January 18 2023.

Abstract: A growing literature points to children’s influence on parents’ behavior, including parental investments in children. Further, previous research has shown differential parental response by socioeconomic status to children’s birth weight, cognitive ability, and school outcomes—all early life predictors of later socioeconomic success. This study considers an even earlier, more exogenous predictor of parental investments: offspring genotype. Specifically, we analyze (1) whether children’s genetic propensity toward educational success affects parenting during early childhood and (2) whether parenting in response to children’s genetic propensity toward educational success is socially stratified. Using data from the Avon Longitudinal Survey of Parents and Children (N = 6,247), we construct polygenic indexes (PGIs) for educational attainment (EA) and regress cognitively stimulating parenting behavior during early childhood on these PGIs. We apply Mendelian imputation to construct the missing parental genotype. This approach allows us to control for both parents’ PGIs for EA and thereby achieve a natural experiment: Conditional on parental genotype, the offspring genotype is randomly assigned. In this way, we eliminate the possibility that child’s genotype may be proxying unmeasured parent characteristics. Results differ by parenting behavior: (1) parents’ singing to the child is not affected by the child’s EA PGI, (2) parents play more with children with higher EA PGIs, and (3) non-college-educated parents read more to children with higher education PGIs, while college-educated parents respond less to children’s EA PGI.

Compared to those who have had a COVID-19 infection, those who have not yet experienced infection anticipate they will experience greater negative emotion, and this may have implications for preventive behaviors

Getting COVID-19: Anticipated negative emotions are worse than experienced negative emotions. Amanda J. Dillard, Brian P. Meier. Social Science & Medicine, Volume 320, March 2023, 115723.


Anticipated and recalled negative emotions for COVID-19 infection were compared.

People who have never had COVID may overestimate their negative emotion for infection.

More negative emotion, particularly when anticipated, relates to vaccination and intentions.


Objective: When people think about negative events that may occur in the future, they tend to overestimate their emotional reactions, and these “affective forecasts” can influence their present behavior (Wilson and Gilbert, 2003). The present research examined affective forecasting for COVID-19 infection including the associations between emotions and preventive intentions and behavior.

Methods: In two studies, we compared individuals’ anticipated emotions and recalled emotions for COVID-19 infection. Study 1 asked college students (N = 219) and Study 2 asked general adults (N = 401) to either predict their emotions in response to a future COVID-19 infection or to recall their emotions associated with a previous infection.

Results: In both studies, reliable differences in negative emotions emerged. Those who were predicting their feelings associated with a future infection anticipated more negative emotion than those who were recalling their feelings associated with a past infection reported. Greater negative emotion in both studies was significantly associated with being more likely to have been vaccinated as well as higher intentions to get the booster vaccine.

Conclusions: These findings suggest that compared to those who have had a COVID-19 infection, those who have not yet experienced infection anticipate they will experience greater negative emotion, and this may have implications for preventive behaviors. In general, these findings suggest that people may have an impact bias for COVID-19 infection.

Keywords: COVID-19; affective forecasting theory; anticipated emotion; vaccine behavior; behavior intentions

9. General discussion

In two studies with college students and general adults, we compared affective forecasts to affective experiences of a COVID-19 infection. In both studies, when individuals thought about the prospect of contracting COVID-19, they anticipated more regret, guilt, anger, and fear than individuals who had the virus recalled experiencing. Higher negative emotion was meaningful in that it was related to greater likelihood of having been vaccinated as well as higher intentions to get the booster.

Although similar differences in anticipated versus recalled negative emotions were observed in both the college students and general adults, the negative emotions were overall higher in the latter group. In the sample of general adults, perceived severity of COVID-19 also significantly differed among those anticipating versus recalling infection, a finding which was not observed in the college students. Together, these findings may suggest that relative to college students, the general adults felt more threatened by COVID-19. On one hand, this notion of greater perceived threat among an older sample is reasonable given that age is a risk factor for more severe disease. On the other hand, the anticipation of greater negative emotion among the older sample does not fit with recent studies finding that older individuals, compared to younger, are faring better emotionally during the pandemic (including some of the same emotions we tested; Carstensen et al., 2020; Knepple Carney, Graf, Hudson and Wilson, 2021) or that older adults are more optimistic about COVID-19 (Bruine de Bruin, 2021). However, this distinction may relate to emotions about how one would fare with COVID-19 infection (as measured in our research) versus how one is coping emotionally with the pandemic. In fact, although several studies have examined people's emotions during the pandemic, none that we know of have examined people's anticipated or recalled emotional reactions to contracting COVID-19.

Our findings are in line with affective forecasting theory, and the specific error known as the impact bias. The impact bias occurs when people overestimate the intensity and duration of their future emotions (Gilbert and Wilson, 2007; for a review, see Wilson and Gilbert, 2003). Early research on the impact bias showed it for outcomes such as breaking up with a romantic partner or failing to get a job promotion, but it has since been found for many diverse events and outcomes (Dunn et al., 2003; Finkenauer et al., 2007; Gilbert et al., 1998; Hoerger, 2012; Hoerger et al., 2009; Kermer et al., 2006; Sieff et al., 1999; Van Dijk, 2009). Researchers have argued that the impact bias likely underpins many health decisions, but relatively few studies have tested the bias and its behavioral implications (Halpern and Arnold, 2008; Rhodes and Strain, 2008). Given our findings that anticipated emotions were more intense than recalled experienced emotions, our data are suggestive of an impact bias for COVID-19 infection. These data are among the first to apply affective forecasting ideas to this unusually novel and severe virus.

Although our research is an important first step in highlighting the potential of an impact bias for COVID-19, our studies do not provide definitive evidence. This is because we assessed recalled emotions, which may differ from actually experienced emotions. For example, it could be that participants who were recalling their emotions from a past infection experienced just as much negative emotion as those who were anticipating an infection, but they remember the emotions as less intense. This idea would be supported by research suggesting that recalled emotions are susceptible to various cognitive biases and processes (for a review see Levine and Safer, 2002). For example, one's expectations about how they should have felt, one's coping or adaptation since the event, and even personality factors may influence recalled emotions (Hoerger et al., 2009; Ottenstein and Lischetzke, 2020; Wilson et al., 2003). Arguably, some of these factors could influence one's anticipated emotions too. However, a future study that uses a within-subjects, longitudinal design, assessing the same individuals before, during and after they experience COVID-19, can provide definitive evidence of an impact bias (see more discussion of this idea in the Limitations section).

One question raised by our findings is, would it benefit people to learn that individuals who contract a virus like COVID-19 may experience less negative emotion than others predict? On one hand, reducing negative emotion in those who have never experienced infection could have the undesired effect of discouraging preventive behavior like getting vaccinated. Indeed, our data would support this notion. On the other hand, many people have experienced high distress due to the pandemic (Belen, 2022; Shafran et al., 2013). While emotions associated with infection may play only a small role in this distress, learning that these emotions may be overestimated (and that people may do better than they anticipate) could be helpful information. Related to this, one strategy to reduce negative emotions surrounding the COVID-19 pandemic is to encourage mindfulness (Dillard and Meier, 2021; Emanuel et al., 2010). Mindfulness is about focusing one's attention on the ongoing, present moment (Brown and Ryan, 2003). People who practice mindfulness may be less inclined to think about future outcomes, or anticipate strong negative emotions associated with these outcomes.

The question above relates to a broad dilemma, faced by researchers in psychology, medicine and other fields, about using emotions to promote health behaviors. That is, to what extent is it acceptable to use, or to increase, people's existing negative emotions to motivate health behaviors? For example, to encourage women to get mammograms, is it appropriate to use interventions to increase their fear (or other negative emotions), or to not correct their existing strong negative emotions about breast cancer? Although some women may hold stronger negative emotions than warranted (e.g., they may be of lower-than-average risk), correcting them could have the unfortunate consequence of reducing their likelihood of getting screened. The answer to this dilemma may well depend on factors such as context (e.g., whether there is a ‘right’ preventive action that is appropriate for most people) or emotion threshold (e.g., when is a negative emotion too much, leading to additional distress, and when is it just enough to motivate behavior). In general, more research should be devoted to determining the conditions relating to this dilemma and affective forecasting is a ripe context for investigating them.

In both studies, we found that individuals who anticipated or recalled greater negative emotion associated with COVID-19 infection were more likely to have been vaccinated, and they also reported higher intentions to get the booster. Although our data were correlational, they fit with the broad literature showing that emotions, including anticipated ones, can be a powerful influence on health behaviors (e.g., see Williams and Evans, 2014 for a review), including vaccine behavior (Brewer et al., 2016; Chapman and Coups, 2006; Wilson et al., 2003). Our findings also fit with recent research finding that emotions like fear and regret are positively associated with COVID-19 vaccination and other preventive behaviors (Coifman et al., 2021; Reuken et al., 2020; Wolff, 2021). More research is needed on associations between different types of emotions and health behaviors. For example, are experienced emotions as important as recalled or anticipated emotions in motivating health behavior? And does accuracy of recalled or anticipated emotions matter in this context? Testing associations between these emotions and health behavior may be difficult, as the emotions likely overlap, especially for health threats people are familiar with and have prior experience of.

It is important to consider the timing of this research which occurred during Fall 2021. In a recent large-scale longitudinal investigation, researchers examined both American and Chinese adults’ emotions and behavior over the course of the pandemic (Li et al., 2021). They found that negative emotions like fear, anxiety, and worry were heightened in the beginning of the pandemic, but later, during phases of ongoing risk, returned to baseline levels. Their research also showed that while emotions were predictive of preventive behaviors like wearing a mask early in the pandemic, they were not predictive later. In the present research, we observed meaningful differences between anticipated and recalled emotions associated with COVID-19 infection, and both were associated with vaccine behavior. Thus, although emotional reactions have apparently lessened, our findings may speak to the power of affective forecasting and its implications for present behavior.

10. Limitations

This research is not without limitations. Most importantly, both studies used a between-subjects design in which participants were not randomly assigned yet were asked different questions depending on their experience with COVID-19 infection. Although we believe their negative emotion differences related to affective forecasting errors, the differences may have been due to other factors. For example, people who have contracted COVID-19 and people who have not may differ in various ways. Notably, we did not find differences for demographics like age or gender, or various psychosocial variables that were measured in the surveys (see supplementary material for details). Given that an experimental design would be impossible as one cannot randomly assign people to have a COVID-19 infection or not, future studies might incorporate additional baseline measures (e.g., COVID exposure, self-protective behaviors) when assessing these groups. A second related limitation is that although our method of comparing anticipated to recalled emotions is an approach that has been used to test affective forecasting errors (e.g., Dillard et al., 2021; Gilbert et al., 1998; Sieff et al., 1999), the preferred method is to use a within-subjects, longitudinal design (e.g., Smith et al., 2008; Wilson et al., 2003; Wilson et al., 2000). For example, people would be measured before and after a COVID-19 infection occurs, and their anticipated and experienced emotions can be directly compared. Of course, this design presents logistical challenges such as the difficulty in assessing people as they are experiencing an infection or having to follow people until an infection occurs (not knowing if it will occur). Following people over time may also allow researchers to examine prospective, actual behavior as opposed to the present studies’ approach which examined retroactive vaccine behavior and booster intentions.
Although intentions may be a reliable predictor of behavior (Webb and Sheeran, 2006), finding associations between negative emotion and actual behavior would provide more direct support for the notion that the impact bias has behavioral implications. This may be particularly relevant if COVID vaccines become a yearly recommendation.

Finally, another limitation relates to the biases inherent in recalled emotions. First, individuals who were recalling their infection could have experienced it days, weeks, or even months before being in the study. Length of time since an outcome occurred can bias one's memory for the emotions they experienced during the outcome – in the direction of over- or underestimating emotions (Wilson et al., 2003). However, others have found that people are relatively accurate in recalling past emotional experiences, especially in the short-term (Hoerger, 2012). At the time of our study, COVID-19 diagnosis was a new, recent phenomenon, having been around for a little over one year, and all participants' infections would have fallen in that same time frame. Nonetheless, to resolve this issue, future studies might assess another group of individuals – those who are currently experiencing COVID-19 infection. However, as mentioned above, this assessment presents logistical challenges.

Monday, January 30, 2023

Almost everything we have been told about misinformation is misinformation... and moral panic

Misinformation on Misinformation: Conceptual and Methodological Challenges. Sacha Altay, Manon Berriche, Alberto Acerbi. Social Media + Society, January 28, 2023.

Abstract: Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated. Drawing on research examining the use of big data in social science and reception studies, we identify six misconceptions about misinformation and highlight the conceptual and methodological challenges they raise. The first set of misconceptions concerns the prevalence and circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news, but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set of misconceptions concerns the impact and the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people’s behavior is overblown as misinformation often “preaches to the choir.” To appropriately understand and fight misinformation, future research needs to address these challenges.


A Large Number of People are Misinformed

Headlines about the ubiquity of misbeliefs are rampant in the media and are most often based on surveys. But how well do surveys measure misbeliefs? Luskin and colleagues (2018) analyzed the design of 180 media surveys with closed-ended questions measuring belief in misinformation. They found that more than 90% of these surveys lacked an explicit “Don’t know” or “Not sure” option and used formulations encouraging guessing such as “As far as you know . . .,” or “Would you say that . . .” Often, participants answer these questions by guessing the correct answer and report holding beliefs that they did not hold before the survey (Graham, 2021). Not providing, or not encouraging, “Don’t know” answers is known to increase guessing even more (Luskin & Bullock, 2011). Guessing would not be a major issue if it only added noise to the data. To find out, Luskin and colleagues (2018) tested the impact of not providing “Don’t know” answers and encouraging guessing on the prevalence of misbeliefs. They found that it overestimates the proportion of incorrect answers by nine percentage points (25% vs. 16%), and, when considering only people who report being confident in holding a misperception, it overestimates incorrect answers by 20 percentage points (25% vs. 5%). In short, survey items measuring misinformation overestimate the extent to which people are misinformed, eclipsing the share of those who are simply uninformed.
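The inflation mechanism Luskin and colleagues describe can be sketched with a toy simulation. The population fractions and the 50/50 guessing model below are illustrative assumptions, not figures from their study; the point is only that a forced-choice format converts some uninformed respondents into apparently misinformed ones:

```python
import random

random.seed(0)

N = 100_000
# Illustrative population: 16% truly misinformed, 50% informed,
# and the remaining 34% uninformed (they hold no belief either way).
TRULY_MISINFORMED, INFORMED = 0.16, 0.50

def survey(dont_know_offered: bool) -> float:
    """Return the measured share of 'incorrect' answers."""
    incorrect = 0
    for _ in range(N):
        r = random.random()
        if r < TRULY_MISINFORMED:
            incorrect += 1          # misinformed pick the wrong answer
        elif r < TRULY_MISINFORMED + INFORMED:
            pass                    # informed pick the right answer
        else:
            # Uninformed: say "Don't know" if offered; otherwise guess 50/50.
            if not dont_know_offered and random.random() < 0.5:
                incorrect += 1
    return incorrect / N

with_dk = survey(dont_know_offered=True)      # close to the true 16% rate
without_dk = survey(dont_know_offered=False)  # inflated by guessing (~33%)
print(f"with DK option: {with_dk:.2f}, without: {without_dk:.2f}")
```

Under these assumptions, dropping the "Don't know" option roughly doubles the measured rate of misbelief without a single additional person actually being misinformed.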
In the same vein, conspiratorial beliefs are notoriously difficult to measure and surveys tend to exaggerate their prevalence (Clifford et al., 2019). For instance, participants in survey experiments display a preference for positive response options (yes vs. no, or agree vs. disagree), which inflates agreement with statements, including conspiracy theories, by up to 50% (Hill & Roberts, 2021; Krosnick, 2018). Moreover, the absence of “Don’t know” options, together with the impossibility of expressing a preference for conventional explanations over conspiratorial ones, greatly overestimates the prevalence of conspiratorial beliefs (Clifford et al., 2019). These methodological problems contributed to unsupported alarmist narratives about the prevalence of conspiracy theories, such as QAnon going mainstream (Uscinski et al., 2022a).
Moreover, the misperceptions that surveys measure are skewed toward politically controversial and polarizing misperceptions, which are not representative of the misperceptions that people actually hold (Nyhan, 2020). This could fuel affective polarization by emphasizing differences between groups instead of similarities, and inflate the apparent prevalence of misbeliefs. When misperceptions become group markers, participants use them to signal group membership—whether they truly believe the misperceptions or not (Bullock et al., 2013). Responses to factual questions in survey experiments are known to be vulnerable to “partisan cheerleading” (Bullock et al., 2013; Prior et al., 2015), in which, instead of stating their true beliefs, participants give politically congenial responses. Quite famously, a large share of Americans believed that Donald Trump’s inauguration in 2017 was more crowded than Barack Obama’s in 2009, despite being presented with visual evidence to the contrary. Partisanship does not directly influence people’s perceptions: misperceptions about the size of the crowds were largely driven by expressive responding and guessing. Respondents who supported President Trump “intentionally provide misinformation” to reaffirm their partisan identity (Schaffner & Luks, 2018, p. 136). The extent to which expressive responding contributes to the overestimation of other political misbeliefs is debated (Nyhan, 2020), but it is probably significant.
Solutions have been proposed to overcome these flaws and measure misbeliefs more accurately, such as including confidence-in-knowledge measures (Graham, 2020) and considering only participants who firmly and confidently say they believe misinformation items as misinformed (Luskin et al., 2018). Yet, even when people report confidently holding misbeliefs, these misbeliefs are highly unstable across time, much more so than beliefs (Graham, 2021). For instance, the responses of people saying they are 100% certain that climate change is not occurring have the same measurement properties as responses of people saying they are 100% certain the continents are not moving or that the sun goes around the Earth (Graham, 2021). A participant’s response at time T does not predict their answer at time T + 1. In other words, flipping a coin would give a similar response pattern.
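Graham's coin-flip point can be made concrete: if a respondent's answers at T and T+1 are independent random draws, cross-wave agreement sits at the chance rate, so the first answer carries no information about the second. A minimal sketch (the binary-response model and sample size are illustrative assumptions, not parameters from the cited work):

```python
import random

random.seed(1)

N = 100_000
# Two survey waves: each "response" is an independent coin flip,
# mimicking respondents with no stable underlying belief.
wave1 = [random.random() < 0.5 for _ in range(N)]
wave2 = [random.random() < 0.5 for _ in range(N)]

# Test-retest agreement: fraction giving the same answer in both waves.
agreement = sum(a == b for a, b in zip(wave1, wave2)) / N
print(f"cross-wave agreement: {agreement:.2f}")  # hovers around 0.50
```

An agreement rate near 50% is exactly what "flipping a coin would give a similar response pattern" means: stability no better than chance, even among respondents who claimed 100% certainty.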
So far, we have seen that even well-designed surveys overestimate the prevalence of misbeliefs. A further issue is that surveys measure rare events, such as exposure to misinformation, unreliably. People report being exposed to a substantial amount of misinformation and recall having been exposed to particular fake headlines (Allcott & Gentzkow, 2017). To estimate the reliability of these measures, Allcott and Gentzkow (2017) showed participants the 14 most popular fake news stories from the American election campaign, together with 14 made-up “placebo” fake news stories. Fifteen percent of participants declared having been exposed to one of the 14 real fake news stories, but 14% also declared having been exposed to one of the 14 placebos.
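The placebo logic lends itself to a back-of-the-envelope correction: recall of made-up headlines estimates the baseline rate of false recall and guessing, so subtracting it from recall of real fake headlines gives a rough upper bound on genuine exposure. This naive difference is our own sketch, not the exact estimation strategy of Allcott and Gentzkow (2017).

```python
# Shares reported by Allcott and Gentzkow (2017):
reported_real = 0.15     # recalled at least one real fake headline
reported_placebo = 0.14  # recalled at least one made-up placebo headline

# Naive placebo-adjusted estimate of genuine recalled exposure
# (a rough sketch, not the paper's estimator).
adjusted_exposure = reported_real - reported_placebo
print(f"{adjusted_exposure:.2f}")  # prints "0.01"
```

On this crude accounting, almost all self-reported recall of fake news is indistinguishable from false recall, which is the point the survey comparison makes.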
During the pandemic, many people supposedly engaged in extremely dangerous hygiene practices to fight COVID-19 because of misinformation encountered on social media, such as drinking diluted bleach (Islam et al., 2020). This led to headlines such as “COVID-19 disinformation killed thousands of people, according to a study” (Paris Match Belgique, 2020). Yet, the study is silent regarding causality, and cannot be taken as evidence that misinformation had a causal impact on people’s behavior (France info, 2020). For instance, 39% of Americans reported having engaged in at least one cleaning practice not recommended by the CDC, 4% of Americans reported drinking or gargling a household disinfectant, while another 4% reported drinking or gargling diluted bleach (Gharpure et al., 2020). These percentages should not be taken at face value. A replication of the survey found that these worrying responses are entirely attributable to problematic respondents who also reported “recently having had a fatal heart attack” or “eating concrete for its iron content” at a rate similar to that of ingesting household cleaners (Litman et al., 2020; reminiscent of the “lizardman’s constant” described by Alexander, 2013). The authors conclude that “Once inattentive, mischievous, and careless respondents are taken out of the analytic sample we find no evidence that people ingest cleansers to prevent Covid-19 infection” (Litman et al., 2020, p. 1). This is not to say that COVID-19 misinformation had no harmful effects (such as creating confusion or eroding trust in reliable information), but rather that surveys using self-reported measures of rare and dangerous behaviors should be interpreted with caution.

Misinformation Has a Strong Influence on People’s Behavior

Sometimes, people believe what they see on the internet, and engagement metrics do translate into belief. Yet, even when misinformation is believed, it does not necessarily mean that it changed anyone’s mind or behavior. First, people largely consume politically congenial misinformation (Guess et al., 2019, 2021). That is, they consume misinformation they already agree with, or are predisposed to accept. Congenial misinformation “preaches to the choir” and is unlikely to have drastic effects beyond reinforcing previously held beliefs. Second, even when misinformation changes people’s minds and leads to the formation of new (mis)beliefs, it is not clear whether these (mis)beliefs ever translate into behaviors. Attitudes are only weak predictors of behaviors. This problem is well known in public policy as the value-action gap (Kollmuss & Agyeman, 2002). Most notoriously, people report being increasingly concerned about the environment without adjusting their behaviors accordingly (Landry et al., 2018).
Common misbeliefs, such as conspiracy theories, are likely to be cognitively held in a way that limits their influence on behaviors (Mercier, 2020; Mercier & Altay, 2022). For instance, the behavioral consequences that follow from common misbeliefs are often at odds with what we would expect from people actually believing them. As Jonathan Kay (2011, p. 185) noted, “one of the great ironies of the Truth movement is that its activists typically hold their meetings in large, unsecured locations such as college auditoriums—even as they insist that government agents will stop at nothing to protect their conspiracy for world domination from discovery.” Often, these misbeliefs are likely to be post hoc rationalizations of pre-existing attitudes, such as distrust of institutions.
In the real world, it is difficult to measure how much attitude change misinformation causes, and it is a daunting task to assess its impact on people’s behavior. Surveys relying on correlational data tell us little about causation. For example, belief in conspiracy theories is associated with many costly behaviors, such as COVID-19 vaccine refusal (Uscinski et al., 2022b). Does this mean that vaccine hesitancy is caused by conspiracy theories? No, it could be that both vaccine hesitancy and belief in conspiracy theories are caused by other factors, such as low trust in institutions (Mercier & Altay, 2022; Uscinski et al., 2022b). A few ingenious studies have allowed some causal inferences to be drawn. For instance, Kim and Kim (2019) used a longitudinal survey to capture people’s beliefs and behaviors both before and after the diffusion of the “Obama is a Muslim” rumor. They found that after the rumor spread, more people were likely to believe that Obama was a Muslim. Yet, this effect was “driven almost entirely by those predisposed to dislike Obama” (p. 307), and the diffusion of the rumor had no measurable effect on people’s intention to vote for Obama. This should not come as a surprise, considering that even political campaigns and political advertising have only weak and indirect effects on voters (Kalla & Broockman, 2018). As David Karpf (2019) writes, “Generating social media interactions is easy; mobilizing activists and persuading voters is hard.”
The idea that exposure to misinformation (or information) has a strong and direct influence on people’s attitudes and behaviors comes from a misleading analogy of social influence according to which ideas infect human minds like viruses infect human bodies. Americans did not vote for Trump in 2016 because they were brainwashed. There is no such thing as “brainwashing” (Mercier, 2020). Information is not passed from brain to brain the way a virus is passed from body to body. When humans communicate, they constantly reinterpret the messages they receive and modify the ones they send (Claidière et al., 2014). The same tweet will create very different mental representations in each brain that reads it, and the public representations people leave behind them, in the form of digital traces, are only an imperfect proxy of their private mental representations. The virus metaphor, all too popular during the COVID-19 pandemic—think of the “infodemic” epithet—is misleading (Simon & Camargo, 2021). It is reminiscent of outdated models of communication (e.g., the “hypodermic needle model”) assuming that audiences were passive and easily swayed by pretty much everything they heard or read (Lasswell, 1927). As Anderson (2021) notes, “we might see the role of Facebook and other social media platforms as returning us to a pre-Katz and Lazarsfeld era, with fears that Facebook is ‘radicalizing the world’ and that Russian bots are injecting disinformation directly in the bloodstream of the polity.” These premises are at odds with what we know about human psychology and clash with decades of data from communication studies.