Tuesday, November 8, 2022

People in historically rice-farming areas are less happy and socially compare more than people in wheat-farming areas

Lee, C.-S., Talhelm, T., & Dong, X. (2022). People in historically rice-farming areas are less happy and socially compare more than people in wheat-farming areas. Journal of Personality and Social Psychology, Nov 2022. https://doi.org/10.1037/pspa0000324

Abstract: Using two nationally representative surveys, we find that people in China’s historically rice-farming areas are less happy than people in wheat areas. This is a puzzle because the rice area is more interdependent, and relationships are an important predictor of happiness. We explore how the interdependence of historical rice farming may have paradoxically undermined happiness by creating more social comparison than wheat farming. We build a framework in which rice farming leads to social comparison, which makes people unhappy (especially people who are worse off). If people in rice areas socially compare more, then people’s happiness in rice areas should be more closely related to markers of social status like income. In two studies, national survey data show that income, self-reported social status, and occupational status predict people’s happiness twice as strongly in rice areas as in wheat areas. In Study 3, we use a unique natural experiment comparing two nearby state farms that effectively randomly assigned people to farm rice or wheat. The rice farmers socially compare more, and farmers who socially compare more are less happy. If interdependence breeds social comparison and erodes happiness, it could help explain the paradox of why the interdependent cultures of East Asia are less happy than similarly wealthy cultures.


Partisan bias in false memories for misinformation about the 2021 Capitol riot: For both true and false events, participants remembered more events that favoured their political party

Partisan bias in false memories for misinformation about the 2021 U.S. Capitol riot. Dustin P. Calvillo, Justin D. Harris & Whitney C. Hawkins. Memory, Sep 28 2022. https://doi.org/10.1080/09658211.2022.2127771

Abstract: Memory for events can be biased. For example, people tend to recall more events that support than oppose their current worldview. The present study examined partisan bias in memory for events related to the January 6, 2021, Capitol riot in the United States. Participants rated their memory for true and false events that were either favourable to their political party or the other major political party in the United States. For both true and false events, participants remembered more events that favoured their political party. Regression analyses showed that the number of false memories that participants reported was positively associated with their tendency to support conspiracy beliefs and with their self-reported engagement with the Capitol riot. These results suggest that Democrats and Republicans remember the Capitol Riot differently and that certain individual difference factors can predict the formation of false memories in this context. Misinformation played an influential role in the Capitol riot and understanding differences in memory for this event is beneficial to avoiding similar tragedies in the future.

Keywords: false memory, fake news, memory bias, political ideology


Most people feel others’ social lives are richer and livelier than theirs

Keeping Up With the Joneses: How Cognitive Availability Biases Everyday Social Comparisons. Sebastian Marc Deri. PhD dissertation, Cornell University 2022. https://ecommons.cornell.edu/bitstream/handle/1813/111944/Deri_cornellgrad_0058F_13102.pdf?sequence=1

Abstract: This dissertation documents the role that cognitive availability plays in distorting the conclusions that people reach about how they measure up to others in domains of everyday concern. The first chapter provides a review of the social comparison literature and an explanation of how my account of social comparison is novel. The second chapter (N = 3,293, 11 studies, 3 pre-registered) documents the fact that most people feel others’ social lives are livelier than theirs, and that this is because they can’t help but bring to mind highly social exemplars when making such comparisons. The third chapter (N = 2,747, 12 studies, 4 pre-registered) documents a robust tendency to compare to above average standards, which cannot solely be explained by motivational factors like social desirability or self-enhancement—adding a wrinkle to the standard above average effect literature by showing that, although people tend to think of themselves as above average in many domains, they also hold and compare themselves to above average standards. The fourth chapter (N = 1,703, 3 studies, 1 pre-registered) documents the fact that people feel they are financially worse off than others when thinking about positive instances of wealth (e.g. having a lot in savings) and that this effect can be reversed if people are made to think of positive instances of low economic standing (e.g. having a lot of debt). The fifth and final chapter synthesizes these empirical findings, summarizes my cognitive availability account of social comparison, reviews why it is a novel contribution, and addresses any outstanding concerns.

FDA Deregulation Increases Safety and Innovation and Reduces Prices

Regulating the Innovators: Approval Costs and Innovation in Medical Technologies. Parker Rogers, October 27, 2022. https://parkerrogers.github.io/Papers/RegulatingtheInnovators_Rogers.pdf


Abstract: How does FDA regulation affect innovation and market concentration? I examine this question by exploiting FDA deregulation events that affected certain medical device types but not others. I use text analysis to gather comprehensive data on medical device innovation, device safety, firm entry, prices, and regulatory changes. My analysis of these data yields three core results. First, these deregulation events significantly increase the quantity and quality of new technologies in affected medical device types relative to control groups. These increases are particularly strong among small and inexperienced firms. Second, these events increase firm entry and lower the prices of medical procedures that use affected medical device types. Third, the rates of serious injuries and deaths attributable to defective devices do not increase measurably after these events. Perhaps counterintuitively, deregulating certain device types lowers adverse event rates significantly, consistent with firms increasing their emphasis on product safety as deregulation exposes them to more litigation.

 

---

After moving from Class III (high regulation) to II (moderate), device types exhibited a 200% increase in patenting and FDA submission rates relative to control groups. Patents filed after these events were also of significantly higher quality, as measured by a 200% increase in received citations and market valuations. These effects do not spill over into similar device types. For Class II to I deregulations, the rate of patent filings increased by 50%, though insignificantly, and the quality of patent filings exhibited a significant 10-fold improvement, suggesting that litigation better promotes innovation.

[...]

Down-classification yields considerable benefits, as the proponents of deregulation would predict, but what of product safety? Perhaps counterintuitively, I find that deregulation can improve product safety by exposing firms to more litigation. Despite some adverse event rates increasing after Class III to II events (albeit insignificantly), Class II to I events are associated with significantly lower adverse event rates. My analysis of patent texts also reveals that inventors focus more on product safety after deregulation. These results suggest that litigation encourages product safety more than regulation [...]

Monday, November 7, 2022

People who are more prone to cry also place a higher focus on morality in their judgements and actions

Only the Good Cry: Investigating the Relationship Between Crying Proneness and Moral Judgments and Behavior. Janis H. Zickfeld et al. Social Psychological Bulletin, Volume 17, Nov 3 2022. https://doi.org/10.32872/spb.6475

Abstract: People cry for various reasons and in numerous situations, some involving highly moral aspects such as altruism or moral beauty. At the same time, criers have been found to be evaluated as more morally upright—they are perceived as more honest, reliable, and sincere than non-criers. The current project provides a first comprehensive investigation to test whether this perception is adequate. Across six studies sampling Dutch, Indian, and British adults (N = 2325), we explored the relationship between self-reported crying proneness and moral judgments and behavior, employing self-report measures and actual behavior assessments. Across all studies, we observed positive correlations of crying proneness with moral judgments (r = .27 [.17, .38]) and prosocial behavioral tendencies and behaviors (r = .20 [.12, .28]). These associations held in three (moral judgment) or two (prosocial tendencies and behaviors) out of five studies when controlling for other important variables. Thus, the current project provides first evidence that crying is related to moral evaluation and behavior, and we discuss its importance for the literature on human emotional crying.


Those with the lowest ability to evaluate scientific evidence overestimate their skill the most

Calibration of scientific reasoning ability. Caitlin Drummond Otten, Baruch Fischhoff. Journal of Behavioral Decision Making, November 4 2022. https://doi.org/10.1002/bdm.2306

Abstract: Scientific reasoning ability, the ability to reason critically about the quality of scientific evidence, can help laypeople use scientific evidence when making judgments and decisions. We ask whether individuals with greater scientific reasoning ability are also better calibrated with respect to their ability, comparing calibration for skill with the more widely studied calibration for knowledge. In three studies, participants (Study 1: N = 1022; Study 2: N = 101; and Study 3: N = 332) took the Scientific Reasoning Scale (SRS; Drummond & Fischhoff, 2017), comprised of 11 true–false problems, and provided confidence ratings for each problem. Overall, participants were overconfident, reporting mean confidence levels that were 22.4–25% higher than their percentages of correct answers; calibration improved with score. Study 2 found similar calibration patterns for the SRS and another skill, the Cognitive Reflection Test (CRT), measuring the ability to avoid intuitive but incorrect answers. SRS and CRT scores were both associated with success at avoiding negative decision outcomes, as measured by the Decision Outcomes Inventory; confidence on the SRS, above and beyond scores, predicted worse outcomes. Study 3 added an alternative measure of calibration, asking participants to estimate the number of items answered correctly. Participants were less overconfident by this measure. SRS scores predicted correct usage of scientific information in a drug facts box task and holding beliefs consistent with the scientific consensus on controversial issues; confidence, above and beyond SRS scores, predicted worse drug facts box performance but stronger science-consistent beliefs. We discuss the implications of our findings for improving science-relevant decision-making.

5 GENERAL DISCUSSION

Across three studies, we find that people with greater ability to evaluate scientific evidence, as measured by scores on the SRS, have greater metacognitive ability to assess that skill. Using a confidence elicitation paradigm common to studies of confidence in knowledge, we found that individuals with low SRS scores greatly overestimate their skill, while those with high skills slightly underestimate their skill, a pattern previously found with confidence in beliefs (e.g., Kruger & Dunning, 1999; Lichtenstein et al., 1982; Lichtenstein & Fischhoff, 1977; Moore & Healy, 2008).

Study 2 replicated these patterns with confidence in SRS skills and with calibration for another skill-based task, the CRT, which assesses the ability to avoid immediately appealing but incorrect “fast lure” answers and then find correct ones (Attali & Bar-Hillel, 2020; Frederick, 2005; Pennycook et al., 2017). As a test of external validity, Study 2 also found that people with better SRS scores had better scores on the DOI, a self-report measure of avoiding negative decision outcomes (Bruine de Bruin et al., 2007); confidence on the SRS, controlling for knowledge, was associated with worse DOI scores.

Study 3 replicated the results of Studies 1 and 2, asking participants how confident they are in each SRS answer, now called “local calibration.” It also assessed “global calibration,” derived from asking participants how many items they thought they answered correctly. Overconfidence was much smaller with the global measure, as found elsewhere (e.g., Ehrlinger et al., 2008; Griffin & Buehler, 1999; Stone et al., 2011). This finding suggests that global calibration may be more appropriate for situations where individuals reflect and act on a set of tasks, rather than act on tasks one by one. However, in an experimental setting, it may also convey a demand characteristic, with an implicit challenge to be less confident (if interpreted as, “How many do you think that you really got right?”), artifactually reducing performance estimates and overconfidence.

Study 3 also included additional measures of construct and external validity. As a test of construct validity, we found that global confidence, controlling for scores, was unrelated to a self-report measure of intellectual humility (Leary et al., 2017), and local confidence, controlling for scores, was unexpectedly positively related to self-reported intellectual humility. These findings may reflect the limitations of self-report measures, including a desirability bias in reporting.

Study 3 further found that SRS scores predicted performance on two science-related tasks: extracting information from a drug facts box (Woloshin & Schwartz, 2011) and holding beliefs consistent with the scientific consensus (as in previous work, Drummond & Fischhoff, 2017). However, confidence, controlling for knowledge, played different roles for these outcomes: It was negatively associated with scores on the drug facts box test, but positively associated with holding beliefs consistent with science on controversial issues. These findings suggest that while those with greater confidence in their scientific reasoning ability may also be more confident in their beliefs on scientific issues, confidence that is out of step with knowledge may hinder decision-making. Neither scores nor confidence was related to self-reported adoption of pandemic health behaviors, perhaps reflecting partisan divisions that reduce the role of individual cognition (e.g., Bruine de Bruin et al., 2020). Future work could examine the role of confidence, above and beyond knowledge, in other science-relevant judgments and decisions, including falling prey to pseudoscientific claims or products.

Individuals' metacognitive understanding of the extent of their knowledge has been related to life events in many domains (Bruine de Bruin et al., 2007; Parker et al., 2018; Peters et al., 2019; Tetlock & Gardner, 2015). Overall, we find that unjustified confidence (Parker & Stone, 2014) in scientific reasoning ability, as reflected in self-reported confidence in the correctness of one's answer adding predictive value to SRS scores (Drummond & Fischhoff, 2017), is associated with reduced avoidance of negative outcomes and worse performance on tasks that require using scientific information, but greater acceptance of the scientific consensus on controversial issues. Unlike Peters et al. (2019), who found that mismatches between skill and confidence were associated with worse outcomes, we found that unjustified confidence (measured both locally and globally) was associated similarly with outcomes at all levels of reasoning ability. These findings may reflect differences between numeracy and scientific reasoning, differences between the studies' measures of confidence and outcomes, or interactions too weak to be detected with the statistical power of the present research. Our findings may also reflect our measures' range restrictions: Here, confidence was elicited as expected performance, thus restricting the extent to which participants with very low or high performance could display underconfidence or overconfidence, respectively. Future work could seek other measures that could further separate the respective contributions of scientific reasoning ability and metacognition about it, such as a subjective scientific reasoning ability scale similar to the Subjective Numeracy Scale (Fagerlin et al., 2007).

Overall, we observed patterns of metacognition for cognitive skills similar to those observed for beliefs, using conventional confidence elicitation methods with known artifacts. Prior research has proposed a variety of methods for measuring overconfidence, with varying strengths and limitations. We discuss several key limitations below; for further discussion of these measurement issues, we refer readers to Lichtenstein and Fischhoff (1977), Erev et al. (1994), Moore and Healy (2008), Fiedler and Unkelbach (2014), Parker and Stone (2014), and Yates (1982).

The dramatically poor performance of the lowest quartile, for both SRS and CRT, is notable. As the groups were identified based on SRS and CRT scores, some of the spread is artifactual (as noted by Lichtenstein & Fischhoff, 1977, and others). One known artifact is the truncated 50–100% response mode, which precludes perfect calibration for participants who answer fewer than 50% of the SRS questions correctly (N = 444 [43% of respondents] in Study 1; N = 34 [34%] in Study 2; and N = 148 [45%] in Study 3). In a post hoc analysis, we treated these respondents as though they had answered 50% of questions correctly. Even with this change, they were still overconfident, by, on average, 29.8% in Study 1 (SD = 10), 31.5% in Study 2 (SD = 11), and 30.5% in Study 3 (SD = 10).
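As a rough illustration of the overconfidence measure and the 50% floor correction described above (this is not the authors' code, and the participant numbers below are invented for the example):

```python
# Illustrative sketch: overconfidence as mean confidence minus percentage
# correct, with the post hoc adjustment for the truncated 50-100% confidence
# scale, which makes perfect calibration impossible for anyone scoring
# below 50% correct. Example values are hypothetical, not the study's data.

def overconfidence(pct_correct, mean_confidence, floor_adjust=False):
    """Return overconfidence in percentage points.

    With floor_adjust=True, scores below 50% are treated as 50% correct,
    mirroring the paper's post hoc analysis of the truncated response mode.
    """
    if floor_adjust:
        pct_correct = max(pct_correct, 50.0)
    return mean_confidence - pct_correct

# Hypothetical participant: 4 of 11 SRS items correct (~36.4%),
# mean confidence of 78% across items.
raw = overconfidence(4 / 11 * 100, 78.0)                          # ~41.6 points
adjusted = overconfidence(4 / 11 * 100, 78.0, floor_adjust=True)  # 28.0 points

print(round(raw, 1), round(adjusted, 1))  # 41.6 28.0
```

Even after the floor adjustment, this hypothetical low scorer remains substantially overconfident, which is the pattern the post hoc analysis reports (roughly 30 points on average across the three studies).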

One limitation of these results is that the SRS or CRT tests might have demand effects atypical of real-life tests of cognitive skills, such that participants assume that an experimental task would not be as difficult as these proved to be or want to appear knowledgeable in this setting (Fischhoff & Slovic, 1980). A second possible limitation is that reliance on imperfectly reliable empirical measures somehow affects the patterns of correlations and not just the differences between the groups (Erev et al., 1994; Lichtenstein & Fischhoff, 1977). Attempts to correct for such unreliability have had mixed results (Ehrlinger et al., 2008; Krueger & Mueller, 2002; Kruger & Dunning, 2002). Third, task incentives were entirely intrinsic; conceivably, if clearly explained, calibration-based material rewards might have improved performance. Here, too, prior results have been mixed (Ehrlinger et al., 2008; Mellers et al., 2014). Fourth, our measure of science education, whether participants had a college course, may have been too poor to detect a latent relationship. Fifth, for Study 2, some participants may have seen the CRT items before (Haigh, 2016; Thomson & Oppenheimer, 2016), potentially increasing their scores (Bialek & Pennycook, 2018), with uncertain effects on confidence and calibration. Finally, our version of the CRT, which asked participants to choose between the fast lure and the correct answer, produced higher scores than the usual open-ended response mode and hence might not generalize to other CRT research.

If our results regarding the similarity between calibration for cognitive skill and knowledge prove robust, future work might seek to improve public understanding of science (e.g., Bauer et al., 2007; Miller, 1983, 1998, 2004) by addressing separately the ability to think critically and the need to stop and think critically. If people are as overconfident in their scientific reasoning ability as many participants were here, it may not be enough to correct erroneous beliefs through science communication and education (e.g., Bauer et al., 2007; Miller, 1983, 1998, 2004). People may also need help in reflecting on the limits to their ability to evaluate evidence and their potential vulnerability to manipulation by misleading arguments as well as by misleading evidence. Mental models approaches to science communication offer one potential strategy, by affording an intuitive feeling for how complex processes work (e.g., Bruine de Bruin & Bostrom, 2013; Downs, 2014). The inoculation approach to combating misinformation (Cook et al., 2017; van der Linden et al., 2017) offers another potential strategy, refuting misinformation in advance, so that people have a better feeling for when and how to think about the issues and when and how they can be deceived. Developing effective interventions requires research examining the separate contributions of scientific reasoning ability and metacognition to improving science-relevant judgments and decisions.

Sunday, November 6, 2022

The idea that conservatives are more sensitive to disgust than liberals is a basic tenet of political psychology — and it may be a mere artifact of self-reports

Investigating the conservatism-disgust paradox in reactions to the COVID-19 pandemic: A reexamination of the interrelations among political ideology, disgust sensitivity, and pandemic response. Benjamin C. Ruisch et al. PLoS One, November 4, 2022. https://doi.org/10.1371/journal.pone.0275440

Abstract: Research has documented robust associations between greater disgust sensitivity and (1) concerns about disease, and (2) political conservatism. However, the COVID-19 disease pandemic raised challenging questions about these associations. In particular, why have conservatives—despite their greater disgust sensitivity—exhibited less concern about the pandemic? Here, we investigate this “conservatism-disgust paradox” and address several outstanding theoretical questions regarding the interrelations among disgust sensitivity, ideology, and pandemic response. In four studies (N = 1,764), we identify several methodological and conceptual factors—in particular, an overreliance on self-report measures—that may have inflated the apparent associations among these constructs. Using non-self-report measures, we find evidence that disgust sensitivity may be a less potent predictor of disease avoidance than is typically assumed, and that ideological differences in disgust sensitivity may be amplified by self-report measures. These findings suggest that the true pattern of interrelations among these factors may be less “paradoxical” than is typically believed.

General discussion

This research provides important insight into the conservatism-disgust paradox in responses to the pandemic, as well as the relations among each of these target constructs—disgust sensitivity, political ideology, and pandemic response. These studies identified multiple factors that influence the (apparent) strength of the relations among these variables, thereby pinpointing several factors that are likely to have contributed to this seemingly contradictory pattern of results (Table 3).



Table 3. List of hypotheses, the study in which each hypothesis was tested, and whether or not each hypothesis was supported.

https://doi.org/10.1371/journal.pone.0275440.t003

One contributing factor appears to be the predominant use of self-report measures of pandemic response in past research. Indeed, using a behavioral measure of virtual social distancing, we found that the relations between pandemic response and both ideology and disgust sensitivity were significantly attenuated, compared with self-report pandemic response measures. These findings are consistent with the possibility that these self-report measures may suffer from IV-DV conceptual overlap, while also being more susceptible to social desirability and other reporting biases [36–38]. Particularly given that this same virtual behavioral measure has been shown to outperform self-reports in predicting who contracts the COVID-19 virus [21], these results suggest that behavioral measures of pandemic response may provide a more accurate estimate of the extent of ideological differences in responses to the COVID-19 pandemic, as well as of the predictive power of disgust sensitivity for pandemic response. We found a similar divergence between self-report and non-self-report measures in the domain of disgust sensitivity. In this case, however, it was our experiential measure of disgust sensitivity that was the more powerful predictor of pandemic response. These findings identify important additional caveats and considerations for research examining the impact of disgust sensitivity on real-world outcomes, suggesting, in line with some past research, that self-reports of disgust sensitivity may correlate only modestly with other, more experiential or indirect indices of sensitivity to disgust—and that these measures/operationalizations may have different predictive power for different kinds of attitudes and behavior.

These findings also provide a means of beginning to reconcile some of the puzzling associations uncovered in other research on the COVID-19 pandemic. In particular, recent work suggests that—despite the putative disease-protective function of disgust—individuals who scored higher on self-reported disgust sensitivity may actually have been more likely to contract COVID-19 than those who self-reported less disgust sensitivity [23]. As documented here, however, self-reported disgust sensitivity appears to be only a relatively weak predictor of behavioral responses to the pandemic (indeed, adjusting for our experiential disgust measure rendered this association effectively nonexistent). Thus, although questions remain, these findings may bring us a step closer to understanding how self-reported disgust sensitivity could be a positive predictor of contracting the COVID-19 virus.

Perhaps the most intriguing findings, however, concern the relation of political ideology to self-report and experiential measures of disgust sensitivity. Using the DS-R, we replicated the well-documented ideological differences in self-reported disgust sensitivity. However, using our more experiential measure of disgust sensitivity—which presented participants with visual stimuli that closely corresponded to those described in the DS-R vignettes—we found no evidence of liberal-conservative differences in sensitivity to disgust.

Taken together, the findings discussed above suggest that methodological features of past research—particularly the heavy reliance on self-report measures of disgust sensitivity and pandemic response—may have inflated the relations among these three variables, and, thus, contributed to this seemingly contradictory pattern of results. In identifying the influence of these methodological factors, this research brings us a step closer to resolving the conservatism-disgust paradox, suggesting that the true pattern of interrelations among these variables is not as “paradoxical” as is typically assumed. That is, if, as these findings suggest, (1) the true relation between disgust sensitivity and pandemic response is smaller than previously suggested, and (2) ideological differences in disgust sensitivity are overestimated, then it is less surprising that conservatives exhibit less concern about the virus—particularly given that (3) ideological differences in responses to the pandemic may not be as dramatic as has been suggested by past research. The relatively small size of these effects makes it more likely that they would be subsumed by other concerns and motivations such as ideological identification and elite cues.

More generally, these findings also pose some challenges for past research and theory—particularly work suggesting a general relation between disgust sensitivity and political ideology. At the very least, these findings appear to suggest that liberals and conservatives do not differ in the form of disgust sensitivity that is most predictive of pandemic response. A more pessimistic interpretation, however, is that ideological differences in disgust sensitivity may generally be overestimated. That is, consistent with some recent critiques, it may be that self-report measures such as the DS-R amplify the true degree of ideological differences in disgust sensitivity, at least compared with measures that rely less on self-reports and self-beliefs about one’s own sensitivity to disgust.

Of course, our findings stand in contrast to a large body of research that suggests a connection between ideology and disgust, and, clearly, liberals and conservatives do reliably differ on many measures of disgust sensitivity (in particular, the DS-R and similar vignette-based measures). However, our findings also seem to align with other recent failures to replicate ideological differences in sensitivity to disgust using more indirect or experiential measures (e.g., [45]). Particularly in light of other research suggesting that people may have limited introspective ability into their own level of disgust sensitivity (e.g., work showing that self-reports sometimes do not significantly correlate with more indirect measures of disgust sensitivity; e.g., [18, 57, 58]), a closer examination of the nature and extent of ideological differences in disgust sensitivity may be warranted.

These findings therefore suggest that there may be a theoretical gap in our understanding of the relation between ideology and disgust sensitivity: Why is it that ideological differences reliably emerge on some measures of disgust sensitivity (e.g., the DS-R) but not others—even, as we found, measures that assess responses to closely related, or even identical, situations and stimuli? One possibility is that the ideological differences on the DS-R and similar vignette-based measures of sensitivity to disgust can in part be attributed to factors other than disgust sensitivity per se.

For example, forthcoming research suggests that conservatives tend to self-report greater interoceptive sensitivity—that is, to subjectively feel that they are more sensitive to the internal physiological states and signals of their own bodies—although by objective metrics they are actually less sensitive than are liberals [68]. Moreover, other research suggests that conservatives’ overconfidence may extend beyond interoception to experiences, judgments, and perceptions writ large [69]. Extending these past findings to the domain of disgust sensitivity would seem to suggest that conservatives may be likely to subjectively feel that they are more sensitive to disgust than they actually are, perhaps explaining why self-report measures of disgust sensitivity—which in part assess self-beliefs about one’s own degree of sensitivity to disgust—show more robust associations with conservatism than measures of disgust that are rooted in more immediate experience.

Less interestingly, another potential explanation for the weaker relation between ideology and our experiential disgust measure may be that previously documented ideological differences in personality traits such as conscientiousness [70] lead conservatives to complete survey measures more thoughtfully, perhaps reading more carefully or engaging more deeply with the material. This, too, could help explain why conservatives report experiencing greater disgust in response to these vignettes—which require a degree of cognitive effort to process and mentally represent—but do not appear to differ as greatly when these same stimuli are presented visually. Future research may wish to assess these possibilities to deepen our understanding of the nature of the relation between ideology and sensitivity to disgust.

More generally, these findings suggest that caution may be warranted in the development and use of measures to assess these constructs—disgust sensitivity, political ideology, and pandemic response—and, especially, their interrelations. Given the close connections among these factors, coupled with potential confounds such as self-presentational concerns that may be at play for such impactful and politicized issues as the COVID-19 pandemic, the use of self-report measures, in particular, should be subject to close scrutiny.

Finally, it is important to note that while our studies consistently show that using self-report scales may overestimate the strength of the interrelations among disgust sensitivity, pandemic response, and political ideology, some of these effects may be specific to the population that we sampled. Indeed, the sociopolitical context surrounding the COVID-19 pandemic in the U.S. was in many ways unique, and these factors are likely to have shaped some of our effects. In particular, as discussed above, the stark political polarization surrounding the pandemic in the U.S. is likely to have been at least partially responsible for the inflated ideological differences in self-reported (versus behavioral) responses to the pandemic. Future research will need to examine the degree to which these processes extend beyond the U.S. to other nations and cultural contexts.

Industrial Revolution. Why Britain? The Right Place (in the Technology Space) at the Right Time

Why Britain? The Right Place (in the Technology Space) at the Right Time. Carl Hallmann, W. Walker Hanlon, and Lukas Rosenberger. NBER, Jul 5 2022. https://conference.nber.org/conf_papers/f171957.pdf

Abstract: Why did Britain attain economic leadership during the Industrial Revolution? We argue that Britain possessed an important but underappreciated innovation advantage: British inventors worked in technologies that were more central within the innovation network. We offer a new approach for measuring the innovation network using patent data from Britain and France in the 18th and early 19th century. We show that the network influenced innovation outcomes and then demonstrate that British inventors worked in more central technologies within the innovation network than inventors from France. Then, drawing on recently-developed theoretical tools, we quantify the implications for technology growth rates in Britain compared to France. Our results indicate that the shape of the innovation network, and the location of British inventors within it, can help explain the more rapid technological growth in Britain during the Industrial Revolution.

Excerpts from the introduction:

In this study, we argue that there is one important British advantage that has been largely overlooked: the possibility that British inventors may have been working “at the right place” in the technology space. Our idea builds on an emerging literature in growth economics which finds that innovation in some technologies generates more spillover benefits than innovation in others (Acemoglu et al., 2016; Cai and Li, 2019; Huang and Zenou, 2020; Liu and Ma, 2021). As a result, a country’s allocation of researchers across technologies can substantially impact the overall rate of economic growth. In particular, this literature shows that technological progress will be faster in economies where more research effort is focused on technologies that generate more spillovers for other technologies; in other words, technologies that are more central in the technology space. Translating these ideas into the context of the Industrial Revolution, we ask: did Britain experience more rapid technological progress because British inventors were more focused on technologies, such as steam engines, machine tools, or metallurgy, that generated stronger spillover benefits for other technologies and were therefore more central in the technology space? In contrast, could it have been the case that Continental economies like France experienced slower technological progress because they specialized in developing technologies, such as apparel, glass, or papermaking, which were more peripheral in the technology space?1 Put another way, we aim to examine whether Britain’s differential growth during the eighteenth and early nineteenth centuries can be explained by the distinct position of British inventors in the technology space. By starting with ideas from modern growth economics, our analysis is less subject to the type of “post hoc, propter hoc” concerns that have been raised about some other explanations (Crafts, 1977, 1995). 
Moreover, we offer a theoretically-grounded quantification describing exactly how much of Britain’s differential growth experience can be attributed to this mechanism. These two features differentiate our study from most existing work that aims to understand Britain’s growth lead during the Industrial Revolution. To structure our analysis, we begin with a growth model, from Liu and Ma (2021), that incorporates an innovation network. In this network, each node is a technology type, while each edge reflects the extent to which innovations in one technology type increase the chances of further innovation in another. This model provides a framework for thinking about how the distribution of researchers across technology sectors relates to the growth rate in the economy. It also generates specific expressions that, given the matrix of connections across sectors, allow us to quantify how different allocations of researchers across technology sectors will affect growth. The upshot is that allocations in which more researchers are working in technology sectors with greater spillovers will generate higher overall growth rates than others. Therefore, the growth maximizing allocation of researchers will feature more researchers working in more central technology sectors: specifically, those sectors with higher eigenvalue network centrality. Furthermore, the model delivers precise analytical relationships that allow us to quantify the implications of different allocations of research effort for the rate of economic growth. 
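The centrality logic of the model can be illustrated with a minimal sketch. This is not the authors' code: the sector names and spillover weights below are invented for illustration, and centrality is computed as the dominant (Perron) eigenvector of the transposed spillover matrix, so that a sector scores high when its innovations feed other central sectors.

```python
import numpy as np

# Hypothetical 4-sector innovation network (illustrative weights, not the
# paper's estimates). A[i, j] is the strength of spillovers from sector j
# to sector i.
sectors = ["steam_engines", "machine_tools", "glass", "papermaking"]
A = np.array([
    [0.0, 0.6, 0.1, 0.1],
    [0.7, 0.0, 0.1, 0.1],
    [0.2, 0.2, 0.0, 0.1],
    [0.1, 0.2, 0.1, 0.0],
])

# Centrality of a sector as a *source* of spillovers: the dominant
# eigenvector of A transposed (for a nonnegative matrix, the eigenvalue
# with the largest real part is the Perron root).
eigvals, eigvecs = np.linalg.eig(A.T)
v = eigvecs[:, np.argmax(eigvals.real)].real
centrality = np.abs(v) / np.abs(v).sum()

for name, c in sorted(zip(sectors, centrality), key=lambda t: -t[1]):
    print(f"{name:14s} {c:.3f}")
```

In this toy network the mechanical sectors form a tight spillover loop, so they come out most central — the position the paper argues British inventors disproportionately occupied.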
To examine whether these forces operated during the Industrial Revolution, we utilize patent data for Britain, from 1700 to 1849, and for France from 1791 to 1844.2 These historical patent data cover a large number of inventors and their inventions, providing a rich source of information on innovation during the Industrial Revolution.3 We follow a long line of work, dating back at least to Sullivan (1989), using patent data to better understand innovation patterns during this period. A key challenge in our setting is measuring spillovers across technology categories. The innovation literature typically uses patent citations, but these are not available in our historical setting. Instead, we introduce a new approach based on the idea that if there are spillovers between two technology categories, then inventors working primarily in one area will occasionally file patents in the other. In particular, we measure the extent of spillovers from technology category j to i based on the propensity of inventors who patent in j to subsequently patent in i. Since our approach is new, we validate it using modern data. Specifically, using U.S. patents from 1970-2014, we construct innovation networks using our approach as well as the citation-based approach used in modern studies. Comparing these networks shows that the two approaches generate extremely similar networks. This suggests that our method does a good job of recovering the underlying innovation network. Using our approach, we document technology networks in Britain and France that feature a dense central core of closely related—and mainly mechanical—technologies. One important question about our estimated networks is whether they reflect fundamental features of the underlying technologies or simply the local innovation environment in each country. One way to test this is to compare the networks obtained from the two countries. 
If they are similar, they likely reflect fundamental technological features rather than idiosyncratic conditions. Conducting a direct comparison, however, is challenging because the two countries use very different technology categorizations. Therefore, it is necessary to construct a mapping of technology categories from one country’s categorizations to the other. To do so, we carefully identify a set of inventions that were patented in both countries. We can then use the categorization of these inventions in each system to construct a crosswalk between the technology categorizations used in the two countries. Using this mapping, we construct technology spillover matrices derived from French patents but expressed in terms of British technology categories, or derived from British patents but expressed in French technology categories. This allows us to regress the entries of the technology matrices of one country on the entries of the other country. We find they are strongly positively related, despite the noise that is inherent in any mapping between different systems of technology categorization. This indicates that our innovation matrices do not just reflect the local economic environment, but that a significant part of each represents an underlying ‘global’ network of technology spillovers. Next, we establish that the shape of the technology spillover network matters for innovation outcomes. As a first step, we follow existing work on modern patent data by analyzing how patenting rates vary across technology categories depending on the lagged knowledge stock in other categories, weighted by the strength of connections through the innovation matrix. Consistent with the theory, and with the results in previous studies of modern data, we find a significant positive association of patenting with the lagged network-weighted knowledge stock, shrinking toward zero as lags increase. 
However, the lack of exogenous variation in the lagged knowledge stock means that this result could be due to common shocks that affect connected technology categories. Thus, in the second step, we provide evidence based on a source of quasi-exogenous variation in the timing of increases in the knowledge stock at some nodes of the innovation network. Specifically, we use the unexpected arrival of “macroinventions”—inventions in which, as Mokyr (1990) describes, “a radical new idea, without a clear precedent, emerges more or less ab nihilo.” Using a list of 65 macroinventions from Nuvolari et al. (2021), we study whether the arrival of a new macroinvention in one technology category leads to a subsequent increase in patenting in downstream technology categories within the innovation network. Here, the identifying assumption is not that the locations of macroinventions were random, but that the timing of their arrival at a given location was unpredictable within the time frame of analysis. Using pooled difference-in-differences and event study analyses for a time frame of ten years before and after the arrival of each macroinvention, we show that macroinventions are followed by significant increases in patenting rates in technology categories with stronger (downstream) connections from the technology category of the macroinvention. In addition, we find no evidence of an increase in technology categories that are upstream from the macroinvention’s technology category within the innovation network. This second result is a valuable placebo check, providing additional confidence that our results are picking up the impact of spillovers through the innovation network. Next, we look at whether there are notable differences in the allocation of British and French inventors within the innovation network. 
In particular, we focus on whether British inventors were patenting in technology categories that were more central within the innovation network than French inventors. We do this by studying, within the sets of British and French patents, whether foreign inventors (of British or French origin) were patenting in more central technology categories than domestic inventors. We find that among French patents, patents by British-based inventors were significantly more central than the average patent by French domestic inventors (and by all other foreign inventors), whereas among British patents, patents by French-based inventors were less central than the average patent by British domestic inventors. This pattern indicates that British inventors were more likely to work in central technology categories than French inventors. As more central nodes have stronger spillover connections to other technology categories, the more central locations occupied by British inventors are consistent with a greater “bang for the buck” of British innovation on the aggregate rate of technological progress. Finally, we quantify the growth implications of the observed innovation network and the different allocations of inventors in Britain and France through the lens of the model. Existing estimates for Britain suggest that industrial production grew by between 3 and 3.5% during the first half of the nineteenth century (Broadberry et al., 2015). In France, estimates indicate growth rates of between 1.7 and 2.5% in the same period (Crouzet, 1996; Asselain, 2007). (Preliminary) Results from our quantification exercise show that differences in the allocation of inventors across technology categories led to a technology growth rate in Britain that was between 0.5 and 2.9 percent higher than the French technology growth rate. 
Thus, our results indicate that Britain’s more advantageous position in the innovation network can explain a substantial fraction, and possibly the entirety, of the difference in growth rates between the British and French economies during the first half of the nineteenth century. In sum, the evidence presented in this paper shows that Britain benefited from an advantageous distribution of inventors across technology sectors during the Industrial Revolution, and that this difference meaningfully contributed to Britain’s more rapid industrialization. Our analysis takes as given the differences in the distribution of inventors across sectors. Thus, our mechanism complements explanations for the British advantage during the Industrial Revolution, in particular those that can explain why British inventors were more likely than the French to work on technologies that happened to be more central4 within the innovation network, notably mechanical technologies. For example, it could be that Britain’s practical Enlightenment tradition and well-developed apprenticeship system (Mokyr, 2009; Kelly et al., 2014) contributed to British inventors’ greater ability to work on mechanical technologies, or that high wages and access to cheap coal steered British inventors to focus on labor-saving mechanical devices (Allen, 2009). Put differently, the contribution of our paper lies in demonstrating that Britain was at the right place in the technology space at the right time, rather than explaining why Britain was there but France was not. In addition to improving our understanding of one of the most important questions in economic history, our study also contributes to work by growth economists on the importance of innovation networks. Relative to studies in this area (cited above), we offer three main contributions. 
First, we offer new methods that can help researchers study innovation networks further back in history, when standard tools such as systematic patent citations are unavailable. This opens up the possibility of studying the influence of innovation networks in different contexts or over longer periods. Second, our analysis of macroinventions provides additional, more causal, evidence that innovation networks matter for technology development. Third, our application demonstrates empirically the value of recent theoretical advances integrating innovation networks into economic growth models. Our work builds on a long line of literature using patent data to examine innovation during the Industrial Revolution and into the nineteenth century. Early papers in this area include Sullivan (1989) and Sullivan (1990). More recent work includes MacLeod et al. (2003), Khan and Sokoloff (2004), Moser (2005), Khan (2005), Brunt et al. (2012), Nicholas (2011), Nuvolari and Tartari (2011), Moser (2012), Bottomley (2014b), Bottomley (2014a), Burton and Nicholas (2017), Khan (2018), Bottomley (2019), Nuvolari et al. (2020), Nuvolari et al. (2021), Hallmann et al. (2021), and Hanlon (2022). Relative to this extensive literature, we are the first to study the role of innovation networks in influencing inventive activity during the Industrial Revolution.
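The inventor-based spillover measure described in the excerpt can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the inventor histories and category names are invented, and the paper's actual measure is more refined (e.g. in its normalization and timing choices).

```python
from collections import defaultdict

# Hypothetical inventor patent histories, listed in chronological order.
histories = {
    "inventor_A": ["steam", "steam", "machine_tools"],
    "inventor_B": ["machine_tools", "steam", "metallurgy"],
    "inventor_C": ["papermaking", "papermaking"],
}

# Count, for each patent in category j, the strictly later patents by the
# same inventor in each category i.
pair_counts = defaultdict(int)    # (j, i) -> number of j-then-i sequences
source_counts = defaultdict(int)  # j -> number of j-patents with a successor

for patents in histories.values():
    for t, j in enumerate(patents[:-1]):
        source_counts[j] += 1
        for i in patents[t + 1:]:
            pair_counts[(j, i)] += 1

# Spillover propensity from j to i: rate at which j-patents are followed
# by i-patents by the same inventor (a simple count-based proxy).
spillover = {
    (j, i): pair_counts[(j, i)] / source_counts[j]
    for (j, i) in pair_counts
}
```

With these toy histories, for example, two of the three "steam" patents that have a successor are later followed by a "machine_tools" patent, giving a steam-to-machine-tools spillover propensity of 2/3.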


1 Hallmann et al. (2021) show that Britain’s technological leadership in invention relative to France varied across technologies, with Britain leading in, among others, steam engines and textile technologies, and France leading in, among others, papermaking and shoemaking. Mokyr (1990, Chapter 5) provides a historical overview of Britain’s technological lead or lag in invention relative to Continental Europe.

2 Both of these were periods during which the patent systems were largely stable. We end just before the major British patent reform of 1852 and the French patent reform of 1844.

3  Of course, not every useful invention was patented, as Moser (2012) has shown. 

4   A stable institutional environment and well-developed patent system may have contributed to shifting inventors from technologies that can be protected by secrecy toward technologies, such as mechanical devices, that are easily reverse engineered and thus profit most from patents (Moser, 2005). However, as both Britain and France had strong patent protection, it is unclear how this mechanism could explain the differential focus of British vs. French inventors on mechanical devices.

We document that 85% of patent applications in the United States include no female inventors and ask: why are women underrepresented in innovation?

Try, try, try again? Persistence and the gender innovation gap. Gauri Subramani, Abhay Aneja, and Oren Reshef. Berkeley, Nov 2022. https://haas.berkeley.edu/wp-content/uploads/Try-try-try-again-Persistence-and-the-Gender-Innovation-Gap.pdf

Abstract: We document that 85% of patent applications in the United States include no female inventors and ask: why are women underrepresented in innovation? We argue that differences in responses to early rejections between men and women are a significant contributor to the gender disparity in innovation. We evaluate the prosecution and outcomes of almost one million patent applications in the United States from 2001 through 2012 and leverage variation in patent examiners’ probabilities of rejecting applications to employ a quasi-experimental instrumental variables approach. Our results show that applications from women are less likely to continue in the patent process after receiving an early rejection. Roughly half of the overall gender gap in awarded patents during this period can be accounted for by the differential propensity of women to abandon applications. We explore why this may be the case and provide evidence that the gender gap in outcomes is reduced for applications that are affiliated with firms, consistent with a role for institutional support in mitigating gender disparities.
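The examiner-based instrumental variables design mentioned in the abstract can be illustrated with a small simulation. This is a generic sketch of examiner-leniency IV under invented assumptions, not the authors' data or code: applications are assigned as-good-as-randomly to examiners, the leave-one-out examiner rejection rate instruments for receiving an early rejection, and an unobserved quality confounder biases the naive comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_examiners = 5000, 50

# Simulated data: quasi-random assignment of applications to examiners.
examiner = rng.integers(0, n_examiners, n)
strictness = rng.normal(0, 1, n_examiners)  # latent examiner strictness
quality = rng.normal(0, 1, n)               # unobserved confounder

# Early rejection depends on examiner strictness and application quality.
rejected = (strictness[examiner] + 0.5 * quality + rng.normal(0, 1, n) > 0).astype(float)
# Outcome: abandonment has a true causal effect of rejection of 0.3 in this
# simulation, plus confounding through quality.
abandoned = 0.3 * rejected - 0.4 * quality + rng.normal(0, 0.5, n)

# Instrument: leave-one-out rejection rate of the assigned examiner.
z = np.empty(n)
for e in range(n_examiners):
    mask = examiner == e
    z[mask] = (rejected[mask].sum() - rejected[mask]) / (mask.sum() - 1)

# 2SLS by hand: first stage fits rejection on the instrument, second stage
# regresses the outcome on the fitted rejection probabilities.
X1 = np.column_stack([np.ones(n), z])
rej_hat = X1 @ np.linalg.lstsq(X1, rejected, rcond=None)[0]
X2 = np.column_stack([np.ones(n), rej_hat])
beta_iv = np.linalg.lstsq(X2, abandoned, rcond=None)[0][1]
print(f"IV estimate of the effect of early rejection: {beta_iv:.2f}")
```

Because the instrument varies only with examiner assignment, not with application quality, the IV estimate recovers (approximately) the simulated causal effect, whereas a naive regression of abandonment on rejection would be biased by the quality confounder.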


Saturday, November 5, 2022

Information avoidance: Our findings, together with additional survey evidence, suggest that behavioral biases inhibit the adoption of improved practices, and are consistent with inattention as a key driver of under-adoption

Why Businesses Fail: Underadoption of Improved Practices by Brazilian Micro-Enterprises. Priscila de Oliveira, Nov 2022. https://www.prisciladeoliveira.net/#research


Abstract: Micro firms in low and middle income countries often have low profitability and do not grow over time. Several business training programs have tried to improve management and business practices, with limited effects. We run a field experiment with micro-entrepreneurs in Brazil (N=742) to study the under-adoption of improved business practices, and shed light on the constraints and behavioral biases that may hinder their adoption. We randomly offer entrepreneurs reminders and micro-incentives of either 20 BRL (4 USD) or 40 BRL (8 USD) to implement record keeping or marketing for three consecutive months, following a business training program. Compared to traditional business training, reminders and micro-incentives significantly increase adoption of marketing (13.2 p.p.) and record keeping (19.2 p.p.), with positive effects on firm survival and investment over four months. Our findings, together with additional survey evidence, suggest that behavioral biases inhibit the adoption of improved practices, and are consistent with inattention as a key driver of under-adoption. In addition, our survey evidence on information avoidance points to it as a limiting factor to the adoption of record keeping, but not marketing activities. Taken together, the results suggest that behavioral biases affect firm decisions, with significant impact on firm survival.



Despite the popularity of growth mindset interventions in schools, positive results are rare and possibly spurious due to inadequately designed interventions, reporting flaws, and bias

Macnamara, B. N., & Burgoyne, A. P. (2022). Do growth mindset interventions impact students’ academic achievement? A systematic review and meta-analysis with recommendations for best practices. Psychological Bulletin, Nov 2022. https://doi.org/10.1037/bul0000352

Abstract: According to mindset theory, students who believe their personal characteristics can change—that is, those who hold a growth mindset—will achieve more than students who believe their characteristics are fixed. Proponents of the theory have developed interventions to influence students’ mindsets, claiming that these interventions lead to large gains in academic achievement. Despite their popularity, the evidence for growth mindset intervention benefits has not been systematically evaluated considering both the quantity and quality of the evidence. Here, we provide such a review by (a) evaluating empirical studies’ adherence to a set of best practices essential for drawing causal conclusions and (b) conducting three meta-analyses. When examining all studies (63 studies, N = 97,672), we found major shortcomings in study design, analysis, and reporting, and suggestions of researcher and publication bias: Authors with a financial incentive to report positive findings published significantly larger effects than authors without this incentive. Across all studies, we observed a small overall effect: d¯ = 0.05, 95% CI = [0.02, 0.09], which was nonsignificant after correcting for potential publication bias. No theoretically meaningful moderators were significant. When examining only studies demonstrating the intervention influenced students’ mindsets as intended (13 studies, N = 18,355), the effect was nonsignificant: d¯ = 0.04, 95% CI = [−0.01, 0.10]. When examining the highest-quality evidence (6 studies, N = 13,571), the effect was nonsignificant: d¯ = 0.02, 95% CI = [−0.06, 0.10]. We conclude that apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.


Impact Statement: This systematic review and meta-analysis suggest that, despite the popularity of growth mindset interventions in schools, positive results are rare and possibly spurious due to inadequately designed interventions, reporting flaws, and bias.


Those who self-report a strong moral character have a tendency to hypocrisy

Being good to look good: Self-reported moral character predicts moral double standards among reputation-seeking individuals. Mengchen Dong,Tom R. Kupfer,Shuai Yuan,Jan-Willem van Prooijen. British Journal of Psychology, November 4 2022. https://doi.org/10.1111/bjop.12608

Abstract: Moral character is widely expected to lead to moral judgements and practices. However, such expectations are often breached, especially when moral character is measured by self-report. We propose that because self-reported moral character partly reflects a desire to appear good, people who self-report a strong moral character will show moral harshness towards others and downplay their own transgressions—that is, they will show greater moral hypocrisy. This self-other discrepancy in moral judgements should be pronounced among individuals who are particularly motivated by reputation. Employing diverse methods including large-scale multination panel data (N = 34,323), and vignette and behavioural experiments (N = 700), four studies supported our proposition, showing that various indicators of moral character (Benevolence and Universalism values, justice sensitivity, and moral identity) predicted harsher judgements of others' more than own transgressions. Moreover, these double standards emerged particularly among individuals possessing strong reputation management motives. The findings highlight how reputational concerns moderate the link between moral character and moral judgement.

Practitioner points

- Self-reported moral character does not predict actual moral performance well.

- Good moral character based on self-report can sometimes predict strong moral hypocrisy.

- Good moral character based on self-report indicates high moral standards, though only for others and not necessarily for the self.

- Hypocrites can be good at detecting reputational cues and presenting themselves as morally decent persons.

GENERAL DISCUSSION

A well-known Golden Rule of morality is to treat others as you wish to be treated yourself (Singer, 1963). People with a strong moral character might be expected to follow this Golden Rule, and judge others no more harshly than they judge themselves. However, when moral character is measured by self-reports, it is often intertwined with socially desirable responding and reputation management motives (Anglim et al., 2017; Hertz & Krettenauer, 2016; Reed & Aquino, 2003). The current research examines the potential downstream effects of moral character and reputation management motives on moral decisions. By attempting to differentiate the ‘genuine’ and ‘reputation managing’ components of self-reported moral character, we posited an association between moral character and moral double standards on the self and others. Imposing harsh moral standards on oneself often comes with a cost to self-interest; to signal one's moral character, criticizing others' transgressions can be a relatively cost-effective approach (Jordan et al., 2017; Kupfer & Giner-Sorolla, 2017; Simpson et al., 2013). To the extent that the demonstration of a strong moral character is driven by reputation management motives, we, therefore, predicted that it would be related to increased hypocrisy, that is, harsher judgements of others' transgressions but not stricter standards for own misdeeds.

Across four studies ranging from civic transgressions (Study 1) and organizational misconduct (Study 2) to selfish decisions in economic games (Study 3 and the Pilot Study in the SM), we found consistent evidence that people reporting a strong (vs. weak) moral character were more likely to judge others’ misdeeds harshly, especially those highly motivated by reputation. This amplified moral harshness towards others was sometimes also accompanied by increased moral leniency towards the self (Study 3 and the Pilot Study in the SM). Taken together, self-reported moral character relates to differential moral standards for the self versus others, and this was especially true for reputation-motivated individuals.

Although Study 1 only provided circumstantial evidence by interpreting moral judgements without specific targets and self-reported transgressive frequencies as a proxy of the ‘reputation managing’ component of self-reported moral character, we have good reasons to believe that these interpretations are legitimate. First, people often apply general moral rules to judgements of others instead of themselves (Dong et al., 2021). Second, self-reported moral performance is often influenced by strategic self-presentation (Dong et al., 2019; Shaw et al., 2014). As shown in our studies, people high (vs. low) on moral character reported fewer own transgressions (Study 1) when highly (vs. weakly) motivated by reputation management. However, they did not act more or less selfishly (Study 3).

Furthermore, Studies 2 and 3 consolidated our proposition by showing a significant interaction between moral character and target of moral judgements (i.e. self vs. other), only for people with high but not low reputation management motives. These findings were replicated across a variety of individual difference measures of moral character (including Benevolence and Universalism values, justice sensitivity, and moral identity) and reputation management motives (including Power and Achievement values, self-monitoring of socially desirable behaviours, and concern about social esteem and status), and emerged only when moral judgements had a salient influence on people's reputation (e.g. when the appraised behaviour was unfavourable rather than favourable in Study 3).

Theoretical contributions

The current findings contribute to the literature on both moral character and reputation management. Previous theorizing generally implies that moral character is genuinely and unconditionally good (Aquino & Reed, 2002; Kamtekar, 2004; Walker et al., 1987; Walker & Frimer, 2007). Consistent with this ‘genuine’ perspective on moral character, we found positive correlations of moral character with stringent moral judgements (Studies 1 and 3) and a high likelihood to behave morally (Study 3), although the relation between inherent moral character and actual moral deeds may be obscured by the presence of external sanctions (e.g. third-party punishment in the Pilot Study in the SM). More importantly, we complement previous studies on moral character by making two novel contributions.

First, the present studies suggest that there are both ‘genuine’ and ‘reputation managing’ components of self-reported moral character. Although this idea was implied in many previous studies (e.g. Anglim et al., 2017; Brick et al., 2017; Dong et al., 2019; Hertz & Krettenauer, 2016; Shaw et al., 2014), our work empirically demonstrates that people who report a strong moral character can be sensitive to moral contexts, and strategically tailor their moral performances accordingly. In particular, people may apply flexible moral standards consistent with reputation management goals, and display more moral harshness towards others than towards themselves. The findings accord with perspectives that emphasize the prominent role of reputation management in moral psychology (e.g. Jordan et al., 2016; Vonasch et al., 2018), including phenomena such as moral licencing (Blanken et al., 2015) and moral contagion (Kupfer & Giner-Sorolla, 2021).

Second, our work illuminates how exactly reputation management motives moderate the link between self-reported moral character and moral decisions. Beyond previous research suggesting that reputation concerns should be controlled for, or eliminated, in moral character measurements (Lee et al., 2008; Paulhus, 1984), these studies demonstrated precisely when, and for whom, moral character predicts moral decisions. When individuals had low reputation management motives, their moral character predicted moral judgements of their own more than others’ misdeeds; in contrast, when people were highly motivated to gain a good reputation, moral character only predicted their moral harshness towards others but failed to predict moral decisions for themselves (Study 3 and the Pilot Study in the SM). As reputation management motives increased, people who reported a strong (vs. weak) moral character either showed increased hypocrisy by judging others more harshly than themselves (Studies 2 and 3), or showed reduced ‘hypercrisy’ (Lammers, 2012) by judging themselves less harshly than others (the Pilot Study in the SM). Although the specific manifestations of moral double standards varied from moral harshness towards others to moral leniency towards oneself, or both, our findings add insight to discussions about the effectiveness of moral character measures by suggesting the importance of taking into account reputation management motives and the moral target (e.g. self or others).

Limitations and future directions

We employed diverse samples and methods to test the reputation management account of moral character; however, at least two important limitations should be noted, related respectively to the self-reported nature of our measures of moral character and of reputation management motives.

First, although our findings showed a positive relationship between moral character and moral double standards, we could not fully differentiate the ‘genuine’ and ‘reputation managing’ parts of self-reported moral character. People may also internalize reputation management as an integral part of ‘genuine’ moral character (see Footnote 1). In this case, moral character can facilitate socially desirable reactions in a prompt and heuristic way, and better serve the goal of appearing moral to others (Everett et al., 2016; Hardy & Van Vugt, 2006; Jordan et al., 2016; Jordan & Rand, 2020). This theorizing implies that self-reported moral character can be strongly and positively correlated with reputation management motives. However, the hypothesized interaction effect between moral character and reputation management motives on moral double standards replicated regardless of how the two were correlated across studies (positive and significant in Studies 1 and 3, non-significant in Study 2, and negative and significant in the Pilot Study in the SM; see Table S6 for specifics). To more formally differentiate the roles of actual and postured moral character in behavioural hypocrisy, future research may integrate self-reports with other-reports of moral character.

Second, we examined reputation management motives as an individual difference variable and did not manipulate reputation incentives to demonstrate their causal effects. Moreover, self-reported reputation management motives could themselves be influenced by concerns about social approval. For example, some research suggests that people may under-report their actual reputation management motives because pursuing a good reputation and high status can be stigmatized (Kim & Pettit, 2015). People may thus either over- or under-report their reputation management motives, depending on whether they perceive those motives as socially approved or disapproved.

Relatedly, our findings do not directly elucidate whether people who display moral double standards (1) genuinely believe such behaviours to be morally acceptable, or (2) consciously use them as a reputation management strategy. For example, although high moral character and reputation management motives were associated with stringent moral standards towards others across our studies, their relation with lenient moral standards towards the self seemed to apply only to moral judgements, not to actual behaviours (Study 3 and its Pilot Study in the SM). The extent to which self-reported moral behaviours reflected actual behaviours or strategic self-presentation was also unverifiable (Study 1). However, comparisons across studies may provide tentative evidence that people display moral double standards consciously and strategically in the service of reputation management. People who self-reported high (vs. low) moral character and reputation management motives judged themselves more leniently only in relatively anonymous settings (Study 2), but not in the presence of a third-party interviewer (Study 1) or observer (Study 3 and its Pilot Study in the SM). Future research may explore the mechanisms of moral double standards in different reputation contexts, and examine moral character and reputation management motives as antecedents to behavioural forms of moral hypocrisy (e.g. saying one thing and doing another; Dong et al., 2019; Effron et al., 2018).