Friday, March 5, 2021

UK self-selected sample: A significant proportion of individuals reported drinking more frequently in lockdown, drinking more units per drinking occasion and more frequent heavy episodic drinking

Characterising the patterns of and factors associated with increased alcohol consumption since COVID‐19 in a UK sample. Melissa Oldham et al. Drug and Alcohol Review, March 3 2021. https://doi.org/10.1111/dar.13256

Abstract

Introduction: To examine changes in drinking patterns and to assess factors associated with reported increases in frequency of drinking, units consumed and frequency of heavy episodic drinking (HED) during the UK lockdown.

Methods: Online cross‐sectional survey of 2777 self‐selected UK adults.

Results: Thirty percent of participants reported drinking more frequently in lockdown, 16% reported drinking more units per drinking occasion and 14% reported more frequent HED. For men and women, increased frequency of drinking was associated with being less likely to believe alcohol drinking would lead to greater chance of catching COVID‐19 (men: OR = 0.99, 95% CI = 0.98, 1.00; women: OR = 0.99, 95% CI = 0.99, 1.00) and deterioration in psychological wellbeing (OR = 1.27, 95% CI = 1.04, 1.54; OR = 1.29, 95% CI = 1.11, 1.51); increased unit consumption was associated with deterioration in financial situation (OR = 1.50, 95% CI = 1.21, 1.86; OR = 1.31, 95% CI = 1.05, 1.64) and physical health (OR = 1.31, 95% CI = 1.03, 1.67; OR = 1.66, 95% CI = 1.31, 2.10). Finally, increases in the frequency of HED were associated with deterioration in psychological wellbeing (OR = 1.65, 95% CI = 1.25, 2.18; OR = 1.46, 95% CI = 1.17, 1.82) and being furloughed (OR = 3.25, 95% CI = 1.80, 5.86; OR = 2.06, 95% CI = 1.19, 3.56). Other gender differences were detected, for example, living with children was associated with an increase in units consumed (OR = 1.72, 95% CI = 1.09, 2.73) and the frequency of HED (OR = 2.40, 95% CI = 1.44, 3.99) for men, but not women.

Discussion and Conclusions: In this self‐selected UK sample, a significant proportion of individuals reported drinking more frequently in lockdown, drinking more units per drinking occasion and more frequent HED. There were consistent predictors of increased consumption across men and women, but other gender differences were detected. This study identifies groups that may require targeted support in future lockdowns.
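For readers less used to this style of reporting, the odds ratios and confidence intervals quoted above are standard logistic-regression summaries. The snippet below is a generic sketch of how such numbers are derived from a fitted coefficient and its standard error; the coefficient values are made up for illustration and are not taken from the paper.

```python
import numpy as np

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return np.exp(beta), (np.exp(beta - z * se), np.exp(beta + z * se))

# Hypothetical coefficient for a predictor of increased drinking frequency
or_, (lo, hi) = odds_ratio_ci(beta=0.24, se=0.10)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}, {hi:.2f}")
```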

Discussion

About one-third of our self-selected sample, surveyed between 30 April and 14 June 2020, reported drinking more frequently in the first UK lockdown. These rates were roughly equivalent between men and women, though the correlates of increased frequency of alcohol consumption differed somewhat by gender. Deterioration in psychological wellbeing and believing that alcohol was unlikely to put them at greater risk of getting or not recovering from COVID-19 were correlated with increased frequency of drinking for both men and women. Amongst women, increases in the frequency of drinking occasions were also associated with last-year alcohol reduction attempts and deterioration in physical health. Amongst men, by contrast, increases in the frequency of drinking were associated with being younger, having a lower baseline AUDIT-C score, being furloughed, deterioration in living conditions, deterioration in financial circumstances, improvements in social relationships and having fewer pre-existing health conditions.

In terms of changes in the units consumed per drinking occasion, women were more likely than men to drink the same number of units as pre-COVID-19 (66% vs. 56% of men). Men were more likely than women to drink both more units (19% vs. 14%) and fewer units (25% vs. 20%). Deterioration in financial circumstances and physical health were associated with increased unit consumption for both men and women. Amongst women, increases in units consumed per drinking occasion were associated with having more alcohol reduction attempts in the last year. Amongst men, by contrast, increases in units consumed per drinking occasion were associated with living with children, deterioration in psychological wellbeing and believing that alcohol was unlikely to put them at greater risk of getting or not recovering from COVID-19.

Finally, the majority (61%) reported no change in the frequency of HED occasions pre‐ and post‐COVID‐19. For both women and men, being furloughed and deterioration in psychological wellbeing were associated with increases in the frequency of HED. Amongst women, increases in the frequency of HED were also associated with being younger, last‐year alcohol reduction attempts and living alone. Amongst men, increases in the frequency of HED were associated with living with children, having a more negative experience of social distancing, deterioration in financial circumstances, improvements in social relationships and being a current smoker.

Implications

This study has important implications in terms of highlighting groups that may need targeted support for alcohol reduction to counteract an increase in drinking during future COVID-19-related lockdowns in the UK. There were some consistencies in correlates of increased drinking amongst men and women. Deterioration in psychological wellbeing was one of the most consistent predictors of increases in the frequency of drinking and HED for both men and women. Being furloughed was also a consistent predictor of increases in HED across men and women. This is in line with other literature showing that alcohol consumption increases in economic downturns where unemployment is higher, partly due to increases in leisure time amongst those who are unemployed [36]. These findings suggest that in future iterations of lockdown, those on furlough or similar schemes may require additional alcohol-related support. Communications from either the government or employers around the furlough scheme could contain links to resources to help individuals manage their wellbeing and their drinking.

In line with other studies [3], there is also some evidence of gender differences in drinking patterns; units consumed per drinking occasion have polarised more amongst men in that men are more likely than women to be drinking both more and less. Furthermore, the correlates of increases in each drinking pattern are different for men and women. Living with children was associated with an increase in units consumed and the frequency of heavy episodic drinking for men, but not women. Increases in the amount of alcohol consumed on drinking occasions amongst male parents are concerning as HED in particular is likely to impair performance of caring responsibilities. This is in line with some research showing that women disproportionately carry the burdens of increased child care [23, 24], which could explain greater declines in wellbeing amongst women [25, 26].

Deterioration in financial circumstances was consistently associated with increases in all of the drinking measures for men, but only with units consumed for women. Previous research examining the relationships between economic downturns and alcohol consumption also found men were more likely to drink heavily in response to recessions and increased unemployment [36]. This may be due to increased stress in response to traditional gender roles in which men may be more likely to be considered the breadwinner.

Strengths and limitations

A key strength of this study was the variety of measures collected, permitting a detailed analysis of a broad range of potential factors predicting drinking patterns during the start of COVID-19-related lockdown in the UK. Furthermore, this survey allowed participants to select the date that they felt COVID-19 started to affect them; this offers a strength over other studies that rely on using the date lockdown began to signal ‘before’ and ‘after’ in a period of ongoing change. Although the full lockdown began on 24 March, there was advice and knowledge of the virus and its effects in the months before this which may have affected drinking behaviour. For example, pubs in England were closed on 20 March and people were encouraged to work from home or socially distance from 16 March. Furthermore, the collection of data while lockdown was ongoing limits the potential for recall bias, which might be present in retrospective studies. However, this study was not without limitations. As pre-COVID-19 drinking is only measured post-COVID-19, it may be susceptible to recall bias. The sample was self-selected rather than randomly selected, which reduces the generalisability of these results. Specifically, the results may be more reflective of people who complete online surveys about health than the general population in the UK. Black, Asian and Minority Ethnic people in particular were underrepresented in the study sample; this means that ethnicity was treated as white versus minority ethnic groups. Grouping all Black, Asian and Minority Ethnic participants together in this way does not allow examination of differences between different ethnicities and cultures, which limits the generalisability of these conclusions further. Finally, here we use a non-validated measure of self-assessed changes in psychological wellbeing, which may have been interpreted differently by participants.

Implicit transgender attitudes predicted multiple outcomes, including gender essentialism, contact with transgender people, and support for transgender-related policies, over and above explicit attitudes

Implicit Transgender Attitudes Independently Predict Beliefs About Gender and Transgender People. Jordan R. Axt et al. Personality and Social Psychology Bulletin, July 1, 2020. https://doi.org/10.1177/0146167220921065

Abstract: Surprisingly little is known about transgender attitudes, partly due to a need for improved measures of beliefs about transgender people. Four studies introduce a novel Implicit Association Test (IAT) assessing implicit attitudes toward transgender people. Study 1 (N = 294) found significant implicit and explicit preferences for cisgender over transgender people, both of which correlated with transphobia and transgender-related policy support. Study 2 (N = 1,094) found that implicit transgender attitudes predicted similar outcomes among participants reporting no explicit preference for cisgender versus transgender people. Across Study 3a (N = 5,647) and Study 3b (N = 2,276), implicit transgender attitudes predicted multiple outcomes, including gender essentialism, contact with transgender people, and support for transgender-related policies, over and above explicit attitudes. This work introduces a reliable means of measuring implicit transgender attitudes and illustrates how these attitudes independently predict meaningful beliefs and experiences.

Keywords: implicit attitudes, IAT, explicit attitudes, transgender, transphobia, policy


Transgenders’ sociosexuality is largely influenced by their sexual genotype despite their incongruent gender self-perception; the relationships between behavior, attitude, & sociosexual desire are different from those of cisgenders

Influence of Sexual Genotype and Gender Self-Perception on Sociosexuality and Self-Esteem among Transgender People. Rodrigo de Menezes Gomes, Fívia de Araújo Lopes & Felipe Nalon Castro. Human Nature, volume 31, pages 483–496. Jan 21 2021. https://doi.org/10.1007/s12110-020-09381-6

Abstract: Empirical data from studies with both heterosexual and homosexual individuals have consistently indicated different tendencies in mating behavior. However, transgenders’ data are often overlooked. This exploratory study compared levels of sociosexuality and self-esteem between transgenders and non-transgender (cisgender) individuals. The aim was to verify whether either sexual genotype or gender self-perception had more influence on the examined variables in transgenders. Correlations between self-esteem and sociosexuality levels were also investigated. The sample consisted of 120 Brazilian individuals (51 transgenders) from both sexes. Sociosexuality scores indicated mostly sex-typical patterns for transgenders of both sexes across the construct’s three dimensions (behavior, attitude, and desire), except for female-to-male transgenders’ behavioral sociosexuality. Unique associations between the dimensions of sociosexuality were found for transgender participants. No differences in self-esteem were observed and no correlations between self-esteem and sociosexuality were found. The results suggest that transgenders’ sociosexuality is largely influenced by their sexual genotype despite their incongruent gender self-perception and that the relationships between behavior, attitude, and sociosexual desire are different from those observed in cisgenders.


Finance employees in India: The relationship between dark triad traits and job performance is positive at the lower end of dark triad traits but flattens out as the dark triad traits intensify

Uppal, N. (2021), "Does it pay to be bad? An investigation of dark triad traits and job performance in India", Personnel Review, Feb 2021. https://doi.org/10.1108/PR-07-2019-0391

Abstract

Purpose: The current paper proposes a curvilinear relationship between the dark triad traits (Machiavellianism, psychopathy and narcissism) and job performance. In addition, it examines the moderation effect of traitedness on the dark triad–job performance relationship.

Design/methodology/approach: Drawing on data from 382 participants in a financial services firm in India, the authors conducted a two-phase study to examine the curvilinear and moderation effects.

Findings: Results confirmed that the relationship between dark triad traits and job performance is positive at the lower end of dark triad traits but flattens out as the dark triad traits intensify.

Originality/value: The authors discuss theoretical and practical implications and offer suggestions for future research.


Of key predictors of religious disbelief, witnessing fewer credible cultural cues of religious commitment was the most potent, followed distantly by reflective cognitive style, & less advanced mentalizing

The Origins of Religious Disbelief: A Dual Inheritance Approach. Will M. Gervais, Maxine B. Najle, Nava Caluori. Social Psychological and Personality Science, March 5, 2021. https://doi.org/10.1177/1948550621994001

Abstract: Widespread religious disbelief represents a key testing ground for theories of religion. We evaluated the predictions of three prominent theoretical approaches—secularization, cognitive byproduct, and dual inheritance—in a nationally representative (United States, N = 1,417) data set with preregistered analyses and found considerable support for the dual inheritance perspective. Of key predictors of religious disbelief, witnessing fewer credible cultural cues of religious commitment was the most potent, β = .28, followed distantly by reflective cognitive style, β = .13, and less advanced mentalizing, β = .05. Low cultural exposure predicted about 90% higher odds of atheism than did peak cognitive reflection, and cognitive reflection only predicted disbelief among those relatively low in cultural exposure to religion. This highlights the utility of considering both evolved intuitions and transmitted culture and emphasizes the dual roles of content- and context-biased social learning in the cultural transmission of disbelief (preprint https://psyarxiv.com/e29rt/).

Keywords: atheism, religion, culture, evolution, dual inheritance theory

Summary

Overall, this study is one of the most comprehensive available analyses of the cognitive, cultural, and motivational factors that predict individual differences in religious belief and disbelief (see also Willard & Cingl, 2017). Consistent patterns emerged, suggesting that lack of exposure to CREDs of religious faith is a key predictor of atheism. Once this context-biased cultural learning mechanism is accounted for, reflective cognitive style predicts some people being slightly more prone to religious disbelief than their cultural upbringing might otherwise suggest. That said, this relationship was relatively modest. Advanced mentalizing was a robust but weak predictor of religious belief, and existential security did not meaningfully predict disbelief. This overall pattern of results closely matched predictions of a dual inheritance approach but is difficult to reconcile with other prominent theoretical approaches (see Table 1 and Figure 2). These results speak directly to competing theoretical perspectives on the origins of religious disbelief culled from sociology, social psychology, evolutionary psychology, cognitive science of religion, cultural evolution, and gene–culture coevolution.

Alternatives and Limitations

Of the four primary atheism predictors that we used to test prominent theories, religious CREDs emerged as a clear empirical winner. In some ways, however, our tests may have been methodologically stacked in this variable’s favor. Like the self-reports of religious disbelief, this measure includes self-report items about religious upbringing. Thus, there is shared method variance associated with this predictor that is less evident for others. Also, although the CREDs–atheism relationship is consistent with a cultural transmission framework, heritability of religiosity may also contribute to atheists coming from families who aren’t visibly religious. The measure we used is unable to resolve this. Further, our various key predictors varied in both reliability and demonstrated validity. We chose these measures simply because they have been used in previous research; that said, previous use does not necessarily imply that the measures were sufficient.

As with measurement quality, sample diversity is a recurrent concern in psychological research (Henrich et al., 2010; Rad et al., 2018; Saab et al., 2020). Most psychology research nowadays emerges from convenience samples of undergraduates and Mechanical Turk workers. These samples are fine for some purposes, quite limited for others (Gaither, 2019), and are known to depart from representativeness (Callegaro et al., 2014; MacInnis et al., 2018). While our nationally representative sampling allows us to generalize beyond samples we can access for free (in lab) or cheap (MTurk), even a large nationally representative sample barely scratches the surface of human diversity (Henrich et al., 2010; Rad et al., 2018; Saab et al., 2020). As such, we encourage similar analyses across different cultures (Willard & Cingl, 2017). Diversifying the samples that make up the empirical portfolio of evolutionary approaches to religion is especially necessary because cultural cues themselves emerged as the strongest predictor of disbelief in this and related work (Gervais & Najle, 2015; Gervais et al., 2018; Maij et al., 2017; Willard & Cingl, 2017). Without diverse samples, including and especially extending well beyond nationally representative samples in the United States, researchers can only aspire to ever more precisely answer a mere outlier of an outlier of our most important scientific questions about human nature.

We measured and tested predictors of religious belief and disbelief. This outcome measure is quite narrow in scope, in terms of the broader construct of religiosity. Further, our Supernatural Belief Scale—while it has been used across cultures—is fairly Judeo-Christian-centric. We suspect that a broader consideration of religiosity in diverse societies may yield different patterns. The Western, Educated, Industrialized, Rich, Democratic (WEIRD) people problem isn’t just a sampling issue; it also reflects an overreliance on the theories, constructs, and instruments developed by WEIRD researchers to test their weird hunches.

Although it is not featured in any of the core theoretical perspectives we evaluated, social liberalism was consistently the strongest covariate of religious disbelief. The intersection of religious and political ideology is an interesting topic in its own right and merits further consideration. Interestingly, disbelief, if anything, was associated with fiscal conservatism in this sample. This suggests that simple “believers are conservative” tropes are oversimplifications. Ideology and religiosity are multifaceted and dissociable, but certainly of interest given rampant political polarization in the United States and elsewhere. That said, religion–ideology associations, whatever they may be, are largely orthogonal to existing cultural and evolutionary theories of religious belief and disbelief.

Theoretical Implications

We simultaneously evaluated predictions about the origins of disbelief from three prominent theoretical perspectives: secularization, cognitive byproduct, and dual inheritance. Comparing the predictions in Table 1 with the results of Figure 2, results were most consistent with the dual inheritance perspective, the only theoretical perspective that predicted prominent roles for both inCREDulous atheism and analytic atheism. Given the primacy of cultural learning in our data, any model that does not rely heavily on context-biased cultural learning is likely a poor fit for explaining the origins of religious disbelief. By extension, such theoretical models are necessarily incomplete or faulty evolutionary accounts of religion. Simply growing up in a home with relatively fewer credible displays of faith predicted disbelief, contra prior assertions from the cognitive science of religion that disbelief results from “special cultural conditions” and “a good degree of cultural scaffolding” (Barrett, 2010).

Analytic atheism is probably the most discussed avenue to disbelief in the literature (Pennycook et al., 2016; Shenhav et al., 2012) and broader culture (Dawkins, 2006). Although there was consistent evidence of analytic atheism in this sample, the overall trend was modest, the trend itself varied considerably across exposure to CREDs, and sufficient religious CREDs buffered believers against the putatively corrosive influence of reflective cognition on faith. Despite claims that atheism generally requires cognitive effort or reflection (Barrett, 2010; Boyer, 2008), cognitive reflection was only modestly related to atheism in these data. These results, taken alongside other evidence accumulating from similar studies (Farias et al., 2017; Gervais et al., 2018; Willard & Cingl, 2017), may suggest that early claims surrounding the primacy of effortful cognitive reflection as a necessary predictor of atheism were overenthusiastic. Analytic thinking predicts atheism in some contexts but is far from primary.

It is initially puzzling that existential security proved largely impotent in our analyses, as it appears to be an important factor in explaining cross-cultural differences in religiosity (Barber, 2013; Inglehart & Norris, 2004; Solt et al., 2011). It is possible that our analyses were at the wrong level of analysis to capture the influence of existential security, which may act as a precursor to other cultural forces. There may actually be a two-stage generational process whereby existential security demotivates religious behavior in one generation, leading the subsequent generation to atheism as they do not witness CREDs of faith. This longitudinal societal prediction merits future investigation.

Finally, this work has implications beyond religion. Presumably, many beliefs arise from an interaction between core cognitive faculties, motivation, cultural exposure, and cognitive style. The general dual inheritance framework adopted here may prove fruitful for other sorts of beliefs elsewhere. Indeed, a thorough exploration of the degree to which different beliefs are predicted by cultural exposure relative to other cognitive factors may be useful for exploring content- versus context-biased cultural learning and the contributions of transmitted and evoked culture. As this is a prominent point of contention between different schools of human evolutionary thought (Laland & Brown, 2011), such as evolutionary psychology and cultural evolution, further targeted investigation may be productive.

Coda

The importance of transmitted culture and context-biased cultural learning as a predictor of belief and disbelief cannot be overstated. Combined, this work suggests that if you are guessing whether individuals are believers or atheists, you are better off knowing how their parents behaved—Did they tithe? Pray regularly? Attend synagogue?—than how they themselves process information. Further, our interaction analyses suggest that sufficiently strong cultural exposure yields sustained religious commitment even in the face of the putatively corrosive influence of cognitive reflection. Theoretically, these results fit well within a dual inheritance approach, as evolved cognitive capacities for cultural learning prove to be the most potent predictor of individual differences in the cross-culturally canalized expression of religious belief. Atheists are becoming increasingly common in the world, not because human psychology is fundamentally changing but rather because evolved cognition remains fairly stable in the face of a rapidly changing cultural context that is itself the product of a coevolutionary process. Faith emerges in some cultural contexts, and atheism is the natural result in others.


We find that, contrary to the conventional wisdom, lawyers are not particularly unhappy; indeed, they suffer rates of mental illness much lower than the general population

Measuring Lawyer Well‐Being Systematically: Evidence from the National Health Interview Survey. Yair Listokin and Raymond Noonan. Journal of Empirical Legal Studies, March 4 2021. https://doi.org/10.1111/jels.12274

Abstract: Conventional wisdom says that lawyers are uniquely unhappy. Unfortunately, this conventional wisdom rests on a weak empirical foundation. The “unhappy lawyers” narrative relies on nonrandom survey data collected from volunteer respondents. Instead of depending on such data, researchers should study lawyer mental health by relying on large microdatasets of public health data, such as the National Health Interview Survey (NHIS) administered by the U.S. Centers for Disease Control. The NHIS includes data from 100–200 lawyers per year. By aggregating years, an adequate sample size of lawyers can readily be obtained, with much greater confidence that the lawyers in the sample resemble the true population of U.S. lawyers. When we examine the NHIS data, we find that, contrary to the conventional wisdom, lawyers are not particularly unhappy. Indeed, they suffer rates of mental illness much lower than the general population. Lawyer mental health is not significantly different than the mental health of similarly educated professionals, such as doctors and dentists. Rates of problematic alcohol use among lawyers, however, are high, even when compared to the general population. Moreover, problematic use of alcohol among lawyers has grown increasingly common over the last 15 years. These sometimes surprising and nuanced findings demonstrate the value of relying on more reliable data such as the NHIS.


We find little evidence that teachers have worse health and wellbeing outcomes than other occupational groups

How does the mental health and wellbeing of teachers compare to other professions? Evidence from eleven survey datasets. John Jerrim, Sam Sims, Hannah Taylor and Rebecca Allen. Review of Education, Vol. 8, No. 3, October 2020, pp. 659–689. DOI: 10.1002/rev3.3228

There is growing concern about the mental health and wellbeing of teachers globally, with the stress caused by the job thought to be a key factor driving many to leave the profession. It is often claimed that teachers have worse mental health and wellbeing outcomes than other occupational groups. Yet academic evidence on this matter remains limited, with some studies supporting this notion, while a handful of others do not. We contribute to this debate by providing the largest, most comprehensive analysis of differences in mental health and wellbeing between teachers and other professional workers to date. Drawing upon data from across 11 social surveys, we find little evidence that teachers have worse health and wellbeing outcomes than other occupational groups. Research in this area must now shift away from whether teachers are disproportionately affected by such issues towards strengthening the evidence on the likely drivers of mental ill-health within the education profession.

Keywords: mental health, occupational comparisons, teachers, wellbeing.


Conclusions

There is widespread global concern about the mental health and wellbeing of the teaching profession. Reports about the stresses and strains of working as a teacher are now widespread in the international media (Brennan & Henton, 2017; Asthana & Boycott-Owen, 2018), with particular pressure stemming from long term-time working hours and from the scrutiny teachers are placed under due to high-stakes testing and school accountability. It has been suggested that this is a key reason why many individuals are deciding to leave teaching for alternative employment (CooperGibson Research, 2018), with a view that levels of stress, anxiety, depression and other aspects of poor wellbeing are not as prevalent amongst workers in other jobs. Several previous papers and research reports have suggested that mental health and wellbeing outcomes may indeed be worse amongst teachers than other professional groups (Travers & Cooper, 1993; Johnson et al., 2005; Ofsted, 2019; Worth & Van den Brande, 2019). At the same time, a handful of other studies have questioned whether this is really the case, presenting alternative empirical evidence to suggest that teachers have similar (and sometimes even better) wellbeing outcomes than professional employees in general. It hence remains an open question whether teachers are at a uniquely high risk of suffering from low levels of wellbeing and of developing mental health problems.

Given the conflicts in the existing evidence base, this paper has sought to conduct the largest and most comprehensive analysis to date of the mental health and wellbeing of teachers in comparison to other professional groups. Drawing evidence from across 11 separate datasets, which together cover a wide array of mental health and wellbeing constructs and measures, the paper has presented detailed new evidence on this important policy issue. Our headline conclusion is that teachers actually seem to have very similar mental health and wellbeing outcomes to other professionals. There is little robust evidence to suggest that, on the whole, teachers are particularly anxious, depressed, have lower levels of life-satisfaction or have poorer wellbeing outcomes than demographically similar individuals in other forms of professional employment. Although there are some exceptions amongst certain subgroups (e.g. SEN teachers tend to have somewhat lower levels of mental wellbeing, while the wellbeing of headteachers, on certain measures, is somewhat higher) and for certain outcomes (e.g. comparatively few teachers suffer from feelings of low self-worth), differences between teachers and other professionals are, on the whole, relatively small.

These findings do, of course, need to be interpreted in light of the limitations of this study. First, although we have ‘matched’ teachers to demographically comparable professionals in other jobs, the number of potential confounders included within our matching models is a limitation. For instance, we have not been able to control for the wellbeing of study participants before they made their occupational choices. It could therefore be that those who choose to enter teaching start out with very high levels of wellbeing and mental health, which then rapidly decline to around the national average once they start working as a teacher. Such a situation would get masked within our analysis due to our lack of sufficient prior (pre-occupational selection) mental health and wellbeing controls. This is of course part of a much more general caveat that this paper has not been designed to measure the causal effect of choosing teaching as a career. Rather, we have presented a descriptive analysis attempting to establish whether mental health and wellbeing outcomes are worse amongst teachers than other professional groups—and not whether teaching leads to worse outcomes per se.

Second, and relatedly, one interpretation of our findings is that they are the result of individuals with mental health problems selecting out of teaching. For instance, those individuals who were working as teachers—but who struggled with their mental health and wellbeing—may have chosen to quit teaching for alternative employment. It is therefore possible that the teachers within our datasets are, on average, found to have similar outcomes to other professionals due to all those with mental health problems having chosen to leave. This again is an important caveat that needs to be remembered when interpreting our results—all the analyses are cross-sectional and are in reference to the population of individuals currently employed as teachers at the time of the surveys.

Third, to some extent all the data analysed in this paper are based upon information that has been self-reported by survey respondents. Although we have considered both responses to widely used and validated instruments and a selection of more objective outcome measures (e.g. prescription of anti-depressants), such indicators are not entirely free from such problems. Indeed, although the stigma attached to mental ill-health may be on the decline, it is possible that this leads some individuals to misreport. While this issue is unlikely to undermine our substantive conclusions, future work using other measures (possibly including biomarkers and administrative primary care records) would help to strengthen the evidence base still further. Finally, some of the datasets we analysed included questions that asked specifically about wellbeing related to work, while others were about wellbeing and mental health in general. While there was no obvious difference in the pattern of the results, further research into occupational differences in work-related mental ill-health would be beneficial. For instance, the APMS dataset includes 15 questions specifically about work-related stress, which could provide a much more detailed insight into how this problem compares across occupations. Unfortunately, the sample size for each SOC group in the APMS is too small—and the occupational data made available too coarsely coded—to robustly investigate this issue. Our advice would be that at least a subset of these 15 work-related stress questions are asked within one of the UK’s large, flagship surveys (e.g. the LFS or APS) to facilitate such detailed occupational comparisons.

What then are the key directions for future work in this area? In our view, the evidence presented here makes it very hard to sustain the position that wellbeing and mental health outcomes of teachers are worse than for other occupational groups. For researchers in this area, the focus should now shift to better understanding the drivers of poor mental health outcomes amongst teachers, including whether these are indeed mainly work-related, or are actually mainly due to issues outside of their job (e.g. their personal life). Relatedly, we need better evidence on what system and school leaders can do to support their staff. There are, after all, a non-trivial number of school staff facing mental health issues, some of which may be caused or aggravated by their work. Understanding what can be done to help these individuals through this difficult period is key to teaching becoming a happier and healthier profession.

Lilliputian hallucinations concern hallucinated human, animal or fantasy entities of minute size

Leroy’s elusive little people: A systematic review on lilliputian hallucinations. Jan Dirk Blom. Neuroscience & Biobehavioral Reviews, March 4 2021. https://doi.org/10.1016/j.neubiorev.2021.03.002

Rolf Degen's take: https://twitter.com/DegenRolf/status/1367705194562265090

Highlights

• Lilliputian hallucinations are not as harmless as traditionally assumed.

• Their etiology is diverse, with CNS pathology accounting for a third of the cases.

• Therefore, in most cases auxiliary investigations are advisable.

• Treatment is directed at the underlying cause.

• A failure of size constancy may explain part of the underlying mechanism.

Abstract: Lilliputian hallucinations concern hallucinated human, animal or fantasy entities of minute size. Having been famously described by the French psychiatrist Raoul Leroy in 1909, who wrote from personal experience, to date they are mentioned almost routinely in textbooks of psychiatry, albeit with little in-depth knowledge. I therefore systematically reviewed 145 case reports and case series comprising 226 case descriptions, concluding that lilliputian hallucinations are visual (61%) or multimodal (39%) in nature. In 97% of the cases, they are perceived as grounded in the actual environment, thus indicating involvement of higher-level regions of the perceptual network subserving the fusion of sensory and hallucinatory content. Perceptual release and deafferentiation are the most likely underlying mechanisms. Etiology is extremely diverse, with schizophrenia spectrum disorder, alcohol use disorder and loss of vision accounting for 50% of the cases and neurological disease for 36%. Recovery was obtained in 62% of the cases, whereas 18% of the cases ended in chronicity and 8% in death. Recommendations are made for clinical practice and future research.

Keywords: Alcohol hallucinosis, Charles Bonnet syndrome, entity experience, intoxication, multimodal hallucination, psychedelics, size constancy

4.6. Pathophysiology

MRI scans of patients experiencing lilliputian hallucinations indicate the involvement of primary and secondary visual cortex in their mediation (Chen and Liu, 2011; Vacchiano et al., 2019). Since systematic localizing studies are lacking and other MRI studies have, moreover, shown involvement of frontal, parietal and mesencephalic regions (Walterfang et al., 2012; Hirakawa et al., 2016), it is hard to tell which parts of the visual network are of primary importance. Nonetheless, a special role would seem to be reserved for visual association cortex and - especially in compound hallucinations - higher-level cortical association networks. This follows from the complexity and remarkable embeddedness of the hallucinations in the observer’s surroundings. Individual phenomenological characteristics also hint at a role for lower levels of the visual network, even down to V4 in color perception and V5 in motion perception. Although much remains to be elucidated here, a case can also be made for the applicability of two different pathophysiological mechanisms, i.e. deafferentiation and perceptual release. It should be noted, though, that neither of these mechanisms are likely to produce lilliputian hallucinations without the involvement of higher-level perceptual networks.

4.6.1. Deafferentiation

The deafferentiation model states that loss of peripheral sensory input can lead to spurious activity of central networks normally involved in processing that input. It is thus applicable to closed-eye hallucinations, hypnagogic hallucinations and Charles Bonnet syndrome (ffytche et al., 1998), including cases of hemi- and quadrantanopsia. Moreover, this mechanism has been described in Parkinson’s disease, with the lilliputian hallucinations appearing at dusk (i.e., crepuscular hallucinations) (Montassut & Sauguet, 1943). A special case, and an exception in fact, is one involving a patient with blindness of the right eye and hemianopia of the left eye, who experienced lilliputian hallucinations in the one remaining hemifield, seeing ‘little men’ populating the world that he could still actually see (Camus, 1911), which is thus exemplary of perceptual release.

4.6.2. Perceptual release

The perceptual release model states that endogenously mediated perceptual material, which during our waking hours normally remains below the threshold of consciousness, can break through the surface and be projected into the outside world. Also referred to as ‘dream intrusion’, the model characterizes lilliputian hallucinations as a matter of ‘dreaming while awake’. A crucial difference with dreaming, though, is that in this case only part of the perceived environment is replaced by hallucinatory content. The model is not only applicable to hypnopompic hallucinations (Trénel, 1926), but also to the majority of other cases falling outside the ‘sensory deprivation’ category (Table 1). Here, the term peduncular hallucinosis refers to a concept developed by Lhermitte and van Bogaert regarding the alleged involvement of the midbrain and pons (Kosty et al., 2019). However, whether these mesencephalic regions are indeed responsible for (all instances of) perceptual release is as yet uncertain. What is more, the phenomenological characteristics of peduncular hallucinosis far transcend those of lilliputian hallucinations.

4.6.3. Adjuvant models

Four other models likely play minor roles in the mediation of lilliputian hallucinations. The first is the peripheral model, in which peripheral hallucinations are attributed to intraocular pathology such as glaucoma, cataract and entoptic phenomena. Glaucoma and cataract are thought to act via the intermediate process of visual loss and hence deafferentiation, whereas entoptic phenomena (e.g., floaters, vitreous opacities) serve as reference points for the development of lilliputian hallucinations, much in the way of visual pareidolia. With eyes closed, simple visual phenomena such as photopsia and Eigengrau are thought to fulfill similar catalytic functions (Ahlenstiel, 1954). All such entoptic phenomena result in hallucinations of the swiveling projective type, i.e. those moving along with the observer’s gaze, which were reported in 2% of the cases only. The second adjuvant model is the reperception model, which holds that hallucinations may borrow their content from memory. In the literature reviewed, I found a rare example concerning a 54-year-old woman who saw faithful miniature versions of people she had known at school, who were moreover saying things that made perfect sense in the light of her memories (Jacome, 1999). Then there is the irritation model, which states that epileptic activity may selectively stimulate brain regions responsible for hallucinatory activity. As we saw, epilepsy was diagnosed in only 7% of the cases. Moreover, the complex (and sometimes compound) nature of lilliputian hallucinations suggests that even in epilepsy, these phenomena are probably mediated via the intermediate process of perceptual release. Finally, the psychodynamic model needs mention. It suggests that lilliputian hallucinations are typically experienced by people with playful, imaginative or regressive personalities. In exceptional cases, the model may perhaps help to explain the appreciation or even the content of hallucinations, but I ask myself how the observations might contribute to our understanding of their mediation.

4.7. Size constancy

An intriguing and as yet little understood aspect of lilliputian hallucinations is the minute size of the beings featuring in them. In 1922 a crucial role was already suggested for failed size constancy in their mediation (Salomon, 1922), where size constancy is the capacity to assess the size of objects as uniform at different ranges. Emmert’s law states that this ability depends on depth cues and on the size of the retinal image (Millard et al., 2020). It was thus proposed that, in lilliputian hallucinations, this mechanism goes awry when hallucinated material is ‘projected’ on surfaces in confined spaces, making the ensuing image appear unnaturally small (Salomon, 1922). Even if correct, though, that explanation would only be applicable to hallucinations of the cinematic projective type, which make up a mere 1% of all cases. Only recently, lesion studies and electrophysiological studies have started to shed light on the neural correlates of size constancy. Drawing on two cases of brain infarction, it has been suggested that size constancy is a function of (the forceps major of) the dominant side of the brain (Hong et al., 2010). In an animal study with macaques, tungsten microelectrodes have been inserted into single neurons located in V4, leading to the identification of object-size coding neurons, i.e. neurons that compute object size independently of retinal size (Tanaka & Fujita, 2015). Similar experiments with cats yielded evidence for the involvement of specialized neurons in V1 (Pigarev & Levichkina, 2016), while a study in humans with steady-state visually evoked potentials found even more robust results for the latter region (Chen et al., 2019). While the exact mechanisms subserving size constancy remain to be elucidated, evidence is thus accumulating that specific neurons in early visual cortex play a role in the process, perhaps even without having to rely on retinal size, as was indicated by one study (Tanaka & Fujita, 2015), which is obviously also the case in lilliputian hallucinations.
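For reference, Emmert's law is commonly formalized as a proportionality between perceived size, retinal (angular) size and perceived distance; the notation below is added for illustration and is not taken from the review:

$$ S_{\mathrm{perceived}} \;\approx\; k \,\theta_{\mathrm{retinal}} \, D_{\mathrm{perceived}} $$

On this reading, hallucinatory content 'projected' onto a nearby surface is assigned a small perceived distance D, so the same retinal image yields an unnaturally small perceived figure, consistent with the failed size-constancy account proposed by Salomon (1922).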

4.8. Limitations

The number of published case descriptions of lilliputian hallucinations is small and most stem from historical sources. Even though I summarized the original case descriptions as faithfully as possible, it was not always clear whether they had been complete in the first place, and whether neurological patients were not also alcoholics, for instance, or vice versa. As a consequence, there is room for doubt about the accuracy of several of the conclusions drawn in this review. Moreover, since reports on lilliputian hallucinations were sometimes buried in the literature on Charles Bonnet syndrome, where they often serve as examples of the syndrome’s characteristic complex visual hallucinations, I may have missed an unknown number of case descriptions for my systematic review.

Self-regulation composite (preschool & ages 17-37) predicts capital formation at 46, but preschool delay of gratification alone does not predict such capital formation

Predicting mid-life capital formation with pre-school delay of gratification and life-course measures of self-regulation. Daniel J. Benjamin et al. Journal of Economic Behavior & Organization, Volume 179, November 2020, Pages 743-756. https://doi.org/10.1016/j.jebo.2019.08.016

Rolf Degen's take: The renowned marshmallow test goes belly-up, failing to forecast individuals' future

Highlights

• Self-regulation composite (preschool & ages 17-37) predicts capital formation at 46.

• Preschool delay of gratification alone does not predict capital formation at 46.

• The composite is more predictive partly because it consists of many items.

• No evidence of more predictive power for self-regulation reported later in life.

Abstract: How well do pre-school delay of gratification and life-course measures of self-regulation predict mid-life capital formation? We surveyed 113 participants of the 1967–1973 Bing pre-school studies on delay of gratification when they were in their late 40’s. They reported 11 mid-life capital formation outcomes, including net worth, permanent income, absence of high-interest debt, forward-looking behaviors, and educational attainment. To address multiple hypothesis testing and our small sample, we pre-registered an analysis plan of well–powered tests. As predicted, a newly constructed and pre-registered measure derived from preschool delay of gratification does not predict the 11 capital formation variables (i.e., the sign-adjusted average correlation was 0.02). A pre-registered composite self-regulation index, combining preschool delay of gratification with survey measures of self-regulation collected at ages 17, 27, and 37, does predict 10 of the 11 capital formation variables in the expected direction, with an average correlation of 0.19. The inclusion of the preschool delay of gratification measure in this composite index does not affect the index’s predictive power. We tested several hypothesized reasons that preschool delay of gratification does not have predictive power for our mid-life capital formation variables.

Keywords: Self-regulation, Delay of gratification, Mid-life capital formation

JEL: D910, D140, I310, I210, I120


6. Concluding remarks

We have reported pre-registered analysis of the latest survey wave of the Bing pre-school study on delay of gratification. Respondents were in their late 40’s at the time of this latest wave and were asked to report 11 mid-life capital formation outcomes (e.g., net worth and permanent income). Our analysis plan both described our methods and predicted what we would find. As predicted, a newly constructed measure derived from preschool delay of gratification does not predict the 11 capital formation variables (i.e., the sign-adjusted average correlation was 0.02). By contrast, a composite self-regulation index, combining preschool delay of gratification with survey measures of self-regulation collected at ages 17, 27, and 37, does predict 10 of the 11 capital formation variables in the expected direction, with an average correlation of 0.19. The inclusion of the preschool delay of gratification measure in this composite index does not affect the index’s predictive power for two reasons. Most importantly, the index of self-regulatory measures comprises 86 responses per participant, whereas the preschool delay of gratification task is a single behavioral task. In addition, the preschool delay of gratification task is measured using a diagnostic variant of the task for 34 of our 113 participants; the remaining 79 participants experienced a non-diagnostic variant of the pre-school delay of gratification task.
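As a rough illustration of what a "sign-adjusted average correlation" of this kind involves, the sketch below uses simulated data and hypothetical variable names; it is not the study's actual analysis or dataset.

```python
import numpy as np

def sign_adjusted_avg_corr(predictor, outcomes, expected_signs):
    """Correlate one predictor with each outcome, flip each correlation so that
    the hypothesized direction counts as positive, then average the results."""
    corrs = []
    for y, sign in zip(outcomes, expected_signs):
        r = np.corrcoef(predictor, y)[0, 1]
        corrs.append(sign * r)
    return float(np.mean(corrs))

# Toy data: a self-regulation index and three capital-formation outcomes
rng = np.random.default_rng(0)
sri = rng.normal(size=200)               # composite self-regulation index
outcomes = [
    0.2 * sri + rng.normal(size=200),    # net worth (expected positive association)
    -0.2 * sri + rng.normal(size=200),   # high-interest debt (expected negative association)
    0.1 * sri + rng.normal(size=200),    # educational attainment (expected positive association)
]
print(sign_adjusted_avg_corr(sri, outcomes, expected_signs=[+1, -1, +1]))
```

The study's 11 outcomes and its specific index construction would extend this toy setup; the point is only that each correlation is oriented toward its hypothesized direction before averaging.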

The data we have analyzed is unique because the Bing cohort is the only sample where preschool delay of gratification has been studied long enough to examine relationships with mid-life outcomes. While the tests of our primary hypotheses were well powered, we caution that our sample is relatively small and not representative of the overall population—e.g., 97% of our sample has a four-year college degree (the exceptions are one participant who has a two-year college degree and two who have some college but no degree)—limiting the generalizability of the results.

However, we can compare our results to the small set of overlapping analyses that have been conducted using the Dunedin cohort, which began collecting data in 1972–1973 and has childhood self-regulation measures but no preschool delay of gratification measure. The cohort is from a small town in New Zealand with much lower levels of educational attainment (Moffitt et al., 2011). Specifically, 29% of the Dunedin sample is college educated (Belsky et al., 2016). Despite the stark socioeconomic differences, the self-regulation measures used in the Dunedin study have similar predictive power to the self-regulation measures in the Bing sample. For example, Moffitt et al. (2011) found that a 1 SD increase in childhood self-control as measured in the Dunedin study predicts a 0.24 SD increase in income at age 32. In the Bing sample, a 1 SD increase in RNSRI predicts a 0.32 SD increase in rank-normalized permanent income. Similarly, a 1 SD increase in the Dunedin self-control measure predicts a 0.12 SD decrease in credit card problems, while a 1 SD increase in the Bing RNSRI predicts a 0.18 SD decrease in rank-normalized credit card misuse. A 1 SD increase in the Dunedin self-control measure predicts a 0.14 SD decrease in money management difficulties, while a 1 SD increase in the Bing RNSRI predicts a 0.24 SD increase in financial health. Despite these intriguing similarities across the two samples, the issue of generalizability remains an important question to be addressed in future research as mid-life data becomes available in more childhood longitudinal cohorts.

Thursday, March 4, 2021

Can Researchers’ Personal Characteristics Shape Their Statistical Inferences?

Can Researchers’ Personal Characteristics Shape Their Statistical Inferences? Elizabeth W. Dunn, Lihan Chen et al. Personality and Social Psychology Bulletin, August 31, 2020. https://doi.org/10.1177/0146167220950522

Abstract: Researchers’ subjective judgments may affect the statistical results they obtain. This possibility is particularly stark in Bayesian hypothesis testing: To use this increasingly popular approach, researchers specify the effect size they are expecting (the “prior mean”), which is then incorporated into the final statistical results. Because the prior mean represents an expression of confidence that one is studying a large effect, we reasoned that scientists who are more confident in their research skills may be inclined to select larger prior means. Across two preregistered studies with more than 900 active researchers in psychology, we showed that more self-confident researchers selected larger prior means. We also found suggestive but somewhat inconsistent evidence that men may choose larger prior means than women, due in part to gender differences in researcher self-confidence. Our findings provide the first evidence that researchers’ personal characteristics might shape the statistical results they obtain with Bayesian hypothesis testing.

Keywords: confidence, gender, Bayesian inference, hypothesis testing
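To make concrete how a researcher-chosen prior mean can feed into the final statistic described in the abstract above, here is a minimal sketch using a normal approximation to a Bayes factor; the function, parameter names and numbers are illustrative assumptions, not the procedure or software used in the study.

```python
from scipy.stats import norm

def approx_bf10(d_hat, se, prior_mean, prior_sd):
    """Approximate Bayes factor (H1 vs H0) for an observed effect size d_hat
    with standard error se. H0 fixes the true effect at 0; H1 places a
    Normal(prior_mean, prior_sd) prior on the true effect. This is a
    normal-approximation sketch, not a default Bayesian t-test implementation."""
    m0 = norm.pdf(d_hat, loc=0.0, scale=se)                 # marginal likelihood under H0
    m1 = norm.pdf(d_hat, loc=prior_mean,
                  scale=(se ** 2 + prior_sd ** 2) ** 0.5)   # marginal likelihood under H1
    return m1 / m0

# Same observed data, two researchers expecting different effect sizes
data = dict(d_hat=0.30, se=0.15, prior_sd=0.20)
print(approx_bf10(prior_mean=0.2, **data))  # modest expected effect
print(approx_bf10(prior_mean=0.8, **data))  # confident (large) expected effect
```

In this toy example the larger prior mean actually weakens the evidence for H1, because the observed effect falls short of the confident expectation; either way, the researcher's chosen prior mean visibly shapes the reported Bayes factor.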


Body dissatisfaction is highest in heterosexual women and lowest in heterosexual men

Malillos, Monica H., Elena Theofanous, Keith R. Laws, and Paul Jenkinson. 2021. “Gender, Sexual Orientation and Body Dissatisfaction: A Meta-analysis Covering Four Decades of Research.” PsyArXiv. March 4. doi:10.31234/osf.io/5hdkr

Abstract

Background: Four decades of research has assessed how gender and/or sexual orientation contribute to levels of body dissatisfaction (BD). The findings have proven somewhat equivocal and little attention has been paid to potential moderators. Method: The current meta-analysis compared BD in gay and heterosexual men (38 overall effects), and lesbian and heterosexual women (25 overall effects). Additional pairwise comparisons explored differences between heterosexual men and heterosexual women, gay men and lesbians, gay men and heterosexual women, and heterosexual men and lesbian women.

Results: Random effects model meta-analyses revealed greater levels of BD in gay men compared to heterosexual men (g = -0.36, 95% CI -0.43, -0.29). By contrast, BD was greater in heterosexual women than lesbians (g = 0.09, 95% CI 0.03, 0.15). Year of publication and mean difference in age between gay and heterosexual samples moderated the relationship between BD and sexual orientation, but only for men. Pairwise comparisons indicated that BD is highest in heterosexual women and lowest in heterosexual men.

Conclusions: Findings indicate that both gender and sexual orientation influence BD. We identified a number of limitations in the existing research base, and make recommendations for future research.
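The pooled g values and confidence intervals in the Results above come from random-effects meta-analysis. As a rough, generic illustration of that approach, the sketch below implements a DerSimonian-Laird pooling with made-up study inputs; it is not the authors' data or code.

```python
import numpy as np

def dersimonian_laird(effects, variances, z=1.96):
    """Pool study-level effect sizes (e.g. Hedges' g) under a DerSimonian-Laird
    random-effects model; returns the pooled effect and its 95% CI."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                       # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - z * se, pooled + z * se)

# Toy input: three studies comparing gay vs. heterosexual men (made-up numbers)
print(dersimonian_laird(effects=[-0.45, -0.30, -0.38], variances=[0.02, 0.03, 0.015]))
```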


Conservative (vs. liberal) providers enhance consumer experience because conservative providers are higher on trait‐conscientiousness; consumers expect to receive better service from liberals due to their higher openness

How Consumer Experience Is Shaped by the Political Orientation of Service Providers. Alexander Davidson and Derek A. Theriault. Journal of Consumer Psychology, March 3 2021. https://doi.org/10.1002/jcpy.1233

Abstract: This research documents the counterintuitive effect that consumers actually have better service experiences with politically conservative service providers, but expect to have better experiences with politically liberal service providers. First, we document the effect in actual consumer service experience across three different contexts (Airbnb hosts, Uber drivers, waiters), and demonstrate that conservative (vs. liberal) providers enhance consumer experience (studies 1, 2a, 2b), because conservative providers are higher on trait‐conscientiousness (study 3). Second, in an experiment (study 4), we document expectations about service experience and demonstrate that consumers expect to receive better service from liberals (vs. conservatives). We explain that this effect emerges because consumers do not perceive that conservatives (vs. liberals) are more conscientious, but do perceive that they are less open. Overall, our theoretical framework outlines how conservative providers possess an unknown strength (higher conscientiousness) and a known weakness (lower openness), which leads to different actual and expected consumer service experiences. These novel findings provide valuable contributions to our understanding of how consumers are impacted by the political orientation of marketplace providers.




Wednesday, March 3, 2021

The perception of odor pleasantness is shared across cultures

The perception of odor pleasantness is shared across cultures. Artin Arshamian et al. bioRxiv, Mar 2 2021. https://doi.org/10.1101/2021.03.01.433367

Abstract: Human sensory experience varies across the globe. Nonetheless, all humans share sensory systems with a common anatomical blueprint. In olfaction, it is unknown to what degree sensory perception, in particular the perception of odor pleasantness, is dictated by universal biological principles versus sculpted by culture. To address this issue, we asked 235 individuals from 9 diverse non-western cultures to rank the hedonic value of monomolecular odorants. We observed substantial global consistency, with molecular identity explaining 41% of the variance in individual pleasantness rankings, while culture explained only 6%. These rankings were predicted by the physicochemical properties of out-of-sample molecules and out-of-sample pleasantness ratings given by a separate group of industrialized western urbanites, indicating human olfactory perception is strongly constrained by universal principles.
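To illustrate the kind of variance partition the abstract reports (how much of the ranking variance is attributable to the odorant molecule versus the rater's culture), here is a minimal sketch on simulated data using a simple eta-squared decomposition; it is not the authors' modelling pipeline, and the numbers it prints are arbitrary.

```python
# Minimal sketch (simulated data, simple eta-squared decomposition, not the
# authors' analysis): share of pleasantness variance explained by molecule
# identity versus culture.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
molecules, cultures = list("ABCDEFGHIJ"), ["culture1", "culture2", "culture3"]

rows = []
for c in cultures:
    for m in molecules:
        for _ in range(8):  # 8 raters per culture, hypothetical
            # latent pleasantness: strong molecule effect, weak culture effect, noise
            score = 0.5 * ord(m) + 0.3 * cultures.index(c) + rng.normal(0, 1.5)
            rows.append({"molecule": m, "culture": c, "pleasantness": score})
df = pd.DataFrame(rows)

def eta_squared(data, factor, outcome="pleasantness"):
    """Between-group sum of squares for `factor` as a share of the total sum of squares."""
    grand = data[outcome].mean()
    between = data.groupby(factor)[outcome].apply(
        lambda g: len(g) * (g.mean() - grand) ** 2).sum()
    total = ((data[outcome] - grand) ** 2).sum()
    return between / total

print("molecule:", round(eta_squared(df, "molecule"), 2))
print("culture: ", round(eta_squared(df, "culture"), 2))
```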


The intellectual tools needed to elicit nature's secrets may have derived from the Stone Age art of reading tracks

Tracking Science: An Alternative for Those Excluded by Citizen Science. Louis Liebenberg et al. Citizen Science: Theory and Practice, Mar 3 2021. http://doi.org/10.5334/cstp.284

Rolf Degen's take: https://twitter.com/DegenRolf/status/1367182399658995720

Abstract: In response to recent discussion about terminology, we propose “tracking science” as a term that is more inclusive than citizen science. Our suggestion is set against a post-colonial political background and large-scale migrations, in which “citizen” is becoming an increasingly contentious term. As a diverse group of authors from several continents, our priority is to deliberate a term that is all-inclusive, so that it could be adopted by everyone who participates in science or contributes to scientific knowledge, regardless of socio-cultural background. For example, current citizen science terms used for Indigenous knowledge imply that such practitioners belong to a sub-group that is other, and therefore marginalized. Our definition for “tracking science” does not exclude Indigenous peoples and their knowledge contributions and may provide a space for those who currently participate in citizen science, but want to contribute, explore, and/or operate beyond its confinements. Our suggestion is not that of an immediate or complete replacement of terminology, but that the notion of tracking science can be used to complement the practice and discussion of citizen science where it is contextually appropriate or needed. This may provide a breathing space, not only to explore alternative terms, but also to engage in robust, inclusive discussion on what it means to do science or create scientific knowledge. In our view, tracking science serves as a metaphor that applies broadly to the scientific community—from modern theoretical physics to ancient Indigenous knowledge.

Keywords: citizen science, tracking science, Indigenous communities, citizenship, immigration, inclusive


Examples of Tracking Science

The definition of tracking science describes, among other things, what Indigenous communities in Africa have been doing for more than 100,000 years (Liebenberg 1990; 2013a; 2013b). Tracking science does not propose a relativist version of Indigenous knowledge that fails to make distinctions between evidence-based scientific knowledge and mythology. Instead, it attends to the empirical elements of knowledge production across diverse sets of people that, in practice, may contribute to the larger body of scientific knowledge about the world. For example, we do not think that we should “abolish the distinction between science and fiction” (Woolgar 1988, p. 166), but should consider the politics and power involved in determining what scientific facts come to be accepted, much as science studies scholar Bruno Latour suggests (Latour 2003; 2005, p. 87–93). Tracking science addresses this issue by recognizing diverse epistemological traditions without reducing them to the stale knowledge-belief binary opposition. In this context, Hansson (2018, p. 518) explains that:

“the discussion is often couched in terms of comparisons between ‘indigenous belief systems’ and modern science. This is a misguided and unfair comparison. In particular, the common comparison between modern science and the magical and religious thinking in indigenous societies is remarkably misconceived. Religious and spiritual thinking in traditional societies should be compared to religious and spiritual thinking in modern societies. Similarly, modern science should be compared to those elements in traditional societies that are most similar to modern science.”

We do not seek to reproduce the bifurcation Hansson describes, and acknowledge that the lines between scientific and religious thinking are often not as clear as this characterization. Nevertheless, we insist that similar elements of knowledge can be commensurable across societies. Tracking science is what Indigenous communities depended on for their survival for millennia: evidence-based scientific knowledge that had an objective correlation with the real world. Furthermore, in contemporary times, Indigenous communities have been involved in scientific research as well as biodiversity and environmental monitoring as far afield as the Kalahari in Africa (Stander et al. 1997; Liebenberg et al. 2017; Keeping et al. 2018), the Arctic (Danielsen et al. 2014; Johnson et al. 2015), and Australia (Ansell and Koening 2011; Ens 2012), to name but a few examples. See also the video and article by Cross and Page (2020), “Indigenous trackers are teaching scientists about wildlife”: https://edition.cnn.com/2020/07/09/africa/louis-liebenberg-c2e-spc-int/index.html. In today’s world, Indigenous farmers who follow ancient traditions in performing advanced plant breeding and agricultural experiments maintain crop biodiversity by in situ conservation, which is much more efficient than storage of seeds (Altieri and Merrick 1987; Hanson 2019). Other examples include Aboriginal burning practices offering alternative fire regimes that have been incorporated into rangeland management in Australia (Verran 2002; Cook et al. 2012), the use of fire to manage natural resources by the Kalahari San (Humphrey et al. 2021), and local farmers contributing to soil science in the Philippines (Richelle et al. 2018).

Within the modern urban and rural context, tracking science could become the contemporary equivalent of Indigenous knowledge, local knowledge, or even vernacular knowledge (see Richelle et al. 2018), where urban and rural communities discover and develop their own scientific understanding of their environment—without the constraints of citizenship. This has been happening in the United Kingdom, and probably other parts of the world, for more than a century (Pocock et al. 2015). The Biological Records Centre, established in 1964 in the United Kingdom, is volunteer led and involves an estimated 70,000 people. Their datasets are long-term, have large geographic extent, and are taxonomically diverse. Significantly, many recorders undertake individual research projects on their own or with others, or make observations on novel interactions or behavior. They publish these in various journals and newsletters. We suggest that what the Biological Records Centre has been doing is closer to the definition of tracking science than the dominant, but not only, participatory models of citizen science, in which it is presumed that the research endeavors in which community members participate should be planned and led by professional scientists.

Perhaps one of the most inspirational scientific papers was published by The Royal Society in the journal Biology Letters. This paper, “Blackawton Bees,” describing an original discovery on the vision of bumblebees, was designed, conducted, and written by a group of 8- to 10-year-old children outside of London, UK. The children asked the questions, hypothesized the answers, designed the games (the experiments) to test these hypotheses, and analyzed the data. They also drew the figures (in colored pencil) and wrote the paper. The paper was inspired not by the scientific literature, but by their own observations of the world. In a sense it reveals science in its truest (purest) form (Blackawton et al., 2010).

Our definition of tracking science would also incorporate the work of eminent independent scientists who changed how we think about the world in which we live, and produced groundbreaking scientific innovations working outside the domain of institutionalized science. These would include the 19th-century naturalists Charles Darwin and Alfred Russel Wallace, co-discoverers of natural selection, along with 20th-century giants such as Rachel Carson, Jane Goodall, and Albert Einstein. Tracking science therefore provides both opportunities and role models for young people who want to go beyond the confines of participatory citizen science. It has the potential to generate a recognized knowledge network wherein their aspirations and explorations may result in unexpected innovations in science and technology.