Wednesday, April 1, 2020

Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System: Costs going up

Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System. Laura C. Rosella et al. JAMA Netw Open. 2020;3(4):e201917, April 1, 2020. doi:10.1001/jamanetworkopen.2020.1917

Question  What are the population-level trends in health care utilization and expenditures in the 2 years before death among adults in Ontario, Canada?

Findings  This cohort study found that health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion in 2005 to CAD$7.84 billion in 2015, and the intensity of health care utilization and deaths in hospital varied by resource utilization gradients.

Meaning  In this study, the observed trends demonstrated that costs and hospital-centered care before death are high in Ontario.


Abstract
Importance  Measuring health care utilization and costs before death has the potential to initiate health care improvement.

Objective  To examine population-level trends in health care utilization and expenditures in the 2 years before death in Canada’s single-payer health system.

Design, Setting, and Participants  This population-based cohort included 966 436 deaths among adult residents of Ontario, Canada, from January 2005 to December 2015, linked to health administrative and census data. Data for deaths from 2005 to 2013 were analyzed from November 1, 2016, through January 31, 2017. Analyses were updated from May 1, 2019, to June 15, 2019, to include deaths from 2014 and 2015.

Exposures  Sociodemographic exposures included age, sex, and neighborhood income quintiles, which were obtained by linking decedents’ postal codes to census data. Aggregated Diagnosis Groups were used as a general health service morbidity-resource measure.

Main Outcomes and Measures  Health care services accessed for the last 2 years of life, including acute hospitalization episodes of care, intensive care unit visits, and emergency department visits. Total health care costs were calculated using a person-centered costing approach. The association of area-level income with high resource use 1 year before death was analyzed with Poisson regression analysis, controlling for age, sex, and Aggregated Diagnosis Groups.

Results  Among 966 436 decedents (483 038 [50.0%] men; mean [SD] age, 76.4 [14.96] years; 231 634 [24.0%] living in the lowest neighborhood income quintile), health care expenditures increased in the last 2 years of life during the study period (CAD$5.12 billion [US $3.83 billion] in 2005 vs CAD$7.84 billion [US $5.86 billion] in 2015). In the 2 years before death, 758 770 decedents (78.5%) had at least 1 hospitalization episode of care, 266 987 (27.6%) had at least 1 intensive care unit admission, and 856 026 (88.6%) had at least 1 emergency department visit. Overall, the proportion of deaths occurring in hospital decreased from 45.6% (37 984) in 2005 to 41.5% (39 474) in 2015. Utilization in the last 2 years, 1 year, 180 days, and 30 days of life varied by resource utilization gradients. For example, the proportion of individuals visiting the emergency department was slightly higher among the top 5% of health care users compared with other utilization groups in the last 2 years of life (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year of life (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]); however, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). No meaningful association with high resource use was observed for individuals in the highest income quintile compared with the lowest income quintile (rate ratio, 1.02; 95% CI, 0.99-1.05) after adjusting for relevant covariates.

Conclusions and Relevance  In this study, health care use and spending in the last 2 years of life in Ontario were high. These findings highlight a trend in hospital-centered care before death in a single-payer health system.



Introduction
Similar to those in other high-income countries, health care utilization and costs in Canada are expected to increase because of an expanding and aging population.1 A large proportion of these costs are incurred toward the end of life, with multiple studies demonstrating that health care utilization in the final months of life accounts for a substantial share of health care expenditures in comparison with other points in an individual’s life.2-5 In addition, most spending is concentrated in small groups of the population, who are characterized as high-cost users.6,7 Studies have shown that high-intensity medical care at the end of life can produce poor outcomes,8-10 can be associated with poor quality of life,11 and may conflict with patient preferences.9,12 To meet the growing needs of an aging population, a deeper understanding of the determinants and patterns of health care utilization and costs prior to death is required.

Most studies examining health care utilization prior to death have focused on a single aspect of care (eg, palliative services)13 or were specific to a particular cause of death.14-18 To our knowledge, few studies have examined health care use and costs at a population level and across an array of health sectors.5,19 Despite its potential to inform health care service delivery and improvement, evidence on health care utilization and cost patterns before death in a Canadian context is limited. A recent population-based study that examined health care expenditures in Ontario, Canada, from 2010 to 201319 reported that decedents who constituted less than 1% of the population consumed 10% of Ontario’s total health care budget, demonstrating that health care utilization occurs disproportionately. Using comprehensive multilinked mortality files, we analyzed population-level trends in health care utilization and expenditures prior to death in Ontario’s single-payer health system by looking at overall trends for more than a decade and by gradients of cost (ie, patients in the top 5%, top 6%-50%, and bottom 50% of health care costs).

Methods
Study Design
This retrospective cohort study used multiple linked vital statistics, population files, and health administrative data held at ICES to examine all deaths occurring in Ontario between January 2005 and December 2015. These data sets were linked using unique encoded identifiers and analyzed at ICES, an independent, nonprofit research institute whose legal status under Ontario’s health information privacy law allows it to collect and analyze health care and demographic data, without consent, for health system evaluation and improvement. This study received ethical approval from the University of Toronto’s Health Sciences research ethics board and the institutional review board at Sunnybrook Health Sciences Centre, Toronto, Canada. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

Study Population
Data for all deaths registered in the province of Ontario were obtained from the Office of the Registrar General-Deaths (ORG-D) file. The ORG-D is linked to the Registered Persons Database (RPDB), which contains basic demographic information for those who have ever received an Ontario health card number for the province’s universal health care system (overall linkage rate, 96.5%).20 The study cohort consisted of all deaths registered in the ORG-D between January 1, 2005, and December 31, 2015, that were linked to the RPDB record (N = 966 436). Those who had an invalid Ontario health card number on their death date (n = 4433), were not residents of Ontario (n = 252), or were younger than 18 years (n = 8768) were excluded.

Measures
We examined health care utilization prior to death according to several sociodemographic exposures. Sex and age data were obtained from the RPDB. Categories for age at time of death were 18 to 24 years, 25 to 34 years, 35 to 44 years, 45 to 54 years, 55 to 64 years, 65 to 74 years, 75 to 85 years, and older than 85 years. Ecological-level measures of income and education status were estimated using data from the 2006 Canadian census21 and were applied to individuals according to the dissemination area, which represents the smallest geographic census area in which the individual resided. Based on their postal code at the time of death, individuals were assigned to a dissemination area. Education was characterized as the proportion of individuals who completed high school in a given dissemination area. Individuals were grouped into income and education quintiles ranging from 1 (lowest 20% of income or education) to 5 (highest 20% of income or education).

As a general health service morbidity-resource measure, we used The Johns Hopkins Adjusted Clinical Group system version 10.0.1 Aggregated Diagnosis Group (ADG) scores, a person-focused, diagnosis-based method of categorizing individuals’ illnesses.22 Aggregated Diagnosis Groups have been validated for health services research use in Ontario23 and were calculated for the 2 years prior to death.

We measured health care utilization and services accessed for 2 years, 1 year, 180 days, and 30 days before death. Hospitalization episodes of care and intensive care unit (ICU) visits were obtained from the Discharge Abstract Database. An acute hospitalization episode was defined as either an admission to an acute care setting from which the patient was discharged or a continuous sequence of hospital stays in different hospitals to which the patient was transferred. Transfers between 2 different institutions were defined using both the timing between admissions and transfer flags on either record. Specifically, the following situations were defined as a transfer: (1) any admission within 6 hours of the previous discharge, (2) any admission within 12 hours of the previous discharge in which the type of institution transferred from or to was type 1 (ie, acute care), or (3) any admission within 48 hours of the previous discharge in which the number of the institution transferred from matched the number of the institution transferred to. Length of stay for episodes of care and ICU visits was calculated by subtracting the date of the earliest admission from the date of the latest discharge. Emergency department visits were obtained from the National Ambulatory Care Reporting System and were counted as 1 claim per patient per registered day. Physician services (primary care and specialist) were obtained from the Ontario Health Insurance Plan claims database. A physician visit was counted as 1 claim per patient per service day per physician. Physician specialties listed as family practice and general practice, community medicine, and pediatrics were considered primary care visits. All other physician specialties were considered specialist visits. Death in hospital was identified in the Discharge Abstract Database if a hospital discharge disposition code was recorded as died, indicating a death in hospital.
In 423 580 of 427 859 in-hospital deaths (99.0%), the death date in the ORG-D and the hospital discharge date with a code indicating died in the Discharge Abstract Database were within 1 day of each other.
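The three transfer rules described above amount to a pairwise test on consecutive hospital records. A minimal sketch, assuming illustrative field names (these are not the Discharge Abstract Database's actual variables, and rule 3 is simplified to a direct institution-number match):

```python
from datetime import datetime, timedelta

def is_transfer(prev_discharge, next_admit, prev_type, next_type,
                prev_inst, next_inst):
    """Decide whether two consecutive hospital records form one episode
    of care, per the three rules in the Methods. Field names are
    illustrative, not actual Discharge Abstract Database variables."""
    gap = next_admit - prev_discharge
    if gap < timedelta(0):
        return False  # overlapping records would be handled upstream
    # Rule 1: any admission within 6 hours of the previous discharge
    if gap <= timedelta(hours=6):
        return True
    # Rule 2: within 12 hours when either institution is acute care (type 1)
    if gap <= timedelta(hours=12) and "acute" in (prev_type, next_type):
        return True
    # Rule 3: within 48 hours when the institution numbers on the two
    # records match (simplified stand-in for the transfer-flag match)
    if gap <= timedelta(hours=48) and prev_inst == next_inst:
        return True
    return False

# A 3-hour gap links two stays into one episode under rule 1
same_episode = is_transfer(datetime(2015, 3, 1, 10, 0),
                           datetime(2015, 3, 1, 13, 0),
                           "rehab", "rehab", "1234", "5678")
```

Episode length of stay then follows from the linked records: the latest discharge date minus the earliest admission date.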

We calculated comprehensive per-person health care costs for the time preceding death (last 2 years of life). Annual health care utilization and costs were calculated from the health care payer perspective, using administrative data from across health care sectors, including inpatient hospital stays, emergency department visits, same-day surgery, stays in complex continuing care hospitals and inpatient rehabilitation, inpatient psychiatric admissions, physician payments for patient visits and community laboratory tests, and prescriptions filled for individuals eligible for the Ontario Drug Benefit Plan. A person-centered costing macro was used to calculate total annual health care spending; the costing methodology has been described elsewhere.24 Expenditures were calculated in Canadian dollars for the year 2015. Individuals were categorized in resource utilization groups (top 5%, top 6%-50%, and bottom 50%) based on their total health care costs in the last year of their lives.
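The gradient assignment itself is a percentile cut on per-person costs. A sketch with simulated costs (the real totals come from the person-centered costing macro cited in reference 24; the lognormal draw here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated per-person total costs (CAD$) in the last year of life;
# a heavy-tailed draw stands in for the actual costing output
costs = rng.lognormal(mean=10.0, sigma=1.2, size=100_000)

# Cut points for the study's gradients: top 5%, top 6%-50%, bottom 50%
p50, p95 = np.percentile(costs, [50, 95])
groups = np.where(costs >= p95, "top 5%",
                  np.where(costs >= p50, "top 6%-50%", "bottom 50%"))

shares = {g: (groups == g).mean()
          for g in ("top 5%", "top 6%-50%", "bottom 50%")}
```

By construction the three groups then cover approximately 5%, 45%, and 50% of decedents.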

Statistical Analysis
The distribution of sociodemographic characteristics among decedents at the time of death was described using means and proportions according to health care utilization gradients. Overall and per-person health care utilization metrics were calculated for the 2 years, 1 year, 180 days, and 30 days before death using medians and proportions and are presented according to health care utilization gradients for the last year of life. We estimated temporal trends of total health care expenditures among adult deaths from 2005 to 2015 by health care utilization gradient.

Factors associated with being in the top 5% of health care users in the last year of life were assessed by a modified Poisson model.25 We chose to model risk directly using a modified Poisson regression because it provides a good approximation of the binomial distribution when the sample is large, and it is less likely than logistic regression to overestimate the relative risk.26 We used belonging to the top 5% as the outcome and sex, age, area-level income quintile, and ADG score as the covariates. Associations are reported as rate ratios (RRs) with corresponding 95% confidence intervals. Statistical significance was set at P < .05, and all tests were 2-tailed. All analyses were conducted using SAS Enterprise Guide statistical software, version 7.15 (SAS Institute).

Results
Sociodemographic Characteristics by Health Care Utilization Gradient
Sociodemographic characteristics of 966 436 adult decedents (483 038 [50.0%] men; 231 634 [24.0%] living in the lowest neighborhood income quintile), stratified by health care utilization gradients (top 5%, top 6%-50%, and bottom 50%), are shown in Table 1. Those in the top 5% were younger, with a mean (SD) age of 71.1 (14.6) years compared with 76.4 (14.96) years for the total cohort. A larger percentage of those in the top 5% were male (26 818 [55.5%] vs 200 965 [46.2%] in the top 6%-50% and 255 255 [52.8%] in the bottom 50%) and had a higher mean (SD) number of ADGs compared with the overall cohort (14.9 [3.6] vs 11.2 [4.4]). In contrast, the distributions of area-level income and education were similar across health care utilization gradients. The number of deaths captured in the cohort per year was similar across years, from 83 227 deaths in 2005 to 95 044 deaths in 2015 (eTable 1 in the Supplement). The major causes of death in the cohort were cancer (287 308 [29.7%]) and diseases of the circulatory system (279 881 [29.0%]) (eTable 2 in the Supplement).

Health Care Utilization in the Last 2 Years, 1 Year, 180 Days, and 30 Days of Life
Health care utilization prior to death for the overall cohort is described in Table 2. In the last 2 years of life, most individuals (758 770 [78.5%]) had at least 1 acute hospitalization episode of care, with a median (interquartile range [IQR]) length of stay of 8 (5-15) days. More than one-quarter (266 987 [27.6%]) were admitted to the ICU with a median (IQR) length of stay of 69 (33-130) hours in acute care, and almost all (856 026 [88.6%]) had an emergency department visit. The median (IQR) numbers of visits to primary care and specialist physicians were similar, at 31 (17-53) visits and 34 (13-69) visits, respectively.

In the last 30 days of life, 143 225 decedents (14.8%) were admitted to the ICU, spending a median (IQR) of 59 (23-124) hours in acute care. In addition, most visited a primary care physician (856 679 [88.6%]; median [IQR] visits, 4 [1-9]) and a specialist (699 042 [72.3%]; median [IQR] visits, 4 [0-14]). In terms of proximity to death, 475 574 decedents (49.2%) had at least 1 hospitalization episode of care in the last 30 days of life, 662 628 (68.6%) in the last 180 days, 710 035 (73.5%) in the last year, and 758 770 (78.5%) in the last 2 years. Similarly, the proportion that visited the emergency department was 528 658 (54.7%) in the last 30 days of life, 750 558 (77.7%) in the last 180 days, 805 598 (83.4%) in the last year, and 856 026 (88.6%) in the last 2 years. The comparatively small gaps between these windows indicate that health care use was heavily concentrated in the period immediately before death.

Health Care Utilization Metrics by Resource Utilization Gradients
Table 2 presents health care utilization metrics at the end of life according to resource utilization gradients. In the last 2 years of life, among those who experienced a hospitalization, individuals in the top 5% had a median (IQR) of 3 (2-6) episodes of care per person, compared with 1 (0-2) episode of care among individuals in the bottom 50%. In the same period, approximately two-thirds of those in the top 5% experienced an ICU admission (31 099 [64.4%]) with a median (IQR) length of stay of 143 (70-317) hours; in comparison, approximately one-fifth of individuals in the bottom 50% (100 959 [20.9%]) had an ICU admission, with a median (IQR) length of stay of 47 (22-89) hours. The proportion of individuals visiting the emergency department was slightly higher among the top 5% compared with other utilization groups in the last 2 years (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]) of life. In contrast, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). In the last 2 years of life, the median (IQR) number of primary care visits was 57 (28-106) among the top 5% compared with 22 (11-37) among the bottom 50%. The median (IQR) number of specialist visits over this period was 163 (96-238) among the top 5% compared with 21 (8-40) among the bottom 50%.

Factors Associated With High Resource Utilization Prior to Death
In the Poisson model (Table 3), significant risk reductions for high resource utilization (ie, top 5%) in the last year of life were observed among women compared with men (RR, 0.90; 95% CI, 0.88-0.91) and among older age groups; decedents older than 85 years had an RR of 0.21 compared with those aged 18 to 24 years (95% CI, 0.19-0.23) after adjusting for income and ADGs. No meaningful associations were observed between individuals in the highest area income quintile and individuals in the lowest quintile (RR, 1.02; 95% CI, 0.99-1.05) after adjusting for sex, age, and ADGs. The association between high income (ie, quintile 5) and low income (ie, quintile 1) remained null in the sex-stratified models, in which the confidence intervals included the null value.

Hospital Deaths by Resource Utilization Gradients
Table 4 displays trends in the percentage of deaths that occurred in hospital by resource utilization gradients. Overall, the proportion of deaths occurring in hospital decreased from 45.6% (37 984 deaths) in 2005 to 41.5% (39 474 deaths) in 2015. Throughout the study period, a total of 29 292 of 48 324 deaths (60.4%) among those in the top 5% and 203 792 of 483 213 deaths (42.2%) among those in the bottom 50% occurred in hospital, with little variation over time. Among the top 6% to top 50% resource gradient, deaths in hospital decreased from 14 975 of 28 792 (52.0%) in 2005 to 18 569 of 46 859 (39.6%) in 2015.

Temporal Trends in Health Care Expenditures According to Resource Utilization Gradients
Total health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion (US $3.83 billion) in 2005 to CAD$7.84 billion (US $5.86 billion) in 2015, an increase of approximately 35%. Similarly, expenditures during this period increased from CAD$3.59 billion (US $2.69 billion) to CAD$5.34 billion (US $4.01 billion) in the last year of life, an increase of 33%. In the last 180 days of life, expenditures increased from CAD$2.53 billion (US $1.90 billion) to CAD$3.67 billion (US $2.75 billion), a 31% increase, and for the last 30 days of life, they increased from CAD$1.04 billion (US $0.78 billion) to CAD$1.43 billion (US $1.07 billion), a 27% increase (Figure, A). Mean per-person spending in the last 2 years of life increased among the top 5% from CAD$273 820 (95% CI, CAD$269 935 to CAD$277 760) (US $205 365; 95% CI, US $202 451 to US $208 320) in 2005 to CAD$295 183 (95% CI, CAD$291 811 to CAD$298 593) (US $221 387; 95% CI, US $218 858 to US $223 945) in 2015. In the same period, mean per-person spending in the bottom 50% decreased from CAD$33 489 (95% CI, CAD$33 210 to CAD$33 771) (US $25 117; 95% CI, US $24 908 to US $25 328) in 2005 to CAD$31 148 (95% CI, CAD$30 871 to CAD$31 427) (US $23 361; 95% CI, US $23 153 to $23 570) in 2015 (Figure, B). In the last 2 years of life, mean (SD) per-person spending for acute hospital care increased from CAD$4839 (CAD$14 053) (US $3629 [US $10 540]) in 2005 to CAD$6572 (CAD$19 722) (US $4928 [US $14 792]) in 2015 (Figure, C).
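A note on the arithmetic: the quoted percentage increases appear to be computed as the difference relative to the 2015 totals rather than as growth over the 2005 baseline (under which the 2-year figure would be roughly 53%). A quick check of that reading:

```python
# Reported 2005 vs 2015 totals (CAD$ billions) and the quoted increases
windows = {
    "last 2 years":  (5.12, 7.84, 35),
    "last 1 year":   (3.59, 5.34, 33),
    "last 180 days": (2.53, 3.67, 31),
    "last 30 days":  (1.04, 1.43, 27),
}
for label, (y2005, y2015, quoted) in windows.items():
    share_of_2015 = round((y2015 - y2005) / y2015 * 100)   # matches quoted
    growth_on_2005 = round((y2015 - y2005) / y2005 * 100)  # conventional
    print(f"{label}: quoted {quoted}%, share-of-2015 {share_of_2015}%, "
          f"growth-on-2005 {growth_on_2005}%")
```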

Discussion
This study examined population-wide health care utilization and costs at the end of life in the universal health care system of Ontario, which accounts for approximately 40% of Canada's population. Our unique focus on health care utilization gradients and trends of health care use at the end of life was enabled by a mortality database that contained all deaths registered in Ontario over 11 years, linked with health administrative data. We demonstrated that overall health care expenditures in Ontario for the last 2 years of life increased by 35% from 2005 to 2015, with the largest proportional increase in average per-person spending observed in the top 5% and top 6% to top 50% of health care users. We demonstrated higher end-of-life utilization of health care services among those in the top 5% compared with the overall cohort for hospitalization episodes of care, ICU visits, emergency department visits, and physician visits. Exceptions to this pattern were identified for the last 30 days of life, in which utilization of certain services, such as emergency department visits, was higher among the top 6% to top 50% and bottom 50% of health care users than among the top 5%. However, the observed reduced utilization of these services could have been the result of individuals in the highest cost group already being admitted to a hospital in their last 30 days of life. Several studies have reported population-wide health care utilization prior to death,19,27,28 but they have not looked at differences among health care use gradients.

The study showed that in the last year of life, 74% of Ontario decedents had a hospitalization episode of care and 24% spent time in the ICU. Comparable patterns of end-of-life health care utilization have been reported in other high-income countries. For example, an Australian study looking at hospital-based services used by adults during the last year of life reported slightly higher rates of hospitalization (84%) and lower rates of ICU visits (12%).27 In the United States, ICU visit rates in the last month of life ranged from 24% to 29% among Medicare beneficiaries aged 66 years and older compared with 21% in our cohort.29

We observed an inverse linear association between age and being in the top 5% of health care users in the last year of life, especially among men. A similar pattern of lower expenditures among older age groups in the last year of life was reported in the US Medicare population of adults aged 65 years and older.30 In our analysis, we did not see meaningful associations for area-level income quintiles and high health care utilization in the last year of life after adjusting for sex, age, and ADGs. Similar findings were observed in a retrospective cohort analysis of health care use among deaths in Ontario from 2010 to 2013, in which total costs did not vary by neighborhood income quintile.19 In contrast, among the US Medicare population, individuals in the lowest-income areas had slightly higher expenditures in the last year of life compared with those living in the highest-income areas.30 Furthermore, in a study of health care spending in the last year of life in the province of British Columbia, Canada, the highest 2 household income quintiles were shown to have approximately 4% less health care spending than those in the lowest income quintile.31 The differential income associations observed in these studies could be attributed to health system differences in access to health care services in the jurisdictions under study and differences in the ecological-level vs individual-level income measures used.

A larger percentage of deaths occurred in hospital in our cohort compared with Switzerland, where it was reported that 34% of deaths were in a hospital,28 and in the United States, where deaths in acute care hospitals ranged from 25% to 33% among decedents older than 66 years.29 Furthermore, we observed that the proportion of deaths in hospital among the top 5% and bottom 50% of health care users in the last year of life was stable over the study period. The observed high-intensity care near the end of life and high percentage of deaths in hospitals highlights a need for a societal-level discussion about approaches to end-of-life care in Ontario.

We observed that high health care utilization was associated with multimorbidity, as measured by ADGs, and that hospital-centered care was the typical trajectory at the end of life. This points to the need to design appropriate integrated care strategies that could support patients at the end of life to be discharged from the hospital and receive care and management for their conditions through home care or long-term care services.

Limitations
It is important to note some limitations to our study. First, our study used ecological-level indicators of socioeconomic status based on postal code information at the time of death, which may have provided lower estimates of income gradients in health care utilization.32 Second, our database only included services covered by the provincial health care payer and not services that may be covered by supplemental insurance or paid for out of pocket (eg, nursing, personal care, medications, and therapy). Third, comprehensive recommendations regarding end-of-life care are difficult to make in the absence of information on the appropriateness of care and use of potentially avoidable health services, which were out of scope for this study. Nonetheless, the findings support understanding of end-of-life health care trends in a universal health care system.

Conclusions
This study reported on health care utilization in the 2 years before death with a focus on the characterization of high-cost users. It identified patterns of high utilization of health care services before death and a large proportion of deaths in hospital, with variation across health care utilization gradients. The findings suggest a trajectory of hospital-centered care prior to death in Ontario.



References and full text at the link above.

Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables

Harper, Craig A., Liam Satchell, Dean Fido, and Robert Latzman. 2020. “Functional Fear Predicts Public Health Compliance in the COVID-19 Pandemic.” PsyArXiv. April 1. doi:10.31234/osf.io/jkfu3

Abstract: In the current context of the global pandemic of coronavirus disease-2019 (COVID-19), health professionals are working with social scientists to inform government policy on how to slow the spread of the virus. An increasing amount of social scientific research has looked at the role of public message framing, for instance, but few studies have thus far examined the role of individual differences in emotional and personality-based variables in predicting virus-mitigating behaviors. In this study we recruited a large international community sample (N = 324) to complete measures of self-perceived risk of contracting COVID-19, fear of the virus, moral foundations, political orientation, and behavior change in response to the pandemic. Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables. We discuss these data in relation to the potentially functional nature of fear in global health crises.


Gay men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women

Speech Acoustic Features: A Comparison of Gay Men, Heterosexual Men, and Heterosexual Women. Alexandre Suire, Arnaud Tognetti, Valérie Durand, Michel Raymond & Melissa Barkat-Defradas. Archives of Sexual Behavior, March 31 2020. https://rd.springer.com/article/10.1007/s10508-020-01665-3

Abstract: Potential differences between homosexual and heterosexual men have been studied on a diverse set of social and biological traits. Regarding acoustic features of speech, researchers have hypothesized a feminization of such characteristics in homosexual men, but previous investigations have so far produced mixed results. Moreover, most studies have been conducted with English-speaking populations, which calls for further cross-linguistic examinations. Lastly, no studies investigated so far the potential role of testosterone in the association between sexual orientation and speech acoustic features. To fill these gaps, we explored potential differences in acoustic features of speech between homosexual and heterosexual native French men and investigated whether the former showed a trend toward feminization by comparing theirs to that of heterosexual native French women. Lastly, we examined whether testosterone levels mediated the association between speech acoustic features and sexual orientation. We studied four sexually dimorphic acoustic features relevant for the qualification of feminine versus masculine voices: the fundamental frequency, its modulation, and two understudied acoustic features of speech, the harmonics-to-noise ratio (a proxy of vocal breathiness) and the jitter (a proxy of vocal roughness). Results showed that homosexual men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women. Lastly, testosterone levels did not influence any of the investigated acoustic features. Combined with the literature conducted in other languages, our findings bring new support for the feminization hypothesis and suggest that the feminization of some acoustic features could be shared across languages.


Discussion

This study offers an interesting take on the interaction between sexual orientation and acoustic features of speech in a French speaker sample. First, our analysis of different acoustic features revealed well-known patterns of sexual dimorphism in human voices (i.e., F0, F0-SD, jitter, and HNR). Second, our findings showed that French homosexual men displayed a more modulated and less breathy voice than French heterosexual men, thus supporting and extending previous studies conducted mostly with English speakers. Our results for the LDA showed that French homosexual men exhibited a slight but significant vocal feminization when considering speech acoustic features altogether (up to 10.65%), which supports the feminization hypothesis. (It is important to note, however, that no overlap was observed between heterosexual and homosexual men vs. heterosexual women.) Lastly, testosterone levels did not mediate the association between vocal patterns and sexual orientation.
Consistent with previous findings in English-speaking populations, no significant differences were observed in mean F0 between French-speaking heterosexual and homosexual men (Gaudio, 1994; Lerman & Damsté, 1969; Munson et al., 2006b; Rendall et al., 2008; Rogers et al., 2001; Smyth et al., 2003). The results did show a difference between homosexual and heterosexual men in intonation, with the former displaying higher pitch variations than the latter. A relationship between pitch variations and sexual orientation was previously found in one Dutch (Baeck et al., 2011) and one American-English population (Gaudio, 1994), suggesting that feminized pitch variations might be characteristic of male homosexual speech across languages (but see Levon, 2006). In our study, the average difference in pitch variations reached ~4.11 Hz, well above the just noticeable difference for pitch (Pisanski & Rendall, 2011). Hence, our findings suggest that pitch variation could be one of the acoustic correlates of sexual orientation that listeners use when they correctly assess sexual orientation from speech alone (Gaudio, 1994; Linville, 1998; Smyth et al., 2003; Valentova & Havlíček, 2013). Further investigations are nevertheless needed to confirm whether such a difference in pitch variations between homosexual and heterosexual men is large enough to serve as a cue for assessing sexual orientation.
To our knowledge, this is the first study to report an association between men’s vocal breathiness and sexual orientation. Interestingly, vocal breathiness has been suggested to be an important component of femininity in female voices (Van Borsel et al., 2009), and significant relationships with vocal attractiveness have been reported in both sexes (Xu et al., 2013). Although the difference in vocal breathiness between homosexual and heterosexual men is rather small (the mean difference reached ~0.80 dB), further research should test whether listeners can perceive it when assessing male sexual orientation, and whether homosexual men’s voices, which are richer in harmonics than those of heterosexual men, are perceived as more attractive by homosexual men.
In our study, T-levels did not influence any of the acoustic parameters investigated. The methods used to measure T-levels and the sample size in this study were similar to those of previous studies that found a significant negative link between T-levels and F0 (e.g., Dabbs & Mallinger, 1999; Evans et al., 2008). However, testosterone is a hormone with multiple effects, under the influence of numerous biological and environmental factors and pathways. As such, it is generally difficult to correlate T-levels with other biological or behavioral traits, especially with a single measurement as taken here. Nevertheless, our results might suggest that underlying processes other than basal T-levels are involved in the vocal differences between homosexual and heterosexual men.
Although our study does not aim to explain why vocal differences were found between homosexual and heterosexual men, several biological and social mechanisms can be invoked. For instance, exposure to prenatal testosterone has been suggested to be responsible for differences between homosexual and heterosexual men across a wide range of physiological and behavioral traits, including speech characteristics (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). Several studies have thus tested whether the 2D:4D ratio (the relative length of the second and fourth digits), a proxy of prenatal testosterone exposure, differs between homosexual and heterosexual men (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). However, there is currently no consensus on this question, as studies have yielded mixed results (Breedlove, 2017; Grimbos, Dawood, Burriss, Zucker, & Puts, 2010; Rahman & Wilson, 2003; Robinson, 2000; Skorska & Bogaert, 2017; Williams et al., 2000). Regarding social mechanisms, social imitation of women’s speech peculiarities by homosexual men could also explain the observed differences between homosexual and heterosexual men’s speech characteristics (at least for F0-SD and HNR). The use of more feminine acoustic characteristics by homosexual men could reflect a selective adoption of opposite-sex speech patterns or a selective use of acoustic features for signaling in-group identity (Pierrehumbert et al., 2004), cues that listeners may in turn exploit via “gaydar” (i.e., the detection of homosexuality based on a set of specific cues).
Interestingly, a recent study suggests that the acquisition of a distinctive speech style may happen before puberty: boys aged 5 to 13 with gender identity disorder (a diagnosis made when a child shows distress or discomfort due to a mismatch between his/her gender identity and his/her biological sex) display distinctive speech features (higher F0 and F2, as well as a misarticulation of /s/) compared with boys without it (Munson, Crocker, Pierrehumbert, Owen-Anderson, & Zucker, 2015). Because some homosexual men display a greater degree of gender-nonconforming behavior (GNC) than others during childhood (Bailey & Zucker, 1995), one could hypothesize that the former would be more likely than the latter to have more feminine speech in adulthood. Further work should investigate the relative importance of the mechanisms underlying homosexual men’s speech.
To conclude, although our study did not aim to test specific hypotheses against a formal theoretical framework to understand the differences between homosexual and heterosexual men’s speech, it provides new descriptive findings. By examining native French speakers for the first time, along with some understudied acoustic features (namely, jitter and HNR), our results indicated that some vocal traits differed between heterosexual and homosexual men (i.e., pitch variation and vocal breathiness), with values shifted toward heterosexual women’s vocal characteristics. Combined with the literature on other languages, our findings bring new support for the feminization hypothesis (at least for some acoustic features) and suggest that the feminization of some acoustic features could be shared across languages. Further studies are needed to test whether intonation and vocal breathiness are perceptually salient enough to distinguish homosexual from heterosexual men, and whether the overall differences are due to biological and/or sociolinguistic factors.

With no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift toward reaching for smaller amounts more frequently; this seems to be a built-in tactic in humans & possibly in other animals

Mere presence of co-eater automatically shifts foraging tactics toward ‘Fast and Easy' food in humans. Yukiko Ogura, Taku Masamoto and Tatsuya Kameda. Royal Society Open Science, Volume 7, Issue 4, April 1 2020. https://doi.org/10.1098/rsos.200044

Abstract: Competition for food resources is widespread in nature. The foraging behaviour of social animals should thus be adapted to potential food competition. We conjectured that in the presence of co-foragers, animals would shift their tactics to forage more frequently for smaller food. Because smaller foods are more abundant in nature and allow faster consumption, such tactics should allow animals to consume food more securely against scrounging. We experimentally tested whether such a shift would be triggered automatically in human eating behaviour, even when there was no rivalry about food consumption. To prevent subjects from having rivalry, they were instructed to engage in a ‘taste test' in a laboratory, alone or in pairs. Even though the other subject was merely present and there was no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift to reaching for smaller food amounts more frequently, which was clearly distinct from their reaching patterns both when eating alone and when simply weighing the same food without eating any. These patterns suggest that behavioural shifts in the presence of others may be built-in tactics in humans (and possibly in other gregarious animals as well) to adapt to potential food competition in social foraging.

4. Discussion

We created a laboratory foraging situation in which subjects were asked to eat potato chips for a ‘taste test'. The mere presence of a co-eater in the Visible Pair condition increased the reach frequency for food and decreased the weight of food per reach, as compared to the Solo condition (figures 2a and b). This result supports our hypothesis that the behavioural shift toward foraging for smaller food more frequently would be triggered automatically among human subjects, even when there was no actual competition over food consumption.
We argued that behavioural tactics in social foraging consist of two components: increasing reach frequency and preferring smaller food amounts. Whereas the increase in reach frequency was observed across both Pair conditions, the shift toward smaller food amounts emerged only in the Visible Pair condition. Although the latter shift might be seen as a by-product of random picking caused by distraction from the visible co-eater, the reach pattern was distinct from the simulated random sampling (figure 3b). It was also distinguishable from the counting pattern in the weighing experiment (figure 3c). We thus think that, along with increased reach frequency, choosing smaller food amounts is a systematic (yet weaker) component of foraging tactics in human group settings.
The overall amount of individuals' food intake was not increased by the presence of a co-eater, which may appear inconsistent with results from previous human psychological studies of social facilitation in eating behaviour [10,26]. However, in these studies, food was freely given to the subjects and eating time was not controlled; in a subsequent study of real-world eating behaviour by humans, the increase in food consumption in the presence of others reflected an increase in meal duration [27]. While these psychological studies were silent about the cost–benefit trade-offs in foraging tactics, the present study examined human eating behaviour from a behavioural ecological perspective, arguing that humans may favour sure gain at the cost of time, effort and amount per intake to adjust to potential competition in social foraging.
Our results showed that the behavioural shift was triggered by the mere presence of a co-eater, even without actual competition. This suggests that the underlying mechanism for the shift may be a built-in system that activates automatically in response to relevant social cues. Considering that gregariousness is not human-specific but widespread in animals, neural implementation of an automatic competitive mode may also be rooted in ancient neural circuits. In domestic chicks, for example, a brain region considered to be homologous to the limbic area in mammals contributes to an automatic increase in the reaching frequency for feeders [28,29]. On the other hand, many brain mapping studies in humans have attempted to identify brain regions related to social competition using behavioural games [30–32]. However, the competitive contexts they have used are very different from the foraging situation in our study. Future research addressing the neural implementation of an automatic competitive mode in social foraging will be important not only for behavioural ecology but also to better understand the biological bases of problematic eating behaviour in humans.
In summary, humans shift their foraging tactics when a co-eater is present. Such a behavioural shift is likely to be a built-in response to possible food competition with conspecifics and may be common across many gregarious animals.



Tuesday, March 31, 2020

When in Danger, Turn Right: Covid-19 Threat Promotes Social Conservatism and Right-wing Presidential Candidates

Karwowski, Maciej, Marta Kowal, Agata Groyecka, Michal Bialek, Izabela Lebuda, Agnieszka Sorokowska, and Piotr Sorokowski. 2020. “When in Danger, Turn Right: Covid-19 Threat Promotes Social Conservatism and Right-wing Presidential Candidates.” PsyArXiv. March 31. doi:10.31234/osf.io/pjfhs

Abstract: The recent coronavirus (COVID-19) pandemic forms an enormous challenge for the world's economy, governments, and societies. Drawing upon the Parasite Model of Democratization (Thornhill, R., Fincher, C. L., & Aran, D. (2009). Parasites, democratization, and the liberalization of values across contemporary countries. Biological Reviews, 84(1), 113-131), across two large, preregistered experiments conducted in the USA and Poland (total N = 1,237), we examined the psychological and political consequences of this unprecedented pandemic. By manipulating the saliency of COVID-19, we demonstrate that activating thinking about the coronavirus elevates Americans' and Poles' anxiety and indirectly promotes their social conservatism as well as support for more conservative presidential candidates. The pattern obtained was consistent in both countries, and it implies that the pandemic may result in a shift in political views. Both theoretical and practical consequences of the findings are discussed.


Discussion
In a large-scale, preregistered experiment, we found evidence for a shift in the political views of individuals threatened by the coronavirus pandemic. Specifically, we show that those who feel threatened react with anxiety, tend to seek greater structure in their environment, and thus shift toward social conservatism. All of this increases support for conservative presidential candidates. A great value of our research is the observed similarity of this effect in two countries: Poland and the United States. Although different in many respects, these populations exhibited the same pattern of results. Further, our findings cohere with the political ideology shifts observed after terrorist attacks (38). Hence, the results suggest a universal character of the threat-to-conservatism path.

Our results have crucial practical implications, since they suggest that forthcoming elections may be biased toward right-wing, conservative candidates. People simply seek stability and order, which seem to be more prominently exhibited by conservative candidates. Our findings also have important theoretical implications, as the current pandemic created a unique opportunity to validate the Parasite Model of Democratization (14). We found strong support for it: pathogen threat boosted preference for values typical of social conservatism. We also provided evidence against an alternative explanation, namely that threat boosts support for the status quo, because support was also greater for less liberal (or more centrist) counter-candidates when participants were asked to choose among them.

Regarding the applicability of our findings, we believe that all candidates should reframe their political communication. In moral foundations theory (39, 40), loyalty and authority constitute the so-called binding values. These moral values are more prominent among conservatives but are not ignored by liberally oriented individuals either. Hence, communication appealing to these values may be an efficient way to mitigate the shift of values in societies: such appeals are accepted by the core supporters of liberal candidates and are actively sought by individuals affected by the coronavirus threat. In general, our results highlight how important it is for people to perceive the world as a stable and predictable place. This preference is even stronger in times of chaos. Those interested in human behavior should consider its importance in current models explaining how we judge and think.

Perceived political and nonpolitical dissimilarity were associated with negative emotions, prejudice, and lower affiliative intentions among both liberals and conservatives, more strongly in the former

Ideological Conflict and Prejudice: An Adversarial Collaboration Examining Correlates and Ideological (A)Symmetries. Chadly Stern, Jarret T. Crawford. Social Psychological and Personality Science, March 30, 2020. https://doi.org/10.1177/1948550620904275

Abstract: In an adversarial collaboration, we examined associations among factors that could link ideological conflict—perceiving that members of a group do not share one’s ideology—to prejudice and affiliation interest. We also examined whether these factors would possess similar (“symmetrical”) or different (“asymmetrical”) associative strength among liberals and conservatives. Across three samples (666 undergraduate students, 347 Mechanical Turk workers), ideological conflict was associated with perceived dissimilarity on political and nonpolitical topics, as well as negative emotions. Perceived political and nonpolitical dissimilarity were also associated with negative emotions, prejudice, and lower affiliative intentions among both liberals and conservatives. Importantly, however, perceived political dissimilarity was associated with negative emotions, prejudice, and lower affiliative intentions more strongly among liberals. Some inconsistent evidence also suggested that perceived nonpolitical dissimilarity was associated with prejudice and lower affiliative intentions more strongly among conservatives. These findings document nuance in relationships that could link ideological conflict to prejudice.

Keywords: ideological conflict, prejudice, ideological symmetry, ideological asymmetry

Females are more likely to tweet about the virus in the context of family, social distancing & healthcare, males are more likely to tweet about sports cancellations, the virus global spread & political reactions

Covid-19 Tweeting in English: Gender Differences. Mike Thelwall. Institute of Health, University of Wolverhampton. https://arxiv.org/ftp/arxiv/papers/2003/2003.11090.pdf

Abstract: At the start of 2020, COVID-19 became the most urgent threat to global public health. Uniquely in recent times, governments have imposed partly voluntary, partly compulsory restrictions on the population to slow the spread of the virus. In this context, public attitudes and behaviors are vitally important for reducing the death rate. Analyzing tweets about the disease may therefore give insights into public reactions that may help guide public information campaigns. This article analyses 3,038,026 English tweets about COVID-19 from March 10 to 23, 2020. It focuses on one relevant aspect of public reaction: gender differences. The results show that females are more likely to tweet about the virus in the context of family, social distancing and healthcare, whereas males are more likely to tweet about sports cancellations, the global spread of the virus and political reactions. Thus, women seem to be taking a disproportionate share of the responsibility for directly keeping the population safe. The detailed results may be useful to inform public information announcements and to help understand the spread of the virus. For example, failure to impose sporting bans whilst encouraging social distancing may send mixed messages to males.


Quantifying, and Correcting For, the Impact of Questionable Research Practices on False Discovery Rates in Psychological Science

Kravitz, Dwight, and Stephen Mitroff. 2020. “Quantifying, and Correcting For, the Impact of Questionable Research Practices on False Discovery Rates in Psychological Science.” PsyArXiv. March 26. doi:10.31234/osf.io/fu9gy

Abstract: Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about the causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. They produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically-based corrections to account for these practices when they are unavoidable, providing a viable path forward.
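The paper's core demonstration, randomizing real data to create a true null effect and then applying questionable research practices, can be illustrated with a toy simulation. The sketch below is not the authors' actual pipeline; the sample sizes, batch size, number of looks, and the normal-approximation t-test are my own illustrative assumptions. It shows how one of the three practices, adding data after checking for significance (optional stopping), inflates the false discovery rate well beyond the nominal 5%:

```python
import random
import math

def t_test_p(a, b):
    """Two-sample Welch t-test p-value via a normal approximation (adequate for n >= 10)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = abs(ma - mb) / math.sqrt(va / na + vb / nb)
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def optional_stopping(rng, n_start=10, n_max=50, step=5, alpha=0.05):
    """Sample two groups under a TRUE NULL, testing after each added batch.
    Stopping as soon as p < alpha turns every 'significant' result into a false discovery."""
    a = [rng.gauss(0, 1) for _ in range(n_start)]
    b = [rng.gauss(0, 1) for _ in range(n_start)]
    while True:
        if t_test_p(a, b) < alpha:
            return True               # false discovery: both groups share one distribution
        if len(a) >= n_max:
            return False
        a += [rng.gauss(0, 1) for _ in range(step)]
        b += [rng.gauss(0, 1) for _ in range(step)]

rng = random.Random(1)
runs = 2000
fdr = sum(optional_stopping(rng) for _ in range(runs)) / runs
print(f"False positive rate with optional stopping: {fdr:.1%}")  # well above the nominal 5%
```

Repeated looks at accumulating data give the test many chances to cross the significance threshold by luck, which is why the empirical rate lands far above 5% even though no effect exists.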

Even Prosocially Oriented Individuals Save Themselves First: Social Value Orientation, Subjective Effectiveness and the Usage of Protective Measures During the COVID-19 Pandemic in Germany

Leder, Johannes, Alexander Pastukhov, and Astrid Schütz. 2020. “Even Prosocially Oriented Individuals Save Themselves First: Social Value Orientation, Subjective Effectiveness and the Usage of Protective Measures During the COVID-19 Pandemic in Germany.” PsyArXiv. March 30. doi:10.31234/osf.io/nugcr

Abstract: We investigated the perception and the frequency of various protective behavior measures against COVID-19. Although our sample (German general public, N = 419, age = 38.07 (15.67) years, female = 71.1 % (diverse = 0.5%), students = 34.37%) consisted mostly of prosocially oriented individuals, we found that, above all, participants used protective measures that protected themselves. They consistently shunned measures that have higher protective value for the public than for themselves, which indicates that public protective value comes second even for prosocially oriented individuals. Accordingly, health communication should focus on emphasizing a measure’s perceived self-protective value by explaining how it would foster public protection that in the long run will protect the individual and the individual’s close relations.

Monday, March 30, 2020

Robin Hanson: Variolation May Cut Covid19 Deaths 3-30X

Variolation May Cut Covid19 Deaths 3-30X. Robin Hanson. Overcoming Bias, March 30, 2020. http://www.overcomingbias.com/2020/03/variolation-may-cut-covid19-deaths-3-30x.html


(Here I try to put my recent arguments together into an integrated essay, suitable for recommending to others.)

When facing a new pandemic, the biggest win is to end it fast, so that few ever suffer. This prize makes it well worth trying hard to trace, test, and isolate those near the first few cases. Alas, for Covid-19 and the world, this has mostly failed, though not yet everywhere.

The next biggest win is to find a cheap effective treatment, such as a vaccine. And while hope remains for an early win, this looks to be years away. To keep most from getting infected, at this point the West must apparently develop and long maintain unprecedented expansions in border controls, testing, tracing, and privacy invasions, and perhaps also non-home isolation of suspected cases. Alas, these ambitious plans must be implemented by the same governments that have so far failed us badly.

Yes, there remains hope here, which should be pursued. But we also need a Plan B; what if most will eventually be infected without a treatment? The usual answer is “flatten the curve,” via more social distance to lower the average of (and increase the variance of) infection rates, so that more can access limited medical resources. Such as ventilators, which cut deaths by <¼, since >¾ of patients on them die.

However, extreme “lockdowns”, which isolate most everyone at home, not only limit freedoms and strangle the economy, they also greatly increase death rates. This is because infections at home via close contacts tend to come with higher initial virus doses, in contrast to the smaller doses you might get from, say, a public door handle. As soon as your body notices an infection, it immediately tries to grow a response, while the virus tries to grow itself. From then on, it is a race to see which can grow biggest fastest. And the virus gets a big advantage in this race if its initial dose of infecting virus is larger.

This isn’t just a theory. The medical literature consistently finds strong relations, in both animals and humans, between initial virus dose and symptom severity, including death. The most directly relevant data is on SARS and measles, where natural differences in doses were associated with factors of 3 and 14 in death rates, and in smallpox, where in the 1700s low “variolation” doses given on purpose cut death rates by a factor of 10 to 30. For example, variolation saved George Washington’s troops at Valley Forge.

Early on, it can be worth paying such high costs to end a pandemic. But once a pandemic seems likely to eventually infect most everyone, it becomes less clear whether lockdowns are a net win. However, the dose effect that lockdowns exacerbate, by increasing dose size, also offers a huge opportunity to slash deaths, via voluntary infection with very low doses.

Just as replacing accidental smallpox infections with deliberate low dose infections cut smallpox deaths by a factor of 10 to 30, a factor of 3-30 is plausible for Covid19 death rate cuts due to replacing accidental Covid19 infections with deliberate small dose infections. Observed mortality differences due to natural dose variations give only a lower bound on what is feasible via controlled doses. Of course we can’t be sure until we get more direct evidence. But systematic variolation experiments involving at most a few thousand volunteers seem sufficient to get evidence not only on death rates, but also on ideal infection doses and methods, and on the value of complementary drugs that slow viral replication (e.g., remdesivir).

This dose size advantage adds to several other substantial advantages of variolation. Not only does it offer controlled conditions for studying disease progression, and for training medical personnel, it can also help ensure consistent staffing of critical workers, by spacing out their infections.

Furthermore, the combination of variolation with immediate isolation until recovery “flattens the curve,” by spreading out medical demand over time, and also adding to the herd immunity that usually ends a pandemic. So even without a death rate cut due to lower doses, this strategy produces a net social gain.

This last claim may sound counter-intuitive, but it has in fact recently been confirmed in three independently developed simulations. For example, in a simulation where old and sick people are selected for isolation, while only the young and healthy are eligible for variolation, there are 40% fewer life years lost, compared to no variolation and random selection for isolation. Each variolation volunteer suffers only an additional 0.20% chance of death to save a random other person from a 6.5% chance. And these simulations ignore any benefits of low doses; they hold constant the infection and death rates, and the total quantity of social isolation, and thus expense.
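The individual trade quoted above can be made concrete with back-of-envelope arithmetic using the post's own figures (an added 0.20% death risk per volunteer versus a 6.5% risk averted for one other person). This is only a simplification of the cited simulations, ignoring their population structure and isolation dynamics:

```python
# Figures taken directly from the post:
volunteer_added_risk = 0.0020   # extra death probability borne by each volunteer
risk_averted = 0.065            # death probability removed from one other (random) person

# Expected net lives saved per volunteer is simply the difference in probabilities.
net_lives_saved_per_volunteer = risk_averted - volunteer_added_risk
print(f"Expected net lives saved per 1,000 volunteers: "
      f"{1000 * net_lives_saved_per_volunteer:.0f}")
```

Even before counting any dose-size benefit, the expected gain per volunteer is strongly positive, which is the arithmetic behind the claim that the strategy produces a net social gain.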

Of course, if low doses cut death rates by a factor of two or more, variolation volunteers would actually cut their chance of death, perhaps greatly. Yes, the first few thousand volunteers could be less sure of such gains, but they could be compensated for this risk, just as we now consider compensating subjects in vaccine trials using live Covid19 viruses. We could pay variolation volunteers cash, offer their loved ones priority medical care, certify them as safe for work and social gatherings, and honor them like soldiers selected for their elite features who take risks to produce community gains.

So the scenario is this: Variolation Villages welcome qualified volunteers. Friends and family can enter together, and remain together. A cohort enters together, and is briefly isolated individually for as long as it takes to verify that they’ve been infected with a very small dose of the virus. They can then interact freely with each other, but not leave the village until tests show they have recovered.

In Variolation Village, volunteers have a room, food, internet connection, and full medical care. Depending on available funding from government or philanthropic sources, volunteers might either pay to enter, get everything for free, or be paid a bonus to enter. Health plans of volunteers may even contribute to the expense.

Those who work in medicine or critical infrastructure seem especially valuable candidates for early variolation; volunteers might be offered larger bonuses. Once they have recovered, they are more surely available to work near the pandemic peak, and can more easily risk social contact at work.

Note that this strategy of variolation plus isolation requires no government support, nor loss of personal freedom, just the sort of legal permission sometimes given to administrators and volunteers of vaccine trials. And this comparison with vaccine trial policy can be emphasized to those tempted to see this policy as repulsive. Variolation policy offers similar social gains, and may require similar voluntary personal sacrifices.

Note also that there is no minimum scale required to make this policy beneficial. Even variolation of only a few is still a social gain compared to none at all. A small early trial could generate much useful attention and discussion regarding this strategy, to inspire application in this and future pandemics. Furthermore, the optimal time to stop this practice for personal reasons is probably close to the optimal time to stop for social reasons, so choice of stopping date needn’t be heavily regulated.

Some fear that it is now too late to consider variolation, as the pandemic peak may be only a few weeks away. But lockdowns may succeed in substantially slowing Covid19 growth, and we may then be in for many months or years of alternating local waves of suppression and reappearance. Furthermore, if low doses cut death rates enough, variolation can make sense even at the pandemic peak, when medical resources are stretched most thin. For example, for a factor of 3 cut in death rates, variolation replaces three sick patients with one similarly sick patient, lowering total medical demand.

As variolation doesn’t much change the total number who are ever infected, it doesn’t give the virus more total chances to evolve. In fact, while accidental infections risk selection for versions that infect people more easily, voluntary infections avoid this problematic effect.

While you might think policy wonks would be eager to cut Covid19 death rates by a factor of 3-30, few have so far been attracted to discuss or pursue this concept. It seems to push the wrong buttons in many people. So if you are a rare exception who finds the concept plausible, you can get a disproportionate policy leverage by working on a neglected important option. You might help in one of these areas:

[more text at the link above]

We have much work to do if this Plan B is to be ready when needed.

Adults severely underestimate their absolute and relative fatality risk if infected with SARS-CoV-2

Niepel, Christoph, Dirk Kranz, Francesca Borgonovi, and Samuel Greiff. 2020. “Sars-cov-2 Fatality Risk Perception in US Adult Residents.” PsyArXiv. March 30. doi:10.31234/osf.io/w52e9

Abstract: Our study presents time-critical empirical results on the SARS-CoV-2 fatality risk perception of 1182 US adult residents stratified for age and gender. Given the current epidemiological figures, our findings suggest that many US adult residents severely underestimate their absolute and relative fatality risk if infected with SARS-CoV-2. These results are worrying because risk perception, as our study suggests, relates to self-reported actual or intended behavior that can reduce SARS-CoV-2 transmission rates.

Animals benefit from numerical competence (foraging, navigating, hunting, predation avoidance, social interactions, & reproductive activities); internal number representations determine how they perceive stimulus magnitude

The Adaptive Value of Numerical Competence. Andreas Nieder. Trends in Ecology & Evolution, March 30 2020. https://doi.org/10.1016/j.tree.2020.02.009

Highlights
*  Numerical competence, the ability to estimate and process the number of objects and events, is of adaptive value.
*  It enhances an animal’s ability to survive by exploiting food sources, hunting prey, avoiding predation, navigating, and persisting in social interactions. It also plays a major role in successful reproduction, from monopolizing receptive mates to increasing the chances of fertilizing an egg and promoting the survival chances of offspring.
*  In these ecologically relevant scenarios, animals exhibit a specific way of internally representing numbers that follows the Weber-Fechner law.
*  A framework is provided for more dedicated and quantitative analyses of the adaptive value of numerical competence.

Abstract: Evolution selects for traits that are of adaptive value and increase the fitness of an individual or population. Numerical competence, the ability to estimate and process the number of objects and events, is a cognitive capacity that also influences an individual’s survival and reproduction success. Numerical assessments are ubiquitous in a broad range of ecological contexts. Animals benefit from numerical competence during foraging, navigating, hunting, predation avoidance, social interactions, and reproductive activities. The internal number representations determine how animals perceive stimulus magnitude, which, in turn, constrains an animal’s spontaneous decisions. These findings are placed in a framework to provide for a more quantitative analysis of the adaptive value and selection pressures of numerical competence.

Keywords: quantity, number, Weber-Fechner law, proportional processing, ultimate causes, animal cognition
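The Weber-Fechner law invoked in the highlights and abstract can be stated compactly: perceived magnitude grows with the logarithm of actual number, so discriminability depends on the ratio of two quantities rather than their difference. A minimal numerical sketch (the function and scaling constant are illustrative assumptions, not taken from the paper):

```python
import math

# Weber-Fechner law: perceived magnitude P scales with the logarithm
# of the stimulus number n, P = k * ln(n). The scaling constant k is
# an illustrative assumption.

def perceived_magnitude(n, k=1.0):
    return k * math.log(n)

# Ratio-dependence: telling 5 from 10 "feels" the same as telling 50
# from 100, because both pairs differ by the same 1:2 ratio.
d_small = perceived_magnitude(10) - perceived_magnitude(5)
d_large = perceived_magnitude(100) - perceived_magnitude(50)
print(math.isclose(d_small, d_large))  # True
```

This ratio-dependence is what makes numerical competence cheap enough to be widespread: an animal need only track relative, not absolute, quantities.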



UK: Inequality in socio-emotional skills has increased across cohorts, especially for boys and at the bottom of the distribution

Inequality in socio-emotional skills: A cross-cohort comparison. Orazio Attanasio et al. Journal of Public Economics, March 30 2020, 104171. https://doi.org/10.1016/j.jpubeco.2020.104171

Abstract: We examine changes in inequality in socio-emotional skills very early in life in two British cohorts born 30 years apart. We construct comparable scales using two validated instruments for the measurement of child behaviour and identify two dimensions of socio-emotional skills: ‘internalising’ and ‘externalising’. Using recent methodological advances in factor analysis, we establish comparability in the inequality of these early skills across cohorts, but not in their average level. We document for the first time that inequality in socio-emotional skills has increased across cohorts, especially for boys and at the bottom of the distribution. We also formally decompose the sources of the increase in inequality and find that compositional changes explain half of the rise in inequality in externalising skills. On the other hand, the increase in inequality in internalising skills seems entirely driven by changes in returns to background characteristics. Lastly, we document that socio-emotional skills measured at an earlier age than in most of the existing literature are significant predictors of health and health behaviours. Our results show the importance of formally testing comparability of measurements to study skills differences across groups, and in general point to the role of inequalities in the early years for the accumulation of health and human capital across the life course.

JEL classification: J13, J24, I14, I24, C38
Keywords: Inequality, Socio-emotional skills, Cohort studies, Measurement invariance


Our results imply that the ability to utilize the enhanced information of a face to recognize familiar faces may develop at around 7 months of age

Infants’ recognition of their mothers’ faces in facial drawings. Megumi Kobayashi, Ryusuke Kakigi, So Kanazawa, Masami K. Yamaguchi. Developmental Psychobiology, March 29 2020. https://doi.org/10.1002/dev.21972

Abstract: This study examined the development of the ability to recognize familiar faces in drawings in infants aged 6–8 months. In Experiment 1, we investigated infants’ recognition of their mothers’ faces by testing their visual preference for their mother’s face over a stranger’s face under three conditions: photographs, cartoons produced by online software that simplifies and enhances the contours of facial features of line drawings, and veridical line drawings. We found that 7‐ and 8‐month‐old infants showed a significant preference for their mother’s face in photographs and cartoons, but not in veridical line drawings. In contrast, 6‐month‐old infants preferred their mother’s face only in photographs. In Experiment 2, we investigated a visual preference for an upright face over an inverted face for cartoons and veridical line drawings in 6‐ to 8‐month‐old infants, finding that infants aged older than 6 months showed the inversion effect in face preference in both cartoons and veridical line drawings. Our results imply that the ability to utilize the enhanced information of a face to recognize familiar faces may develop at around 7 months of age.

Sunday, March 29, 2020

At low levels of video gaming time, gaming protects against gun violence; at high levels, it imprints gun-related behaviors and naturalizes them, a small effect; moral panic over video gaming is largely unsubstantiated

Videogames and guns in adolescents: Preliminary tests of a bipartite association. Ofir Turel. Computers in Human Behavior, March 29 2020, 106355. https://doi.org/10.1016/j.chb.2020.106355

Highlights
• We propose a U-shaped association between video-gaming and gun-related behaviors.
• At low levels of video gaming time, video gaming displaces gun-related behaviors.
• At high levels, it imprints gun-related behaviors and naturalizes them.
• This can explain inconsistent past findings based on an assumed linear association.
• Moral panic over light to moderate video gaming is largely unsubstantiated.

Abstract: The possible role of video gaming in imprinting aggressive and specifically gun-related behaviors has been elusive, and findings regarding these associations have been inconsistent. We address this gap by proposing and testing a bipartite theory that can explain inconsistent results regarding the previously assumed linear association between videogames and gun-related behaviors. Our theory suggests that this association follows a U-shape. It posits that at low levels of video gaming time, video gaming displaces gun-related behaviors and shelters adolescents by keeping them occupied and by reducing opportunities and motivation to acquire guns. However, at some level of gaming time (because most popular games adolescents play include violent aspects), the assumed imprinting of aggressive behaviors overpowers the positive displacement force, and this trivializes and naturalizes gun-carrying behaviors, and ultimately increases motivation to obtain and carry guns. We tested this theory with two national samples of American adolescents (n1 = 24,779 and n2 = 26,543, out of which 403 and 378, respectively, reported bringing a gun to school in the last month). Multiple analyses supported the proposed U-shaped association. These findings show that the moral panic over video games is largely unsubstantiated, especially among light to moderate gamers.

Keywords: Video games, Technology and society, Guns, Adolescents, Imprinting hypothesis, Displacement hypothesis
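The U-shaped hypothesis in the abstract, with a displacement force dominating at low gaming time and an imprinting force at high time, amounts to a quadratic risk curve. A minimal sketch under assumed coefficients (the function name and all values are hypothetical, not the paper's estimates):

```python
# Hypothetical quadratic form of the proposed U-shape: risk falls with
# gaming hours at first (displacement) and rises again at high hours
# (imprinting). Coefficients are illustrative, not estimated values.

def gun_risk(hours, baseline=0.6, displacement=0.2, imprinting=0.02):
    return baseline - displacement * hours + imprinting * hours ** 2

# The minimum sits at displacement / (2 * imprinting) = 5 hours here:
# light-to-moderate gamers show the lowest risk, consistent with the
# claim that moral panic over them is unsubstantiated.
print(gun_risk(0), gun_risk(5), gun_risk(10))
```

A linear model fit to data generated by such a curve could find a positive, negative, or null slope depending on the range of gaming time sampled, which is how the authors explain the inconsistent past findings.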

Patrician women loitering on dark streets, giving themselves to common passers-by; half-clad men molested by their mothers and sisters; effeminates soft as a rabbit and “languid as a limp penis”

True Greek orgy meant mystic loss of self. But in imperial Roman orgy, persona continued. The Roman decadent kept the observing Apollonian eye awake during Dionysian revel. More Alexandrian connoisseurship, here applied to the fashionable self. Eye plus orgy equals decadence. Salaciousness, lewdness, lasciviousness: such interesting hyperstates are produced by a superimposition of mind on erotic action. The west has pioneered in this charred crimson territory. Without strong personality of the western kind, serious decadence is impossible. Sin is a form of cinema, seen from a distance. The Romans, pragmatically adapting Greek ideas, made engineering out of eroticism too. The heir of Greek theater was not Roman theater but Roman sex. The Roman decadence has never been matched in scale because other places and times have lacked the great mass of classical forms to corrupt. Rome made daemonic music of gluttony and lust from the Dionysian body. The Maenadism absent from Roman cult became imperial ecstasy, mechanized greed.

Roman literature’s sexual personae are in hectic perpetual motion. Greek aristocratic athleticism split in two in Rome: vulgar gladiatorship by ruffians and slaves, and leisure-class sexual adventurism, a sporting life then as now. As the republic ends, Catullus records the jazzy promiscuity of Rome’s chic set. Patrician women loitering on dark streets, giving themselves to common passers-by. Half-clad men molested by their mothers and sisters. Effeminates soft as a rabbit and “languid as a limp penis.” A sodomite waking with battered buttocks and “red lips like snow,” mouth rimmed with last night’s pasty spoils. The strolling poet, finding a boy and girl copulating, falls upon the boy from behind, piercing and driving him to his task. Public sex, it is fair to say, is decadent. Oh, those happy pagan days, romping in green meadows: one still encounters this sentimental notion, half-baked Keats. It is quite wrong. Catullus, like Baudelaire, savors imagery of squalor and filth. His moral assumptions remain those of republican Rome, which he jovially pollutes with degeneration and disease. His poetry is a torch-lit descent into a gloomy underworld, where we survey the contamination and collapse of Roman personae. Men and women are suddenly free, but freedom is a flood of superfluous energy, a vicious circle of agitation, quest, satiation, exhaustion, ennui. Moral codes are always obstructive, relative, and man-made. Yet they have been of enormous profit to civilization. They are civilization. Without them, we are invaded by the chaotic barbarism of sex, nature’s tyranny, turning day into night and love into obsession and lust.

Catullus, an admirer of Sappho, turns her emotional ambivalence into sadomasochism. Her chills and fever become his “odi et amo,” “I hate and I love.” Her beloved maidens, fresh as orange flowers, become his cynical Lesbia, adulteress and dominatrix, vampiristically “draining the strength of all.” The urban femme fatale dons the primitive mask of mother nature. Lesbia, the wellborn Clodia, introduces to Rome a depraved sexual persona that had been current, according to aggrieved comment of the Old Testament, for a thousand years in Babylon. Female receptivity becomes a sinkhole of vice, the vagina a collector of pestilence to poison Roman nobility and bring it to an end.

Catullus is a cartographer of sexual personae. His lament for the dying god Attis (Carmen 63) is an extraordinary improvisation on gender. Castrating himself for Cybele, Attis enters a sexual twilight zone. Grammatically, the poem refers to him as feminine. “I a woman, I a man, I a youth, I a boy”: in this litany of haunting memory, Attis floats through a shamanistically expanded present tense of gender, all things and nothing. Like imperial Rome, he has been pitched into an ecstatic free fall of personae. Suspension of sexual conventions brings melancholy, not joy. He is artistically detached from ordinary life but feels “sterile.” Attis is the poet himself, mutating through gender in a strange, new, manic world.

Ovid, born forty years later, is the first psychoanalyst of sex. His masterpiece is aptly called Metamorphoses: as Rome changes, Ovid plunders Greek and Roman legend for magic transformations—man and god to animal and plant, male to female and back. Identity is liquid. Nature is under Dionysian spell; Apollo’s contours do not hold. The world becomes a projected psyche, played upon by amoral vagaries of sexual desire. Ovid’s encyclopedic attentiveness to erotic perversity will not recur until Spenser’s Faerie Queene, directly influenced by him. His successors are Sade, Balzac, Proust, Krafft-Ebing, and Freud.

The Metamorphoses is a handbook of sexual problematics. There is Iphis, a girl raised as a boy who falls in love with another girl and is relieved of her suffering by being changed into a man. Or Caeneus, once the girl Caenis, who rejects marriage and is raped by Neptune. As compensation, she is changed into a man invulnerable to wounds, martial and sexual. According to the Homeric scholiast, Caeneus set up his spear as a phallic totem in the marketplace, prayed and sacrificed to it, and commanded people hail it as a god, angering Zeus. In Vergil’s underworld, Aeneas sees Caeneus as a woman, the morphological ghost of her femaleness reasserting itself. Ovid’s complications of violation and fetishism are theory, not titillation. The theme is our “double nature,” his term for the centaurs who smother impenetrable Caeneus after a horrifying orgy of Maenadic pulverizations. Like Freud, Ovid constructs hypothetical models of narcissism and the will-to-power. His point of view comes from his position between eras. Sexual personae, in flux, allow him to bring cool Apollonian study to bear upon roiling Dionysian process.

In his lesser works, Ovid lightens Catullus’ bitter sex war into parlor politics. In The Art of Love, he says the seducer must be shrewd and changeable as Proteus. This is the Roman Dionysus, metamorphic Greek nature reduced to erotic opportunism. Sex-change is a foxy game: the wise adulteress, counsels Ovid, transsexualizes her letters, turning “he” to “she.” The empire diverted Roman conceptual energy into sex. So specialized is Martial’s sexual vocabulary that it influenced modern medical terminology. Latin, an exact but narrow language, became startlingly precise about sexual activity. The Latinist Fred Nichols tells me that a verb in Martial, used in poetry for the first time by Catullus, describes the fluttering movement of the buttocks of the passive partner in sodomy. There were, in fact, two forms of this verb: one for males and another for females.

Classical Athens, exalting masculine athleticism, had no conspicuous sexual sadomasochists and street transvestites. The Roman empire, on the other hand, if we believe the satirists, was overrun by epicene creatures. Ovid warns women to beware of elegant men with coiffures “sleek with liquid nard”—they may be out to steal your dress! “What can a woman do when her lover is smoother than she, and may have more boyfriends?”28 Ausonius tells a sodomist with depilated anus and buttocks, “You are a woman behind, a man in front.” Girlish boys and long-haired male prostitutes appear in Horace, Petronius, and Martial. Gaius Julius Phaedrus blames homosexuals of both sexes on drunken Prometheus, who attached the wrong genitalia to human figures he was molding. Lesbianism, infrequent in Greek literature, makes a splash in Rome. Martial and Horace record real-life tribads, Bassa, Philaenis, and Folia of Ariminum, with her “masculine libidinousness.” There are lesbian innuendos about the all-woman rites of the Bona Dea, crashed by Publius Clodius in drag. Lucian’s debater condemns lesbian acts as “androgynous passions” and calls dildos “infamous instruments of lust, an unholy imitation of a fruitless union.”29 Rome’s sexual disorientation was great theater, but it led to the collapse of paganism.

Pursuit of pleasure belongs on the party circuit, not in the centers of power. Today too, one might like playfulness and spontaneity in a friend, lover, or star, but one wants a different character in people with professional or political authority. The more regular, unimaginative, and boring the daily lives of presidents, surgeons, and airline pilots, the better for us, thank you very much. Hierarchic ministry should be ascetic and focused. It does not profit from identity crises, the province of art. Rome had a genius for organization. Its administrative structure was absorbed by the Catholic Church, which turned an esoteric Palestinian sect into a world religion. Roman imperial bureaucracy, an extension of republican legalism, was a superb machine, rolling over other nations with brutal force. Two thousand years later, we are still feeling the consequences of its destruction of Judaea and dispersion of the fractious Jews, who refused to become Roman. We know from Hollywood movies what that machine sounded like, its thunderous, relentless marching drums pushing Roman destiny across the world and through history. But when the masters of the machine turned to idleness and frivolity, Roman moral force vanished.

The Roman annalists give us the riveting gossip. Sodomy was reported of the emperors Tiberius, Nero, Galba, Otho, Commodus, Trajan, and Elagabalus. Even Julius Caesar was rumored to be bisexual. Hadrian fell in love with the beautiful Antinous, deified him after his death, and spread his image everywhere. Caligula had a taste for extravagant robes and women’s clothes. He dressed his wife Caesonia in armour and paraded her before the troops. He loved impersonations, appearing in wig and costume as singer, dancer, charioteer, gladiator, virgin huntress, wife. He posed as all the male and female gods. As Jupiter, he seduced many women, including his sisters. Cassius Dio tartly remarks, “He was eager to appear to be anything rather than a human being and an emperor.”30 Nero chose the roles of bard, athlete, and charioteer. He dressed as a tragedian to watch Rome burn. Onstage he played heroes and heroines, gods and goddesses. He pretended to be a runaway slave, a blind man, a madman, a pregnant woman, a woman in labor. He wore the mask of his wife Poppaea Sabina, who had died, it was said, after he kicked her in her pregnant belly. Nero was a clever architect of sexual spectacle. He built riverbank brothels and installed patrician women to solicit him from doorways. Tying young male and female victims to stakes, he draped himself in animal skins and leapt out from a den to attack their genitals. Nero devised two homosexual parodies of marriage. He castrated the boy Sporus, who resembled dead Poppaea, dressed him in women’s clothes, and married him before the court, treating him afterward as wife and empress. In the second male marriage, with a youth whom Tacitus calls Pythagoras and Suetonius Doryphorus, sex roles were reversed: the emperor was bride. “On the wedding night,” reports Suetonius, “he imitated the screams and moans of a girl being deflowered.”31

Commodus gave his mother’s name to a concubine, making his sex life an Oedipal drama. He appeared as Mercury and transvestite Hercules. He was called Amazonius, because he dressed his concubine Marcia as an Amazon and wanted to appear as an Amazon himself in the arena. Elagabalus, Caracalla’s cousin, brought the sexually freakish customs of Asia Minor to imperial Rome. He scandalized the army with his silks, jewelry, and dancing. His short reign was giddy with plays, pageants, and parlor games. Lampridius says, “He got himself up as a confectioner, a perfumer, a cook, a shopkeeper, or a procurer, and he even practiced all these occupations in his own house continually.”32 Elagabalus’ lordly ease of access to plebeian roles was social mobility in reverse. Like Nero, he practiced “class transvestism,” David Riesman’s phrase for the modern bluejeans fad.33

Elagabalus’ life passion was his longing for womanhood. Wearing a wig, he prostituted himself in real Roman brothels. Cassius Dio reports:
He set aside a room in the palace and there committed his indecencies, always standing nude at the door of the room, as the harlots do, and shaking the curtain which hung from gold rings, while in a soft and melting voice he solicited the passers-by. There were, of course, men who had been specially instructed to play their part. . . . He would collect money from his patrons and give himself airs over his gains; he would also dispute with his associates in this shameful occupation, claiming that he had more lovers than they and took in more money.
Miming an adulteress caught in the act and beaten by her husband, the emperor cherished black eyes as a souvenir. He summoned to court a man notorious for enormous genitals and greeted him with “a ravishing feminine pose,” saying, “Call me not Lord, for I am a Lady.” He impersonated the Great Mother in a lion-drawn chariot and publicly posed as the Venus Pudica, dropping to his knees with buttocks thrust before a male partner. Finally, Elagabalus’ transvestite fantasies led to a desire to change sex. He had to be dissuaded from castrating himself, reluctantly accepting circumcision as a compromise. Dio says, “He asked the physicians to contrive a woman’s vagina in his body by means of an incision, promising them large sums for doing so.”34 Science, which only recently perfected this operation, is clearly laggard upon the sexual imagination.

Absolute power is a door into dreaming. The Roman emperors made living theater of their turbulent world. There was no gap between wish and realization; fantasy leapt into instant visibility. Roman imperial masque: charades, inquisition, horseplay. The emperors made sexual personae an artistic medium, plastic as clay. Nero, setting live Christians afire for a night banquet, played with reality. Roman copies of Greek statues are a bit dull and coarse. So too with Rome’s sexual literalization of Greek drama. The emperors, acting to provoke, torture, or arouse, removed the poetry and philosophy from theater. The vomitoria of Roman villas are troughs for vomiting the last six courses before starting on the next. Vomitoria is also the name for the exits of Roman amphitheaters, through which the mob poured. Imperial Rome, heir to sprawling Hellenistic culture, suffered from too-muchness, the hallmark of decadence. Too much mind, too much body; too many people, too many facts. The mind of the king is a perverse mirror of the time. Having no cinema, Nero made his own. In Athens, the beautiful boy was an idealized objet de culte. In Rome, persons were stage machinery, mannequins, decor. The lives of the wastrel emperors demonstrate the inadequacy of our modern myth of personal freedom. Here were men who were free and who were sickened by that freedom. Sexual liberation, our deceitful mirage, ends in lassitude and inertness. An emperor’s day was androgyny-in-action. But was he happier than his republican ancestors, with their rigid sex roles? Repression makes meaning and purpose.

Pandemics Depress the Economy, Public Health Interventions Do Not: Evidence from the 1918 Flu

Correia, Sergio and Luck, Stephan and Verner, Emil, Pandemics Depress the Economy, Public Health Interventions Do Not: Evidence from the 1918 Flu. SSRN, March 26, 2020. http://dx.doi.org/10.2139/ssrn.3561560

Abstract: What are the economic consequences of an influenza pandemic? And given the pandemic, what are the economic costs and benefits of non-pharmaceutical interventions (NPI)? Using geographic variation in mortality during the 1918 Flu Pandemic in the U.S., we find that more exposed areas experience a sharp and persistent decline in economic activity. The estimates imply that the pandemic reduced manufacturing output by 18%. The downturn is driven by both supply and demand-side channels. Further, building on findings from the epidemiology literature establishing that NPIs decrease influenza mortality, we use variation in the timing and intensity of NPIs across U.S. cities to study their economic effects. We find that cities that intervened earlier and more aggressively do not perform worse and, if anything, grow faster after the pandemic is over. Our findings thus indicate that NPIs not only lower mortality; they also mitigate the adverse economic consequences of a pandemic.

Keywords: 1918 Flu Pandemic, non-pharmaceutical interventions (NPI), real economy
JEL Classification: E32, I10, I18, H1

Mirror-writing intersects with fundamental questions about the neural representations for reading and writing, and for object recognition and purposeful action more generally

Reflecting on mirror-writing. Interviewing Robert McIntosh and Sergio della Sala. The Psychologist, April 2020 Vol.33 (pp.32-35), Mar 2020. https://thepsychologist.bps.org.uk/volume-33/april-2020/reflecting-mirror-writing

So, would it be normal for children to mirror-write?
Mirror-writing, though striking to see, is an absolutely normal occurrence when learning to write. It would surprise us if there are any children who never make at least some mirror reversals. Rather than being regarded as a mistake, mirror-writing can be viewed as an impressive act of generalisation from a child, who is able to produce mirrored forms that they have never been taught. Parents with a young child who mirror reverses letters or words should enjoy the variety, and should not worry. Such reversals would only be of concern if they persisted well beyond the age by which most children have securely learned letter direction (7-8 years), in which case they would be part of a broader profile of slow literacy development.

Is there any update on mirror-writing from your own research or other sources?
Mirror-writing is still a ’niche’ research topic, but a few recent papers have been published on developmental mirror-writing. Jean-Paul Fischer’s group in Lorraine (France) had previously shown that children learning a dextrad (left-to-right) language like French (or English) are much more likely to reverse characters that face to the left (like j, z, or 3) than those that face to the right (like k, s, or 6). They inferred that the child may implicitly learn that most letters they see face to the right, and then over-apply this rule, so that they are more likely to flip a left-facing character to the right than vice-versa. We recently confirmed that this bias really is driven by character orientation, and not by differences in frequency, or how hard it is to remember certain shapes. We taught primary school children to write four novel pseudo-letters, two of which were left-facing and two of which were right-facing. We used identical but mirror reflected character sets for different groups of children, to control for any incidental differences between the shapes. Children were three times more likely to mirror-write a novel character they had learned in a left-facing format than to mirror-write one they had learned in a right-facing format.

Interestingly, it turns out that the bias may not be so much about whether the character faces left or right, but whether it faces in the direction of writing. Fischer and his colleagues used a simple technique to bias children to start writing in a right-to-left (i.e. reversed) direction, and they found that the pattern of reversals was also reversed, so that right-facing letters were now more likely to be flipped than left-facing characters. So, it seems that children may generally learn to face characters in the direction of writing before they know which way each of the individual letters should face.

And there is one point in our previous article that we would now revise. We suggested that mirror-writing in children was driven mainly by uncertainty about the direction of writing actions, and not by perceptual uncertainty about how the letters should look on the page. We have now tested this idea more directly, and found that there is in fact a close relationship between a child’s likelihood of mirror-writing and the errors they make when perceptually judging whether normal and reversed characters look correct or not. This relationship was significant even when controlling for age; and the letters that were most often mirror-written were also more prone to recognition errors. These new data indicate that perceptual uncertainty does accompany mirror-writing in children, and that visual and motor representations of letters develop in parallel.

What questions on mirror-writing are still unanswered?
One major shortcoming is that most of what we know about mirror-writing relates to dextrad (left-to-right) languages based on the Latin alphabet, which is only one class of directional writing system, so cross-cultural studies seem essential. How do these phenomena compare in other language systems, especially sinistrad (right-to-left) written languages such as Arabic or Hebrew? Bilingual children, being schooled both in dextrad and sinistrad languages, might be particularly interesting to study. We have unpublished data suggesting that children learning to read and write both English and Arabic make more orientation errors for left-facing characters in English and for right-facing characters in Arabic, consistent with a general bias to prefer letters that face in the script direction. It might also be interesting to examine the relation of reading and writing to other culturally-specified directional behaviours (such as turning taps or screws).

In adults, we would be interested to investigate a possible association of mirror-writing ability with atypical language dominance. We have functional magnetic resonance imaging data showing an unusual pattern of bilateral language representation in a skilled mirror writer. This result is intriguing, but it is not yet known whether it is typical of people who have a facility for mirror-writing. The extensive email correspondence that our Psychologist article has elicited has convinced us that there would be plenty of candidates for a larger-scale study. However, in pursuing this question it would be essential to define more precisely what should qualify a person as being a ‘natural’ mirror-writer; because mirror-writing is also a skill, like any other, that can be developed and made automatic through practice.