Wednesday, April 1, 2020

Going Upstream to Advance Psychosis Prevention and Improve Public Health

Going Upstream to Advance Psychosis Prevention and Improve Public Health. Deidre M. Anglin, Sandro Galea, Peter Bachman. JAMA Psychiatry, April 1, 2020. doi:10.1001/jamapsychiatry.2020.0142

The idea that we can reduce the incidence of psychotic disorders through detection and intervention in the prodromal stage of illness has generated increasing enthusiasm and research over the past 2 decades. This work has sought largely to identify individual-level changes in subjective experience, functioning, or brain volume or activity that immediately precede acute symptom onset. However, mental illnesses, including psychotic disorders, are particularly sensitive to the social, political, cultural, and economic context within which an individual lives.1 Prioritizing approaches to psychosis prevention that fail to give these social determinants a central role ignores compelling evidence and misses an opportunity to identify specific ways to help vulnerable youth.

Consider the example of racism’s pervasive detrimental association with the physical and mental well-being of disadvantaged people of color.2 Institutional racism creates differences in the average group member’s social, economic, and environmental circumstances, including living conditions in neighborhoods, work, and school. These social inequities distribute risk factors for mental disorders, such as exposure to violence, trauma, and chronic adversity and disadvantage, unevenly across the population, disproportionately burdening group members with minority status (eg, people of color, poor people, and immigrants). In addition, the social experience of this oppression (ie, interpersonal discrimination) can further heighten the risk for mental illness because of the greater cumulative stress load associated with such lived experiences.

A growing body of US-based research has been providing data to inform our understanding of how social environmental inequities may enhance psychosis risk. For example, the association between social factors, such as racial discrimination3 and adverse childhood experiences,4 and the extended psychosis phenotype has been demonstrated in large national probability samples, developmental cohorts, smaller community-based samples, and even clinical high-risk studies. Despite this, the field’s focus on the role these underlying conditions play in shaping the incidence, duration, and treatment responsiveness of psychosis remains limited and falls short of reflecting the importance of these factors in the etiology and course of psychosis. There are several reasons for the paucity of research on social risk factors for psychosis. Federal funding priorities have been a factor, as have concerns among researchers about the nonspecificity of social risk factors and the daunting prospect of large-scale societal change as an intervention. However, we suggest that from a public health perspective, some of these concerns represent opportunities.

Consider nonspecificity using the following example. High levels of air pollution have been found to be associated with depression, anxiety, and psychosis.5 This could indicate a common causal pathway among these 3 distinct syndromes through which pollution increases a disease process broadly (eg, inflammation), resulting in different possible outcomes. Air pollution could also contribute to the risk for depression in a way that is different from how it contributes to the risk for psychosis. We suggest that the significance of air pollution as a potential social determinant of mental illness remains regardless of whether it helps differentiate the risk of one disease from another. Moreover, it is not clear that a preferential focus on more microlevel foci (eg, genetic mutations) reveals evidence of such specificity of predictors.6 It stands to reason that the benefits of reducing air pollution would be widespread, providing more general social benefits that align with evolving views of the pluripotent nature of the risk for mental illness. The risk itself, including social risk, may be fairly nonspecific.

The notion that large-scale societal change as an intervention is too big or outside psychiatrists’ purview is at odds with the history of psychiatry, whose development has mirrored society’s evolving understanding of illness in general. For example, the advent of psychopharmacological interventions in the 1950s shifted the field from a more psychoanalytic understanding of psychopathology toward a strong biological perspective. Such discoveries shaped and changed the way psychiatrists were trained and practiced as clinicians, how research was conducted, and how psychiatrists understood mental illness. Similarly, social change during the 1960s and 1980s contributed to the deinstitutionalization of psychiatric hospitals, increasing the degree to which psychiatry was practiced as part of a larger service team in community-based mental health centers. Psychiatry can continue to evolve and be shaped by a richer appreciation and study of social determinants.

Conclusions and Recommendations

We propose a recalibration of priorities in which we focus on systemic, structural social risk factors with the same energy and investment that we apply to the search for individual-level signs, symptoms, and mechanisms, including physiological mechanisms. Thankfully, attention to social risk factors and to physiological mechanisms does not have to be a zero-sum game. We have every reason to believe that moving upstream may demonstrate that these social risk factors operate with and via biological mechanisms to increase psychosis risk.7 Identifying the potential causal role of social mechanisms more explicitly will also require continued advancement in our epidemiologic methods of causal inference. Increasing our attention to these social risk factors may help us take the next big step in predicting and preventing psychosis, and in doing so, positively affect the incidence and expression of other mental illnesses. Perhaps most important, understanding how forces like racism, poverty, and social marginalization affect mental illness is a step on the way to becoming a society in which the health of vulnerable youth is considered as important as their health care.

How do we get there? We recommend the following research, education, policy, and clinical actions. For us to understand how social risk factors contribute to outcomes such as psychosis, we need funding priorities from grant-making agencies to include the examination of social, cultural, economic, and political associations with risk for serious mental illness without requiring a priori links to identified neural circuits. Large-scale, longitudinal studies of risk for serious mental illness should systematically oversample populations with high levels of social disadvantage so hypotheses regarding the association of social risk factors can be tested. We are encouraged by recent funding efforts from the National Institute on Minority Health and Health Disparities to study the social epigenomics that drive health disparities. We believe psychosis risk should be included in such funding efforts.

Public mental health data quality and availability need to be improved. For example, we have had difficulty obtaining reliable, stable estimates of clinical psychosis incidence at a population level across different socially constructed demographic groups (eg, racial groups with minority status) in national probability samples. Regarding the education of psychiatrists, training for clinicians should strive for structural competency, which includes cultural competency as well as facility in addressing other social, economic, and political factors that affect the lives of patients.8 On a policy level, a shift toward value-based care (and away from fee-for-service) would be a step in the right direction. Enacting such a change requires routinely assessing social risk factors as part of treatment planning and robust partnership with social service agencies that are incentivized to address these social disadvantages. Ideally, all policy decisions across all levels of government should consider the question, “Would this policy make our constituents healthier or sicker?” Finally, from a clinical perspective, assessing and addressing social disadvantages should be the shared responsibility of professionals across systems of care and seen as a fundamental aspect of taking a whole-person or patient-centered approach to health care.

References, full text at the link above.

Dreams: “A person now dead as alive” is more frequent in older people, while “A person now alive as dead” in children; adults & older adults dream more often of “Trying something again and again” and “Arriving too late”

Maggiolini, A., di Lorenzo, M., Falotico, E., & Morelli, M. (2020). The typical dreams in the life cycle. International Journal of Dream Research, 20(1), 17-28. https://doi.org/10.11588/ijodr.2020.1.61558

Abstract: Most dream content analyses have been carried out on young adult samples, taken as norms, with less research on continuity and discontinuity across the life cycle. A study of dreams across the life cycle (1546 participants, aged 8 to 70 years), using the Typical Dreams Questionnaire (Nielsen et al., 2003; Dumel, Nielsen, & Carr, 2012), shows that 55.8% of dream reports have one or more typical contents, with a fairly stable prevalence across ages: reports with a TDQ item are more common in children and in older adults, with the minimum percentage in young adults. Children have more diversity in typical themes than other ages. The most frequent items in children have content related to some threat or some magic topic. “A person now dead as alive” is more frequent in older people, while “A person now alive as dead” is more frequent in children and preadolescents. “School, teachers and studying” is more frequent in adolescence and “Sexual experiences” in young adults. Adults and older adults dream more often of “Trying something again and again” and “Arriving too late”. Changes in typical dream themes can be related to emotional concerns typical of different phases of the life cycle.




Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System: Costs going up

Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System. Laura C. Rosella et al. JAMA Netw Open. 2020;3(4):e201917, April 1, 2020. doi:10.1001/jamanetworkopen.2020.1917

Question  What are the population-level trends in health care utilization and expenditures in the 2 years before death among adults in Ontario, Canada?

Findings  This cohort study found that health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion in 2005 to CAD$7.84 billion in 2015, and the intensity of health care utilization and deaths in hospital varied by resource utilization gradients.

Meaning  In this study, the observed trends demonstrated that costs and hospital-centered care before death are high in Ontario.


Abstract
Importance  Measuring health care utilization and costs before death has the potential to inform health care improvement.

Objective  To examine population-level trends in health care utilization and expenditures in the 2 years before death in Canada’s single-payer health system.

Design, Setting, and Participants  This population-based cohort included 966 436 deaths among adult residents of Ontario, Canada, from January 2005 to December 2015, linked to health administrative and census data. Data for deaths from 2005 to 2013 were analyzed from November 1, 2016, through January 31, 2017. Analyses were updated from May 1, 2019, to June 15, 2019, to include deaths from 2014 and 2015.

Exposures  Sociodemographic exposures included age, sex, and neighborhood income quintiles, which were obtained by linking decedents’ postal codes to census data. Aggregated Diagnosis Groups were used as a general health service morbidity-resource measure.

Main Outcomes and Measures  Health care services accessed for the last 2 years of life, including acute hospitalization episodes of care, intensive care unit visits, and emergency department visits. Total health care costs were calculated using a person-centered costing approach. The association of area-level income with high resource use 1 year before death was analyzed with Poisson regression analysis, controlling for age, sex, and Aggregated Diagnosis Groups.

Results  Among 966 436 decedents (483 038 [50.0%] men; mean [SD] age, 76.4 [14.96] years; 231 634 [24.0%] living in the lowest neighborhood income quintile), health care expenditures increased in the last 2 years of life during the study period (CAD$5.12 billion [US $3.83 billion] in 2005 vs CAD$7.84 billion [US $5.86 billion] in 2015). In the year before death, 758 770 decedents (78.5%) had at least 1 hospitalization episode of care, 266 987 (27.6%) had at least 1 intensive care unit admission, and 856 026 (88.6%) had at least 1 emergency department visit. Overall, deaths in hospital decreased from 37 984 (45.6%) in 2005 to 39 474 (41.5%) in 2015. Utilization in the last 2 years, 1 year, 180 days, and 30 days of life varied by resource utilization gradients. For example, the proportion of individuals visiting the emergency department was slightly higher among the top 5% of health care users compared with other utilization groups in the last 2 years of life (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year of life (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]); however, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). No meaningful associations were observed in high resource use between individuals in the highest income quintile compared with the lowest income quintile (rate ratio, 1.02; 95% CI, 0.99-1.05) after adjusting for relevant covariates.

Conclusions and Relevance  In this study, health care use and spending in the last 2 years of life in Ontario were high. These findings highlight a trend in hospital-centered care before death in a single-payer health system.



Introduction
Similar to those in other high-income countries, health care utilization and costs in Canada are expected to increase because of an expanding and aging population.1 A large proportion of these costs are incurred toward the end of life, with multiple studies demonstrating that health care utilization in the final months of life accounts for a substantial share of health care expenditures in comparison with other points in an individual’s life.2-5 In addition, most spending is concentrated in small groups of the population, who are characterized as high-cost users.6,7 Studies have shown that high-intensity medical care at the end of life can produce poor outcomes,8-10 can be associated with poor quality of life,11 and may conflict with patient preferences.9,12 To meet the growing needs of an aging population, a deeper understanding of the determinants and patterns of health care utilization and costs prior to death is required.

Most studies examining health care utilization prior to death have focused on a single aspect of care (eg, palliative services)13 or were specific to a particular cause of death.14-18 To our knowledge, few studies have examined health care use and costs at a population level and across an array of health sectors.5,19 Despite its potential to inform health care service delivery and improvement, evidence on health care utilization and cost patterns before death in a Canadian context is limited. A recent population-based study that examined health care expenditures in Ontario, Canada, from 2010 to 201319 reported that decedents who constituted less than 1% of the population consumed 10% of Ontario’s total health care budget, demonstrating that health care utilization occurs disproportionately. Using comprehensive multilinked mortality files, we analyzed population-level trends in health care utilization and expenditures prior to death in Ontario’s single-payer health system by looking at overall trends for more than a decade and by gradients of cost (ie, patients in the top 5%, top 6%-50%, and bottom 50% of health care costs).

Methods
Study Design
This retrospective cohort study used multiple linked vital statistics, population files, and health administrative data held at ICES to examine all deaths occurring in Ontario between January 2005 and December 2015. These data sets were linked using unique encoded identifiers and analyzed at ICES, an independent, nonprofit research institute whose legal status under Ontario’s health information privacy law allows it to collect and analyze health care and demographic data, without consent, for health system evaluation and improvement. This study received ethical approval from the University of Toronto’s Health Sciences research ethics board and the institutional review board at Sunnybrook Health Sciences Centre, Toronto, Canada. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

Study Population
Data for all deaths registered in the province of Ontario were obtained from the Office of the Registrar General-Deaths (ORG-D) file. The ORG-D is linked to the Registered Persons Database (RPDB), which contains basic demographic information for those who have ever received an Ontario health card number for the province’s universal health care system (overall linkage rate, 96.5%).20 The study cohort consisted of all deaths registered in the ORG-D between January 1, 2005, and December 31, 2015, that were linked to the RPDB record (N = 966 436). Those who had an invalid Ontario health card number on their death date (n = 4433), were not residents of Ontario (n = 252), or were younger than 18 years (n = 8768) were excluded.

Measures
We examined health care utilization prior to death according to several sociodemographic exposures. Sex and age data were obtained from the RPDB. Categories for age at time of death were 18 to 24 years, 25 to 34 years, 35 to 44 years, 45 to 54 years, 55 to 64 years, 65 to 74 years, 75 to 85 years, and older than 85 years. Ecological-level measures of income and education status were estimated using data from the 2006 Canadian census21 and were applied to individuals according to the dissemination area, which represents the smallest geographic census area in which the individual resided. Based on their postal code at the time of death, individuals were assigned to a dissemination area. Education was characterized as the proportion of individuals who completed high school in a given dissemination area. Individuals were grouped into income and education quintiles ranging from 1 (lowest 20% of income or education) to 5 (highest 20% of income or education).
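The quintile assignment described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the column name `da_income` and the toy dissemination-area income values are invented for the example.

```python
import pandas as pd

# Each decedent's postal code maps to a census dissemination area (DA);
# each DA carries an area-level income estimate from the 2006 census.
# The values below are invented for illustration.
decedents = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "da_income": [31000, 54000, 47000, 88000, 62000],
})

# Rank DAs into five equal-sized groups: 1 = lowest 20%, 5 = highest 20%.
decedents["income_quintile"] = pd.qcut(
    decedents["da_income"], q=5, labels=[1, 2, 3, 4, 5]
).astype(int)
```

Education quintiles would follow the same pattern, cutting on the proportion of high school completion in each dissemination area.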

As a general health service morbidity-resource measure, we used The Johns Hopkins Adjusted Clinical Group system version 10.0.1 Aggregated Diagnosis Group (ADG) scores, a person-focused, diagnosis-based method of categorizing individuals’ illnesses.22 Aggregated Diagnosis Groups have been validated for health services research use in Ontario23 and were calculated for the 2 years prior to death.

We measured health care utilization and services accessed for 2 years, 1 year, 180 days, and 30 days before death. Hospitalization episodes of care and intensive care unit (ICU) visits were obtained from the Discharge Abstract Database. An acute hospitalization episode was defined as either an admission to an acute care setting from which the patient was discharged or a continuous sequence of hospital stays in different hospitals to which the patient was transferred. Transfers between 2 different institutions were defined using both the timing between admissions and transfer flags on either record. Specifically, the following situations were defined as a transfer: (1) any admission within 6 hours of the previous discharge, (2) any admission within 12 hours of the previous discharge in which the type of institution transferred from or to was type 1 (ie, acute care), or (3) any admission within 48 hours of the previous discharge in which the number of the institution transferred from matched the number of the institution transferred to. Length of stay for episodes of care and ICU visits was calculated by subtracting the date of the earliest admission from the date of the latest discharge.

Emergency department visits were obtained from the National Ambulatory Care Reporting System and were counted as 1 claim per patient per registered day. Physician services (primary care and specialist) were obtained from the Ontario Health Insurance Plan claims database. A physician visit was counted as 1 claim per patient per service day per physician. Physician specialties listed as family practice and general practice, community medicine, and pediatrics were considered primary care visits. All other physician specialties were considered specialist visits. Death in hospital was identified in the Discharge Abstract Database if a hospital discharge disposition code was recorded as died. In 423 580 of 427 859 in-hospital deaths (99.0%), the death date in the ORG-D and the hospital discharge date with a code indicating death in the Discharge Abstract Database were within 1 day of each other.

We calculated comprehensive per-person health care costs for the time preceding death (last 2 years of life). Annual health care utilization and costs were calculated from the health care payer perspective, using administrative data from across health care sectors, including inpatient hospital stays, emergency department visits, same-day surgery, stays in complex continuing care hospitals and inpatient rehabilitation, inpatient psychiatric admissions, physician payments for patient visits and community laboratory tests, and prescriptions filled for individuals eligible for the Ontario Drug Benefit Plan. A person-centered costing macro was used to calculate total annual health care spending; the costing methodology has been described elsewhere.24 Expenditures were calculated in Canadian dollars for the year 2015. Individuals were categorized in resource utilization groups (top 5%, top 6%-50%, and bottom 50%) based on the total health care costs in the last year of their lives.
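Under the assumption that the resource utilization groups are simple percentile cuts on total last-year costs, the grouping can be sketched as:

```python
import numpy as np

def assign_gradient(costs):
    """Label each decedent's total last-year cost as top 5%, top 6%-50%, or bottom 50%."""
    costs = np.asarray(costs, dtype=float)
    p50, p95 = np.percentile(costs, [50, 95])
    return np.where(costs > p95, "top 5%",
           np.where(costs > p50, "top 6%-50%", "bottom 50%"))
```

With costs spread evenly, roughly 5%, 45%, and 50% of decedents land in the three groups, matching the gradients used throughout the paper.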

Statistical Analysis
The distribution of sociodemographic characteristics among decedents at the time of death was described using means and proportions according to health care utilization gradients. Overall and per-person health care utilization metrics were calculated for the 2 years, 1 year, 180 days, and 30 days before death using medians and proportions and are presented according to health care utilization gradients for the last year of life. We estimated temporal trends of total health care expenditures among adult deaths from 2005 to 2015 by health care utilization gradient.

Factors associated with being in the top 5% of health care users in the last year of life were assessed by a modified Poisson model.25 We chose to model risk directly using a modified Poisson regression because it provides a good approximation of the binomial distribution when the sample is large, and it is less likely than logistic regression to overestimate the relative risk.26 We used belonging to the top 5% as the outcome and sex, age, area-level income quintile, and ADG score as the covariates. Associations were calculated with rate ratios (RRs) with corresponding confidence intervals. Statistical significance was set at P < .05, and all tests were 2-tailed. All analyses were conducted using SAS Enterprise Guide statistical software, version 7.15 (SAS Institute).

Results
Sociodemographic Characteristics by Health Care Utilization Gradient
Sociodemographic characteristics of 966 436 adult decedents (483 038 [50.0%] men; 231 634 [24.0%] living in the lowest neighborhood income quintile), stratified by health care utilization gradients (top 5%, top 6%-50%, and bottom 50%), are shown in Table 1. Those in the top 5% were younger, with a mean (SD) age of 71.1 (14.6) years compared with 76.4 (14.96) years for the total cohort. A larger percentage of those in the top 5% were male (26 818 [55.5%] vs 200 965 [46.2%] in the top 6%-50% and 255 255 [52.8%] in the bottom 50%), and they had a higher mean (SD) number of ADGs compared with the overall cohort (14.9 [3.6] vs 11.2 [4.4]). In contrast, the distributions of area-level income and education were similar across health care utilization gradients. The number of deaths captured in the cohort per year was similar across years, from 83 227 deaths in 2005 to 95 044 deaths in 2015 (eTable 1 in the Supplement). The major causes of death in the cohort were cancer (287 308 [29.7%]) and diseases of the circulatory system (279 881 [29.0%]) (eTable 2 in the Supplement).

Health Care Utilization in the Last 2 Years, 1 Year, 180 Days, and 30 Days of Life
Health care utilization prior to death for the overall cohort is described in Table 2. In the last 2 years of life, most individuals (758 770 [78.5%]) had at least 1 acute hospitalization episode of care, with a median (interquartile range [IQR]) length of stay of 8 (5-15) days. More than one-quarter (266 987 [27.6%]) were admitted to the ICU, with a median (IQR) length of stay of 69 (33-130) hours in acute care, and almost all (856 026 [88.6%]) had an emergency department visit. The median (IQR) numbers of visits to primary care and specialist physicians were similar: 31 (17-53) and 34 (13-69) visits, respectively.

In the last 30 days of life, 143 225 decedents (14.8%) were admitted to the ICU, spending a median (IQR) of 59 (23-124) hours in acute care. In addition, most visited a primary care physician (856 679 [88.6%]; median [IQR] visits, 4 [1-9]) and a specialist (699 042 [72.3%]; median [IQR] visits, 4 [0-14]). In terms of proximity to death, 475 574 decedents (49.2%) had at least 1 hospitalization episode of care in the last 30 days of life, 662 628 (68.6%) in the last 180 days, 710 035 (73.5%) in the last year, and 758 770 (78.5%) in the last 2 years. Similarly, the proportion that visited the emergency department was 528 658 (54.7%) in the last 30 days of life, 750 558 (77.7%) in the last 180 days, 805 598 (83.4%) in the last year, and 856 026 (88.6%) in the last 2 years. The relatively small differences between these windows show that a substantial portion of health care use was concentrated close to death.

Health Care Utilization Metrics by Resource Utilization Gradients
Table 2 presents health care utilization metrics at the end of life according to resource utilization gradients. In the last 2 years of life, among those who experienced a hospitalization, individuals in the top 5% had a median (IQR) of 3 (2-6) episodes of care per person, compared with 1 (0-2) episode of care among individuals in the bottom 50%. In the same period, approximately two-thirds of those in the top 5% experienced an ICU admission (31 099 [64.4%]) with a median (IQR) length of stay of 143 (70-317) hours; in comparison, approximately one-fifth of individuals in the bottom 50% (100 959 [20.9%]) had an ICU admission, with a median (IQR) length of stay of 47 (22-89) hours. The proportion of individuals visiting the emergency department was slightly higher among the top 5% compared with other utilization groups in the last 2 years (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]) of life. In contrast, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). In the last 2 years of life, the median (IQR) number of primary care visits was 57 (28-106) among the top 5% compared with 22 (11-37) among the bottom 50%. The median (IQR) number of specialist visits over this period was 163 (96-238) among the top 5% compared with 21 (8-40) among the bottom 50%.

Factors Associated With High Resource Utilization Prior to Death
In the Poisson model (Table 3), significant risk reductions for high resource utilization (ie, being in the top 5%) in the last year of life were observed among women compared with men (RR, 0.90; 95% CI, 0.88-0.91) and among older age groups; decedents older than 85 years had approximately one-fifth the risk of those aged 18 to 24 years (RR, 0.21; 95% CI, 0.19-0.23) after adjusting for income and ADGs. No meaningful associations were observed between individuals in the highest area income quintile compared with individuals in the lowest quintile (RR, 1.02; 95% CI, 0.99-1.05) after adjusting for sex, age, and ADGs. The association between high income (ie, quintile 5) and low income (ie, quintile 1) remained null in the sex-stratified models, in which the confidence interval included the null value.

Hospital Deaths by Resource Utilization Gradients
Table 4 displays trends in the percentage of deaths that occurred in hospital by resource utilization gradients. Overall, the proportion of deaths occurring in hospital decreased from 45.6% (37 984 deaths) in 2005 to 41.5% (39 474 deaths) in 2015. Over the study period, a total of 29 292 of 48 324 deaths (60.4%) among those in the top 5% and 203 792 of 483 213 (42.2%) among those in the bottom 50% occurred in hospital, without much variation over time. Among the top 6% to top 50% resource gradient, deaths in hospital decreased from 14 975 of 28 792 (52.0%) in 2005 to 18 569 of 46 859 (39.6%) in 2015.

Temporal Trends in Health Care Expenditures According to Resource Utilization Gradients
Total health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion (US $3.83 billion) in 2005 to CAD$7.84 billion (US $5.86 billion) in 2015, an increase of approximately 35%. Similarly, expenditures during this period increased from CAD$3.59 billion (US $2.69 billion) to CAD$5.34 billion (US $4.01 billion) in the last year of life, an increase of 33%. In the last 180 days of life, expenditures increased from CAD$2.53 billion (US $1.90 billion) to CAD$3.67 billion (US $2.75 billion), a 31% increase, and for the last 30 days of life, they increased from CAD$1.04 billion (US $0.78 billion) to CAD$1.43 billion (US $1.07 billion), a 27% increase (Figure, A). Mean per-person spending in the last 2 years of life increased among the top 5% from CAD$273 820 (95% CI, CAD$269 935 to CAD$277 760) (US $205 365; 95% CI, US $202 451 to US $208 320) in 2005 to CAD$295 183 (95% CI, CAD$291 811 to CAD$298 593) (US $221 387; 95% CI, US $218 858 to US $223 945) in 2015. In the same period, mean per-person spending in the bottom 50% decreased from CAD$33 489 (95% CI, CAD$33 210 to CAD$33 771) (US $25 117; 95% CI, US $24 908 to US $25 328) in 2005 to CAD$31 148 (95% CI, CAD$30 871 to CAD$31 427) (US $23 361; 95% CI, US $23 153 to US $23 570) in 2015 (Figure, B). In the last 2 years of life, mean (SD) per-person spending for acute hospital care increased from CAD$4839 (CAD$14 053) (US $3629 [US $10 540]) in 2005 to CAD$6572 (CAD$19 722) (US $4928 [US $14 792]) in 2015 (Figure, C).

Discussion
This study examined population-wide health care utilization and costs at the end of life in the universal health care system of Ontario, which accounts for approximately 40% of Canada’s population. Our unique focus on health care utilization gradients and trends of health care use at the end of life was enabled by a mortality database that contained all deaths registered in Ontario over 11 years, linked with health administrative data. We demonstrated that overall health care expenditures in Ontario for the last 2 years of life increased by 35% from 2005 to 2015, with the largest proportional increase in average per-person spending observed in the top 5% and top 6% to top 50% of health care users. We demonstrated higher end-of-life utilization of health care services among those in the top 5% compared with the overall cohort for hospitalization episodes of care, ICU visits, emergency department visits, and physician visits. Exceptions to this pattern were identified in the last 30 days of life, in which utilization of certain services, such as emergency department visits, was higher among the top 6% to top 50% and bottom 50% of health care users than among the top 5%. However, the observed reduced utilization of these services could reflect individuals in the highest cost group already being admitted to a hospital in their last 30 days of life. Several studies have reported population-wide health care utilization prior to death,19,27,28 but they have not examined differences among health care use gradients.

The study showed that in the last year of life, 74% of residents of Ontario had a hospitalization episode of care and 24% spent time in the ICU. Comparable patterns of end-of-life health care utilization have been reported in other high-income countries. For example, an Australian study looking at hospital-based services used by adults during the last year of life reported slightly higher rates of hospitalization (84%) and lower rates of ICU visits (12%).27 In the United States, ICU visit rates in the last month of life ranged from 24% to 29% among Medicare beneficiaries aged 66 years and older compared with 21% in our cohort.29

We observed a negative linear association between older age groups and being in the top 5% of health care users in the last year of life, especially among men. A similar pattern of lower expenditures among older age groups in the last year of life was reported in the US Medicare population of adults aged 65 years and older.30 In our analysis, we did not see meaningful associations between area-level income quintiles and high health care utilization in the last year of life after adjusting for sex, age, and ADGs. Similar findings were observed in a retrospective cohort analysis of health care use among deaths in Ontario from 2010 to 2013, in which total costs did not vary by neighborhood income quintile.19 In contrast, among the US Medicare population, individuals in the lowest-income areas had slightly higher expenditures in the last year of life compared with those living in the highest-income areas.30 Furthermore, in a study of health care spending in the last year of life in the province of British Columbia, Canada, the highest 2 household income quintiles were shown to have approximately 4% less health care spending than those in the lowest income quintile.31 The differential income associations observed in these studies could be attributed to differences between health systems in access to health care services in the jurisdictions under study and to the use of ecological-level vs individual-level income measures.

A larger percentage of deaths occurred in hospital in our cohort than in Switzerland, where 34% of deaths were reported to occur in a hospital,28 or in the United States, where deaths in acute care hospitals ranged from 25% to 33% among decedents older than 66 years.29 Furthermore, we observed that the proportion of deaths in hospital among the top 5% and bottom 50% of health care users in the last year of life was stable over the study period. The observed high-intensity care near the end of life and the high percentage of deaths in hospitals highlight a need for a societal-level discussion about approaches to end-of-life care in Ontario.

We observed that high health care utilization was associated with multimorbidity, as measured by ADGs, and that hospital-centered care was the typical trajectory at the end of life. This points to the need to design appropriate integrated care strategies that could support patients at the end of life to be discharged from the hospital and receive care and management for their conditions through home care or long-term care services.

Limitations
It is important to note some limitations to our study. First, our study used ecological-level indicators of socioeconomic status based on postal code information at the time of death, which may have provided lower estimates of income gradients in health care utilization.32 Second, our database only included services covered by the provincial health care payer and not services that may be covered by supplemental insurance or paid for out of pocket (eg, nursing, personal care, medications, and therapy). Third, comprehensive recommendations regarding end-of-life care are difficult to make in the absence of information on the appropriateness of care and use of potentially avoidable health services, which were out of scope for this study. Nonetheless, the findings support understanding of end-of-life health care trends in a universal health care system.

Conclusions
This study reported on health care utilization in the 2 years before death with a focus on the characterization of high-cost users. It identified patterns of high utilization of health care services before death and a large proportion of deaths in hospital, with variation across health care utilization gradients. The findings suggest a trajectory of hospital-centered care prior to death in Ontario.



References and full text at the link above.

Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables

Harper, Craig A., Liam Satchell, Dean Fido, and Robert Latzman. 2020. “Functional Fear Predicts Public Health Compliance in the COVID-19 Pandemic.” PsyArXiv. April 1. doi:10.31234/osf.io/jkfu3

Abstract: In the current context of the global pandemic of coronavirus disease-2019 (COVID-19), health professionals are working with social scientists to inform government policy on how to slow the spread of the virus. An increasing amount of social scientific research has looked at the role of public message framing, for instance, but few studies have thus far examined the role of individual differences in emotional and personality-based variables in predicting virus-mitigating behaviors. In this study we recruited a large international community sample (N = 324) to complete measures of self-perceived risk of contracting COVID-19, fear of the virus, moral foundations, political orientation, and behavior change in response to the pandemic. Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables. We discuss these data in relation to the potentially functional nature of fear in global health crises.


Gay men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women

Speech Acoustic Features: A Comparison of Gay Men, Heterosexual Men, and Heterosexual Women. Alexandre Suire, Arnaud Tognetti, Valérie Durand, Michel Raymond & Melissa Barkat-Defradas. Archives of Sexual Behavior, March 31 2020. https://rd.springer.com/article/10.1007/s10508-020-01665-3

Abstract: Potential differences between homosexual and heterosexual men have been studied on a diverse set of social and biological traits. Regarding acoustic features of speech, researchers have hypothesized a feminization of such characteristics in homosexual men, but previous investigations have so far produced mixed results. Moreover, most studies have been conducted with English-speaking populations, which calls for further cross-linguistic examinations. Lastly, no studies investigated so far the potential role of testosterone in the association between sexual orientation and speech acoustic features. To fill these gaps, we explored potential differences in acoustic features of speech between homosexual and heterosexual native French men and investigated whether the former showed a trend toward feminization by comparing theirs to that of heterosexual native French women. Lastly, we examined whether testosterone levels mediated the association between speech acoustic features and sexual orientation. We studied four sexually dimorphic acoustic features relevant for the qualification of feminine versus masculine voices: the fundamental frequency, its modulation, and two understudied acoustic features of speech, the harmonics-to-noise ratio (a proxy of vocal breathiness) and the jitter (a proxy of vocal roughness). Results showed that homosexual men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women. Lastly, testosterone levels did not influence any of the investigated acoustic features. Combined with the literature conducted in other languages, our findings bring new support for the feminization hypothesis and suggest that the feminization of some acoustic features could be shared across languages.


Discussion

This study offers an interesting take on the interaction between sexual orientation and acoustic features of speech in a French speaker sample. First, our analysis of different acoustic features revealed well-known patterns of sexual dimorphism in human voices (i.e., F0, F0-SD, jitter, and HNR). Second, our findings showed that French homosexual men displayed a more modulated and less breathy voice than French heterosexual men, thus supporting and extending previous studies conducted mostly with English speakers. Our results for the LDA showed that French homosexual men exhibited a slight but significant vocal feminization when considering speech acoustic features altogether (up to 10.65%), which supports the feminization hypothesis. (It is important to note, however, that no overlap was observed between heterosexual and homosexual men vs. heterosexual women.) Lastly, testosterone levels did not mediate the association between vocal patterns and sexual orientation.
Consistent with previous findings in English-speaking populations, no significant differences were observed in mean F0 between French-speaking heterosexual and homosexual men (Gaudio, 1994; Lerman & Damsté, 1969; Munson et al., 2006b; Rendall et al., 2008; Rogers et al., 2001; Smyth et al., 2003). The results did show a difference between homosexual and heterosexual men in intonation, the former displaying higher pitch variations than the latter. The relationship between pitch variations and sexual orientation was previously found in one Dutch (Baeck et al., 2011) and one American-English population (Gaudio, 1994), suggesting that feminized pitch variations might be characteristic of male homosexual speech across languages (but see Levon, 2006). In our study, the average difference in pitch variations reached ~ 4.11 Hz, which is well above the just noticeable difference for pitch (Pisanski & Rendall, 2011). Hence, our findings suggest that pitch variations could be one of the acoustic correlates of sexual orientation that is used by listeners when they correctly assess sexual orientation through speech alone (Gaudio, 1994; Linville, 1998; Smyth et al., 2003; Valentova & Havlíček, 2013). Further investigations are nevertheless needed to confirm whether such a difference in pitch variations between homosexual and heterosexual men is large enough to serve as a cue for assessing sexual orientation.
To our knowledge, this is the first study to report an association between men’s vocal breathiness and sexual orientation. Interestingly, vocal breathiness has been suggested to be an important component of vocal femininity in female voices (Van Borsel et al., 2009) and significant relationships to vocal attractiveness have been reported in both sexes (Xu et al., 2013). Although the difference in vocal breathiness between homosexual and heterosexual men is rather low (mean average difference reached ~ 0.80 dB), further research should test whether it is perceptible by listeners to assess male sexual orientation and whether homosexual men’s voices, which are richer in harmonics compared to those of heterosexuals, are perceived as more attractive among homosexual men.
In our study, T-levels did not influence any of the acoustic parameters investigated. The methods to measure T-levels and the sample size used in this study were similar to those used in previous studies that found a significant negative link between T-levels and F0 (e.g., Dabbs & Mallinger, 1999; Evans et al., 2008). However, testosterone is a multiple-effect hormone under the influence of numerous biological and environmental factors and pathways. As such, it is generally difficult to correlate T-levels with other biological or behavioral traits, especially from a single measurement, as performed here. Nevertheless, our results might suggest that underlying processes other than basal T-levels are involved in vocal differences between homosexual and heterosexual men.
Although our study does not aim to provide an explanation for why vocal differences were found between homosexual and heterosexual men, several biological and social mechanisms can be invoked. For instance, exposure to prenatal testosterone has been suggested to be responsible for the differences between homosexual and heterosexual men on a large range of physiological and behavioral traits, including speech characteristics (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). Several studies have thus tested whether the 2D:4D ratio (relative length of the second and fourth digits), a proxy of prenatal testosterone exposure, differs between homosexual and heterosexual men (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). However, there is currently no consensus on this question, as studies have yielded mixed results (Breedlove, 2017; Grimbos, Dawood, Burriss, Zucker, & Puts, 2010; Rahman & Wilson, 2003; Robinson, 2000; Skorska & Bogaert, 2017; Williams et al., 2000). Regarding social mechanisms, a social imitation of women's speech peculiarities by homosexual men could also explain the differences observed between homosexual and heterosexual men's speech characteristics (at least for F0-SD and HNR). The use of more feminine acoustic characteristics by homosexual men could reflect a selective adoption model of opposite-sex speech patterns or a selective use of acoustic features for signaling in-group identity (Pierrehumbert et al., 2004), cues that may feed into "gaydar" (i.e., the detection of homosexuality based on a set of specific cues).
Interestingly, a recent study suggests that the acquisition of a distinctive speech style may happen before puberty, as boys aged from 5 to 13 with gender identity disorder (a diagnosis made when a child shows distress or discomfort due to a mismatch between his/her gender identity and his/her biological sex) display speech features (higher F0 and F2 as well as a misarticulation of /s/) that are distinct from those of boys without the diagnosis (Munson, Crocker, Pierrehumbert, Owen-Anderson, & Zucker, 2015). Because some homosexual men display a greater degree of gender nonconforming behavior (GNC) than others during childhood (Bailey & Zucker, 1995), one could thus hypothesize that the former would be more likely to have a more feminine speech in adulthood than the latter. Further work should investigate the relative importance of the mechanisms underlying homosexual men's speech.
To conclude, although our study did not aim to test specific hypotheses against a formal theoretical framework to understand the differences between homosexual and heterosexual men's speech, it provides some new descriptive findings. By examining for the first time native French speakers and some understudied acoustic features (namely, jitter and HNR), our results indicated that some vocal traits differed between heterosexual and homosexual men (i.e., variations of pitch and vocal breathiness), with values shifted toward heterosexual women's vocal characteristics. Combined with the literature conducted in other languages, our findings bring new support for the feminization hypothesis (at least for some acoustic features) and suggest that the feminization of some acoustic features could be shared across languages. Further studies are needed to test whether intonation and vocal breathiness are perceptually salient enough to distinguish homosexual and heterosexual men, and whether the overall differences are due to biological and/or sociolinguistic reasons.
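As a rough illustration of the four features the study examined (this sketch is mine, not the authors'; phonetic research typically uses dedicated software such as Praat, and the synthetic values below are arbitrary), F0, F0-SD, and jitter can be computed from a sequence of glottal cycle durations, and HNR from a harmonic/noise decomposition:

```python
import numpy as np

def pitch_stats(periods):
    """Mean F0, F0-SD (pitch modulation), and local jitter (roughness proxy)
    from successive glottal cycle durations, in seconds."""
    periods = np.asarray(periods, dtype=float)
    f0 = 1.0 / periods
    # Relative jitter: mean absolute cycle-to-cycle change over mean period.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    return f0.mean(), f0.std(), jitter

def hnr_db(harmonic, noise):
    """Harmonics-to-noise ratio (breathiness proxy) in dB, given separated
    periodic and aperiodic signal components."""
    return 10.0 * np.log10(np.sum(harmonic ** 2) / np.sum(noise ** 2))

# Synthetic voice: ~120 Hz glottal cycles with small random perturbations.
rng = np.random.default_rng(0)
periods = 1.0 / 120.0 + rng.normal(0.0, 1e-4, size=200)
f0_mean, f0_sd, jitter = pitch_stats(periods)

t = np.arange(16000) / 16000.0              # 1 s at 16 kHz
harmonic = np.sin(2 * np.pi * 120.0 * t)    # periodic (harmonic) component
noise = 0.1 * rng.standard_normal(t.size)   # aperiodic (breathy) component
hnr = hnr_db(harmonic, noise)
```

A breathier voice corresponds to relatively more noise energy and hence a lower HNR; a more modulated voice corresponds to a higher F0-SD.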

With no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift to reaching for smaller amounts more frequently; seems a built-in tactic in humans & possibly in other animals

Mere presence of co-eater automatically shifts foraging tactics toward ‘Fast and Easy' food in humans. Yukiko Ogura, Taku Masamoto and Tatsuya Kameda. Royal Society Open Science, Volume 7, Issue 4, April 1 2020. https://doi.org/10.1098/rsos.200044

Abstract: Competition for food resources is widespread in nature. The foraging behaviour of social animals should thus be adapted to potential food competition. We conjectured that in the presence of co-foragers, animals would shift their tactics to forage more frequently for smaller food. Because smaller foods are more abundant in nature and allow faster consumption, such tactics should allow animals to consume food more securely against scrounging. We experimentally tested whether such a shift would be triggered automatically in human eating behaviour, even when there was no rivalry about food consumption. To prevent subjects from having rivalry, they were instructed to engage in a ‘taste test' in a laboratory, alone or in pairs. Even though the other subject was merely present and there was no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift to reaching for smaller food amounts more frequently, which was clearly distinct from their reaching patterns both when eating alone and when simply weighing the same food without eating any. These patterns suggest that behavioural shifts in the presence of others may be built-in tactics in humans (and possibly in other gregarious animals as well) to adapt to potential food competition in social foraging.

4. Discussion

We created a laboratory foraging situation in which subjects were asked to eat potato chips for a ‘taste test'. The mere presence of a co-eater in the Visible Pair condition increased the reach frequency for food and decreased the weight of food per reach, as compared to the Solo condition (figures 2a and b). This result supports our hypothesis that the behavioural shift toward foraging smaller food more frequently is triggered automatically in human subjects, even when there is no actual competition over food consumption.
We argued that the behavioural tactics in social foraging consist of two components—increasing reach frequency and preferring smaller food amounts. Compared to the increase in reach frequency observed across the two Pair conditions, the behavioural shift for smaller food amounts emerged only in the Visible Pair condition. Although the latter shift may be seen as a by-product of random picking caused by distraction from the visible co-eater, the reach pattern was distinct from the simulated random sampling (figure 3b). It was also distinguishable from the counting pattern in the weighing experiment (figure 3c). We thus think that, along with increasing reach frequency, choosing smaller food amounts is a systematic (yet weaker) component of foraging tactics in human group settings.
The overall amount of individuals' food intake was not increased by the presence of a co-eater, which may appear inconsistent with results from previous human psychological studies of social facilitation in eating behaviour [10,26]. However, in these studies, food was freely given to the subjects and eating time was not controlled; in a subsequent study of real-world eating behaviour by humans, the increase in food consumption in the presence of others reflected an increase in meal duration [27]. While these psychological studies were silent about the cost–benefit trade-offs in foraging tactics, the present study examined human eating behaviour from a behavioural ecological perspective, arguing that humans may favour sure gain at the cost of time, effort and amount per intake to adjust to potential competition in social foraging.
Our results showed that the behavioural shift was triggered by the mere presence of a co-eater, even without actual competition. This suggests that the underlying mechanism for the shift may be a built-in system that activates automatically in response to relevant social cues. Considering that gregariousness is not human-specific but widespread in animals, neural implementation of an automatic competitive mode may also be rooted in ancient neural circuits. In domestic chicks, for example, a brain region considered to be homologous to the limbic area in mammals contributes to an automatic increase in the reaching frequency for feeders [28,29]. On the other hand, many brain mapping studies in humans have attempted to identify brain regions related to social competition using behavioural games [30–32]. However, the competitive contexts they have used are very different from the foraging situation in our study. Future research addressing the neural implementation of an automatic competitive mode in social foraging will be important not only for behavioural ecology but also to better understand the biological bases of problematic eating behaviour in humans.
In summary, humans shift their foraging tactics when a co-eater is present. Such a behavioural shift is likely to be a built-in response to possible food competition with conspecifics and may be common across many gregarious animals.



Tuesday, March 31, 2020

When in Danger, Turn Right: Covid-19 Threat Promotes Social Conservatism and Right-wing Presidential Candidates

Karwowski, Maciej, Marta Kowal, Agata Groyecka, Michal Bialek, Izabela Lebuda, Agnieszka Sorokowska, and Piotr Sorokowski. 2020. “When in Danger, Turn Right: Covid-19 Threat Promotes Social Conservatism and Right-wing Presidential Candidates.” PsyArXiv. March 31. doi:10.31234/osf.io/pjfhs

Abstract: The recent coronavirus (COVID-19) pandemic forms an enormous challenge for the world's economy, governments, and societies. Drawing upon the Parasite Model of Democratization (Thornhill, R., Fincher, C. L., & Aran, D. (2009). Parasites, democratization, and the liberalization of values across contemporary countries. Biological Reviews, 84(1), 113-131), across two large, preregistered experiments conducted in the USA and Poland (total N = 1,237), we examined the psychological and political consequences of this unprecedented pandemic. By manipulating the saliency of COVID-19, we demonstrate that activating thinking about coronavirus elevates Americans' and Poles' anxiety and indirectly promotes their social conservatism as well as support for more conservative presidential candidates. The pattern obtained was consistent in both countries and implies that the pandemic may result in a shift in political views. Both theoretical and practical consequences of the findings are discussed.


Discussion
In a large-scale, preregistered experiment, we found evidence for a shift in the political views of individuals threatened by the coronavirus pandemic. Specifically, we show that those who feel threatened react with anxiety, tend to seek greater structure in their environment, and thus shift toward social conservatism. All of this increases support for conservative presidential candidates. A strength of our research is the observed similarity of this effect in two countries: Poland and the United States. Although these populations differ in many respects, they exhibited the same pattern of results. Further, our findings cohere with political ideology shifts following terrorist attacks (38). Hence, the results suggest a universal character of the threat-to-conservatism path.

Our results have crucial practical implications, since they suggest that forthcoming elections can be biased toward right-wing, conservative candidates. People simply seek stability and order, which seem to be more pronouncedly exhibited by conservative candidates. Our findings also have important theoretical implications, as the current pandemic created a unique opportunity to validate the Parasite Model of Democratization (14). We found strong support for it: pathogen threat boosted preference for values typical of social conservatism. We also provided evidence against an alternative explanation, that threat boosts support for the status quo, because support was also greater for less liberal (or more centrist) opposing candidates when participants were to choose among them.

Regarding the applicability of our findings, we believe that all candidates should reframe their political communication. In moral foundations theory (39, 40), loyalty and authority constitute the so-called binding values. These moral values are more prominent in conservatives but are not ignored by liberally oriented individuals either. Hence, communication appealing to these values may be an efficient way to mitigate the shift of values in societies: they are accepted by the core supporters of liberal candidates and are actively sought by individuals affected by the coronavirus threat. In general, our results highlight how important it is for people to perceive the world as a stable and predictable place. This preference is even stronger in times of chaos. Those interested in human behavior should consider its importance in current models explaining how we judge and think.

Perceived political and nonpolitical dissimilarity were associated with negative emotions, prejudice, and lower affiliative intentions among both liberals and conservatives, more strongly in the former

Ideological Conflict and Prejudice: An Adversarial Collaboration Examining Correlates and Ideological (A)Symmetries. Chadly Stern, Jarret T. Crawford. Social Psychological and Personality Science, March 30, 2020. https://doi.org/10.1177/1948550620904275

Abstract: In an adversarial collaboration, we examined associations among factors that could link ideological conflict—perceiving that members of a group do not share one’s ideology—to prejudice and affiliation interest. We also examined whether these factors would possess similar (“symmetrical”) or different (“asymmetrical”) associative strength among liberals and conservatives. Across three samples (666 undergraduate students, 347 Mechanical Turk workers), ideological conflict was associated with perceived dissimilarity on political and nonpolitical topics, as well as negative emotions. Perceived political and nonpolitical dissimilarity were also associated with negative emotions, prejudice, and lower affiliative intentions among both liberals and conservatives. Importantly, however, perceived political dissimilarity was associated with negative emotions, prejudice, and lower affiliative intentions more strongly among liberals. Some inconsistent evidence also suggested that perceived nonpolitical dissimilarity was associated with prejudice and lower affiliative intentions more strongly among conservatives. These findings document nuance in relationships that could link ideological conflict to prejudice.

Keywords: ideological conflict, prejudice, ideological symmetry, ideological asymmetry

Females are more likely to tweet about the virus in the context of family, social distancing & healthcare, males are more likely to tweet about sports cancellations, the virus global spread & political reactions

Covid-19 Tweeting in English: Gender Differences. Mike Thelwall. Institute of Health, University of Wolverhampton. https://arxiv.org/ftp/arxiv/papers/2003/2003.11090.pdf

Abstract: At the start of 2020, COVID-19 became the most urgent threat to global public health. Uniquely in recent times, governments have imposed partly voluntary, partly compulsory restrictions on the population to slow the spread of the virus. In this context, public attitudes and behaviors are vitally important for reducing the death rate. Analyzing tweets about the disease may therefore give insights into public reactions that may help guide public information campaigns. This article analyses 3,038,026 English tweets about COVID-19 from March 10 to 23, 2020. It focuses on one relevant aspect of public reaction: gender differences. The results show that females are more likely to tweet about the virus in the context of family, social distancing and healthcare, whereas males are more likely to tweet about sports cancellations, the global spread of the virus and political reactions. Thus, women seem to be taking a disproportionate share of the responsibility for directly keeping the population safe. The detailed results may be useful to inform public information announcements and to help understand the spread of the virus. For example, failure to impose sporting bans whilst encouraging social distancing may send mixed messages to males.


Quantifying, and Correcting For, the Impact of Questionable Research Practices on False Discovery Rates in Psychological Science

Kravitz, Dwight, and Stephen Mitroff. 2020. “Quantifying, and Correcting For, the Impact of Questionable Research Practices on False Discovery Rates in Psychological Science.” PsyArXiv. March 26. doi:10.31234/osf.io/fu9gy

Abstract: Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about the causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. They produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically-based corrections to account for these practices when they are unavoidable, providing a viable path forward.
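One of the three practices, adding data after checking for significance, is easy to reproduce in a small simulation (a sketch of mine, not the authors' analysis; it assumes a two-sample design, uses SciPy's t test, and all settings are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_fdr(n_sims=2000, start=10, step=10, n_max=100, alpha=0.05):
    """Simulate a true null effect, then run a t test after every added
    batch of subjects and stop at the first p < alpha. Returns the realized
    false discovery rate, which repeated peeking inflates above alpha."""
    false_pos = 0
    for _ in range(n_sims):
        a = rng.standard_normal(n_max)  # condition 1: true null
        b = rng.standard_normal(n_max)  # condition 2: same distribution
        for n in range(start, n_max + 1, step):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
                false_pos += 1
                break
    return false_pos / n_sims

fdr = optional_stopping_fdr()
```

With these settings, the realized false discovery rate comes out several times larger than the nominal 5%, consistent with the paper's claim that this practice alone can badly distort significance rates.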

Even Prosocially Oriented Individuals Save Themselves First: Social Value Orientation, Subjective Effectiveness and the Usage of Protective Measures During the COVID-19 Pandemic in Germany

Leder, Johannes, Alexander Pastukhov, and Astrid SchĂĽtz. 2020. “Even Prosocially Oriented Individuals Save Themselves First: Social Value Orientation, Subjective Effectiveness and the Usage of Protective Measures During the COVID-19 Pandemic in Germany.” PsyArXiv. March 30. doi:10.31234/osf.io/nugcr

Abstract: We investigated the perception and the frequency of various protective behavior measures against COVID-19. Although our sample (German general public, N = 419, age = 38.07 (15.67) years, female = 71.1 % (diverse = 0.5%), students = 34.37%) consisted mostly of prosocially oriented individuals, we found that, above all, participants used protective measures that protected themselves. They consistently shunned measures that have higher protective value for the public than for themselves, which indicates that public protective value comes second even for prosocially oriented individuals. Accordingly, health communication should focus on emphasizing a measure’s perceived self-protective value by explaining how it would foster public protection that in the long run will protect the individual and the individual’s close relations.

Monday, March 30, 2020

Robin Hanson: Variolation May Cut Covid19 Deaths 3-30X

Variolation May Cut Covid19 Deaths 3-30X. Robin Hanson. Overcoming Bias, March 30, 2020. http://www.overcomingbias.com/2020/03/variolation-may-cut-covid19-deaths-3-30x.html


(Here I try to put my recent arguments together into an integrated essay, suitable for recommending to others.)

When facing a new pandemic, the biggest win is to end it fast, so that few ever suffer. This prize makes it well worth trying hard to trace, test, and isolate those near the first few cases. Alas, for Covid-19 and the world, this has mostly failed, though not yet everywhere.

The next biggest win is to find a cheap effective treatment, such as a vaccine. And while hope remains for an early win, this looks to be years away. To keep most from getting infected, at this point the West must apparently develop and long maintain unprecedented expansions in border controls, testing, tracing, and privacy invasions, and perhaps also non-home isolation of suspected cases. Alas, these ambitious plans must be implemented by the same governments that have so far failed us badly.

Yes, there remains hope here, which should be pursued. But we also need a Plan B: what if most will eventually be infected without a treatment? The usual answer is “flatten the curve”, via more social distance to lower the peak of (and spread out over time) infection rates, so that more can access limited medical resources such as ventilators, which cut deaths by less than a quarter, since more than three quarters of patients on them die.

However, extreme “lockdowns”, which isolate most everyone at home, not only limit freedoms and strangle the economy but also greatly increase death rates. This is because infections at home via close contacts tend to come with higher initial virus doses, in contrast to the smaller doses you might get from, say, a public door handle. As soon as your body notices an infection, it immediately tries to grow a response, while the virus tries to grow itself. From then on, it is a race to see which can grow bigger faster. And the virus gets a big advantage in this race if its initial infecting dose is larger.
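The growth race described above can be put into a toy model. This sketch is purely illustrative; the growth rates and thresholds are made-up parameters, not estimates from the literature. Both virus and immune response grow roughly exponentially, and the outcome hinges on which crosses its threshold first, so a larger initial dose gives the virus a head start.

```python
import math

def race_outcome(initial_dose, immune_start=1.0,
                 virus_rate=0.9, immune_rate=1.0,
                 virus_danger=1e6, immune_wins=1e5):
    """Toy exponential-growth race between virus and immune response.

    All numbers are illustrative placeholders. The virus 'wins'
    (severe disease) if it reaches virus_danger before the immune
    response reaches immune_wins; exponential growth means the time
    to a threshold is log(threshold / start) / rate.
    """
    t_virus = math.log(virus_danger / initial_dose) / virus_rate
    t_immune = math.log(immune_wins / immune_start) / immune_rate
    return "severe" if t_virus < t_immune else "mild"
```

Under these made-up parameters, a unit dose loses the race to the immune response (`race_outcome(1.0)` gives `"mild"`), while a thousand-fold larger initial dose wins it (`race_outcome(1000.0)` gives `"severe"`), which is the qualitative point of the dose argument.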

This isn’t just a theory. The medical literature consistently finds strong relations, in both animals and humans, between initial virus dose and symptom severity, including death. The most directly relevant data is on SARS and measles, where natural differences in doses were associated with factors of 3 and 14 in death rates, and in smallpox, where in the 1700s low “variolation” doses given on purpose cut death rates by a factor of 10 to 30. For example, variolation saved George Washington’s troops at Valley Forge.

Early on, it can be worth paying such high costs to end a pandemic. But once a pandemic seems likely to eventually infect most everyone, it becomes less clear whether lockdowns are a net win. However, the dose effect that lockdowns exacerbate, by increasing dose size, also offers a huge opportunity to slash deaths, via voluntary infection with very low doses.

Just as replacing accidental smallpox infections with deliberate low dose infections cut smallpox deaths by a factor of 10 to 30, a factor of 3-30 is plausible for Covid19 death rate cuts due to replacing accidental Covid19 infections with deliberate small dose infections. Observed mortality differences due to natural dose variations give only a lower bound on what is feasible via controlled doses. Of course we can’t be sure until we get more direct evidence. But systematic variolation experiments involving at most a few thousand volunteers seem sufficient to get evidence not only on death rates, but also on ideal infection doses and methods, and on the value of complementary drugs that slow viral replication (e.g., remdesivir).

This dose size advantage adds to several other substantial advantages of variolation. Not only does it offer controlled conditions for studying disease progression, and for training medical personnel, it can also help ensure consistent staffing of critical workers, by spacing out their infections.

Furthermore, the combination of variolation with immediate isolation until recovery “flattens the curve,” by spreading out medical demand over time, and also adding to the herd immunity that usually ends a pandemic. So even without a death rate cut due to lower doses, this strategy produces a net social gain.

This last claim may sound counter-intuitive, but it has in fact recently been confirmed in three independently developed simulations. For example, in a simulation where old and sick people are selected for isolation, while only the young and healthy are eligible for variolation, there are 40% fewer life years lost, compared to no variolation and random selection for isolation. Each variolation volunteer suffers only an additional 0.20% chance of death to save a random other person from a 6.5% chance. And these simulations ignore any benefits of low doses; they hold constant the infection and death rates, and the total quantity of social isolation, and thus expense.
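A back-of-the-envelope version of the per-pair trade stated above can make the stakes concrete. The two probabilities (0.20% extra risk to the volunteer, 6.5% risk spared for the other person) are taken from the text; everything else is simple illustrative arithmetic, not a reconstruction of the three cited simulations.

```python
def expected_deaths(n_pairs, p_volunteer=0.0020, p_spared=0.065):
    """Expected deaths across n_pairs volunteer/beneficiary pairs.

    Without variolation, each beneficiary faces a p_spared chance of
    death. With variolation, that person is spared and the volunteer
    bears only the extra p_volunteer chance (figures from the essay).
    Returns (expected deaths without, expected deaths with).
    """
    without = n_pairs * p_spared
    with_var = n_pairs * p_volunteer
    return without, with_var

without, with_var = expected_deaths(100_000)
saved = without - with_var  # about 6,300 net expected lives saved per 100,000 pairs
```

At these figures the volunteer's added risk is roughly one thirtieth of the risk spared, so the expected-death ledger favors variolation by a wide margin even before any low-dose mortality benefit.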

Of course, if low doses cut death rates by a factor of two or more, variolation volunteers would actually cut their chance of death, perhaps greatly. Yes, the first few thousand volunteers could be less sure of such gains, but they could be compensated for this risk, just as we now consider compensating subjects in vaccine trials using live Covid19 viruses. We could pay variolation volunteers cash, offer their loved ones priority medical care, certify them as safe for work and social gatherings, and honor them like soldiers selected for their elite features who take risks to produce community gains.

So the scenario is this: Variolation Villages welcome qualified volunteers. Friends and family can enter together, and remain together. A cohort enters together, and is briefly isolated individually for as long as it takes to verify that they’ve been infected with a very small dose of the virus. They can then interact freely with each other, but not leave the village until tests show they have recovered.

In Variolation Village, volunteers have a room, food, internet connection, and full medical care. Depending on available funding from government or philanthropic sources, volunteers might either pay to enter, get everything for free, or be paid a bonus to enter. Health plans of volunteers may even contribute to the expense.

Those who work in medicine or critical infrastructure seem especially valuable candidates for early variolation; such volunteers might be offered larger bonuses. Once they have recovered, they are more surely available to work near the pandemic peak, and can more easily risk social contact at work.

Note that this strategy of variolation plus isolation requires no government support, nor loss of personal freedom, just the sort of legal permission sometimes given to administrators and volunteers of vaccine trials. And this comparison with vaccine trial policy can be emphasized to those tempted to see this policy as repulsive. Variolation policy offers similar social gains, and may require similar voluntary personal sacrifices.

Note also that there is no minimum scale required to make this policy beneficial. Even variolation of only a few is still a social gain compared to none at all. A small early trial could generate much useful attention and discussion regarding this strategy, to inspire application in this and future pandemics. Furthermore, the optimal time to stop this practice for personal reasons is probably close to the optimal time to stop for social reasons, so choice of stopping date needn’t be heavily regulated.

Some fear that it is now too late to consider variolation, as the pandemic peak may be only a few weeks away. But lockdowns may succeed in substantially slowing Covid19 growth, and we may then be in for many months or years of alternating local waves of suppression and reappearance. Furthermore, if low doses cut death rates enough, variolation can make sense even at the pandemic peak, when medical resources are stretched most thin. For example, for a factor of 3 cut in death rates, variolation replaces three sick patients with one similarly sick patient, lowering total medical demand.

As variolation doesn’t much change the total number who are ever infected, it doesn’t give the virus more total chances to evolve. In fact, while accidental infections risk selection for versions that infect people more easily, voluntary infections avoid this problematic effect.

While you might think policy wonks would be eager to cut Covid19 death rates by a factor of 3-30, few have so far been attracted to discuss or pursue this concept. It seems to push the wrong buttons in many people. So if you are a rare exception who finds the concept plausible, you can gain disproportionate policy leverage by working on a neglected but important option. You might help in one of these areas:

[more text at the link above]

We have much work to do if this Plan B is to be ready when needed.

Adults severely underestimate their absolute and relative fatality risk if infected with SARS-CoV-2

Niepel, Christoph, Dirk Kranz, Francesca Borgonovi, and Samuel Greiff. 2020. “Sars-cov-2 Fatality Risk Perception in US Adult Residents.” PsyArXiv. March 30. doi:10.31234/osf.io/w52e9

Abstract: Our study presents time-critical empirical results on the SARS-CoV-2 fatality risk perception of 1182 US adult residents stratified for age and gender. Given the current epidemiological figures, our findings suggest that many US adult residents severely underestimate their absolute and relative fatality risk if infected with SARS-CoV-2. These results are worrying because risk perception, as our study suggests, relates to self-reported actual or intended behavior that can reduce SARS-CoV-2 transmission rates.

Animals benefit from numerical competence (foraging, navigating, hunting, predation avoidance, social interactions, & reproductive activities); internal number representations determine how they perceive stimulus magnitude

The Adaptive Value of Numerical Competence. Andreas Nieder. Trends in Ecology & Evolution, March 30, 2020. https://doi.org/10.1016/j.tree.2020.02.009

Highlights
*  Numerical competence, the ability to estimate and process the number of objects and events, is of adaptive value.
*  It enhances an animal’s ability to survive by exploiting food sources, hunting prey, avoiding predation, navigating, and persisting in social interactions. It also plays a major role in successful reproduction, from monopolizing receptive mates to increasing the chances of fertilizing an egg and promoting the survival chances of offspring.
*  In these ecologically relevant scenarios, animals exhibit a specific way of internally representing numbers that follows the Weber-Fechner law.
*  A framework is provided for more dedicated and quantitative analyses of the adaptive value of numerical competence.

Abstract: Evolution selects for traits that are of adaptive value and increase the fitness of an individual or population. Numerical competence, the ability to estimate and process the number of objects and events, is a cognitive capacity that also influences an individual’s survival and reproduction success. Numerical assessments are ubiquitous in a broad range of ecological contexts. Animals benefit from numerical competence during foraging, navigating, hunting, predation avoidance, social interactions, and reproductive activities. The internal number representations determine how animals perceive stimulus magnitude, which, in turn, constrains an animal’s spontaneous decisions. These findings are placed in a framework to provide for a more quantitative analysis of the adaptive value and selection pressures of numerical competence.

Keywords: quantity, number, Weber-Fechner law, proportional processing, ultimate causes, animal cognition
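The Weber-Fechner law invoked in the highlights says perceived magnitude grows roughly with the logarithm of stimulus number, so what matters for telling two numerosities apart is their ratio, not their difference. A minimal sketch of that idea (the Weber fraction of 0.15 is an illustrative value, not a figure from the paper):

```python
import math

def perceived_magnitude(n, k=1.0):
    """Weber-Fechner: perceived magnitude scales with the log of number."""
    return k * math.log(n)

def discriminable(n1, n2, weber_fraction=0.15):
    """Two numerosities are distinguishable when their perceived
    magnitudes differ by more than log(1 + weber_fraction), i.e.
    when their ratio exceeds 1 + weber_fraction."""
    gap = abs(perceived_magnitude(n1) - perceived_magnitude(n2))
    return gap > math.log(1 + weber_fraction)
```

Under this sketch, 10 vs 12 items (ratio 1.2) are distinguishable, while 100 vs 110 (ratio 1.1) are not, even though the absolute difference is five times larger; this ratio dependence is what the abstract means by internal number representations constraining perceived stimulus magnitude.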



UK: Inequality in socio-emotional skills has increased across cohorts, especially for boys and at the bottom of the distribution

Inequality in socio-emotional skills: A cross-cohort comparison. Orazio Attanasio et al. Journal of Public Economics, March 30, 2020, 104171. https://doi.org/10.1016/j.jpubeco.2020.104171

Abstract: We examine changes in inequality in socio-emotional skills very early in life in two British cohorts born 30 years apart. We construct comparable scales using two validated instruments for the measurement of child behaviour and identify two dimensions of socio-emotional skills: ‘internalising’ and ‘externalising’. Using recent methodological advances in factor analysis, we establish comparability in the inequality of these early skills across cohorts, but not in their average level. We document for the first time that inequality in socio-emotional skills has increased across cohorts, especially for boys and at the bottom of the distribution. We also formally decompose the sources of the increase in inequality and find that compositional changes explain half of the rise in inequality in externalising skills. On the other hand, the increase in inequality in internalising skills seems entirely driven by changes in returns to background characteristics. Lastly, we document that socio-emotional skills measured at an earlier age than in most of the existing literature are significant predictors of health and health behaviours. Our results show the importance of formally testing comparability of measurements to study skills differences across groups, and in general point to the role of inequalities in the early years for the accumulation of health and human capital across the life course.

JEL classification: J13, J24, I14, I24, C38
Keywords: Inequality, Socio-emotional skills, Cohort studies, Measurement invariance


Our results imply that the ability to utilize the enhanced information of a face to recognize familiar faces may develop at around 7 months of age

Infants’ recognition of their mothers’ faces in facial drawings. Megumi Kobayashi, Ryusuke Kakigi, So Kanazawa, Masami K. Yamaguchi. Developmental Psychobiology, March 29, 2020. https://doi.org/10.1002/dev.21972

Abstract: This study examined the development of the ability to recognize a familiar face in drawings in infants aged 6–8 months. In Experiment 1, we investigated infants’ recognition of their mothers’ faces by testing their visual preference for their mother’s face over a stranger’s face under three conditions: photographs, cartoons produced by online software that simplifies and enhances the contours of facial features of line drawings, and veridical line drawings. We found that 7‐ and 8‐month‐old infants showed a significant preference for their mother’s face in photographs and cartoons, but not in veridical line drawings. In contrast, 6‐month‐old infants preferred their mother’s face only in photographs. In Experiment 2, we investigated a visual preference for an upright face over an inverted face for cartoons and veridical line drawings in 6‐ to 8‐month‐old infants, finding that infants aged older than 6 months showed the inversion effect in face preference in both cartoons and veridical line drawings. Our results imply that the ability to utilize the enhanced information of a face to recognize familiar faces may develop at around 7 months of age.