Thursday, April 2, 2020

Lay People Are Unimpressed by the Effect Sizes Typically Reported in Psychology

McPhetres, Jonathon, and Gordon Pennycook. 2020. “Lay People Are Unimpressed by the Effect Sizes Typically Reported in Psychological Science.” PsyArXiv. April 2. doi:10.31234/osf.io/qu9hn

Abstract: It is recommended that researchers report effect sizes along with statistical results to aid in interpreting the magnitude of results. According to recent surveys of published research, psychologists typically find effect sizes ranging from r = .11 to r = .30. While these numbers may be informative for scientists, no research has examined how lay people perceive the range of effect sizes typically reported in psychological research. In two studies, we showed online participants (N = 1,204) graphs depicting a range of effect sizes in different formats. We demonstrate that lay people perceive psychological effects to be small, rather meaningless, and unconvincing. Even the largest effects we examined (corresponding to a Cohen’s d = .90), which are exceedingly uncommon in reality, were considered small-to-moderate in size by lay people. Science communicators and policymakers should consider this obstacle when attempting to communicate the effectiveness of research results.
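The abstract mixes two effect size metrics, r and Cohen's d. For readers who want to relate them, the standard conversion (assuming two groups of equal size) is d = 2r / sqrt(1 - r^2); a minimal illustrative sketch, not taken from the paper:

    # Standard r <-> Cohen's d conversions, assuming two equal-sized
    # groups. Illustrative sketch only, not the authors' code.
    import math

    def r_to_d(r):
        return 2 * r / math.sqrt(1 - r ** 2)

    def d_to_r(d):
        return d / math.sqrt(d ** 2 + 4)

    print(round(r_to_d(0.11), 2))  # ~0.22: lower end of the typical range
    print(round(r_to_d(0.30), 2))  # ~0.63: upper end of the typical range
    print(round(d_to_r(0.90), 2))  # ~0.41: the largest effect examined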

Why Do So Few People Share Fake News? It Hurts Their Reputation

Altay, Sacha, Anne-Sophie Hacquin, and Hugo Mercier. 2019. “Why Do So Few People Share Fake News? It Hurts Their Reputation.” PsyArXiv. October 1. doi:10.31234/osf.io/82r6q

Abstract: Despite its potential attractiveness, fake news is shared by only a small minority of internet users. As past research suggests a good reputation is more easily lost than gained, we hypothesized that most people and media sources avoid sharing fake news stories so as to maintain a good reputation. In two pre-registered experiments (N = 3264) we found that the increase in trust that a source (media outlet or individual) enjoys when sharing one real news story against a background of fake news is smaller than the drop in trust a source suffers when sharing one fake news story against a background of real news. This asymmetry holds even when the outlet only shares politically congruent news. We suggest that individuals and media outlets avoid sharing fake news because it would hurt their reputation, reducing the social or economic benefits associated with being seen as a good source of information.

How Many Jobs Can Be Done at Home? About 34% of the labor force could work from home in the US

How Many Jobs Can Be Done at Home? Jonathan Dingel, Brent Neiman. Becker Friedman Institute, University of Chicago, March 27, 2020. https://bfi.uchicago.edu/wp-content/uploads/BFI_White-Paper_Dingel_Neiman_3.2020.pdf

1  Introduction
Evaluating the economic impact of “social distancing” measures taken to arrest the spread of COVID-19 raises a number of fundamental questions about the modern economy: How many jobs can be performed at home? What share of total wages is paid to such jobs? How does the scope for working from home vary across cities or industries? To answer these questions, we classify the feasibility of working at home for all occupations and merge this classification with occupational employment counts for the United States. Our feasibility measure is based on responses to two Occupational Information Network (O*NET) surveys covering “work context” and “generalized work activities.” For example, if answers to those surveys reveal that an occupation requires daily “work outdoors” or that “operating vehicles, mechanized devices, or equipment” is very important to that occupation’s performance, we determine that the occupation cannot be performed from home.1 We merge this classification of O*NET occupations with information from the U.S. Bureau of Labor Statistics (BLS) on the prevalence of each occupation in the aggregate as well as in particular metropolitan statistical areas and 2-digit NAICS industries.

2  Results

Our classification implies that 34 percent of U.S. jobs can plausibly be performed at home. We obtain our estimate by identifying job characteristics that clearly rule out the possibility of working entirely from home, neglecting many characteristics that would make working from home difficult.2
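The procedure the authors describe is a rule-based classification of O*NET occupations followed by employment weighting with BLS counts. A schematic sketch of that pipeline in Python; the survey items, occupation codes, and employment figures below are illustrative placeholders, not the authors' data or coding:

    # Schematic sketch of a teleworkability classification: flag an
    # occupation as not workable from home if any disqualifying rule
    # fires, then compute an employment-weighted share. All values
    # here are placeholders, not the paper's data.
    import pandas as pd

    onet = pd.DataFrame({
        "occ_code":        ["11-1011", "47-2061", "53-3032"],
        "outdoors_daily":  [False, True, False],   # "work context" item
        "vehicle_oper_vi": [False, False, True],   # "very important" activity
    })
    onet["can_wfh"] = ~(onet["outdoors_daily"] | onet["vehicle_oper_vi"])

    bls = pd.DataFrame({"occ_code": ["11-1011", "47-2061", "53-3032"],
                        "employment": [2_400_000, 600_000, 1_800_000]})
    merged = onet.merge(bls, on="occ_code")
    share = merged.loc[merged["can_wfh"], "employment"].sum() / merged["employment"].sum()
    print(f"{share:.0%} of jobs can plausibly be performed at home")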

When individuals are exposed to their own image in a mirror, known to increase self-awareness, they may show increased accessibility of suicide-related words (the mirror effect); replication fails in this paper

Monéger, J., Chatard, A., & Selimbegović, L. (2020). The Mirror Effect: A Preregistered Replication. Collabra: Psychology, 6(1), 18. http://doi.org/10.1525/collabra.321

Abstract: When individuals are exposed to their own image in a mirror, known to increase self-awareness, they may show increased accessibility of suicide-related words (a phenomenon labeled “the mirror effect”; Selimbegović & Chatard, 2013). We attempted to replicate this effect in a pre-registered study (N = 150). As in the original study, self-awareness was manipulated using a mirror and recognition latencies for accurately detecting suicide-related words, negative words, and neutral words in a lexical decision task were assessed. We found no evidence of the mirror effect in pre-registered analyses. A multiverse analysis revealed a significant mirror effect only when excluding extreme observations. An equivalence TOST test did not yield evidence for or against the mirror effect. Overall, the results suggest that the original effect was a false positive or that the conditions for obtaining it (in terms of statistical power and/or outlier detection method) are not yet fully understood. Implications for the mirror effect and recommendations for pre-registered replications are discussed.

Keywords: Self-awareness, Suicide thought accessibility, Median Absolute Deviation

4. Discussion

In the present study, we attempted to replicate the mirror effect. We expected recognition latencies to suicide-related words to be shorter in the mirror exposure condition than in the control condition, when controlling for neutral-word latencies or negative-word latencies. These predictions remained unsupported when using the pre-registered outlier detection method in the confirmatory analyses. However, a test assessing the equivalence of the observed effect to a null effect did not provide significant evidence that the mirror effect was equivalent to a null effect (considering d = 0.2 as the smallest effect size of interest). Moreover, an exploratory multiverse analysis showed increasing effect sizes as a function of the decreasing threshold of outlier exclusion, as detected by a robust outlier detection method (i.e., the median absolute deviation; Leys et al., 2013), such that the mirror effect was significant when observations diverging by more than 2 median absolute deviations from the median (or stricter thresholds) were excluded, but only when using negative words’ RT as a covariate. This partial replication raises several interesting questions about the status of the mirror effect, the effect of outliers in a sample, and, more generally, about what allows for concluding that a replication is successful.
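For concreteness, a minimal sketch of the TOST equivalence logic with d = 0.2 as the smallest effect size of interest; the latencies and group sizes below are simulated placeholders, not the study's data:

    # Minimal TOST (two one-sided tests) sketch for two independent
    # groups, with d = 0.2 as the smallest effect size of interest.
    # Data are simulated; this is not the authors' analysis code.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mirror = rng.normal(600, 80, 75)   # recognition latencies (ms), simulated
    control = rng.normal(605, 80, 75)

    sd_pooled = np.sqrt((mirror.var(ddof=1) + control.var(ddof=1)) / 2)
    delta = 0.2 * sd_pooled            # equivalence bound on the raw scale
    se = sd_pooled * np.sqrt(1 / len(mirror) + 1 / len(control))
    diff = mirror.mean() - control.mean()
    df = len(mirror) + len(control) - 2

    # H0a: diff <= -delta and H0b: diff >= +delta; equivalence is claimed
    # only if both one-sided tests reject.
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)
    p_upper = stats.t.cdf((diff - delta) / se, df)
    print(f"TOST p = {max(p_lower, p_upper):.3f}")  # < .05 would support equivalence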

4.1. Mixed results concerning the mirror effect

Several large-scale replication projects show that about half of published findings fail to replicate in direct and high-powered replications in psychology (Klein et al., 2018; Open Science Collaboration, 2015; Simons, Holcombe, & Spellman, 2014). These recent studies point out that it is often difficult to replicate published effects. Between the noise inherent to behavioral sciences and the small-sized effects that we often encounter in psychology, observing statistically significant differences is not guaranteed in replication attempts, even when the effect exists in the population. Indeed, one must take into account the inevitable heterogeneity that exists between a study and its replications (Kenny & Judd, 2019), among other factors.
The present replication findings suggest that the original finding might be a false positive. At the same time, equivalence testing does not warrant a conclusion that the effect is equivalent to 0. Also, multiverse analyses show that the effect was significant in some cases, when using a robust method and a severe criterion for detecting outliers. We believe that if the effect exists, the effect size is likely to be smaller than initially thought. In sum, the study did not provide evidence for a robust mirror effect, but neither did it provide evidence for a null effect (i.e., an effect too trivial to be studied, as defined by a Cohen’s d smaller than 0.2). Therefore, further studies using larger samples are needed to establish more reliable estimates of the effect size and a better understanding of the mechanisms involved in this effect, if it exists.

4.2. Detecting outliers in a sample

Outliers are atypical data points that are abnormally different from the “bulk” of observations in a study, and therefore non-representative of the population (Leys, Delacre, Mora, Lakens, & Ley, 2019). There are many ways to define an outlier in a specific data set, as there are many statistical criteria that have been put forward in the literature. Studentized residuals and z-scores are among the most popular ways to detect outliers (Cousineau & Chartier, 2010). However, as underlined by Rousseeuw (1990), these criteria can underperform. The reason for this is that they are based on the sample standard deviation, which is itself a parameter highly sensitive to outliers (Wilcox, 2010). Robust estimators are hence needed to detect outliers. Contrary to studentized and standardized residuals, the median is highly insensitive to outliers (Leys et al., 2013). As one robust estimator, the median absolute deviation (MAD) is particularly relevant in this case, since the classic methods would have failed to detect influential data points (Leys et al., 2013; see also Wilcox, 2017).
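A minimal sketch of MAD-based outlier detection in the spirit of Leys et al. (2013); the threshold k and the toy latencies are illustrative:

    # Flag observations more than k median absolute deviations from the
    # median. The 1.4826 constant makes the MAD consistent with the SD
    # under normality; k = 2, 2.5, or 3 are commonly used thresholds.
    import numpy as np

    def mad_outliers(x, k=2.5):
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        mad = 1.4826 * np.median(np.abs(x - med))
        return np.abs(x - med) > k * mad

    rts = np.array([512, 498, 530, 505, 2490, 515, 488])  # toy latencies (ms)
    print(mad_outliers(rts))  # only the 2490 ms observation is flagged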
How we manage the presence of outliers in a sample is a fundamental aspect of data analysis. However, to date, there is no consensus about which method is the most appropriate and what threshold should be used for detecting and excluding outliers (Leys et al., 2013). In an attempt to optimize the quality of the replication, the hypothesis, method, and statistical analysis were pre-registered. However, what we failed to predict was that excluding outliers on the basis of studentized residuals would not be sufficient to discard all influential data points. Hence, pre-registering a single outlier detection technique might be insufficient. In this view, Leys et al. (2019) recently provided specific recommendations concerning pre-registering and detecting outliers, one of which is to expand a priori reasoning in the registration, in order to manage unpredicted outliers. In our view, this amounts to the option of registering multiple ways to handle outliers. For instance, one could register a decision tree specifying the possible ways to handle outliers as a function of the distribution. Nosek, Ebersole, DeHaven, and Mellor (2017), for example, mention the possibility of defining a sequence of tests and determining the use of a parametric or non-parametric approach according to the outcome of normality assumption tests. In a similar vein, standard operating procedures (SOPs) are more general than decision trees; they are shared within a given field of research to standardize data handling (e.g., Lin & Green, 2016). The development of such standard procedures applied to outlier detection and exclusion could provide a useful tool for pre-registration.
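A pre-registerable decision tree of the kind Nosek et al. (2017) mention can be written down compactly; the normality check, alpha, and fallback test below are illustrative choices, not a recommendation from the paper:

    # Sketch of a pre-registered analysis decision tree: run a
    # pre-specified normality check, then branch to a parametric or
    # non-parametric comparison accordingly.
    from scipy import stats

    def compare_groups(a, b, alpha_norm=0.05):
        normal = (stats.shapiro(a).pvalue > alpha_norm and
                  stats.shapiro(b).pvalue > alpha_norm)
        if normal:
            return "independent t-test", stats.ttest_ind(a, b)
        return "Mann-Whitney U", stats.mannwhitneyu(a, b)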
Developing common, consensual procedures can thus be a solution for dealing with the unpredictable aspects of data, such as the presence of outliers. This would be a controlled, transparent, and probably optimal manner of handling unpredictability, while suppressing the researchers’ degrees of freedom in post-hoc decisions concerning the method used to detect outliers (see Wicherts et al., 2016). In statistics and methodology, as in many fields, a perfect plan does not exist, so it is difficult to offer a perfect solution that fits all studies. In our view, there is a need to define a more general plan for handling data, one that could fit a large number of studies. Among the issues that would need to be addressed in such a plan are, for instance, the definition of the outlier detection/exclusion criterion (intraindividual or interindividual), the specific (robust) criterion to be used, and the desired distribution.

Wednesday, April 1, 2020

High prevalence of lying to cover up others’ unethical behavior, which increased with increasing bribes; unethical loyalty decreased with individuals’ Honesty–Humility levels

Buying Unethical Loyalty: A Behavioral Paradigm and Empirical Test. Isabel Thielmann, Robert Böhm, Benjamin E. Hilbig. Social Psychological and Personality Science, April 1, 2020. https://doi.org/10.1177/1948550620905218

Abstract: Unethical behavior is often accompanied by others covering up a transgressor’s actions. We devised a novel behavioral paradigm, the Unethical Loyalty Game (ULG), to study individuals’ willingness to lie to cover up others’ dishonesty. Specifically, we examined (i) whether and to what extent individuals are willing to lie to cover up others’ unethical behavior, (ii) whether this unethical loyalty depends on the benefits (bribe) at stake, and (iii) whether trait Honesty–Humility accounts for interindividual variability in unethical loyalty. In a fully incentivized experiment (N = 288), we found a high prevalence of lying to cover up others’ unethical behavior, which increased with increasing bribes. In turn, unethical loyalty decreased with individuals’ Honesty–Humility levels. Overall, the findings show that most but not all individuals are corruptible to disguise others’ transgressions. Future research using the ULG can help to further illuminate (the determinants of) this prevalent type of unethical behavior.

Keywords: unethical loyalty, cover-up, dishonesty, bribing, Honesty–Humility


Going Upstream to Advance Psychosis Prevention and Improve Public Health

Going Upstream to Advance Psychosis Prevention and Improve Public Health. Deidre M. Anglin, Sandro Galea, Peter Bachman. JAMA Psychiatry, April 1, 2020. doi:10.1001/jamapsychiatry.2020.0142

The idea that we can reduce the incidence of psychotic disorders through detection and intervention in the prodromal stage of illness has generated increasing enthusiasm and research over the past 2 decades. This work has sought largely to identify individual-level changes in subjective experience, functioning, or brain volume or activity that immediately precede acute symptom onset. However, mental illnesses, including psychotic disorders, are particularly sensitive to the social, political, cultural, and economic context within which an individual lives.1 Prioritizing approaches to psychosis prevention that fail to give these social determinants a central role ignores compelling evidence and misses an opportunity to identify specific ways to help vulnerable youth.

Consider the example of racism’s pervasive detrimental association with the physical and mental well-being of disadvantaged people of color.2 Institutional racism creates differences in the average group member’s social, economic, and environmental circumstances, including living conditions in neighborhoods, work, and school. These social inequities distribute risk factors for mental disorders, such as exposure to violence, trauma, and chronic adversity and disadvantage, unevenly in the population in such a way that often disproportionately burdens group members with minority status (eg, people of color, poor people, and immigrants). In addition, the social experience of this oppression (ie, interpersonal discrimination) can further heighten the risk for mental illness because of the greater cumulative stress load associated with such lived experiences.

A growing body of US-based research has been providing data to inform our understanding of how social environmental inequities may enhance psychosis risk. For example, the association between social factors, such as racial discrimination3 and adverse childhood experiences,4 and the extended psychosis phenotype has been demonstrated in large national probability samples, developmental cohorts, smaller community-based samples, and even clinical high-risk studies. Despite this, the field’s focus on the role these underlying conditions play in shaping the incidence, duration, and treatment responsiveness of psychosis remains limited and falls short of the importance that these factors play in the etiology and course of psychosis. There are many reasons why there is a paucity of research on social risk factors for psychosis. Federal funding priorities have been a factor, as have concerns among researchers about the nonspecificity of social risk factors and the daunting prospect of large-scale societal change as an intervention. However, we suggest that from a public health perspective, some of these concerns represent opportunities.

Consider nonspecificity using the following example. High levels of air pollution have been found to be associated with depression, anxiety, and psychosis.5 This could indicate a common causal pathway among these 3 distinct syndromes through which pollution increases a disease process broadly (eg, inflammation), resulting in different possible outcomes. Air pollution could also contribute to the risk for depression in a way that is different from how it contributes to the risk for psychosis. We suggest that the significance of air pollution as a potential social determinant of mental illness remains regardless of whether it helps differentiate the risk of one disease from another. Moreover, it is not clear that a preferential focus on more microlevel foci (eg, genetic mutations) reveals evidence of such specificity of predictors.6 It stands to reason that the benefits of reducing air pollution would be widespread, providing more general social benefits that align with evolving views of the pluripotent nature of the risk for mental illness. The risk itself, including social risk, may be fairly nonspecific.

The notion that large-scale societal change as an intervention is too big or outside psychiatrists’ purview is at odds with the history of psychiatry, whose development has mirrored society’s evolving understanding of illness in general. For example, the advent of psychopharmacological interventions in the 1950s shifted the field from a more psychoanalytic understanding of psychopathology toward a strong biological perspective. Such discoveries shaped and changed the way psychiatrists were trained and practiced as clinicians, how research was conducted, and how psychiatrists understood mental illness. Similarly, social change during the 1960s and 1980s contributed to the deinstitutionalization of psychiatric hospitals, increasing the degree to which psychiatry was practiced as part of a larger service team in community-based mental health centers. Psychiatry can continue to evolve and be shaped by a richer appreciation and study of social determinants.

Conclusions and Recommendations

We propose a recalibration of priorities in which we focus on systemic, structural social risk factors with the same energy and investment that we apply to the search for individual-level signs, symptoms, and mechanisms, including physiological mechanisms. Thankfully, the association between social risk factors and physiological mechanisms does not have to be a zero-sum game. We have every reason to believe that moving upstream may demonstrate that these social risk factors operate with and via biological mechanisms to increase psychosis risk.7 Identifying the potential causal role of social mechanisms more explicitly will also require continued advancement in our epidemiologic methods of causal inference. Increasing our attention toward these social risk factors may help us take the next big step in predicting and preventing psychosis, and in doing so, positively affect the incidence and expression of other mental illnesses. Perhaps most important, understanding how forces like racism, poverty, and social marginalization affect mental illness is a step on the way to becoming a society in which the health of vulnerable youth is considered as important as their health care.

How do we get there? We recommend the following research, education, policy, and clinical actions. For us to understand how social risk factors contribute to outcomes such as psychosis, we need funding priorities from grant-making agencies to include the examination of social, cultural, economic, and political associations with risk for serious mental illness without requiring a priori links to identified neural circuits. Large-scale, longitudinal studies of risk for serious mental illness should systematically oversample populations with high levels of social disadvantage so hypotheses regarding the association of social risk factors can be tested. We are encouraged by recent funding efforts from the National Institute on Minority Health and Health Disparities to study the social epigenomics that drive health disparities. We believe psychosis risk should be included in such funding efforts.

Public mental health data quality and availability need to be improved. For example, we have had difficulty obtaining reliable, stable estimates of clinical psychosis incidence at a population level across different socially constructed demographic groups (eg, racial groups with minority status) in national probability samples. Regarding the education of psychiatrists, training for clinicians should strive for structural competency, which includes cultural competency as well as facility in addressing other social, economic, and political factors that affect the lives of patients.8 On a policy level, a shift toward value-based care (and away from fee-for-service) would be a step in the right direction. Enacting such a change requires routinely assessing social risk factors as part of treatment planning and robust partnership with social service agencies that are incentivized to address these social disadvantages. Ideally, all policy decisions across all levels of government should consider the question, “would this policy make our constituents healthier or sicker?” Finally, from a clinical perspective, assessing and addressing social disadvantages should be the shared responsibility of professionals across systems of care and seen as a fundamental aspect of taking a whole-person or patient-centered approach to health care.

References, full text at the link above.

Dreams: “A person now dead as alive” is more frequent in older people, while “A person now alive as dead” in children; adults & older adults dream more often of “Trying something again and again” and “Arriving too late”

Maggiolini, A., di Lorenzo, M., Falotico, E., & Morelli, M. (2020). The typical dreams in the life cycle. International Journal of Dream Research, 20(1), 17-28. https://doi.org/10.11588/ijodr.2020.1.61558

Abstract: Most dream content analyses have been carried out on young adult samples, taken as norms, with less research on continuity and discontinuity across the life cycle. A study of dreams across the life cycle (1546 participants, aged 8 to 70 years), using the Typical Dreams Questionnaire (Nielsen et al., 2003; Dumel, Nielsen, & Carr, 2012), shows that 55.8% of dream reports have one or more typical content, with a quite stable prevalence across ages: more dreams contain a TDQ item in children and in older adults, with the minimum percentage in young adults. Children show more diversity in typical themes than other ages. The most frequent items in children have content related to some threat or some magic topic. “A person now dead as alive” is more frequent in older people, while “A person now alive as dead” is more frequent in children and preadolescents. “School, teachers and studying” is more frequent in adolescence and “Sexual experiences” in young adults. Adults and older adults dream more often of “Trying something again and again” and “Arriving too late”. Changes in typical dream themes can be related to emotional concerns typical of different phases of the life cycle.




Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System: Costs going up

Population-Based Estimates of Health Care Utilization and Expenditures by Adults During the Last 2 Years of Life in Canada’s Single-Payer Health System. Laura C. Rosella et al. JAMA Netw Open. 2020;3(4):e201917, April 1, 2020. doi:10.1001/jamanetworkopen.2020.1917

Question  What are the population-level trends in health care utilization and expenditures in the 2 years before death among adults in Ontario, Canada?

Findings  This cohort study found that health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion in 2005 to CAD$7.84 billion in 2015, and the intensity of health care utilization and deaths in hospital varied by resource utilization gradients.

Meaning  In this study, the observed trends demonstrated that costs and hospital-centered care before death are high in Ontario.


Abstract
Importance  Measuring health care utilization and costs before death has the potential to initiate health care improvement.

Objective  To examine population-level trends in health care utilization and expenditures in the 2 years before death in Canada’s single-payer health system.

Design, Setting, and Participants  This population-based cohort included 966 436 deaths among adult residents of Ontario, Canada, from January 2005 to December 2015, linked to health administrative and census data. Data for deaths from 2005 to 2013 were analyzed from November 1, 2016, through January 31, 2017. Analyses were updated from May 1, 2019, to June 15, 2019, to include deaths from 2014 and 2015.

Exposures  Sociodemographic exposures included age, sex, and neighborhood income quintiles, which were obtained by linking decedents’ postal codes to census data. Aggregated Diagnosis Groups were used as a general health service morbidity-resource measure.

Main Outcomes and Measures  Health care services accessed for the last 2 years of life, including acute hospitalization episodes of care, intensive care unit visits, and emergency department visits. Total health care costs were calculated using a person-centered costing approach. The association of area-level income with high resource use 1 year before death was analyzed with Poisson regression analysis, controlling for age, sex, and Aggregated Diagnosis Groups.

Results  Among 966 436 decedents (483 038 [50.0%] men; mean [SD] age, 76.4 [14.96] years; 231 634 [24.0%] living in the lowest neighborhood income quintile), health care expenditures increased in the last 2 years of life during the study period (CAD$5.12 billion [US $3.83 billion] in 2005 vs CAD$7.84 billion [US $5.86 billion] in 2015). In the last 2 years of life, 758 770 decedents (78.5%) had at least 1 hospitalization episode of care, 266 987 (27.6%) had at least 1 intensive care unit admission, and 856 026 (88.6%) had at least 1 emergency department visit. Overall, deaths in hospital decreased from 37 984 (45.6%) in 2005 to 39 474 (41.5%) in 2015. Utilization in the last 2 years, 1 year, 180 days, and 30 days of life varied by resource utilization gradients. For example, the proportion of individuals visiting the emergency department was slightly higher among the top 5% of health care users compared with other utilization groups in the last 2 years of life (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year of life (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]); however, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). No meaningful associations were observed in high resource use between individuals in the highest income quintile compared with the lowest income quintile (rate ratio, 1.02; 95% CI, 0.99-1.05) after adjusting for relevant covariates.

Conclusions and Relevance  In this study, health care use and spending in the last 2 years of life in Ontario were high. These findings highlight a trend in hospital-centered care before death in a single-payer health system.



Introduction
Similar to those in other high-income countries, health care utilization and costs in Canada are expected to increase because of an expanding and aging population.1 A large proportion of these costs are incurred toward the end of life, with multiple studies demonstrating that health care utilization in the final months of life accounts for a substantial share of health care expenditures in comparison with other points in an individual’s life.2-5 In addition, most spending is concentrated in small groups of the population, who are characterized as high-cost users.6,7 Studies have shown that high-intensity medical care at the end of life can produce poor outcomes,8-10 can be associated with poor quality of life,11 and may conflict with patient preferences.9,12 To meet the growing needs of an aging population, a deeper understanding of the determinants and patterns of health care utilization and costs prior to death is required.

Most studies examining health care utilization prior to death have focused on a single aspect of care (eg, palliative services)13 or were specific to a particular cause of death.14-18 To our knowledge, few studies have examined health care use and costs at a population level and across an array of health sectors.5,19 Despite its potential to inform health care service delivery and improvement, evidence on health care utilization and cost patterns before death in a Canadian context is limited. A recent population-based study that examined health care expenditures in Ontario, Canada, from 2010 to 201319 reported that decedents who constituted less than 1% of the population consumed 10% of Ontario’s total health care budget, demonstrating that health care utilization occurs disproportionately. Using comprehensive multilinked mortality files, we analyzed population-level trends in health care utilization and expenditures prior to death in Ontario’s single-payer health system by looking at overall trends for more than a decade and by gradients of cost (ie, patients in the top 5%, top 6%-50%, and bottom 50% of health care costs).

Methods
Study Design
This retrospective cohort study used multiple linked vital statistics, population files, and health administrative data held at ICES to examine all deaths occurring in Ontario between January 2005 and December 2015. These data sets were linked using unique encoded identifiers and analyzed at ICES, an independent, nonprofit research institute whose legal status under Ontario’s health information privacy law allows it to collect and analyze health care and demographic data, without consent, for health system evaluation and improvement. This study received ethical approval from the University of Toronto’s Health Sciences research ethics board and the institutional review board at Sunnybrook Health Sciences Centre, Toronto, Canada. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

Study Population
Data for all deaths registered in the province of Ontario were obtained from the Office of the Registrar General-Deaths (ORG-D) file. The ORG-D is linked to the Registered Persons Database (RPDB), which contains basic demographic information for those who have ever received an Ontario health card number for the province’s universal health care system (overall linkage rate, 96.5%).20 The study cohort consisted of all deaths registered in the ORG-D between January 1, 2005, and December 31, 2015, that were linked to the RPDB record (N = 966 436). Those who had an invalid Ontario health card number on their death date (n = 4433), were not residents of Ontario (n = 252), or were younger than 18 years (n = 8768) were excluded.

Measures
We examined health care utilization prior to death according to several sociodemographic exposures. Sex and age data were obtained from the RPDB. Categories for age at time of death were 18 to 24 years, 25 to 34 years, 35 to 44 years, 45 to 54 years, 55 to 64 years, 65 to 74 years, 75 to 85 years, and older than 85 years. Ecological-level measures of income and education status were estimated using data from the 2006 Canadian census21 and were applied to individuals according to the dissemination area, which represents the smallest geographic census area in which the individual resided. Based on their postal code at the time of death, individuals were assigned to a dissemination area. Education was characterized as the proportion of individuals who completed high school in a given dissemination area. Individuals were grouped into income and education quintiles ranging from 1 (lowest 20% of income or education) to 5 (highest 20% of income or education).

As a general health service morbidity-resource measure, we used The Johns Hopkins Adjusted Clinical Group system version 10.0.1 Aggregated Diagnosis Group (ADG) scores, a person-focused, diagnosis-based method of categorizing individuals’ illnesses.22 Aggregated Diagnosis Groups have been validated for health services research use in Ontario23 and were calculated for the 2 years prior to death.

We measured health care utilization and services accessed for 2 years, 1 year, 180 days, and 30 days before death. Hospitalization episodes of care and intensive care unit (ICU) visits were obtained from the Discharge Abstract Database. An acute hospitalization episode was defined as either an admission to an acute care setting from which the patient was discharged or a continuous sequence of hospital stays in different hospitals to which the patient was transferred. Transfers between 2 different institutions were defined using both the timing between admissions and transfer flags on either record. Specifically, the following situations were defined as a transfer: (1) any admission within 6 hours of the previous discharge, (2) any admission within 12 hours of the previous discharge in which the type of institution transferred from or to was type 1 (ie, acute care), or (3) any admission within 48 hours of the previous discharge in which the number of the institution transferred from or to matched the number of the institution on the other record. Length of stay for episodes of care and ICU visits was calculated by subtracting the date of the earliest admission from the date of the latest discharge. Emergency department visits were obtained from the National Ambulatory Care Reporting System and were counted as 1 claim per patient per registered day. Physician services (primary care and specialist) were obtained from the Ontario Health Insurance Plan claims database. A physician visit was counted as 1 claim per patient per service day per physician. Physician specialties listed as family practice and general practice, community medicine, and pediatrics were considered primary care visits. All other physician specialties were considered specialist visits. Death in hospital was identified in the Discharge Abstract Database if the hospital discharge disposition code was recorded as died. In 423 580 of 427 859 in-hospital deaths (99.0%), the death date in the ORG-D and the hospital discharge date with a code indicating died in the Discharge Abstract Database were within 1 day of each other.
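As a sketch, the transfer rules can be expressed as a linkage function over one patient's chronologically sorted hospital records; the field names are hypothetical and the rules paraphrase, rather than reproduce, the authors' coding:

    # Sketch of episode-of-care linkage: chain consecutive admissions
    # into one episode when any transfer rule fires. Field names are
    # hypothetical; 'inst_type' 1 denotes acute care.
    from datetime import datetime, timedelta

    def is_transfer(prev, nxt):
        gap = nxt["admit"] - prev["discharge"]
        if gap <= timedelta(hours=6):
            return True
        if gap <= timedelta(hours=12) and 1 in (prev["inst_type"], nxt["inst_type"]):
            return True
        if gap <= timedelta(hours=48) and (nxt["inst_from"] == prev["inst"] or
                                           prev["inst_to"] == nxt["inst"]):
            return True
        return False

    def link_episodes(records):
        episodes, current = [], [records[0]]
        for prev, nxt in zip(records, records[1:]):
            if is_transfer(prev, nxt):
                current.append(nxt)       # same episode: a transfer
            else:
                episodes.append(current)  # gap too long: new episode
                current = [nxt]
        episodes.append(current)
        return episodes

    recs = [
        {"admit": datetime(2015, 3, 1, 8), "discharge": datetime(2015, 3, 4, 10),
         "inst": "A", "inst_type": 1, "inst_from": None, "inst_to": "B"},
        {"admit": datetime(2015, 3, 4, 13), "discharge": datetime(2015, 3, 9, 9),
         "inst": "B", "inst_type": 1, "inst_from": "A", "inst_to": None},
    ]
    print(len(link_episodes(recs)))  # 1: the two stays form one episode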

We calculated comprehensive per-person health care costs for the time preceding death (last 2 years of life). Annual health care utilization and costs were calculated from the health care payer perspective, using administrative data from across health care sectors, including inpatient hospital stays, emergency department visits, same-day surgery, stays in complex continuing care hospitals and inpatient rehabilitation, inpatient psychiatric admissions, physician payments for patient visits and community laboratory tests, and prescriptions filled for individuals eligible for the Ontario Drug Benefit Plan. A person-centered costing macro was used to calculate total annual health care spending; the costing methodology has been described elsewhere.24 Expenditures were calculated in Canadian dollars for the year 2015. Individuals were categorized in resource utilization groups (top 5%, top 6%-50%, and bottom 50%) based on total health care costs in the last year of their lives.

Statistical Analysis
The distribution of sociodemographic characteristics among decedents at the time of death was described using means and proportions according to health care utilization gradients. Overall and per-person health care utilization metrics were calculated for the 2 years, 1 year, 180 days, and 30 days before death using medians and proportions and are presented according to health care utilization gradients for the last year of life. We estimated temporal trends of total health care expenditures among adult deaths from 2005 to 2015 by health care utilization gradient.

Factors associated with being in the top 5% of health care users in the last year of life were assessed by a modified Poisson model.25 We chose to model risk directly using a modified Poisson regression because it provides a good approximation of the binomial distribution when the sample is large, and it is less likely than logistic regression to overestimate the relative risk.26 We used belonging to the top 5% as the outcome and sex, age, area-level income quintile, and ADG score as the covariates. Associations were calculated with rate ratios (RRs) with corresponding confidence intervals. Statistical significance was set at P < .05, and all tests were 2-tailed. All analyses were conducted using SAS Enterprise Guide statistical software, version 7.15 (SAS Institute).
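A minimal sketch of the modified Poisson approach, Poisson regression with a robust sandwich covariance to obtain rate ratios for a binary outcome; the simulated data and column names are placeholders, not the study's variables:

    # Modified Poisson sketch: GLM with Poisson family plus robust
    # (HC0) standard errors; exponentiated coefficients are rate
    # ratios. Simulated data, so the RRs here hover around 1.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "top5":   rng.integers(0, 2, 500),   # 1 = top 5% of costs (toy outcome)
        "female": rng.integers(0, 2, 500),
        "age":    rng.integers(18, 95, 500),
        "adg":    rng.integers(0, 20, 500),
    })

    X = sm.add_constant(df[["female", "age", "adg"]])
    fit = sm.GLM(df["top5"], X, family=sm.families.Poisson()).fit(cov_type="HC0")

    rr = np.exp(fit.params)        # rate ratios
    ci = np.exp(fit.conf_int())    # 95% CIs on the RR scale
    print(pd.concat([rr.rename("RR"), ci], axis=1))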

Results
Sociodemographic Characteristics by Health Care Utilization Gradient
Sociodemographic characteristics of 966 436 adult decedents (483 038 [50.0%] men; 231 634 [24.0%] living in the lowest neighborhood income quintile), stratified by health care utilization gradients (top 5%, top 6%-50%, and bottom 50%), are shown in Table 1. Those in the top 5% were younger, with a mean (SD) age of 71.1 (14.6) years compared with 76.4 (14.96) years for the total cohort. A larger percentage of those in the top 5% were male (26 818 [55.5%] vs 200 965 [46.2%] in the top 6%-50% and 255 255 [52.8%] in the bottom 50%), and they had a higher mean (SD) number of ADGs compared with the overall cohort (14.9 [3.6] vs 11.2 [4.4]). In contrast, the distributions of area-level income and education were similar across health care utilization gradients. The number of deaths captured in the cohort per year was similar across years, from 83 227 deaths in 2005 to 95 044 deaths in 2015 (eTable 1 in the Supplement). The major causes of death in the cohort were cancer (287 308 [29.7%]) and diseases of the circulatory system (279 881 [29.0%]) (eTable 2 in the Supplement).

Health Care Utilization in the Last 2 Years, 1 Year, 180 Days, and 30 Days of Life
Health care utilization prior to death for the overall cohort is described in Table 2. In the last 2 years of life, most individuals (758 770 [78.5%]) had at least 1 acute hospitalization episode of care, with a median (interquartile range [IQR]) length of stay of 8 (5-15) days. More than one-quarter (266 987 [27.6%]) were admitted to the ICU, with a median (IQR) length of stay of 69 (33-130) hours in acute care, and almost all (856 026 [88.6%]) had an emergency department visit. The median (IQR) numbers of visits to primary care and specialist physicians were similar, at 31 (17-53) visits and 34 (13-69) visits, respectively.

In the last 30 days of life, 143 225 decedents (14.8%) were admitted to the ICU, spending a median (IQR) of 59 (23-124) hours in acute care. In addition, most visited a primary care physician (856 679 [88.6%]; median [IQR] visits, 4 [1-9]) and a specialist (699 042 [72.3%]; median [IQR] visits, 4 [0-14]). In terms of proximity to death, 475 574 decedents (49.2%) had at least 1 hospitalization episode of care in the last 30 days of life, 662 628 (68.6%) in the last 180 days, 710 035 (73.5%) in the last year, and 758 770 (78.5%) in the last 2 years. Similarly, the proportion that visited the emergency department was 528 658 (54.7%) in the last 30 days of life, 750 558 (77.7%) in the last 180 days, 805 598 (83.4%) in the last year, and 856 026 (88.6%) in the last 2 years. The nominal difference in percentage demonstrates that a substantial portion of health care use occurred toward the end of life.

Health Care Utilization Metrics by Resource Utilization Gradients
Table 2 presents health care utilization metrics at the end of life according to resource utilization gradients. In the last 2 years of life, among those who experienced a hospitalization, individuals in the top 5% had a median (IQR) of 3 (2-6) episodes of care per person, compared with 1 (0-2) episode of care among individuals in the bottom 50%. In the same period, approximately two-thirds of those in the top 5% experienced an ICU admission (31 099 [64.4%]) with a median (IQR) length of stay of 143 (70-317) hours; in comparison, approximately one-fifth of individuals in the bottom 50% (100 959 [20.9%]) had an ICU admission, with a median (IQR) length of stay of 47 (22-89) hours. The proportion of individuals visiting the emergency department was slightly higher among the top 5% compared with other utilization groups in the last 2 years (top 5%, 45 535 [94.2%]; top 6%-50%, 401 022 [92.2%]; bottom 50%, 409 469 [84.7%]) and 1 year (top 5%, 43 007 [89.0%]; top 6%-50%, 381 732 [87.8%]; bottom 50%, 380 859 [78.8%]) of life. In contrast, in the last 30 days of life, more than half of individuals in the top 6% to top 50% (223 262 [51.3%]) and bottom 50% (288 480 [59.7%]) visited an emergency department, compared with approximately one-third of individuals in the top 5% (16 916 [35.0%]). In the last 2 years of life, the median (IQR) number of primary care visits was 57 (28-106) among the top 5% compared with 22 (11-37) among the bottom 50%. The median (IQR) number of specialist visits over this period was 163 (96-238) among the top 5% compared with 21 (8-40) among the bottom 50%.

Factors Associated With High Resource Utilization Prior to Death
In the Poisson model (Table 3), significant risk reductions for high resource utilization (ie, top 5%) in the last year of life were observed among women compared with men (RR, 0.90; 95% CI, 0.88-0.91) and among older age groups; the risk among decedents older than 85 years was 0.21 times that among those aged 18 to 24 years (RR, 0.21; 95% CI, 0.19-0.23) after adjusting for income and ADGs (Table 3). No meaningful associations were observed between individuals in the highest area income quintile compared with individuals in the lowest quintile (RR, 1.02; 95% CI, 0.99-1.05) after adjusting for sex, age, and ADGs. The association between high income (ie, quintile 5) and low income (ie, quintile 1) remained null in the sex-stratified models, in which the confidence intervals included the null value.

Hospital Deaths by Resource Utilization Gradients
Table 4 displays trends in the percentage of deaths that occurred in hospital by resource utilization gradients. Overall, deaths in hospital decreased from 37 984 (45.6%) in 2005 to 39 474 (41.5%) in 2015. Throughout the study period, a total of 29 292 of 48 324 deaths (60.4%) and 203 792 of 483 213 (42.2%) occurred in the hospital among those in the top 5% and the bottom 50%, respectively, without much variation during the study period. Among the top 6% to top 50% resource gradient, deaths in hospital decreased from 14 975 of 28 792 (52.0%) in 2005 to 18 569 of 46 859 (39.6%) in 2015.

Temporal Trends in Health Care Expenditures According to Resource Utilization Gradients
Total health care expenditures in the last 2 years of life increased in Ontario from CAD$5.12 billion (US $3.83 billion) in 2005 to CAD$7.84 billion (US $5.86 billion) in 2015, an increase of approximately 35%. Similarly, expenditures during this period increased from CAD$3.59 billion (US $2.69 billion) to CAD$5.34 billion (US $4.01 billion) in the last year of life, an increase of 33%. In the last 180 days of life, expenditures increased from CAD$2.53 billion (US $1.90 billion) to CAD$3.67 billion (US $2.75 billion), a 31% increase, and for the last 30 days of life, they increased from CAD$1.04 billion (US $0.78 billion) to CAD$1.43 billion (US $1.07 billion), a 27% increase (Figure, A). Mean per-person spending in the last 2 years of life increased among the top 5% from CAD$273 820 (95% CI, CAD$269 935 to CAD$277 760) (US $205 365; 95% CI, US $202 451 to US $208 320) in 2005 to CAD$295 183 (95% CI, CAD$291 811 to CAD$298 593) (US $221 387; 95% CI, US $218 858 to US $223 945) in 2015. In the same period, mean per-person spending in the bottom 50% decreased from CAD$33 489 (95% CI, CAD$33 210 to CAD$33 771) (US $25 117; 95% CI, US $24 908 to US $25 328) in 2005 to CAD$31 148 (95% CI, CAD$30 871 to CAD$31 427) (US $23 361; 95% CI, US $23 153 to US $23 570) in 2015 (Figure, B). In the last 2 years of life, mean (SD) per-person spending for acute hospital care increased from CAD$4839 (CAD$14 053) (US $3629 [US $10 540]) in 2005 to CAD$6572 (CAD$19 722) (US $4928 [US $14 792]) in 2015 (Figure, C).

Discussion
This study examined population-wide health care utilization and costs at the end of life in the universal health care system of Ontario, which accounts for approximately 40% of Canada’s population. Our unique focus on health care utilization gradients and trends of health care use at the end of life was enabled by a mortality database containing all deaths registered in Ontario over 11 years, linked with health administrative data. We demonstrated that overall health care expenditures in Ontario for the last 2 years of life increased by 35% from 2005 to 2015, with the largest proportional increase in average per-person spending observed in the top 5% and top 6% to top 50% of health care users. We demonstrated higher end-of-life utilization of health care services among those in the top 5% compared with the overall cohort for hospitalization episodes of care, ICU visits, emergency department visits, and physician visits. Exceptions to this pattern were identified for the last 30 days of life, in which utilization of certain services, such as emergency department visits, was higher among the top 6% to top 50% and bottom 50% of health care users than among the top 5%. However, the observed reduced utilization of these services could reflect individuals in the highest cost group already being admitted to a hospital in their last 30 days of life. Several studies have reported population-wide health care utilization prior to death,19,27,28 but they have not looked at differences among health care use gradients.

The study showed that in the last year of life, 74% of decedents had a hospitalization episode of care and 24% spent time in the ICU. Comparable patterns of end-of-life health care utilization have been reported in other high-income countries. For example, an Australian study looking at hospital-based services used by adults during the last year of life reported slightly higher rates of hospitalization (84%) and lower rates of ICU visits (12%).27 In the United States, ICU visit rates in the last month of life ranged from 24% to 29% among Medicare beneficiaries aged 66 years and older, compared with 21% in our cohort.29

We observed a negative linear association between older age and being in the top 5% of health care users in the last year of life, especially among men. A similar pattern of lower expenditures among older age groups in the last year of life was reported in the US Medicare population of adults aged 65 years and older.30 In our analysis, we did not see meaningful associations between area-level income quintiles and high health care utilization in the last year of life after adjusting for sex, age, and ADGs. Similar findings were observed in a retrospective cohort analysis of health care use among deaths in Ontario from 2010 to 2013, in which total costs did not vary by neighborhood income quintile.19 In contrast, among the US Medicare population, individuals in the lowest-income areas had slightly higher expenditures in the last year of life compared with those living in the highest-income areas.30 Furthermore, in a study of health care spending in the last year of life in the province of British Columbia, Canada, the highest 2 household income quintiles were shown to have approximately 4% less health care spending than the lowest income quintile.31 The differential income associations observed in these studies could be attributed to health system differences in access to health care services in the jurisdictions under study and to differences between the ecological-level and individual-level income measures used.

A larger percentage of deaths occurred in hospital in our cohort compared with Switzerland, where 34% of deaths were reported to occur in a hospital,28 and with the United States, where deaths in acute care hospitals ranged from 25% to 33% among decedents older than 66 years.29 Furthermore, we observed that the proportion of deaths in hospital among the top 5% and bottom 50% of health care users in the last year of life was stable over the study period. The observed high-intensity care near the end of life and the high percentage of deaths in hospital highlight a need for a societal-level discussion about approaches to end-of-life care in Ontario.

We observed that high health care utilization was associated with multimorbidity, as measured by ADGs, and that hospital-centered care was the typical trajectory at the end of life. This points to the need to design appropriate integrated care strategies that could support patients at the end of life to be discharged from the hospital and receive care and management for their conditions through home care or long-term care services.

Limitations
It is important to note some limitations to our study. First, our study used ecological-level indicators of socioeconomic status based on postal code information at the time of death, which may have provided lower estimates of income gradients in health care utilization.32 Second, our database only included services covered by the provincial health care payer and not services that may be covered by supplemental insurance or paid for out of pocket (ie, nursing, personal care, medications, and therapy). Third, comprehensive recommendations regarding end-of-life care are difficult to make in the absence of information on the appropriateness of care and the use of potentially avoidable health services, which were out of scope for this study. Nonetheless, the findings support understanding of end-of-life health care trends in a universal health care system.

Conclusions
This study reported on health care utilization in the 2 years before death with a focus on the characterization of high-cost users. It identified patterns of high utilization of health care services before death and a large proportion of deaths in hospital, with variation across health care utilization gradients. The findings suggest a trajectory of hospital-centered care prior to death in Ontario.



References and full text at the link above.

Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables

Harper, Craig A., Liam Satchell, Dean Fido, and Robert Latzman. 2020. “Functional Fear Predicts Public Health Compliance in the COVID-19 Pandemic.” PsyArXiv. April 1. doi:10.31234/osf.io/jkfu3

Abstract: In the current context of the global pandemic of coronavirus disease-2019 (COVID-19), health professionals are working with social scientists to inform government policy on how to slow the spread of the virus. An increasing amount of social scientific research has looked at the role of public message framing, for instance, but few studies have thus far examined the role of individual differences in emotional and personality-based variables in predicting virus-mitigating behaviors. In this study we recruited a large international community sample (N = 324) to complete measures of self-perceived risk of contracting COVID-19, fear of the virus, moral foundations, political orientation, and behavior change in response to the pandemic. Consistently, the only predictor of positive behavior change (e.g., social distancing, improved hand hygiene) was fear of COVID-19, with no effect of politically-relevant variables. We discuss these data in relation to the potentially functional nature of fear in global health crises.


Gay men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women

Speech Acoustic Features: A Comparison of Gay Men, Heterosexual Men, and Heterosexual Women. Alexandre Suire, Arnaud Tognetti, Valérie Durand, Michel Raymond & Melissa Barkat-Defradas. Archives of Sexual Behavior, March 31 2020. https://rd.springer.com/article/10.1007/s10508-020-01665-3

Abstract: Potential differences between homosexual and heterosexual men have been studied on a diverse set of social and biological traits. Regarding acoustic features of speech, researchers have hypothesized a feminization of such characteristics in homosexual men, but previous investigations have so far produced mixed results. Moreover, most studies have been conducted with English-speaking populations, which calls for further cross-linguistic examinations. Lastly, no studies have so far investigated the potential role of testosterone in the association between sexual orientation and speech acoustic features. To fill these gaps, we explored potential differences in acoustic features of speech between homosexual and heterosexual native French men and investigated whether the former showed a trend toward feminization by comparing their acoustic features to those of heterosexual native French women. Lastly, we examined whether testosterone levels mediated the association between speech acoustic features and sexual orientation. We studied four sexually dimorphic acoustic features relevant for the qualification of feminine versus masculine voices: the fundamental frequency, its modulation, and two understudied acoustic features of speech, the harmonics-to-noise ratio (a proxy of vocal breathiness) and the jitter (a proxy of vocal roughness). Results showed that homosexual men displayed significantly higher pitch modulation patterns and less breathy voices compared to heterosexual men, with values shifted toward those of heterosexual women. Lastly, testosterone levels did not influence any of the investigated acoustic features. Combined with the literature conducted in other languages, our findings bring new support for the feminization hypothesis and suggest that the feminization of some acoustic features could be shared across languages.


Discussion

This study offers an interesting take on the interaction between sexual orientation and acoustic features of speech in a French speaker sample. First, our analysis of different acoustic features revealed well-known patterns of sexual dimorphism in human voices (i.e., F0, F0-SD, jitter, and HNR). Second, our findings showed that French homosexual men exhibited a slight but significant vocal feminization when considering speech acoustic features altogether (up to 10.65%), which supports the feminization hypothesis. (It is important to note, however, that no overlap was observed between heterosexual and homosexual men vs. heterosexual women.) Lastly, testosterone levels did not mediate the association between vocal patterns and sexual orientation.
Consistent with previous findings in English-speaking populations, no significant differences were observed in mean F0 between French-speaking heterosexual and homosexual men (Gaudio, 1994; Lerman & Damsté, 1969; Munson et al., 2006b; Rendall et al., 2008; Rogers et al., 2001; Smyth et al., 2003). The results did show a difference between homosexual and heterosexual men in intonation, the former displaying higher pitch variations than the latter. The relationship between pitch variations and sexual orientation was previously found in one Dutch (Baeck et al., 2011) and one American-English population (Gaudio, 1994), suggesting that feminized pitch variations might be characteristic of male homosexual speech across languages (but see Levon, 2006). In our study, the average difference in pitch variations reached ~ 4.11 Hz, which is well above the just noticeable difference for pitch (Pisanski & Rendall, 2011). Hence, our findings suggest that pitch variations could be one of the acoustic correlates of sexual orientation used by listeners when they correctly assess sexual orientation through speech only (Gaudio, 1994; Linville, 1998; Smyth et al., 2003; Valentova & Havlíček, 2013). Further investigations are nevertheless needed to confirm whether such a difference in pitch variations between homosexual and heterosexual men is enough to be used as a cue for assessing sexual orientation.
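For readers unfamiliar with these measures, a minimal sketch of extracting mean F0 and F0-SD from a recording with librosa's pYIN pitch tracker; the file name and pitch range are hypothetical, and HNR and jitter are typically computed in Praat (e.g., via the parselmouth package), not shown here:

    # Sketch: mean F0 and F0-SD (pitch modulation) from one recording.
    # 'speaker.wav' and the 65-400 Hz search range are placeholders.
    import numpy as np
    import librosa

    y, sr = librosa.load("speaker.wav", sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)

    f0_voiced = f0[voiced]                      # keep voiced frames only
    print(f"mean F0 = {np.nanmean(f0_voiced):.1f} Hz")
    print(f"F0-SD   = {np.nanstd(f0_voiced):.1f} Hz")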
To our knowledge, this is the first study to report an association between men’s vocal breathiness and sexual orientation. Interestingly, vocal breathiness has been suggested to be an important component of vocal femininity in female voices (Van Borsel et al., 2009), and significant relationships to vocal attractiveness have been reported in both sexes (Xu et al., 2013). Although the difference in vocal breathiness between homosexual and heterosexual men is rather small (the mean difference reached ~ 0.80 dB), further research should test whether it is perceptible enough for listeners to assess male sexual orientation, and whether homosexual men’s voices, which are richer in harmonics than those of heterosexual men, are perceived as more attractive by homosexual men.
In our study, T-levels did not influence any of the acoustic parameters investigated. The method used to measure T-levels and the sample size were similar to those of previous studies that found a significant negative link between T-levels and F0 (e.g., Dabbs & Mallinger, 1999; Evans et al., 2008). However, testosterone is a multiple-effect hormone under the influence of numerous biological and environmental factors and pathways. As such, it is generally difficult to correlate T-levels with other biological or behavioral traits, especially from a single measurement, as performed here. Nevertheless, our results suggest that underlying processes other than basal T-levels may be involved in the vocal differences between homosexual and heterosexual men.
Although our study does not aim to explain why vocal differences were found between homosexual and heterosexual men, several biological and social mechanisms can be invoked. For instance, exposure to prenatal testosterone has been suggested to be responsible for differences between homosexual and heterosexual men on a large range of physiological and behavioral traits, including speech characteristics (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). Several studies have thus tested whether the 2D:4D ratio (the relative length of the second and fourth digits), a proxy of prenatal testosterone exposure, differs between homosexual and heterosexual men (Balthazart, 2017; Ehrhardt & Meyer-Bahlburg, 1981). However, there is currently no consensus on this question, as studies have yielded mixed results (Breedlove, 2017; Grimbos, Dawood, Burriss, Zucker, & Puts, 2010; Rahman & Wilson, 2003; Robinson, 2000; Skorska & Bogaert, 2017; Williams et al., 2000).

Regarding social mechanisms, imitation of women’s speech peculiarities by homosexual men could also explain the observed differences between homosexual and heterosexual men’s speech (at least for F0-SD and HNR). The use of more feminine acoustic characteristics by homosexual men could reflect a selective adoption of opposite-sex speech patterns or a selective use of acoustic features for signaling in-group identity (Pierrehumbert et al., 2004); such cues may in turn underlie “gaydar” (i.e., the detection of homosexuality based on a set of specific cues). Interestingly, a recent study suggests that the acquisition of a distinctive speech style may happen before puberty: boys aged 5 to 13 with gender identity disorder (a diagnosis made when a child shows distress or discomfort due to a mismatch between his/her gender identity and his/her biological sex) display speech features (higher F0 and F2, as well as misarticulation of /s/) that distinguish them from boys without it (Munson, Crocker, Pierrehumbert, Owen-Anderson, & Zucker, 2015). Because some homosexual men display a greater degree of gender-nonconforming behavior (GNC) than others during childhood (Bailey & Zucker, 1995), one could hypothesize that the former would be more likely to have more feminine speech in adulthood than the latter. Further work should investigate the relative importance of the mechanisms underlying homosexual men’s speech.
To conclude, although our study did not aim to test specific hypotheses within a formal theoretical framework, it provides new descriptive findings on the differences between homosexual and heterosexual men’s speech. By examining for the first time native French speakers and some understudied acoustic features (namely jitter and HNR), our results indicate that some vocal traits differ between heterosexual and homosexual men (i.e., pitch variation and vocal breathiness), with values shifted toward heterosexual women’s vocal characteristics. Combined with findings from other languages, our results bring new support for the feminization hypothesis (at least for some acoustic features) and suggest that the feminization of some acoustic features could be shared across languages. Further studies are needed to test whether intonation and vocal breathiness are perceptually salient enough to distinguish homosexual from heterosexual men, and whether the overall differences are attributable to biological and/or sociolinguistic mechanisms.

With no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift to reaching for smaller amounts more frequently; seems a built-in tactic in humans & possibly in other animals

Mere presence of co-eater automatically shifts foraging tactics toward ‘Fast and Easy’ food in humans. Yukiko Ogura, Taku Masamoto and Tatsuya Kameda. Royal Society Open Science, Volume 7, Issue 4, April 1, 2020. https://doi.org/10.1098/rsos.200044

Abstract: Competition for food resources is widespread in nature. The foraging behaviour of social animals should thus be adapted to potential food competition. We conjectured that in the presence of co-foragers, animals would shift their tactics to forage more frequently for smaller foods. Because smaller foods are more abundant in nature and allow faster consumption, such tactics should allow animals to consume food more securely against scrounging. We experimentally tested whether such a shift would be triggered automatically in human eating behaviour, even when there was no rivalry over food consumption. To prevent rivalry, subjects were instructed to engage in a ‘taste test’ in a laboratory, alone or in pairs. Even though the other subject was merely present and there was no real competition for food, subjects in pairs immediately exhibited a systematic behavioural shift toward reaching for smaller food amounts more frequently, which was clearly distinct from their reaching patterns both when eating alone and when simply weighing the same food without eating any. These patterns suggest that behavioural shifts in the presence of others may be built-in tactics in humans (and possibly in other gregarious animals) to adapt to potential food competition in social foraging.

4. Discussion

We created a laboratory foraging situation in which subjects were asked to eat potato chips for a ‘taste test’. The mere presence of a co-eater in the Visible Pair condition increased the reach frequency for food and decreased the weight of food per reach, compared to the Solo condition (figures 2a and b). This result supports our hypothesis that a behavioural shift toward foraging for smaller food more frequently is triggered automatically in human subjects, even when there is no actual competition over food consumption.
We argued that the behavioural tactics in social foraging consist of two components: increasing reach frequency and preferring smaller food amounts. Whereas the increase in reach frequency was observed across both Pair conditions, the shift toward smaller food amounts emerged only in the Visible Pair condition. Although the latter shift might be seen as a by-product of random picking caused by distraction from the visible co-eater, the reach pattern was distinct from the simulated random sampling (figure 3b). It was also distinguishable from the counting pattern in the weighing experiment (figure 3c). We thus think that, along with increased reach frequency, choosing smaller food amounts is a systematic (yet weaker) component of foraging tactics in human group settings.
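The logic of the random-picking benchmark can be illustrated with a small Monte Carlo sketch. Everything below is hypothetical (a gamma-distributed bowl of chip weights, an inverse-weight picking bias); it only shows the shape of the null model the authors compare against, not the study's actual simulation or data.

```python
# Sketch of a "random picking" null model: if the shift toward smaller
# amounts were mere distraction, reaches should look like uniform random
# draws from the bowl. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
bowl = rng.gamma(shape=4.0, scale=0.5, size=200)  # hypothetical chip weights (g)

def random_forager(bowl, n_reaches, rng):
    """Null model: pick chips uniformly at random, without replacement."""
    return rng.choice(bowl, size=n_reaches, replace=False)

def biased_forager(bowl, n_reaches, rng):
    """Systematic tactic: favor lighter chips (probability ~ 1/weight)."""
    p = 1.0 / bowl
    p /= p.sum()
    return rng.choice(bowl, size=n_reaches, replace=False, p=p)

n_reaches = 30
print(f"random picking:     {random_forager(bowl, n_reaches, rng).mean():.2f} g per reach")
print(f"smaller-first bias: {biased_forager(bowl, n_reaches, rng).mean():.2f} g per reach")
```

Under the null model, mean weight per reach tracks the bowl's average; a systematic preference for smaller pieces pushes it below that baseline, which is the qualitative signature the authors report relative to their simulated benchmark.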
The overall amount of individuals' food intake was not increased by the presence of a co-eater, which may appear inconsistent with results from previous human psychological studies of social facilitation in eating behaviour [10,26]. However, in these studies, food was freely given to the subjects and eating time was not controlled; in a subsequent study of real-world eating behaviour by humans, the increase in food consumption in the presence of others reflected an increase in meal duration [27]. While these psychological studies were silent about the cost–benefit trade-offs in foraging tactics, the present study examined human eating behaviour from a behavioural ecological perspective, arguing that humans may favour sure gain at the cost of time, effort and amount per intake to adjust to potential competition in social foraging.
Our results showed that the behavioural shift was triggered by the mere presence of a co-eater, even without actual competition. This suggests that the underlying mechanism for the shift may be a built-in system that activates automatically in response to relevant social cues. Considering that gregariousness is not human-specific but widespread in animals, neural implementation of an automatic competitive mode may also be rooted in ancient neural circuits. In domestic chicks, for example, a brain region considered to be homologous to the limbic area in mammals contributes to an automatic increase in the reaching frequency for feeders [28,29]. On the other hand, many brain mapping studies in humans have attempted to identify brain regions related to social competition using behavioural games [30–32]. However, the competitive contexts they have used are very different from the foraging situation in our study. Future research addressing the neural implementation of an automatic competitive mode in social foraging will be important not only for behavioural ecology but also to better understand the biological bases of problematic eating behaviour in humans.
In summary, humans shift their foraging tactics when a co-eater is present. Such a behavioural shift is likely to be a built-in response to possible food competition with conspecifics and may be common across many gregarious animals.