Thursday, October 21, 2021

Frequent I-talk is also positively related to the neurotic variety of narcissism (vulnerable narcissism); it likewise has a positive association with sociodemographic characteristics such as (lower) status, (younger) age, and (female) gender

The story of “I” tracking: Psychological implications of self-referential language use. Amunet K. Berry-Blunt, Nicholas S. Holtzman, M. Brent Donnellan, Matthias R. Mehl. Social and Personality Psychology Compass, October 19 2021. https://doi.org/10.1111/spc3.12647

Abstract: We review extant research on the psychological implications of the use of first-person singular pronouns (i.e., “I-talk”). A common intuition is that I-talk is associated with an overly positive, highly agentic, and inflated view of the self—including arrogance, self-centeredness, and grandiose narcissism. Initial (small-sample) research provided evidence that frequent I-talk was associated with grandiose narcissism. More recent (large-sample) research, however, has found that the correlation is near zero. Frequent I-talk is, however, positively correlated with depressive symptoms, in particular, and negative emotionality (i.e., neuroticism), more broadly. Frequent I-talk is also positively related to the neurotic variety of narcissism called vulnerable narcissism. In addition, frequent I-talk has a positive association with sociodemographic characteristics such as (lower) status, (younger) age, and (female) gender; I-talk has a conditional association with truth-telling and authenticity—a correlation that appears to hinge on context. This review summarizes the literature on I-talk, provides some speculations about the emergent psychological meanings of I-talk, and provides a guide for future research.
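The operational measure behind these findings is simple word counting: I-talk is the share of first-person singular pronouns among all the words a person produces. As a rough sketch only (the studies reviewed typically rely on LIWC’s first-person-singular category; the Python below and its token list are a hypothetical simplification, not the authors’ instrument):

```python
import re

# First-person singular tokens counted as "I-talk" (a simplified stand-in
# for LIWC's first-person-singular category).
I_WORDS = {"i", "me", "my", "mine", "myself", "i'm", "i've", "i'd", "i'll"}

def i_talk_rate(text: str) -> float:
    """Return first-person singular pronouns as a percentage of all words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in I_WORDS)
    return 100.0 * hits / len(tokens)

print(i_talk_rate("I think I'm doing fine, but my sleep has been poor."))
# -> ~27.3 (3 of 11 words are first-person singular)
```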


Greater survivability of cardiovascular events allows lifestyle choices to catch up with people

Gordon B. Dahl, Claus Thustrup Kreiner, Torben Nielsen & Benjamin Ly Serena. Understanding the Rise in Life Expectancy Inequality. IZA Discussion Paper No. 14741, Oct 8 2021. SSRN: https://ssrn.com/abstract=3934758

Abstract: We provide a novel decomposition of changing gaps in life expectancy between rich and poor into differential changes in age-specific mortality rates and differences in "survivability". Declining age-specific mortality rates increase life expectancy, but the gain is small if the likelihood of living to this age is small (ex ante survivability) or if the expected remaining lifetime is short (ex post survivability). Lower survivability of the poor explains between one-third and one-half of the recent rise in life expectancy inequality in the US and the entire change in Denmark. Our analysis shows that the recent widening of mortality rates between rich and poor due to lifestyle-related diseases does not explain much of the rise in life expectancy inequality. Rather, the dramatic 50% reduction in cardiovascular deaths, which benefited both rich and poor, made initial differences in lifestyle-related mortality more consequential via survivability.

Keywords: mortality, life expectancy, inequality

JEL Classification: I14, J10
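To see the survivability logic concretely, here is a back-of-the-envelope sketch in Python. The Gompertz-style hazards and the rich/poor gap are invented for illustration (they are not the paper’s data); the point is only that the same proportional cut in old-age mortality yields a larger life-expectancy gain for the group more likely to survive to the ages where the cut bites:

```python
import numpy as np

def life_expectancy(m):
    """Period life expectancy at birth from age-specific mortality hazards
    m[a] (a = 0, 1, 2, ...): the expected number of full years lived,
    i.e. the sum over ages of the probability of surviving each year."""
    return np.cumprod(1.0 - m).sum()

ages = np.arange(100)

# Hypothetical hazards: the "poor" face higher lifestyle-related mortality
# at every age (numbers invented for illustration).
m_rich = np.clip(0.0005 * np.exp(0.085 * ages), 0.0, 1.0)
m_poor = np.clip(1.8 * m_rich, 0.0, 1.0)

def cut_old_age(m, factor=0.5, from_age=60):
    """Apply the same proportional cut to old-age (e.g. cardiovascular)
    mortality for any group."""
    m2 = m.copy()
    m2[from_age:] *= factor
    return m2

for label, m in [("rich", m_rich), ("poor", m_poor)]:
    gain = life_expectancy(cut_old_age(m)) - life_expectancy(m)
    print(f"{label}: gain from halving mortality after age 60 = {gain:.2f} years")

# The rich gain more years from the identical improvement, because more of
# them reach age 60 (ex ante survivability) and they live longer once there
# (ex post survivability) -- so a shared cardiovascular decline widens
# the life expectancy gap.
```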


The devil is in the detail: reflections on the value and application of cognitive interviewing to strengthen quantitative surveys in global health

The devil is in the detail: reflections on the value and application of cognitive interviewing to strengthen quantitative surveys in global health. K Scott, O Ummer, A E LeFevre. Health Policy and Planning, Volume 36, Issue 6, July 2021, Pages 982–995, https://doi.org/10.1093/heapol/czab048

Abstract: Cognitive interviewing is a qualitative research method for improving the validity of quantitative surveys, which has been underused by academic researchers and monitoring and evaluation teams in global health. Draft survey questions are administered to participants drawn from the same population as the respondent group for the survey itself. The interviewer facilitates a detailed discussion with the participant to assess how the participant interpreted each question and how they formulated their response. Draft survey questions are revised and undergo additional rounds of cognitive interviewing until they achieve high comprehension and cognitive match between the research team’s intent and the target population’s interpretation. This methodology is particularly important in global health when surveys involve translation or are developed by researchers who differ from the population being surveyed in terms of socio-demographic characteristics, worldview, or other aspects of identity. Without cognitive interviewing, surveys risk measurement error by including questions that respondents find incomprehensible, that respondents are unable to accurately answer, or that respondents interpret in unintended ways. This methodological musing seeks to encourage a wider uptake of cognitive interviewing in global public health research, provide practical guidance on its application, and prompt discussion on its value and practice. To this end, we define cognitive interviewing, discuss how cognitive interviewing compares to other forms of survey tool development and validation, and present practical steps for its application. These steps cover defining the scope of cognitive interviews, selecting and training researchers to conduct cognitive interviews, sampling participants, collecting data, debriefing, analysing the emerging findings, and ultimately generating revised, validated survey questions. We close by presenting recommendations to ensure quality in cognitive interviewing.

Keywords: Cognitive interviewing, survey research, validity, methodological innovation, qualitative research

Introduction

This methodological musing calls attention to cognitive interviewing, a qualitative research methodology for improving the validity of quantitative surveys that has often been overlooked in global public health. Cognitive interviewing is ‘the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends’ (Beatty and Willis, 2007). This methodology helps researchers see survey questions from the participants’ perspectives rather than their own by exploring how people process information, interpret the words used and access the memories or knowledge required to formulate responses (Drennan, 2003).

Cognitive interviewing methodology emerged in the 1980s out of cognitive psychology and survey research design, gaining prominence in the early 2000s (Beatty and Willis, 2007). Cognitive interviewing is widely employed by government agencies in the preparation of public health surveys in many high-income countries [e.g. the Collaborating Center for Questionnaire Design and Evaluation Research in the Centers for Disease Control and Prevention (CDC)/National Center for Health Statistics (2014) and the Agency for Healthcare Research and Quality in the Department of Health and Human Services (2019) in the USA, and the Care Quality Commission (2019) for the National Health Service Patient Surveys in the UK]. Applications in the global public health space are emerging, including validating measurement tools undergoing primary development in English and for use in English [e.g. to measure family response to childhood chronic illness (Knafl et al., 2007)]; supporting translation of scales between languages [e.g. to validate the London Measure of Unplanned Pregnancy for use in the Chichewa language in Malawi (Hall et al., 2013)]; and assessing consumers’ understanding and interpretation of, and preferences for, displayed information [e.g. healthcare report cards in rural Tajikistan (Bauhoff et al., 2017)]. However, this methodology remains on the periphery of survey tool development by university-based academic researchers and monitoring and evaluation teams working in global health; most surveys are developed, translated and adapted without cognitive interviews, and publications of survey findings rarely report that cognitive interviews took place as part of tool development.

Box 1.
The need for cognitive interviewing: examples from developing a tool to measure respectful maternity care among rural women in central India

Context: respectful maternity care in rural central India

We used cognitive interviewing to examine survey questions for rural central India, adapted from validated instruments to measure respectful maternity care used in Ethiopia, Kenya and elsewhere in India. This process illuminated extensive cognitive mismatch between the intent of the original questions and how women interpreted them, which would have compromised the validity of the survey’s findings (Scott et al., 2019). Two examples are provided here.

Cognitive interviews revealed that hypothetical questions were interpreted in unexpected ways

A question asked women whether they would return to the same facility for a hypothetical future delivery. The researchers intended the question to assess satisfaction with services. Some women replied no, and, upon probing, explained that their treatment at the facility was fine but that they had no intention of having another child. Other women said yes, despite experiencing some problematic treatment, and probing revealed that they said this because they were too poor to afford to go anywhere else.

Cognitive interviews revealed that Likert scales were inappropriate

The concept of graduated agreement or disagreement with a statement was unfamiliar and illogical to respondents. Women did not understand how to engage with the Likert scales we tested (5-, 6- and 10-point scales, using numbers, words, colours, stars, and smiley faces). Most respondents avoided engaging with the Likert scales, instead responding in terms of a dichotomous yes/no, agree/disagree, happened/did not happen, etc., despite interviewers’ attempts to invite respondents to convert their reply to a Likert response. For example, when asked to respond on a 6-point Likert scale to the statement ‘medical procedures were explained to me before they were conducted’, a respondent only repeated ‘they didn’t explain’. Other respondents, when shown a smiley face Likert scale, focused on identifying a face that matched how they felt rather than one that depicted their response to the statement in question. For example, when asked to respond to the statement ‘the doctors and nurses did everything they could to help me manage my pain’, a respondent pointed to a sad face, explaining that although the doctors and nurses helped her, since she was in pain her face was ‘like this’ (i.e. sad). Without cognitive interviews, survey enumerators would unknowingly record responses unrelated to the question at hand or would attempt to fit respondents’ dichotomous answers into Likert scales using whatever interpretation the enumerator saw fit.

Cognitive interviewing recognizes that problems with even one detail of a survey question can compromise the validity of the data gathered, whether it is an improper word, confusing phrasing, unfamiliar concept, inappropriate response option, or other issue. Without cognitive interviews, gaps between question intent and respondent interpretation can persist, severely compromising the quality of data generated from surveys (Box 1). Furthermore, cognitive mismatch is often impossible to detect after data collection. Instead, responses recorded in the survey are taken as ‘true’, regardless of whether the respondents understood and answered the question in the intended manner and regardless of the assistance, adjustment, or interpretation provided by enumerators.

In this article, we argue that cognitive interviewing should be an essential step in the development of quantitative survey tools used in global public health and call attention to the detailed steps of applying this method in the field. We start by reviewing what cognitive interviewing is and consider the varied definitions and use cases in survey tool development. We next outline the recommended steps in survey tool development and then provide an overview of how to go about cognitive interviewing. We close by reflecting on the broader implications of cognitive interviewing.

While people themselves were the most accurate about the majority of their abilities, their verbal and spatial intelligence were only estimable by informants or strangers, respectively

Hofer, Gabriela, Laura Langmann, Roman Burkart, and Aljoscha Neubauer. 2021. “Who Knows What We Are Good At? Unique Insights of the Self, Knowledgeable Informants, and Strangers into a Person’s Abilities.” PsyArXiv. October 21. doi:10.31234/osf.io/u73xf

Abstract: Who is the best judge of a person’s abilities—the person, a knowledgeable informant or strangers just met in a 3-min speed date? To test this, we collected ability measures as well as self-, informant- and stranger-estimates of verbal, numerical and spatial intelligence, creativity, and intra- and interpersonal emotional competence from 175 young adults. While people themselves were the most accurate about the majority of their abilities, their verbal and spatial intelligence were only estimable by informants or strangers, respectively. These differences in accuracy were not accompanied by differences in the domains’ relevance to people’s self-worth or observability to strangers. These results indicate self-other knowledge asymmetries for abilities but raise questions about the reasons behind these asymmetries.
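A judge’s accuracy in this literature is typically operationalized as the correlation, across targets, between the judge’s estimates and the measured ability. A minimal sketch of that comparison with simulated data (the noise levels below are invented and do not reflect the study’s results):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 175  # sample size borrowed from the study; the data are simulated

ability = rng.normal(size=n)  # measured test score for each target

# Each judge type sees the truth plus judge-specific noise (an assumption
# made purely for illustration).
estimates = {
    "self":      ability + rng.normal(scale=1.0, size=n),
    "informant": ability + rng.normal(scale=1.3, size=n),
    "stranger":  ability + rng.normal(scale=2.0, size=n),
}

for judge, est in estimates.items():
    r = np.corrcoef(ability, est)[0, 1]  # accuracy = estimate/test correlation
    print(f"{judge:9s} accuracy r = {r:.2f}")
```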


Rolf Degen summarizing... We blindly impute higher moral qualities to good-looking people, even more so than qualities of a non-moral kind

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions. Christoph Klebl, Joshua J. Rhee, Katharine H. Greenaway, Yin Luo & Brock Bastian. Journal of Nonverbal Behavior, Oct 20 2021. https://link.springer.com/article/10.1007/s10919-021-00388-w

Abstract: Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.


The Mid-Life Dip in Well-Being: We find remarkably strong and consistent evidence across countries of statistically significant and non-trivial U-shapes in age with and without socio-economic controls

The Mid-Life Dip in Well-Being: a Critique. David G. Blanchflower & Carol L. Graham. Social Indicators Research, Oct 19 2021. https://link.springer.com/article/10.1007/s11205-021-02773-w

Abstract: A number of studies—including our own—find a mid-life dip in well-being. Yet several papers in the psychology literature claim that the evidence of a U-shape is “overblown” and that, if there is such a thing, any such decline is “trivial”. Others have claimed that the evidence of a U-shape “is not as robust and generalizable as is often assumed,” or simply “wrong.” We identify 409 studies, mostly published in peer-reviewed journals, that find U-shapes and that these researchers apparently were unaware of. We use data for Europe from the Eurobarometer Surveys (EB), 1980–2019; the Gallup World Poll (GWP), 2005–2019; the UK’s Annual Population Survey, 2016–2019; and the Census Bureau’s Household Pulse Survey of August 2021 to examine U-shapes in age in well-being. We find remarkably strong and consistent evidence across countries of statistically significant and non-trivial U-shapes in age, with and without socio-economic controls. We show that studies cited by psychologists claiming there are no U-shapes are in error; we reexamine their data and find differently. The effects of the mid-life dip we find are comparable to major life events such as losing a spouse or becoming unemployed. This decline is comparable to half of the unprecedented fall in well-being observed in the UK in 2020 and 2021, during the COVID-19 pandemic and lockdown, which is hardly “inconsequential” as claimed.
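The standard specification behind these tests regresses well-being on age and its square; a U-shape means a positive quadratic coefficient, with the midlife low at -b1 / (2 * b2). A minimal sketch with simulated data (the coefficients below are invented and place the low near age 48, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

age = rng.uniform(18, 80, size=n)
# Simulated life satisfaction with a built-in minimum at age 48
# (coefficients invented for illustration).
wellbeing = 7.0 - 0.12 * age + 0.00125 * age**2 + rng.normal(scale=1.0, size=n)

# Fit wellbeing = b0 + b1*age + b2*age^2; np.polyfit returns the
# coefficients from the highest degree down.
b2, b1, b0 = np.polyfit(age, wellbeing, 2)
print(f"quadratic term b2 = {b2:.5f} (positive -> U-shape)")
print(f"estimated midlife low at age {-b1 / (2 * b2):.1f}")
```

In the papers under discussion, the same quadratic (often with socio-economic controls added) is estimated on large survey datasets such as the Eurobarometer and the Gallup World Poll.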


Discussion

An early psychology literature argued that there was no relationship between well-being and age. This appears to have been based on studies with tiny sample sizes. Even where there was evidence of a U-shape, it was denied in the literature. We reworked a few of these studies using the same data and showed there were U-shapes, and their scale was large, comparable to the loss of a spouse or a job. Some studies have failed to find U-shapes, but generally they have been based on small sample sizes.

In addition to our findings of U-shapes using life satisfaction data from the Eurobarometer, we also looked at Cantril’s ladder of life satisfaction in the Gallup World Poll data and found U-shapes, with and without controls, for an additional 64 non-European countries. We found similar U-shapes for the UK from the Annual Population Surveys.

Two more recent papers (Galambos et al., 2020, 2021) suggested there was little evidence of U-shapes based on a literature review of 28 papers. We showed that the authors had misclassified many of these papers’ findings. Indeed, after misclassifications were accounted for and ineligible studies dropped, B&G found that there were zero papers that found no evidence of U-shapes. Of the 28 papers, 21 found U-shapes and three had mixed evidence, while four had to be excluded as they did not meet the criteria set by GKJL1; of note, GKJL2 did not dispute any of these re-classifications.

We have also identified an astonishing 387 additional papers, which the authors had overlooked, that did find U-shapes, making 403 in total. Indeed, we count a total of 373 published in a vast array of peer-reviewed journals in English, including 73 in this journal alone, that find U-shapes, which was the main criterion the authors set for examination. When we pointed this out in an earlier paper (Blanchflower & Graham, 2021a), the authors claimed that they did not set out to do an exhaustive review because they “wanted to show support for the view that not all researchers find the U shapes”. Hence, their analysis is advocacy, not science. There is a U-shape in well-being in midlife.

On the basis of this evidence, it is clearly inappropriate to dismiss the literature on the U-curve as “overblown” or the scale of the effects as trifling, inconsequential, or even “trivial”. We have shown that the effects of the mid-life dip are comparable to major life events like losing a spouse or a job. We show that the drop from the teenage years to the midlife low is about half the size of the unprecedented drop in life satisfaction that occurred during the COVID-19 pandemic.

Beyond being empirically interesting, there are implications for substantial parts of the world’s population. These dips in well-being are associated with higher levels of depression, including chronic depression, difficulty sleeping, and even suicide. In the U.S., deaths of despair are most likely to occur in the middle-aged years, and the patterns are robustly associated with unhappiness and stress. Across countries, chronic depression and suicide rates peak in midlife. The mid-life dip in well-being is robust to within-person analysis, also appears in the prescribing of antidepressants, and even extends beyond humans. The evidence comes from both longitudinal and cross-sectional data, which complement one another, as noted in a recent report by The Lancet’s COVID-19 Commission Mental Health Task Force. It remains puzzling, then, why some psychologists continue to suggest that well-being is unrelated to age.

Based on the significant evidence we present, the decline in mid-life well-being seems real and consequential and has robust linkages to other serious markers of ill-being. The mid-life dip is real, it applies to most of the world’s population, excepting countries in which it is very difficult to age—such as those with very high levels of absolute poverty and conflict and low levels of life expectancy. It links to behaviors and outcomes that merit the attention of scholars and policymakers alike. These include rising rates of despair and reported pain among the middle-aged in many rich countries and associated premature mortality due to despair-related deaths, and some similar if less well documented patterns in developing economies. Among other things, more public awareness of how common this mid-life dip is might help those navigating its worst manifestations to make it through to a happier and longer life.

The overwhelming evidence from four hundred and nine papers, and counting, as well as the evidence presented here, support the conclusion that there is a midlife low in well-being. This is among the most striking, persistent and consistent patterns in social science.