Saturday, March 5, 2022

From 2021... Relationship Patterns Between Mountainousness and Basic Human Values: Altitude and mountainousness are related to increased conservation values and decreased hedonism

From 2021... A Tale of Peaks and Valleys: Sinusoid Relationship Patterns Between Mountainousness and Basic Human Values. Stefan Stieger et al. Social Psychological and Personality Science, Aug 16, 2021. https://doi.org/10.1177/19485506211034966

Abstract: Mountains—mythic and majestic—have fueled widespread speculation about their effects on character. Emerging empirical evidence has begun to show that physical topography is indeed associated with personality traits, especially heightened openness. Here, we extend this work to the domain of personal values, linking novel large-scale individual values data (n = 32,666) to objective indicators of altitude and mountainousness derived from satellite radar data. Partial correlations and conditional random forest machine-learning algorithms demonstrate that altitude and mountainousness are related to increased conservation values and decreased hedonism. Effect sizes are generally small (|r| < .031) but comparable to other socio-ecological predictors, such as population density and latitude. The findings align with the dual-pressure model of ecological stress, suggesting that it might be most adaptive in the mountains to have an open personality to effectively deal with threats and endorse conservative values that promote a social order that minimizes threats.

Keywords: personal values, mountainousness, geographical psychology, socioecology, conditional random forests


Check also Physical topography is associated with human personality. Friedrich M. Götz, Stefan Stieger, Samuel D. Gosling, Jeff Potter & Peter J. Rentfrow. Nature Human Behaviour (2020). September 7 2020. https://www.bipartisanalliance.com/2020/09/mountainous-areas-were-lower-on.html

The present research employed advanced analysis techniques to investigate whether mountainousness is meaningfully associated with personal values. Correlation curve analysis indicated that individuals living in hilly and mountainous areas were likely to emphasize conservation values, specifically security and tradition. Individuals living at high altitudes showed a similar pattern but also cared less about hedonism. These results were stable across various robustness checks. Conditional random forest machine-learning algorithms confirmed both mountainousness indices as relevant predictors of personal values when tested against a conservative set of demographic (age, gender, and income) and socio-ecological (population density, latitude) controls.
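
To picture the machine-learning step, a small sketch may help. The original analyses used conditional random forests (commonly fitted with R's party/partykit packages); the Python stand-in below instead uses an ordinary random forest with permutation importance, so it is only an analogue of the authors' approach, and the data file and column names are hypothetical.

```python
# Rough analogue of testing mountainousness indices against demographic and
# socio-ecological controls. NOT the authors' code: they used conditional
# random forests (e.g., R's party::cforest); here the idea is approximated with
# a standard random forest plus permutation importance. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("values_topography.csv")              # hypothetical dataset
predictors = ["altitude", "mountainousness_20mi",       # topography indices
              "age", "gender", "income",                # demographic controls (numerically coded)
              "population_density", "latitude"]         # socio-ecological controls
X, y = df[predictors], df["conservation_values"]        # one value dimension at a time

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# How much does shuffling each predictor degrade out-of-sample prediction?
imp = permutation_importance(forest, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(predictors, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>22s}: {score:.4f}")
```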

How should we interpret the associations between mountainousness and personal values? The negative relationship with hedonism appears straightforward. Mountainous areas tend to be secluded and inhospitable, making them ill-suited for the pursuit of worldly pleasures and sensuous gratification. Meanwhile, the robust association between mountainousness and conservation values may initially seem surprising and even counterintuitive. According to voluntary settlement theory (Kitayama et al., 2006, 2010), during the European settlement of the United States, frontier environments like the Rocky Mountains attracted primarily self-reliant, freedom-seeking nonconformists. The accumulation of individuals with such traits laid the foundation for an ethos of independence that continues to characterize the inhabitants of these areas today (Plaut et al., 2002; Varnum & Kitayama, 2011). Indeed, the mountain states still exhibit the strongest individualist tendencies in the United States (Vandello & Cohen, 1999). Moreover, recent research examining the personality structure of mountain dwellers in the United States found that mountainousness was most strongly related to heightened openness to experience (Götz, Stieger, et al., 2020). With openness being negatively related to conservation values (Fischer & Boer, 2014; Parks-Leduc et al., 2015; Roccas et al., 2002), these findings appear to be at odds with the current results.

However, from an analytical standpoint, even the strongest correlations between traits and values—which are typically found between agreeableness and benevolence (rsp = .61, Parks-Leduc et al., 2015; r = .45, Roccas et al., 2002; and r = .54, Vecchione et al., 2019) and openness and self-direction (rsp = .52, Parks-Leduc et al., 2015; r = .48, Roccas et al., 2002; and r = .39, Vecchione et al., 2019)—leave sufficient unexplained variance to manifest in differential relations with third variables, such as mountainousness. More importantly, from a conceptual standpoint, while personality traits and personal values are similar, they are not the same. Values are evaluative, mutually exclusive (i.e., following a diametrical organization, wherein endorsement of certain values implies rejection of others), enduring goals that reflect what a person finds important as a member of society. Meanwhile, traits are descriptive, nonmutually exclusive (i.e., following an orthogonal organization, wherein stronger expression of certain traits does not affect others), enduring dispositions that reflect what a person is like as an individual (Bilsky & Schwartz, 1994; Roccas et al., 2002; Vecchione et al., 2019).

The current findings dovetail well with the dual-pressure model of ecological stress (Conway et al., 2017). According to this model, the same ecological stressor, such as the harshness of mountain terrains, might simultaneously produce opposing pressures that push people in two different directions. In the current context, mastering the tough ecological conditions of mountainous areas might require individuals with independent agency and preparedness to confront unknown challenges and thus favor an open personality (Götz, Stieger et al., 2020). Meanwhile, thriving in ecologically challenging environments, such as mountainous terrains, might require social groups that are committed to safety, self-discipline, stability, and protection of the status quo—hallmarks of conservation philosophy. This conclusion aligns with research showing that experiences of environmental threats and uncertainty (1) prompt individuals to be skeptical of strangers and more territorial about their group domains (Sng et al., 2018), (2) lead to increased endorsement of socially and politically conservative positions (Malka et al., 2014; Oishi et al., 2017), and (3) are conducive to the creation of vertical governmental restriction—laws that impose hierarchies and protect specific groups (Conway et al., 2017, 2020). Thus, having an open personality (i.e., autonomy and the readiness to confront novel challenges when faced with threats) and conservative values (i.e., supporting a social order governed by norms of security, self-discipline and respect for customs to minimize threats) might be most adaptive for thriving in the mountains.

It should, of course, be noted that the observed effects are small. Compared to the average correlation between age and values (M |r| = .098), the average correlation between mountainousness (20 miles) and values was about a 10th (M |r| = .009). However, personal values are determined by many factors (Sagiv et al., 2017), and any single factor is likely to have only a small effect (Götz et al., 2021). This argument is especially true in uncontrolled, real-world settings as in the present study, where—compared to classical lab experiments—effect sizes are typically diminished due to heightened error variance (Maner, 2016; Oishi & Graham, 2010). Moreover, their small magnitude does not render the observed effects unimportant. Rather, even small effects can make a big difference when considered over time and at scale (Funder & Ozer, 2019; Matz et al., 2017). The former seems likely as personal values influence human attitudes and behaviors daily (Sagiv et al., 2017). The latter is especially probable for socio-ecological influences, such as mountains that—while distal and thus less influential than personal factors (e.g., demographics)—simultaneously affect large groups of people who share the same environmental milieu (Conway et al., 2020; Lu et al., 2018; Oishi, 2014). Taken together, the immediate impact of mountainousness on personal values may be small. But when considered over a lifetime and at population scale, small effects translate into highly consequential outcomes such as election results (Caprara et al., 2006), cultural capital, and economic growth (Bardi et al., 2008).

Limitations and Future Research

The current research has several limitations. First, due to the correlational nature of our data, no causal inferences can be drawn. Longitudinal studies at the individual and community levels are needed to illuminate the psychological underpinnings of the associations between mountainousness and personal values (i.e., acculturation effects, selective migration or a combination thereof; Götz et al., in press; Rentfrow et al., 2008; Stieger & Lewetz, 2016). Second, while our data offered one of the largest, and perhaps the largest, personal values samples in the United States, it is not nationally representative. Although the ethnic composition and geographic coverage were broadly representative of the general population, which is common in large-scale online samples (Gosling et al., 2004; Götz, Bleidorn, et al., 2020; Jokela et al., 2015; Kosinski et al., 2015), the participants in our study were younger, predominantly female, and less affluent than the national average (U.S. Census Bureau, 2020). Third, our assessment of personal values was limited to a 20-item short scale. While the TwIVI displayed respectable psychometric properties in the current study and previous research (Sandy et al., 2017; Vignoles et al., 2018), its brevity comes at the cost of reduced measurement precision and content breadth (Credé et al., 2012). Thus, future research should extend the current work by using longer scales, which might include the extended 19-value version (Schwartz et al., 2012) that could offer even more nuanced insights. Such work may also systematically assess nonlinear trends in mountainousness–value associations (Lee et al., 2021). Furthermore, future research might try to dynamically adjust the 20-mile radius as a proxy for the mean commuting distance to the actual commuting distance in each ZIP-code area. Such an adjustment might reduce error variance and isolate the effect of interest more effectively. Lastly, future research should investigate the associations between personal values and other challenging ecologies, including coastlines, swamplands, and deserts (Götz, Stieger, et al., 2020; Oishi et al., 2015).

Variance of log yield across farms in the United States: The 95th percentile of corn yield is 190 percent larger than the 5th percentile yield

Suri, Tavneet, and Christopher Udry. 2022. "Agricultural Technology in Africa." Journal of Economic Perspectives, 36 (1): 33-56. DOI: 10.1257/jep.36.1.33

Abstract: We discuss recent trends in agricultural productivity in Africa and highlight how technological progress in agriculture has stagnated on the continent. We briefly review the literature that tries to explain this stagnation through the lens of particular constraints to technology adoption. Ultimately, none of these constraints alone can explain these trends. New research highlights pervasive heterogeneity in the gross and net returns to agricultural technologies across Africa. We argue that this heterogeneity makes the adoption process more challenging, limits the scope of many innovations, and contributes to the stagnation in technology use. We conclude with directions for policy and what we feel are still important, unanswered research questions.

---
Excerpts:

Farmers who were provided with plot-specific recommendations for appropriate fertilizer use (along with vouchers for reduced cost access to inputs) were more likely to apply the recommended fertilizer, and increased yields by over 150 percent relative to the control group.

Claassen and Just (2011) study the variance of log yield across farms in the United States: they find that the 95th percentile of corn yield is 190 percent larger than the 5th percentile yield.
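
A quick back-of-the-envelope calculation shows what a gap of that size implies for the dispersion of log yield, assuming (purely for illustration; the excerpt does not state this) that log yield is approximately normally distributed across farms:

```python
# Back out the implied standard deviation of log corn yield from the reported
# 95th-vs-5th percentile gap, under an illustrative normality assumption.
import math
from scipy.stats import norm

ratio = 1 + 1.90                                 # "190 percent larger" -> p95 / p5 = 2.9
z_spread = norm.ppf(0.95) - norm.ppf(0.05)       # ~3.29 standard-normal units
sigma_log_yield = math.log(ratio) / z_spread
print(f"implied SD of log yield ~ {sigma_log_yield:.2f}")   # ~0.32
```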

Love is not blind: What romantic partners know about our abilities compared to ourselves, our close friends, and our acquaintances

Love is not blind: What romantic partners know about our abilities compared to ourselves, our close friends, and our acquaintances. Gabriela Hofer, Silvia Macher, Aljoscha Neubauer. Journal of Research in Personality, March 4 2022, 104211. https://doi.org/10.1016/j.jrp.2022.104211

Abstract: How much do our partners, close friends, and acquaintances know about our abilities, as compared to ourselves? This registered report aimed to investigate asymmetries in these perspectives’ knowledge of a person’s verbal, numerical, and spatial intelligence, creativity, and intra- and interpersonal emotional abilities. We collected self-estimates and performance measures of these abilities from 238 targets. Each target’s abilities were also rated by their romantic partner, a close friend, and an acquaintance. Results showed knowledge-asymmetries but also similarities between perspectives. People themselves were at least moderately accurate across all six domains. However, partners achieved similar accuracy and both partners and friends could provide unique insights into some abilities. We discuss these results with regard to Vazire’s self-other knowledge asymmetry model.

Introduction

“How am I doing?” This question is one that many of us likely ask ourselves on a regular basis. Whether it concerns academic performance or everyday skills like driving ability, knowing how well we are doing is essential and sometimes our impression of our abilities shapes important life decisions (e.g., Ackerman & Wolman, 2007). It is, therefore, of little surprise that a lot of research has investigated the accuracy of self-estimates of abilities, reaching the conclusion that they are less accurate than one would imagine (Freund and Kasten, 2012, Zell and Krizan, 2014). Indeed, our self-estimates seem to be distorted by overestimation (e.g., Visser et al., 2008). Moreover, other people can also provide valuable information about our abilities and skills and their estimates might be similarly accurate or sometimes even slightly more accurate than our own (Denissen et al., 2011, Steinmayr and Spinath, 2009). To date, however, hardly any research has directly compared the accuracy of self- and other-estimates of abilities (for an exception see Neubauer et al., 2018, who investigated accuracies of self- and peer-estimates in adolescents). The first main goal of this article was to provide such a comparison and to do so for an adult population and a wide range of abilities. When people make important decisions like vocational choices, they may ask several others for feedback. Close friends and romantic partners are probably common sources people turn to. Our second main goal was, therefore, to investigate whether romantic partners and close friends have special insights or biases when it comes to assessing our abilities by comparing the accuracy of their judgments with those of acquaintances.

A considerable amount of research has focused on the accuracy of self- and other-perceptions of personality traits. Both types of perceptions can predict important outcomes like academic success or job performance and other-perceptions can provide incremental validity over self-perceptions (Connelly & Ones, 2010). However, neither perspective is without its biases. As an example, Anusic, Schimmack, Pinkus, and Lockwood (2009) found evidence for an evaluative bias factor in self- and other-ratings of the Big Five personality traits. In their truth and bias model of person perception, West and Kenny (2011) proposed that a perceiver’s rating of a target on a given trait does not only reflect the target’s true score (and measurement error) but is also affected by certain bias variables. John and Robins (1993) showed that self-other and other-other agreement for the Big Five are determined by a trait’s observability (i.e., its visibility to observers) and evaluativeness (i.e., its social desirability or undesirability). Both self-other and other-other agreement were highest for highly observable traits of low evaluativeness. High evaluativeness seemed particularly detrimental for self-other agreement. Earlier work by Paunonen (1989) had shown that not observability per se but the interaction between observability and acquaintance is related to self-other agreement: Low observability is only related to lower self-other agreement when the level of acquaintance between target and rater is low. More recently, Connelly and Ones (2010) confirmed this meta-analytically and showed that the interpersonal intimacy between perceiver and target might be even more important than acquaintance per se. They found that the most accurate ratings in terms of self-other correlations come from spouses and dating partners. A recent extension of the truth and bias model (Leising et al., 2015) found that ratings of a target were influenced by perceiver’s attitudes (liking) but only when items were high in evaluativeness. Finally, current work found that how much the perceiver likes the target and how well he/she knows the target have opposing effects on accuracy: Whereas higher knowing was associated with higher accuracy and lower positivity bias, higher liking was related to lower accuracy and higher positivity bias (Wessels et al., 2018). Overall, past research seems to agree that both characteristics of the trait to be judged and of the relationship between target and perceiver affect the accuracy of ratings. However, hardly any of these models have focused on mechanisms behind potential differences in accuracy between self- and other-estimates.

Simine Vazire’s (2010) self-other knowledge asymmetry (SOKA) model offers a framework for systematic comparisons of the accuracy of self- and other-estimates. The model builds on the Johari window (Luft & Ingham, 1955) and assumes that a person’s traits fall into one of four different quadrants, depending on how much the person themselves and others know about the respective characteristic: Traits in the ‘open area’ are judged accurately by both the self and others. If only others are accurate about a trait, it is in the ‘blind spot’, whereas traits only validly judged by oneself are in the ‘hidden area’. Lastly, traits that neither perspective can judge accurately are in the ‘unknown area’. Drawing on the research summarized in the past section, the model proposes that the position of a trait in the Johari window should be determined by two factors: observability and evaluativeness. Vazire argued that self-estimates of highly evaluative traits are often distorted, since these traits are relevant to the person’s self-esteem (see also John & Robins, 1993). At the same time, others can only make accurate estimates about observable traits. Taken together, others might have more accurate views of our observable and evaluative traits than we ourselves do. Vazire (2010) allocated traits to the positions within the SOKA model/Johari window based on differences in correlation coefficients between self- and peer-estimates and relevant behavior (for extraversion and neuroticism) or objective performance (for intellect). In this initial study, she found extraversion (high observability, low evaluativeness) to be in the open area, intellect (low observability, high evaluativeness) mostly in the blind spot, and neuroticism (low observability, low evaluativeness) in the hidden area.

Similar to some of the models discussed in section 1.1, Vazire (2010) also considered a third aspect that might influence a trait’s position within the SOKA model: the level of acquaintance. She discussed that, while well-acquainted others might have advantages compared to less acquainted others when it comes to judging low observability traits (see also Connelly and Ones, 2010, Paunonen, 1989), they might also share some of the self’s self-protective biases, leading to less accurate judgments. Unexpectedly, she found friends to be more accurate than strangers when judging the highly evaluative trait intellect. Thus, she proposed that distortions of other-estimates due to high evaluativeness might only occur in particularly emotionally invested known others like romantic partners. The emotional investment in friendships might have been too low for the negative effects of evaluativeness on accuracy to emerge. This would be in line with the negative association between liking and accuracy found by Wessels and colleagues (2018). John and Robins (1993) proposed that judgments by emotionally invested others might involve psychological processes similar to self-perception. On a similar note, it has been suggested that “in a close relationship, the person acts as if some or all aspects of the partner are partially the person's own” (Aron et al., 1991, p. 242). This is also in line with the self-evaluation maintenance model (Tesser, 1988), according to which the performance of a close other might affect one’s own self-esteem and do so negatively, if the domain in question is relevant for one’s self-definition. Vazire (2010) proposed that a direct comparison between ratings by romantic partners and similarly well-acquainted friends could provide valuable insight into this question. Surprisingly, such a study does not seem to exist to date. In general, relatively little research on the SOKA model has been conducted. At the time of writing, it has mainly been investigated for personality traits (e.g., Beer & Vazire, 2017) but pertinent research also exists for personality disorders (Carlson et al., 2013), and moral behaviors (Thielmann et al., 2017).

To this point, hardly any studies have investigated the SOKA model for different aspects of intelligence or other abilities, even though this line of research might provide valuable insights. When making important life decisions, people may rely on feedback about their abilities from different sources (e.g., self, parents, friends, partners or teachers; Neubauer et al., 2018). Thus, it seems essential to investigate which of these sources can provide accurate estimates for a given domain.

First evidence on self-other knowledge asymmetries for abilities comes from Vazire (2010), whose findings on intellect are based on creativity (originality in a divergent thinking task) and overall intelligence. Both abilities were measured with objective ability tests. Results showed that creativity is in the blind spot, with only friends but not the self providing accurate estimates. Findings for intelligence were similar but less clear-cut, since self-estimates showed at least some accuracy in this domain. Strangers were unable to make accurate estimates for either ability.

Only recently, Neubauer and colleagues (2018) have analyzed the position of a more diverse set of abilities within the SOKA model based on self-ratings and ratings of randomly assigned classmates in 14- and 18-year-old pupils (i.e., ages when important educational decisions have to be made). The following abilities were assessed: verbal, numerical, and spatial intelligence (as measured by a standardized intelligence test), creativity (originality in a divergent thinking task), and intra- and interpersonal emotional management abilities (as measured by a situational judgment test). In both age groups, numerical intelligence and creativity were open, verbal intelligence was in the blind spot, and intra- and interpersonal emotional abilities were hidden. Spatial intelligence was unknown in the younger group and hidden in the older one. Thus, there seems to be variation in the location of abilities within the SOKA model, even though most of those examined could be considered to belong to the concept of intellect investigated by Vazire (2010) and might, therefore, be expected to be located in the blind spot. Self-reported closeness to the rated peer did not moderate any of the effects, a finding that the authors mainly attribute to the random assignment of peer-raters.

The relevance of having an accurate view of one’s own abilities and those of one’s peers (e.g., in order to give them feedback) might be particularly high during adolescence, given that important (educational/vocational) decisions have to be made around this time (Neubauer et al., 2018). Nevertheless, accurate self- and other-assessments are probably also relevant later in life and maintaining self-insight over the course of life may prove increasingly difficult, since adults usually receive less regular feedback on their abilities than pupils in school do. The accuracy of self- and other-estimates of abilities can also be important in clinical contexts: Accurate perceptions of a person’s memory decline – which might, for example, be due to a cognitive disorder – could be essential to provide them with appropriate and timely care (Buelow et al., 2014). Even though self-reported memory complaints show a small (negative) correlation with objective cognitive function in the general aging population (Burmester et al., 2016), this association seems to disappear in individuals with mild cognitive impairment (Buelow et al., 2014, Fyock and Hampstead, 2015) or Alzheimer’s disease (Buelow et al., 2014). It has also been shown that informant-reports can outperform self-reports in terms of accuracy for individuals with mild cognitive impairment (Buelow et al., 2014, Fyock and Hampstead, 2015).

Providing a systematic comparison of the accuracy of self- and other-estimates of abilities in adults was one of the main goals of the present work. In view of the lack of literature that directly compares these perspectives, we summarized available work that focused on the accuracy of either self-estimates or other-estimates in the upcoming sections. In line with recent suggestions regarding the interpretation of effect sizes in individual difference research (Gignac & Szodorai, 2016), we classified correlations starting from .1 to indicate low accuracy and correlations starting from .2 to indicate medium or moderate accuracy. However, we used the conventional—and, thus, stricter—threshold (r ≥ .5) for high accuracy (Cohen, 1992; for a display of the practical importance of such a correlation see Table 2).

A considerable amount of research has focused on the accuracy of self-estimates of abilities, resulting in several meta-analyses (e.g., Freund and Kasten, 2012, Mabe and West, 1982, Ross, 1998) and even one metasynthesis (i.e., a combination of several meta-analyses; Zell & Krizan, 2014). According to this metasynthesis, overall accuracy of self-estimates is moderate (rmean = .29) with considerable variability of effects depending on the ability domain in question (rs ranging from .09 for interpersonal sensitivity to .63 for second language competence). Freund and Kasten (2012) focused their meta-analysis on verbal, numerical, spatial, and overall intelligence and also found moderate accuracy (rmean = .33). Additionally, they found greater accuracy of self-estimates of numerical intelligence as compared to overall intelligence, with no comparable differences in accuracy between overall and verbal or spatial intelligence.

Past results on the accuracy of self- and other- estimates of the domains that we investigated in the present study, that is verbal, numerical, and spatial intelligence, creativity, and intra- and interpersonal emotional management abilities, are summarized in Table 1. As can be seen, these results again point towards an accuracy advantage for self-estimates of numerical intelligence compared to those of other intelligence facets: In the majority of cases, very low to medium accuracy was reported for self-estimates of verbal and spatial intelligence, while medium to high accuracy was found for numerical intelligence (Furnham et al., 2001, Neubauer et al., 2018, Proyer and Ruch, 2009, Rammstedt and Rammsayer, 2002, Steinmayr and Spinath, 2009, Visser et al., 2008). Correlations between self-estimates of creativity and creative performance were found to range from slightly negative to .44, depending on the way creativity was assessed (Furnham et al., 2005, Neubauer et al., 2018, Pretz and McCollum, 2014, Vazire, 2010). For both inter- and intrapersonal emotional management abilities, correlations between self-estimates and performance were moderate to high (Freudenthaler and Neubauer, 2005, Neubauer et al., 2018). In addition to the results presented in Table 1, it seems noteworthy that Elfenbein, Barsade, and Eisenkraft (2015) reported low (r = .13) to medium (r = .3) accuracy of self-reported overall emotional management abilities in two studies, even though they did not differentiate between intra- and interpersonal aspects.

The predominant focus on correlation coefficients in this line of research has repeatedly been criticized (e.g., Dunning & Helzer, 2014) and some research has instead focused on the direction of misestimation. There is a large amount of indirect evidence for humans’ tendency to overestimate themselves. As an example, people were repeatedly shown to believe that they perform better than the average person (e.g., Dunning et al., 1989, Horrey et al., 2015, Kruger and Dunning, 1999), a phenomenon known as the above-average or better-than-average effect (Alicke & Govorun, 2005). A recent study found that 65 percent of Americans believe they are more intelligent than the average person, something that is logically impossible (Heck et al., 2018). Visser and colleagues (2008) showed that students judge their intelligence on all of Gardner’s eight intelligence domains to be above that of the average student at their university. Still, hardly any research has investigated people’s apparent tendency to overestimate themselves more directly by comparing self-estimated and objectively measured intellectual abilities (Gignac & Zajenkowski, 2019). In a rare exception, Reilly and Mulhern (1995) found that men, on average, overestimate their IQ by about 8 IQ points, while women’s self-estimates did not differ significantly from their measured IQ. In a recent study, Gignac and Zajenkowski (2019) reported that both men and women overestimate their IQ by on average 30 IQ points, which represents a large effect. Clearly, more research on this topic is needed before a definite conclusion can be made. Given the differences in accuracy correlations for different ability domains, investigating over-/underestimation in several domains seems particularly interesting.

Past work focusing on other-estimates of intelligence yielded moderate to high accuracy correlations but also overestimation by close others. Several correlational studies showed that others are already able to make reasonably accurate intelligence judgements after watching short standardized videos of a person (rs between .22 and .53; Borkenau et al., 2004, Borkenau and Liebler, 1993, Reynolds and Gifford, 2001). Denissen and colleagues (2011) investigated how intelligence-estimates by fellow students develop over the course of a semester and found accuracy correlations of .25 after one week, .27 after one month, and .22 after another 4 months of acquaintance. Borkenau and Liebler (1993) investigated intelligence estimates by a person’s cohabitant (in most cases their romantic partner) and reported a correlation of .29 with objectively measured intelligence. Recently, Gignac and Zajenkowski (2019) found that women’s estimates of their male romantic partner’s intelligence correlated at .30 with the partner’s actual intelligence, whereas men’s estimates only correlated at .19 with their female partner’s intelligence. Moreover, the authors found that both genders did not only overestimate their own but also their partner’s intelligence by around 30 IQ points, which again constitutes a large effect.

As shown in Table 1, only little research seems to have investigated accuracy of other-estimates for different ability domains. Steinmayr and Spinath (2009) reported that parents judged their adolescent sons’ and daughters’ verbal, numerical, and spatial intelligence with medium accuracy. Sommer, Fink, and Neubauer (2008) found that both teachers and parents estimated elementary school pupils’ intelligence with an accuracy of around .5, creativity with an accuracy of between .2 and .3, and social competence (consisting of inter- and intrapersonal parts) with an accuracy of only .1. Neubauer and colleagues (2018) found a similar pattern of results for peer-estimates in their older age group: Numerical and verbal intelligence as well as creativity were estimated with medium accuracy, whereas estimates of intra- and interpersonal emotional management abilities were of low or low to medium accuracy. Low accuracy was also reported for peer-estimates of spatial intelligence. More support for the comparatively low accuracy of other-ratings of emotional management abilities (again consisting of intra- and interpersonal aspects) comes from Elfenbein and colleagues (2015), with estimate*performance correlations between -.04 (student classmates) and .04 (work colleagues). Vazire (2010) found quite low accuracy of stranger-ratings of creativity and slightly higher accuracy for friend-ratings.

In the present study, we investigated the position of six abilities within the SOKA model in an adult sample. We aimed to:

(1) investigate the accuracy of self- and other-estimates of abilities, with the latter stemming from the target’s romantic partner, their best or a very close friend, and an acquaintance. Thus, we collected data from two sources who knew the target considerably well but differed with regard to the expected closeness/intimacy of their relationship to the target (friends and partners; in line with the proposition by Vazire, 2010) and added a source that we expected to know the target less well and be less close to him/her (acquaintances).

(2) determine for which domains the four perspectives (self, partner, friend, and acquaintance) differ in their accuracy.

(3) investigate the unique insights of each perspective and the overall amount of variance all four perspectives can jointly explain.

(4) determine the direction of misestimation by targets, friends, partners, and acquaintances.

We included verbal, numerical, and spatial intelligence, creativity, and inter- and intrapersonal emotional management abilities due to their relevance for important life outcomes. Verbal, numerical, and spatial intelligence form part of most modern models of intelligence (see Hunt, 2010) and several meta-analyses have determined that intelligence is an important predictor of professional and socioeconomic success (Hülsheger et al., 2007, Schmidt and Hunter, 2004, Schmidt and Hunter, 1998, Strenze, 2007). Creativity is seen as essential for solving key problems and has been connected with many essential aspects of life (Hennessey and Amabile, 2010, Plucker et al., 2004). A recent meta-analysis found that creativity is associated with academic achievement, although with only a small to medium effect (Gajda et al., 2017). Emotional management comprises the highest branch in one of the most influential models of emotional intelligence (Mayer & Salovey, 1997) and refers to the “ability to manage emotions and emotional relationships for personal and interpersonal growth” (Mayer et al., 2001, p. 235). Both intra- and interpersonal emotional management abilities are associated with life satisfaction and (lower) depressive tendencies (Freudenthaler, Neubauer, & Haller, 2008). Emotional intelligence as a broader ability exhibits small but meta-analytically stable associations with job performance (Joseph et al., 2015) and was found to predict academic and social success over and above personality and psychometric intelligence (van der Zee et al., 2002).

Perhaps one of the most important methodological considerations when conducting research on accuracy in person perception relates to the choice of accuracy criteria. When it comes to perceptions of a person’s abilities, the target’s performance in objective ability tests constitutes an obvious accuracy criterion. Intelligence tests, for example, have long been accepted as objective measures of cognitive abilities and their scores are widely used as accuracy criteria (Freund & Kasten, 2012). Thus, we used subscales of a well-established, standardized intelligence test battery to measure verbal, numerical, and spatial intelligence. For conceptually broader abilities like creativity and emotional competence, the choice of adequate accuracy criteria becomes less obvious. One widely accepted measure of creativity, or, to be more precise, creative potential, is originality in divergent thinking tasks, which shows good reliability and validity when scored adequately (Benedek et al., 2013, Diedrich et al., 2018) and has served as accuracy criterion in past research (Neubauer et al., 2018, Pretz and McCollum, 2014, Vazire, 2010). Therefore, we used originality in the widely applied alternative uses task (AUT; Guilford, 1967) as accuracy criterion for creativity. Emotional management abilities are typically measured by confronting individuals with hypothetical situations and asking them how they would change or maintain their emotions (Mayer et al., 2004). Hence, tests of emotional management abilities like the respective subscale of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer et al., 2003), the Situational Test of Emotional Management (STEM; MacCann & Roberts, 2008) or the Typical-performance Emotional Management Test (TEMT; Freudenthaler & Neubauer, 2005) belong to the family of situational judgement tests (SJTs). Maximum performance tests of emotional management like the MSCEIT or STEM, which ask the individual to judge the most effective actions in each situation, have been criticized for measuring a person’s knowledge about how to behave in emotional situations instead of their actual regulative behavior (Freudenthaler & Neubauer, 2005; see also Brackett et al., 2006). Therefore, we used a typical performance situational judgment test comprised of subscales for intra- and interpersonal emotional management.

In line with past studies (Beer and Vazire, 2017, Neubauer et al., 2018, Vazire, 2010), we considered positive correlations between estimates and performance starting from .2 to indicate relevant levels of accuracy. A correlation of this size represents a typical effect in the individual differences literature (Gignac & Szodorai, 2016) and seems like a reasonable threshold, given average effects found in research looking at the accuracy of self- and other-estimates of abilities (e.g., Denissen et al., 2011, Zell and Krizan, 2014).

We considered estimate*performance correlation coefficients that differed by at least .15 to indicate relevant differences in accuracy between two perspectives. Vazire (2010) proposed that differences in accuracy correlations of more than .15 can be considered as substantial, given that this number is close to one standard deviation in effect size distributions in personality and social psychology (see Richard et al., 2003). It is also slightly higher than one standard deviation in the distribution of self-estimate*performance correlations for different abilities (Zell & Krizan, 2014). To illustrate the practical importance of a difference of .15, we show binomial effect size displays (BESDs) for correlations of various sizes in Table 2. BESDs are an intuitive method to evaluate the size of correlations (Rosenthal & Rubin, 1982; see Funder & Ozer, 2019 for a recent discussion) but have also sparked some controversy (see Hall et al., 2008). Nevertheless, if we think of both measured and estimated abilities as dichotomous constructs, BESDs can provide rough estimates of the proportion of individuals that are correctly characterized as high- or low-performers based on their own or someone else’s judgment (for a similar application of BESDs see Naumann et al., 2009). Table 2 shows four BESDs for estimate*performance correlations of .05, .20, .35, and .50 in a sample of 200 and, therefore, illustrates the impact of correlational differences of .15 for correlations of various strength. To provide an example, if the correlation between partner-estimated and measured verbal intelligence is .20, this indicates 60% correct predictions (e.g., both measured and estimated verbal intelligence is high). If the correlation for friend-estimates is .35, this relates to 67.5% correct predictions. In this example, friends clearly have higher success at providing accurate feedback than romantic partners.
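
The arithmetic behind these BESD percentages is easy to verify. Under the standard Rosenthal and Rubin (1982) setup with 50/50 marginals, the share of correct high/low classifications is 50% + 100*r/2; the short sketch below reproduces the figures quoted above (this is just the textbook BESD formula, not code from the article).

```python
# Binomial effect size display (BESD; Rosenthal & Rubin, 1982): with 50/50
# marginals, the "high" group's success rate is 0.5 + r/2 and the "low"
# group's is 0.5 - r/2, so the overall share of correct high/low calls is
# 50 + 100*r/2 percent.
def besd_percent_correct(r: float) -> float:
    return 50 + 100 * r / 2

for r in (0.05, 0.20, 0.35, 0.50):
    print(f"r = {r:.2f}: {besd_percent_correct(r):.1f}% correct predictions")
# r = 0.20 -> 60.0% and r = 0.35 -> 67.5%, matching the examples in the text.
```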


Reputation: A fundamental route to human cooperation

Wu, J., Balliet, D., & Van Lange, P. A. M. (2021). Reputation: A fundamental route to human cooperation. In W. Wilczynski & S. F. Brosnan (Eds.), Cooperation and conflict: The interaction of opposites in shaping social behavior (pp. 45–65). Cambridge University Press, Mar 2022. https://doi.org/10.1017/9781108671187.005

Abstract: Social interactions do not occur in a vacuum. They often take place in groups and social networks where people can monitor and spread each other’s reputation. Despite the temptation to act selfishly when interacting with strangers, there is a never-ending conflict between the desire to act selfishly and the need to gain a good reputation (or avoid losing the good reputation one already has). While one’s selfish behavior guarantees immediate material benefits, it may harm one’s reputation and can lead to a long-term loss. Thus, reputation is a key element of indirect reciprocity that provides a fundamental route to human cooperation. In this chapter, we have discussed how reputation is formed and assessed in social interactions, reviewed empirical research that documents the phenomena of indirect reciprocity and reputation-based cooperation as well as evidence about the greater power of reputation over monetary sanctions in solving cooperation problems. Future research would benefit by investigating the negativity bias in reputation systems, the efficiency of reputation in varied-size groups, whether reputation transcends group boundaries to promote cooperation, and potential cultural variations. Taken together, we emphasize that reputation monitoring and spreading is a strong candidate to promote trust and cooperation, thereby reducing the possibility of social conflict, in a cost-effective manner, perhaps more so among people who are inclined to act selfishly. 


Friday, March 4, 2022

Women in relationships may be disadvantaged by hookup culture norms suggesting sex is freely available, putting pressure on them to acquiesce to the withdrawal method

Norms, Trust, and Backup Plans: U.S. College Women’s Use of Withdrawal with Casual and Committed Romantic Partners. Christie Sennott & Laurie James-Hawkins. The Journal of Sex Research, Feb 24 2022. https://doi.org/10.1080/00224499.2022.2039893

Abstract: This study integrates research on contraceptive prevalence with research on contraceptive dynamics in hookup culture to examine college women’s use of withdrawal with sexual partners. Drawing on in-depth interviews with 57 women at a midwestern U.S. university, we analyzed women’s explanations for using withdrawal for pregnancy prevention and framed our study within the research on gender norms, sexual scripts, and power dynamics. Findings showed withdrawal was normalized within collegiate hookup culture, and that women frequently relied on withdrawal as a secondary or backup method or when switching between methods. Women often followed up with emergency contraceptives if using withdrawal alone. With casual partners, women advocated for their own preferences, including for partners to withdraw. In committed relationships, women prioritized their partner’s desires for condomless sex, but also linked withdrawal with trust and love. Thus, women in relationships may be disadvantaged by hookup culture norms suggesting sex is freely available, putting pressure on them to acquiesce to withdrawal. Many women used withdrawal despite acknowledging it was not the most desirable or effective method, emphasizing the need for a sexual health approach that acknowledges these tensions and strives to help women and their partners safely meet their sexual and contraceptive preferences.


Trivialization of concepts of harm: Concept creep, the contemporary down-defining of notions of harm & trauma, makes people downplay the seriousness of the phenomenon as a whole

Broadened Concepts of Harm Appear Less Serious. Brodie C. Dakin et al. Social Psychological and Personality Science, March 3, 2022. https://doi.org/10.1177/19485506221076692

Abstract: Harm-related concepts have progressively broadened their meanings to include less severe phenomena, but the implications of this expansion are unclear. Across five studies involving 1,819 American participants recruited on MTurk or Prolific, we manipulated whether participants learned about marginal, prototypical (severe), or mixed examples of workplace bullying (Studies 1 and 3a), trauma (Studies 2 and 3b), or sexual harassment (Study 4). We hypothesized that exposure to marginal examples of a concept would lead participants to view the harm associated with it as less serious than those exposed to prototypical examples (trivialization hypothesis). We also predicted that mixing marginal examples with prototypical examples would disproportionately reduce perceived seriousness (threshold shift hypothesis). All studies supported the trivialization hypothesis, but threshold shift was not consistently supported. Our findings suggest that broadened concepts of harm may dilute the perceived severity and urgency of the harms they identify.

Keywords: concept creep, concept breadth, trauma, bullying, moral psychology


Specific cognitive abilities (fluid reasoning, processing speed, quantitative knowledge, & 13 other abilities) show heritability similar to that of general intelligence, some even higher

The genetics of specific cognitive abilities. Francesca Procopio, Quan Zhou, Ziye Wang, Agnieszka Gidziela, Kaili Rimfeld, Margherita Malanchini, Robert Plomin. bioRxiv, Feb 8 2022. https://doi.org/10.1101/2022.02.05.479237

Abstract: Most research on individual differences in performance on tests of cognitive ability focuses on general cognitive ability (g), the highest level in the three-level Cattell-Horn-Carroll (CHC) hierarchical model of intelligence. About 50% of the variance of g is due to inherited DNA differences (heritability) which increases across development. Much less is known about the genetics of the middle level of the CHC model, which includes 16 broad factors such as fluid reasoning, processing speed, and quantitative knowledge. We provide a meta-analytic review of 863,041 monozygotic-dizygotic twin comparisons from 80 publications for these middle-level factors, which we refer to as specific cognitive abilities (SCA). Twin comparisons were available for 11 of the 16 CHC domains. The average heritability across all SCA is 55%, similar to the heritability of g. However, there is substantial differential heritability and the SCA do not show the dramatic developmental increase in heritability seen for g. We also investigated SCA independent of g (g-corrected SCA, which we refer to as SCA.g). A surprising finding is that SCA.g remain substantially heritable (53% on average), even though 25% of the variance of SCA that covaries with g has been removed. Our review frames expectations for genomic research that will use polygenic scores to predict SCA and SCA.g. Genome-wide association studies of SCA.g are needed to create polygenic scores that can predict SCA profiles of cognitive abilities and disabilities independent of g. These could be used to foster children’s cognitive strengths and minimise their weaknesses.
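
For readers unfamiliar with how heritability is estimated from monozygotic-dizygotic (MZ-DZ) comparisons, the classic Falconer decomposition conveys the intuition. The sketch below is only illustrative: the twin correlations are hypothetical values chosen to land near the 55% average reported in the abstract, and the meta-analysis itself uses more elaborate models than this simple formula.

```python
# Falconer-style ACE decomposition from twin correlations (illustration only;
# not the authors' meta-analytic model).
def falconer(r_mz: float, r_dz: float) -> dict:
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance ("heritability")
    c2 = 2 * r_dz - r_mz     # shared (common) environment
    e2 = 1 - r_mz            # non-shared environment + measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical MZ/DZ correlations chosen so heritability comes out near 55%.
print(falconer(r_mz=0.75, r_dz=0.475))   # -> h2 ~ 0.55, c2 ~ 0.20, e2 = 0.25
```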



Increasing love feelings, marital satisfaction, and motivated attention to the spouse

Langeslag, S. J. E., & Surti, K. (2022). Increasing love feelings, marital satisfaction, and motivated attention to the spouse. Journal of Psychophysiology, Mar 2022. https://doi.org/10.1027/0269-8803/a000294

Abstract: Love typically decreases over time, sometimes leading to divorces. We tested whether positively reappraising the spouse and/or up-regulating positive emotions unrelated to the spouse increases infatuation with and attachment to the spouse, marital satisfaction, and motivated attention to the spouse as measured by the late positive potential (LPP). Married individuals completed a regulation task in which they viewed spouse, pleasant, and neutral pictures without regulation prompt as well as spouse and pleasant pictures that were preceded by regulation prompts. Event-related potentials were recorded, and self-reported infatuation, attachment, and marital satisfaction were assessed. Viewing spouse pictures increased infatuation, attachment, and marital satisfaction compared to viewing pleasant or neutral pictures in the no regulation condition. Thinking about positive aspects of the spouse and increasing positive emotions unrelated to the spouse did not increase infatuation, attachment, and marital satisfaction any further. Motivated attention, measured by the LPP amplitude, was greatest to spouse pictures, intermediate to pleasant pictures, and minimal to neutral pictures. Although the typical up-regulation effect on the LPP amplitude was observed for pleasant pictures, positively reappraising the spouse did not increase the LPP amplitude and hence motivated attention to the spouse any further. This study indicates that looking at spouse pictures increases love and marital satisfaction, which is not due to increased positive emotions unrelated to the spouse. Looking at spouse pictures is an easy strategy that could be used to stabilize marriages in which the main problem is the decline of love feelings over time.


Thursday, March 3, 2022

Dutch Marine recruits: Unexpectedly, cadets with higher levels of grit were not more likely to complete training; it seems grit is not as important as we thought

Grit was not associated to dropout in Dutch Marine recruits. Iris Dijksma, Cees Lucas & Martijn Stuiver. Military Psychology, Mar 2 2022. https://doi.org/10.1080/08995605.2022.2028518

Abstract: Approximately half of all recruits drop out of Marine recruit training. Identifying associated and predisposing factors for dropout would be helpful to understand dropout patterns and induce preventive strategies. Grit has been suggested to be a predictor of who is likely to succeed and who is not. We aimed to investigate the association between baseline grit scores and dropout of Marine recruit training in the Netherlands Armed Forces. We performed an exploratory study using data from three platoons of Marine recruit training of the Royal Netherlands Marine Corps. Individual grit levels were measured using the NL-Grit scale, including two subscales. The primary outcome of this study was successful completion or dropout of Marine recruit training. Data were available from 270 recruits, of whom 119 (44%) dropped out of training. The odds ratios for dropout were 1.01 (95% CI 0.84–1.21, p = .917) and 1.07 (95% CI 0.89–1.29, p = .481) per standard deviation increase of consistency of interests and perseverance of effort, respectively. Our study did not confirm the proposed association between baseline grit levels and dropout of Marine recruit training in Dutch Marine recruits using the NL-Grit scale.

Keywords: Grit, military training, retention, dropout
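
The abstract above reports odds ratios per standard deviation increase of each grit subscale. As a point of reference for how such a figure is typically computed, a minimal sketch is shown below; it assumes a hypothetical per-recruit data file and a standard logistic regression, and it is not the authors' analysis code.

```python
# Minimal sketch: odds ratios for dropout per SD increase of each grit subscale,
# via logistic regression on z-scored predictors. Hypothetical data file and
# column names; not the study's actual analysis script.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("recruits.csv")                        # one row per recruit (hypothetical)
X = df[["consistency_of_interests", "perseverance_of_effort"]]
X = (X - X.mean()) / X.std()                            # z-score so coefficients are "per SD"
X = sm.add_constant(X)
y = df["dropout"]                                       # 1 = dropped out, 0 = completed

fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params)                        # cf. the reported ~1.01 and ~1.07
ci = np.exp(fit.conf_int())                             # 95% CIs on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```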

Discussion

Our study aimed to explore the association between baseline grit scores and dropout of Marine recruit training. The results of this study did not confirm the proposed association between baseline grit levels and dropout of Marine recruit training in Dutch Marine recruits using the NL-Grit scale. This finding holds both in recruits who were discharged upon individual request and in those who dropped out due to musculoskeletal injuries. Explained variance in dropout by baseline grit levels was somewhat higher in the former subgroup than in the latter but low in both.

Our results do not align with the initial findings by Duckworth and colleagues, who found that grit scores were related to successful completion of military courses (Duckworth et al., 2019; Eskreis-Winkler et al., 2014). Several phenomena may explain why our findings do not suggest an association between grit levels and dropout. First, presumably due to rigorous pre-selection procedures, the data of baseline grit levels per subscale showed a limited range, and they lacked variance (i.e., information). Because of the lack of normative data, we were unable to directly compare subscale sum scores and ranges of our sample to previously studied military populations; however, we do assume that cadets at the U.S. Military Academy at West Point would show similarly limited ranges (Crede et al., 2017; Duckworth et al., 2019). The lack of variance is apparent in both subscales, but even more so in the perseverance of effort subscale, which has previously been suggested to be more strongly associated with (or even predictive of) performance than consistency of interests (Crede et al., 2017). As a consequence, the possibility to differentiate (i.e., discriminate) recruits based on their grit score is limited. On the other hand, it is possible that within this restricted range, there truly is no association between baseline grit levels and dropout. After all, it is easily conceivable that, as a result of pre-selection, recruits who are fit and brave enough to arrive at the pre-attendance all must possess – and must have addressed – a relatively high level of grit. Possibly, at that point, their grit level contributes less to performance than other traits such as hardiness and resilience (Maddi et al., 2017, 2012). Second, we cannot exclude the possibility of social desirability bias in answering the NL-Grit scale (Grimm, 2010) and the possibility that (young) Marine recruits entertain a less than realistic view of their own grit levels (i.e., measurement bias because of reporting inflated grit levels) (Credé, 2018; Krumpal, 2013).

Although grit as a predictor of military success holds much intuitive appeal, the relation remains uncertain. The measurement of grit levels, and thus the possibility to differentiate, may be improved by adding items to the scale in the higher end of the spectrum. Also, the survey may be taken at an earlier stage in the selection procedure. It is likely that, at that point, the range of grit levels is wider, and the influence of social desirability bias may be less strong.

Limitations and implications

Several limitations of this explorative study are worth highlighting. First, other unmeasured variables may have obscured the association between baseline grit levels and the chance of dropout. Given the explorative nature of this study and the fact that causal paths are far from certain – for example, baseline physical fitness could be considered either a confounder or mediator (Pearl, 2010) – we chose to refrain from controlling for other variables. However, we should also note that the objective of exploring the association of grit with dropout risk was to assess its possible value as a predictor. In prediction research, the causal path and hence considerations about confounding and mediation are irrelevant as long as a variable is a consistent predictor of the outcome. Second, as is common, we measured grit through a self-reported measurement scale. Although it is stated that the act of answering survey questions can increase awareness, which opens the door to development, it also has disadvantages when such self-reported measures are used to detect and quantify associations or even predictive abilities between baseline levels and success outcomes (Oh et al., 2010). Perhaps, observer ratings of personality constructs such as grit levels – or even conscientiousness as an overarching construct – next to self-report methods may yield more valid estimates than the self-report method alone (Oh et al., 2010). Third, the NL-Grit was queried as the last survey, following other surveys. We cannot exclude the possibility that recruits rushed the last survey in order to finish it off. Finally, we wish to emphasize that our study findings are not necessarily generalizable to female military service members (since all participants were male) or other recruit training programs. Future research on both self-reported methods and observer-rated methods, also in other military courses, would add to the understanding of the relation between personality traits and dropout of military training.


“Unmasking” uncertainty, embracing it, and openly communicating about it could help alleviate anxiety and feelings of emotional exhaustion, detachment, and personal inadequacy

Understanding and Communicating Uncertainty in Achieving Diagnostic Excellence. Maria R. Dahm, Carmel Crock. JAMA, March 3, 2022. doi:10.1001/jama.2022.2141

Uncertainty pervades the diagnostic process. In health care, taxonomies of uncertainty have been developed to describe aspects such as personal (eg, individual knowledge gaps), scientific (eg, limits of biomedical knowledge), and probabilistic (eg, imprecise estimates of risk or prognosis) dimensions of uncertainty.1

When clinicians encounter diagnostic uncertainty, they often find themselves in an unfamiliar situation, without a clear method to proceed confidently, comfortably, and safely. Being unable to explain to patients what causes their symptoms may be perceived as a failure for all involved. When clinicians and patients dwell in diagnostic uncertainty, it can trigger feelings of concern and anxiety, may lead patients to mistrust clinicians’ competence, and could contribute to clinician burnout (feeling exhausted, disconnected, and personally inadequate), especially for early-career clinicians.2,3

Excellent diagnosticians should understand how uncertainty manifests. They should acknowledge and embrace uncertainty, and openly discuss it with other clinicians and patients to normalize it as a ubiquitous and inevitable part of the diagnostic process.4 Such a reimagining, focused on the inevitable and beneficial aspects of diagnostic uncertainty, relies on identifying how uncertainty is understood, managed, and communicated.


What Is Diagnostic Uncertainty, and for Whom?

Diagnosis is a complex and collaborative process that involves gathering, integrating, and interpreting information across the entire diagnostic team: clinicians (physicians, nurses, and allied health professionals), patients, and patients’ families and caregivers.5 All team members encounter different types of diagnostic uncertainty at different stages in the diagnostic process.3

From the clinicians’ perspective, diagnostic uncertainty has been defined as the “subjective perception of an inability to provide an accurate explanation of the patient’s health problem.”6 These subjective feelings are entangled in a multitude of factors and tensions surrounding the qualities deemed essential in clinicians, such as competence and confidence. The decisiveness with which clinicians make a diagnosis may be perceived as reflecting diagnostic expertise and clinical competence. Yet diagnostic excellence in the setting of uncertainty requires recognition and tolerance of uncertainty, cognitive flexibility, and willingness to engage with evolving information. It includes the ability to share clinical reasoning and communicate uncertainty to patients.3,4

Patients may experience uncertainty at any point along the diagnostic process and beyond. For patients, diagnostic uncertainty often begins before they present for health care, such as doubt about whether a persistent minor pain or occasional numbness warrants a clinical visit. Patients may have doubts about how long it will take to get answers, what their role is in the diagnostic process, whether a treatment is available, and whether they want a diagnosis if they already fear having a serious illness. They may have doubts about what a diagnosis means for their personal and professional life, their functional status, and quality of life.

Patients also encounter doubt when they perceive their valid symptoms are being dismissed. This is a common experience reported by patients, particularly those who experience other health disparities related to age, sex, race and ethnicity, or language background. For example, some women with myocardial ischemia may present with symptoms (such as back or abdominal pain or vomiting) that are not considered typical cardiac presentations, and may believe their symptoms are being dismissed. Some people might have doubts when a diagnosis does not match what they think is affecting them, or when family members, such as children and older adults who are unable to advocate for themselves, experience disease progression or adverse outcomes despite having been assigned a diagnostic label and associated treatments.


Managing Uncertainty Positively

“Unmasking”4 uncertainty, embracing it, and openly communicating about it could help alleviate anxiety and feelings of emotional exhaustion, detachment, and personal inadequacy associated with burnout and help clinicians “enjoy rather than dread the diagnostic process.”7 However, tolerating uncertainty rather than trying to reduce it to absolute certainty requires a major shift in the clinician’s mindset. Current medical education inadequately prepares early-career clinicians for feelings of failure associated with diagnostic uncertainty. Instead of upholding the illusion of certainty, medical education and professional development should provide a judgment-free opportunity for clinicians to openly and safely reflect on, be guided through, and learn to live with the stress associated with diagnostic uncertainty.8

All clinicians across hierarchies and levels of experience need to openly acknowledge the realities of diagnostic uncertainty. The uncertainty surrounding diagnosis need not be perceived as a threat to medical “authority,” expertise, or professionalism. On the contrary, clinicians who openly encourage and engage in discussions of uncertainty without blame or penalty model excellent diagnostic processes. Normalizing and promoting acceptance of uncertainty as integral to the diagnostic process should thus become routine within clinical care and medical education.8

The effects of explicitly acknowledging and managing uncertainty in the diagnostic process could be profound; doing so may help foster a safety culture in which all diagnostic team members can openly discuss, challenge, and collaborate to refine clinical reasoning. Diagnostic possibilities could be explored in self-reflection, and in interactions with colleagues and with patients.


Communicating Uncertainty

Effective communication about uncertainty across the entire diagnostic team is essential to avoid diagnostic error and patient harm.9

Diagnostic error has been defined as a failure to find an accurate and timely explanation for a health problem or failure to communicate that explanation to the patient.5 This definition should be expanded to include failure to communicate uncertainty explicitly, given its pervasiveness, as a potent contributor to diagnostic error.3 When clinicians do not disclose their doubts, patients may leave the clinical encounter feeling reassured yet remain unaware of their clinician’s uncertainty. When medical notes in electronic medical records (EMRs) present diagnoses as certainties, the diagnostic team may miss other diagnostic possibilities. Instead, EMRs should embed differential diagnosis and language expressing uncertainty (such as “possible viral conjunctivitis”) into documentation.

Probabilistic reasoning is often used to articulate uncertainty. Probabilistic (or bayesian) reasoning is a useful method to reduce cognitive biases when information is assessed during the diagnostic process,5 yet it is underused or even misunderstood in routine medical practice. Applying bayesian reasoning principles could lead clinicians to adjust their thinking and revise disease probabilities as they gather more information, thereby potentially avoiding diagnostic errors (eg, considering the frequency of disease processes in the immediate population to avoid base-rate neglect: the tendency to overemphasize information specific to an individual).5 Most clinicians apply probabilistic reasoning unconsciously, but bringing these skills and related language to interactions could be one way to explicitly communicate uncertainty.
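As a concrete illustration of the updating described above, here is a minimal sketch of Bayes’ rule applied to a test result; the pretest probabilities, sensitivity, and specificity are invented numbers for illustration only and are not drawn from the article or from clinical data.

```python
# Minimal sketch of Bayesian updating during diagnosis: revise a disease
# probability as new information (a test result) arrives. All numbers below
# are invented for illustration and are not clinical values.

def posttest_probability(pretest_p: float, sensitivity: float,
                         specificity: float, positive_result: bool = True) -> float:
    """Update a pretest probability with a test result via Bayes' rule."""
    pretest_odds = pretest_p / (1 - pretest_p)
    if positive_result:
        likelihood_ratio = sensitivity / (1 - specificity)   # LR+
    else:
        likelihood_ratio = (1 - sensitivity) / specificity   # LR-
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Base-rate neglect example: the same positive result means very different
# things when the disease is rare (1%) versus common (30%) in the
# immediate population.
for pretest in (0.01, 0.30):
    p = posttest_probability(pretest, sensitivity=0.90, specificity=0.90)
    print(f"pretest {pretest:.0%} -> posttest {p:.0%} after a positive test")
# pretest 1%  -> posttest ~8%
# pretest 30% -> posttest ~79%
```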

How people understand language commonly associated with uncertainty and probability (eg, “occasionally,” “rarely”), including in radiology or pathology reports (eg, “highly suspicious for,” “suggestive of”), could differ between speaker/sender and hearer/receiver and may lead to ambiguity regarding diagnostic certainty. Clinicians also communicate uncertainty via implicit communication strategies that patients may not identify as expressions of uncertainty. For the clinician, “I’d like to follow up with you next week” may signal that they are unsure of a diagnosis and are adopting a watchful waiting approach. For the patient, it may seem like an ordinary follow-up appointment without any indication of uncertainty.


Key Points for Diagnostic Excellence

• Diagnostic uncertainty should be shared explicitly with patients. Failure to communicate uncertainty contributes to diagnostic error.

• Understanding diagnostic uncertainty can be enriched by incorporating perspectives from medicine, social sciences, and humanities.

• Diagnostic uncertainty should be reimagined as positive and routinely embraced in clinical care and education.

• Explicitly acknowledging, managing, and communicating uncertainty promotes a robust diagnostic safety culture.

Clinical practice would benefit from evidence-based recommendations on how to best communicate uncertainty in diagnostic encounters. For example, linguistic analysis of video-recorded diagnostic interactions can help identify the language structures clinicians use when expressing diagnostic uncertainty. Diagnostic excellence should be informed by broadening the current understanding of diagnostic uncertainty beyond medical realms to include linguistic, communication, humanistic, sociological, and patient-centered perspectives to better understand and describe the nuance of the diagnostic process and uncertainty.


Diagnosis as a Relational, Communicative Process

Diagnosis is “a relational process, with each party (lay and medical) confronting illness with different explanations, understandings, values, and beliefs.”10 Managing patient anxiety surrounding uncertainty in diagnosis requires open interpersonal communication to increase patients’ awareness of the nature of diagnosis as a process rather than an isolated event. Clinicians could build rapport and trust and manage expectations by listening to patients, clearly communicating steps along the diagnostic process, and sharing their own uncertainty.

Patients’ expectations change as they gain a more transparent understanding of the complex and often complicated pathway to diagnosis. Clinicians can build safety nets by alerting patients about their uncertainty, discussing red-flag symptoms, and codeveloping plans of when and where patients should seek additional or urgent help.3 Open communication between clinicians and patients could also provide avenues for feedback on diagnostic performance, essential to calibrate clinicians’ diagnostic abilities.5

To effectively manage the complexity and challenges of the diagnostic process, clinicians and patients need to find approaches to address uncertainty. Acknowledging, embracing, and communicating uncertainty opens diagnostic possibilities and a way toward achieving diagnostic excellence.