Saturday, March 5, 2022

Reputation: A fundamental route to human cooperation

Wu, J., Balliet, D., & Van Lange, P. A. M. (2021). Reputation: A fundamental route to human cooperation. In W. Wilczynski & S. F. Brosnan (Eds.), Cooperation and conflict: The interaction of opposites in shaping social behavior (pp. 45–65). Cambridge University Press, Mar 2022. https://doi.org/10.1017/9781108671187.005

Abstract: Social interactions do not occur in a vacuum. They often take place in groups and social networks where people can monitor and spread each other’s reputation. When interacting with strangers, there is a never-ending conflict between the desire to act selfishly and the need to gain a good reputation (or avoid losing the good reputation one already has). While selfish behavior guarantees immediate material benefits, it may harm one’s reputation and can lead to long-term loss. Thus, reputation is a key element of indirect reciprocity that provides a fundamental route to human cooperation. In this chapter, we discuss how reputation is formed and assessed in social interactions, review empirical research documenting the phenomena of indirect reciprocity and reputation-based cooperation, and present evidence that reputation has greater power than monetary sanctions in solving cooperation problems. Future research would benefit from investigating the negativity bias in reputation systems, the efficiency of reputation in groups of varying size, whether reputation transcends group boundaries to promote cooperation, and potential cultural variations. Taken together, we emphasize that reputation monitoring and spreading is a strong candidate for promoting trust and cooperation, thereby reducing the possibility of social conflict, in a cost-effective manner, perhaps especially among people who are inclined to act selfishly.


Friday, March 4, 2022

Women in relationships may be disadvantaged by hookup culture norms suggesting sex is freely available, putting pressure on them to acquiesce to the withdrawal method

Norms, Trust, and Backup Plans: U.S. College Women’s Use of Withdrawal with Casual and Committed Romantic Partners. Christie Sennott & Laurie James-Hawkins. The Journal of Sex Research, Feb 24 2022. https://doi.org/10.1080/00224499.2022.2039893

Abstract: This study integrates research on contraceptive prevalence with research on contraceptive dynamics in hookup culture to examine college women’s use of withdrawal with sexual partners. Drawing on in-depth interviews with 57 women at a midwestern U.S. university, we analyzed women’s explanations for using withdrawal for pregnancy prevention and framed our study within the research on gender norms, sexual scripts, and power dynamics. Findings showed withdrawal was normalized within collegiate hookup culture, and that women frequently relied on withdrawal as a secondary or backup method or when switching between methods. Women often followed up with emergency contraceptives if using withdrawal alone. With casual partners, women advocated for their own preferences, including for partners to withdraw. In committed relationships, women prioritized their partner’s desires for condomless sex, but also linked withdrawal with trust and love. Thus, women in relationships may be disadvantaged by hookup culture norms suggesting sex is freely available, putting pressure on them to acquiesce to withdrawal. Many women used withdrawal despite acknowledging it was not the most desirable or effective method, emphasizing the need for a sexual health approach that acknowledges these tensions and strives to help women and their partners safely meet their sexual and contraceptive preferences.


Trivialization of concepts of harm: Concept creep, the contemporary down-defining of notions of harm & trauma, makes people downplay the seriousness of the phenomenon as a whole

Broadened Concepts of Harm Appear Less Serious. Brodie C. Dakin et al. Social Psychological and Personality Science, March 3, 2022. https://doi.org/10.1177/19485506221076692

Abstract: Harm-related concepts have progressively broadened their meanings to include less severe phenomena, but the implications of this expansion are unclear. Across five studies involving 1,819 American participants recruited on MTurk or Prolific, we manipulated whether participants learned about marginal, prototypical (severe), or mixed examples of workplace bullying (Studies 1 and 3a), trauma (Studies 2 and 3b), or sexual harassment (Study 4). We hypothesized that exposure to marginal examples of a concept would lead participants to view the harm associated with it as less serious than those exposed to prototypical examples (trivialization hypothesis). We also predicted that mixing marginal examples with prototypical examples would disproportionately reduce perceived seriousness (threshold shift hypothesis). All studies supported the trivialization hypothesis, but threshold shift was not consistently supported. Our findings suggest that broadened concepts of harm may dilute the perceived severity and urgency of the harms they identify.

Keywords: concept creep, concept breadth, trauma, bullying, moral psychology


Specific cognitive abilities (fluid reasoning, processing speed, quantitative knowledge, and 13 other abilities) show heritability similar to that of general intelligence, some even higher

The genetics of specific cognitive abilities. Francesca Procopio, Quan Zhou, Ziye Wang, Agnieszka Gidziela, Kaili Rimfeld, Margherita Malanchini, Robert Plomin. bioRxiv, Feb 8 2022. https://doi.org/10.1101/2022.02.05.479237

Abstract: Most research on individual differences in performance on tests of cognitive ability focuses on general cognitive ability (g), the highest level in the three-level Cattell-Horn-Carroll (CHC) hierarchical model of intelligence. About 50% of the variance of g is due to inherited DNA differences (heritability) which increases across development. Much less is known about the genetics of the middle level of the CHC model, which includes 16 broad factors such as fluid reasoning, processing speed, and quantitative knowledge. We provide a meta-analytic review of 863,041 monozygotic-dizygotic twin comparisons from 80 publications for these middle-level factors, which we refer to as specific cognitive abilities (SCA). Twin comparisons were available for 11 of the 16 CHC domains. The average heritability across all SCA is 55%, similar to the heritability of g. However, there is substantial differential heritability and the SCA do not show the dramatic developmental increase in heritability seen for g. We also investigated SCA independent of g (g-corrected SCA, which we refer to as SCA.g). A surprising finding is that SCA.g remain substantially heritable (53% on average), even though 25% of the variance of SCA that covaries with g has been removed. Our review frames expectations for genomic research that will use polygenic scores to predict SCA and SCA.g. Genome-wide association studies of SCA.g are needed to create polygenic scores that can predict SCA profiles of cognitive abilities and disabilities independent of g. These could be used to foster children’s cognitive strengths and minimise their weaknesses.
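For readers unfamiliar with how twin comparisons yield heritability estimates like the 55% figure above, here is a minimal sketch of the classic Falconer arithmetic that twin designs build on; the paper's actual meta-analytic model is more elaborate, and the correlations below are invented for illustration.

    # Falconer's formula: heritability is twice the difference between
    # monozygotic and dizygotic twin correlations, h^2 = 2 * (r_MZ - r_DZ).
    # Illustrative numbers only, not the paper's data.
    def falconer_h2(r_mz: float, r_dz: float) -> float:
        return 2.0 * (r_mz - r_dz)

    # e.g., r_MZ = 0.80 and r_DZ = 0.525 imply h^2 = 0.55,
    # matching the ~55% average heritability reported for SCA.
    print(falconer_h2(0.80, 0.525))  # 0.55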



Increasing love feelings, marital satisfaction, and motivated attention to the spouse

Langeslag, S. J. E., & Surti, K. (2022). Increasing love feelings, marital satisfaction, and motivated attention to the spouse. Journal of Psychophysiology, Mar 2022. https://doi.org/10.1027/0269-8803/a000294

Abstract: Love typically decreases over time, sometimes leading to divorces. We tested whether positively reappraising the spouse and/or up-regulating positive emotions unrelated to the spouse increases infatuation with and attachment to the spouse, marital satisfaction, and motivated attention to the spouse as measured by the late positive potential (LPP). Married individuals completed a regulation task in which they viewed spouse, pleasant, and neutral pictures without a regulation prompt as well as spouse and pleasant pictures that were preceded by regulation prompts. Event-related potentials were recorded, and self-reported infatuation, attachment, and marital satisfaction were assessed. Viewing spouse pictures increased infatuation, attachment, and marital satisfaction compared to viewing pleasant or neutral pictures in the no regulation condition. Thinking about positive aspects of the spouse and increasing positive emotions unrelated to the spouse did not increase infatuation, attachment, and marital satisfaction any further. Motivated attention, measured by the LPP amplitude, was greatest to spouse pictures, intermediate to pleasant pictures, and minimal to neutral pictures. Although the typical up-regulation effect on the LPP amplitude was observed for pleasant pictures, positively reappraising the spouse did not increase the LPP amplitude and hence motivated attention to the spouse any further. This study indicates that looking at spouse pictures increases love and marital satisfaction, which is not due to increased positive emotions unrelated to the spouse. Looking at spouse pictures is an easy strategy that could be used to stabilize marriages in which the main problem is the decline of love feelings over time.


Thursday, March 3, 2022

Dutch Marine recruits: Unexpectedly, recruits with higher levels of grit were not more likely to complete training; it seems grit is not as important as we thought

Grit was not associated to dropout in Dutch Marine recruits. Iris Dijksma, Cees Lucas & Martijn Stuiver. Military Psychology, Mar 2 2022. https://doi.org/10.1080/08995605.2022.2028518

Abstract: Approximately half of all recruits drop out of Marine recruit training. Identifying associated and predisposing factors for dropout would be helpful to understand dropout patterns and inform preventive strategies. Grit has been suggested to be a predictor of who is likely to succeed and who is not. We aimed to investigate the association between baseline grit scores and dropout of Marine recruit training in the Netherlands Armed Forces. We performed an exploratory study using data from three platoons in Marine recruit training of the Royal Netherlands Marine Corps. Individual grit levels were measured using the NL-Grit scale, including two subscales. The primary outcome of this study was successful completion or dropout of Marine recruit training. Data were available from 270 recruits, of whom 119 (44%) dropped out of training. The odds ratios for dropout were 1.01 (95% CI 0.84–1.21, p = .917) and 1.07 (95% CI 0.89–1.29, p = .481) per standard deviation increase of consistency of interests and perseverance of effort, respectively. Our study did not confirm the proposed association between baseline grit levels and dropout of Marine recruit training in Dutch Marine recruits using the NL-Grit scale.

Keywords: Grit; military training; retention; dropout
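As a side note on the statistics: an "odds ratio per standard deviation increase" like the 1.01 and 1.07 reported in the abstract is typically obtained by fitting a logistic regression on a z-scored predictor and exponentiating its coefficient. A minimal sketch in Python on simulated data; variable names and numbers are illustrative, not the study's.

    # Sketch: odds ratio per SD of a predictor via logistic regression.
    # Requires numpy and statsmodels; data are simulated, not the study's.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 270
    grit_z = rng.normal(size=n)              # z-scored grit subscale score
    dropout = rng.binomial(1, 0.44, size=n)  # ~44% dropout, unrelated to grit

    fit = sm.Logit(dropout, sm.add_constant(grit_z)).fit(disp=0)
    or_per_sd = np.exp(fit.params[1])        # exponentiated slope = OR per SD
    ci_low, ci_high = np.exp(fit.conf_int()[1])
    print(f"OR per SD: {or_per_sd:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")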

Discussion

Our study aimed to explore the association between baseline grit scores and dropout of Marine recruit training. The results of this study did not confirm the proposed association between baseline grit levels and dropout of Marine recruit training in Dutch Marine recruits using the NL-Grit scale. This finding holds both in recruits who were discharged upon individual request and in those who dropped out due to musculoskeletal injuries. The variance in dropout explained by baseline grit levels was somewhat higher in the former subgroup than in the latter, but low in both.

Our results do not align with the initial findings by Duckworth and colleagues, who found that grit scores were related to successful completion of military courses (Duckworth et al., 2019; Eskreis-Winkler et al., 2014). Several phenomena may explain why our findings do not suggest an association between grit levels and dropout. First, presumably due to rigorous pre-selection procedures, the data of baseline grit levels per subscale showed a limited range, and they lacked variance (i.e., information). Because of the lack of normative data, we were unable to directly compare subscale sum scores and ranges of our sample to previously studied military populations; however, we do assume that cadets at the U.S. Military Academy at West Point would show similarly limited ranges (Crede et al., 2017; Duckworth et al., 2019). The lack of variance is apparent in both subscales, but even more so in the perseverance of effort subscale, which has previously been suggested to be more strongly associated with (or even predictive of) performance than consistency of interests (Crede et al., 2017). As a consequence, the possibility to differentiate (i.e., discriminate) recruits based on their grit score is limited. On the other hand, it is possible that within this restricted range, there truly is no association between baseline grit levels and dropout. After all, it is easily conceivable that, as a result of pre-selection, recruits who are fit and brave enough to arrive at the pre-attendance all must possess – and must have addressed – a relatively high level of grit. Possibly, at that point, their grit level contributes less to performance than other traits such as hardiness and resilience (Maddi et al., 2017, 2012). Second, we cannot exclude the possibility of social desirability bias in answering the NL-Grit scale (Grimm, 2010) and the possibility that (young) Marine recruits entertain a less than realistic view of their own grit levels (i.e., measurement bias because of reporting inflated grit levels) (Credé, 2018; Krumpal, 2013).

Although grit as a predictor of military success holds much intuitive appeal, the relation remains uncertain. The measurement of grit levels, and thus the possibility to differentiate, may be improved by adding items to the scale in the higher end of the spectrum. Also, the survey may be taken at an earlier stage in the selection procedure. It is likely that, at that point, the range of grit levels is wider, and the influence of social desirability bias may be less strong.

Limitations and implications

Several limitations of this explorative study are worth highlighting. First, other unmeasured variables may have obscured the association between baseline grit levels and the chance of dropout. Given the explorative nature of this study and the fact that causal paths are far from certain – for example, baseline physical fitness could be considered either a confounder or a mediator (Pearl, 2010) – we chose to refrain from controlling for other variables. However, we should also note that the objective of exploring the association of grit with dropout risk was to assess its possible value as a predictor. In prediction research, the causal path, and hence considerations about confounding and mediation, is irrelevant as long as a variable is a consistent predictor of the outcome. Second, as is common, we measured grit through a self-reported measurement scale. Although answering survey questions can arguably increase awareness, which opens the door to development, self-reported measures have disadvantages when used to detect and quantify associations – or even predictive relations – between baseline levels and success outcomes (Oh et al., 2010). Perhaps observer ratings of personality constructs such as grit – or even conscientiousness as an overarching construct – alongside self-report methods may yield more valid estimates than the self-report method alone (Oh et al., 2010). Third, the NL-Grit was queried as the last survey, following other surveys. We cannot exclude the possibility that recruits rushed the last survey in order to finish it off. Finally, we wish to emphasize that our study findings are not necessarily generalizable to female military service members (since all participants were male) or to other recruit training programs. Future research on both self-reported and observer-rated methods, also in other military courses, would add to the understanding of the relation between personality traits and dropout of military training.


“Unmasking” uncertainty, embracing it, and openly communicating about it could help alleviate anxiety and feelings of emotional exhaustion, detachment, and personal inadequacy

Understanding and Communicating Uncertainty in Achieving Diagnostic Excellence. Maria R. Dahm, Carmel Crock. JAMA, March 3, 2022. doi:10.1001/jama.2022.2141

Uncertainty pervades the diagnostic process. In health care, taxonomies of uncertainty have been developed to describe aspects such as personal (eg, individual knowledge gaps), scientific (eg, limits of biomedical knowledge), and probabilistic (eg, imprecise estimates of risk or prognosis) dimensions of uncertainty.1

When clinicians encounter diagnostic uncertainty, they often find themselves in an unfamiliar situation, without a clear method to proceed confidently, comfortably, and safely. Being unable to explain to patients what causes their symptoms may be perceived as a failure for all involved. When clinicians and patients dwell in diagnostic uncertainty, it can trigger feelings of concern and anxiety, may lead patients to mistrust clinicians’ competence, and could contribute to clinician burnout (feeling exhausted, disconnected, and personally inadequate), especially for early-career clinicians.2,3

Excellent diagnosticians should understand how uncertainty manifests. They should acknowledge and embrace uncertainty, and openly discuss it with other clinicians and patients to normalize its ubiquitous and inevitable role in the diagnostic process.4 Such a reimagining, focused on the inevitable and beneficial aspects of diagnostic uncertainty, relies on identifying how uncertainty is understood, managed, and communicated.


What Is Diagnostic Uncertainty, and for Whom?

Diagnosis is a complex and collaborative process that involves gathering, integrating, and interpreting information across the entire diagnostic team: clinicians (physicians, nurses, and allied health professionals), patients, and patients’ families and caregivers.5 All team members encounter different types of diagnostic uncertainty at different stages in the diagnostic process.3

From the clinicians’ perspective, diagnostic uncertainty has been defined as the “subjective perception of an inability to provide an accurate explanation of the patient’s health problem.”6 These subjective feelings are entangled in a multitude of factors and tensions surrounding the qualities deemed essential in clinicians, such as competence and confidence. The decisiveness with which clinicians make a diagnosis may be perceived as reflecting diagnostic expertise and clinical competence. Yet diagnostic excellence in the setting of uncertainty requires recognition and tolerance of uncertainty, cognitive flexibility, and willingness to engage with evolving information. It includes the ability to share clinical reasoning and communicate uncertainty to patients.3,4

Patients may experience uncertainty at any point along the diagnostic process and beyond. For patients, diagnostic uncertainty often begins before they present for health care, such as doubt about whether a persistent minor pain or occasional numbness warrants a clinical visit. Patients may have doubts about how long it will take to get answers, what their role is in the diagnostic process, whether a treatment is available, and whether they want a diagnosis if they already fear having a serious illness. They may have doubts about what a diagnosis means for their personal and professional life, their functional status, and quality of life.

Patients also encounter doubt when they perceive their valid symptoms are being dismissed. This is a common experience reported by patients, particularly those who experience other health disparities related to age, sex, race and ethnicity, or language background. For example, some women with myocardial ischemia may present with symptoms (such as back or abdominal pain or vomiting) that are not considered typical cardiac presentations, and may believe their symptoms are being dismissed. Some people might have doubts when a diagnosis does not match what they think is affecting them, or when family members, such as children and older adults who are unable to advocate for themselves, experience disease progression or adverse outcomes despite having been assigned a diagnostic label and associated treatments.


Managing Uncertainty Positively

“Unmasking”4 uncertainty, embracing it, and openly communicating about it could help alleviate anxiety and feelings of emotional exhaustion, detachment, and personal inadequacy associated with burnout and help clinicians “enjoy rather than dread the diagnostic process.”7 However, tolerating uncertainty rather than trying to reduce it to absolute certainty requires a major shift in the clinician’s mindset. Current medical education inadequately prepares early-career clinicians for feelings of failure associated with diagnostic uncertainty. Instead of upholding the illusion of certainty, medical education and professional development should provide a judgment-free opportunity for clinicians to openly and safely reflect, as well as be guided by and learn to live with the stress associated with diagnostic uncertainty.8

All clinicians across hierarchies and levels of experience need to openly acknowledge the realities of diagnostic uncertainty. The uncertainty surrounding diagnosis need not be perceived as a threat to medical “authority,” expertise, or professionalism. On the contrary, clinicians who openly encourage and engage in discussions of uncertainty without blame or penalty model excellent diagnostic processes. Normalizing and promoting acceptance of uncertainty as integral to the diagnostic process thus should become routine within clinical care and medical education.8

The effects of explicitly acknowledging and managing uncertainty in the diagnostic process could be profound; doing so may help foster a safety culture in which all diagnostic team members can openly discuss, challenge, and collaborate to refine clinical reasoning. Diagnostic possibilities could be explored in self-reflection, and in interactions with colleagues and with patients.


Communicating Uncertainty

Effective communication about uncertainty across the entire diagnostic team is essential to avoid diagnostic error and patient harm.9

Diagnostic error has been defined as a failure to find an accurate and timely explanation for a health problem or failure to communicate that explanation to the patient.5 This definition should be expanded to include failure to communicate uncertainty explicitly, given its pervasiveness, as a potent contributor to diagnostic error.3 When clinicians do not disclose their doubts, patients may leave the clinical encounter feeling reassured yet remain unaware of their clinician’s uncertainty. When medical notes in electronic medical records (EMRs) present diagnoses as certainties, the diagnostic team may miss other diagnostic possibilities. Instead, EMRs should embed differential diagnosis and language expressing uncertainty (such as “possible viral conjunctivitis”) into documentation.

Probabilistic reasoning is often used to articulate uncertainty. Probabilistic (or bayesian) reasoning is a useful method to reduce cognitive biases when information is assessed during the diagnostic process,5 yet it is underused or even misunderstood in routine medical practice. Applying bayesian reasoning principles could lead clinicians to adjust their thinking and revise disease probabilities as they gather more information, thereby potentially avoiding diagnostic errors (eg, considering the frequency of disease processes in the immediate population to avoid base-rate neglect: the tendency to overemphasize information specific to an individual).5 Most clinicians apply probabilistic reasoning unconsciously, but bringing these skills and related language to interactions could be one way to explicitly communicate uncertainty.
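To make the base-rate point concrete, here is a toy worked example of bayesian updating; the numbers are invented for illustration, not taken from the article. Even a fairly accurate test for a rare condition leaves the post-test probability far below what intuition suggests.

    # P(disease | positive test) via Bayes' rule; illustrative numbers only.
    def posterior(prior, sensitivity, false_positive_rate):
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive

    # 1% base rate, 90% sensitivity, 5% false-positive rate:
    print(round(posterior(0.01, 0.90, 0.05), 2))  # ~0.15, not 0.90

Neglecting the 1% base rate and intuiting "about 90%" is exactly the base-rate neglect the passage describes.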

How people understand language commonly associated with uncertainty and probability (eg, “occasionally,” “rarely”), including in radiology or pathology reports (eg, “highly suspicious for,” “suggestive of”), could differ between speaker/sender and hearer/receiver and may lead to ambiguity regarding diagnostic certainty. Clinicians also communicate uncertainty via implicit communication strategies that patients may not identify as expressions of uncertainty. For the clinician, “I’d like to follow up with you next week” may signal they are unsure of a diagnosis and are adopting a watchful waiting approach. For the patient, it may seem like an ordinary follow-up appointment without any indication of uncertainty.


Key Points for Diagnostic Excellence

- Diagnostic uncertainty should be shared explicitly with patients. Failure to communicate uncertainty contributes to diagnostic error.

- Understanding diagnostic uncertainty can be enriched by incorporating perspectives from medicine, social sciences, and humanities.

- Diagnostic uncertainty should be reimagined as positive and routinely embraced in clinical care and education.

- Explicitly acknowledging, managing, and communicating uncertainty promotes a robust diagnostic safety culture.

Clinical practice would benefit from evidence-based recommendations on how to best communicate uncertainty in diagnostic encounters. For example, linguistic analysis of video-recorded diagnostic interactions can help identify the language structures clinicians use when expressing diagnostic uncertainty. Diagnostic excellence should be informed by broadening the current understanding of diagnostic uncertainty beyond medical realms to include linguistic, communication, humanistic, sociological, and patient-centered perspectives to better understand and describe the nuance of the diagnostic process and uncertainty.


Diagnosis as a Relational, Communicative Process

Diagnosis is “a relational process, with each party (lay and medical) confronting illness with different explanations, understandings, values, and beliefs.”10 Managing patient anxiety surrounding uncertainty in diagnosis requires open interpersonal communication to increase patients’ awareness of the nature of diagnosis as a process rather than an isolated event. Clinicians could build rapport and trust and manage expectations by listening to patients, clearly communicating steps along the diagnostic process, and sharing their own uncertainty.

Patients’ expectations change as they gain a more transparent understanding of the complex and often complicated pathway to diagnosis. Clinicians can build safety nets by alerting patients about their uncertainty, discussing red-flag symptoms, and codeveloping plans of when and where patients should seek additional or urgent help.3 Open communication between clinicians and patients could also provide avenues for feedback on diagnostic performance, essential to calibrate clinicians’ diagnostic abilities.5

To effectively manage the complexity and challenges of the diagnostic process, clinicians and patients need to find approaches to address uncertainty. Acknowledging, embracing, and communicating uncertainty opens diagnostic possibilities and a way toward achieving diagnostic excellence.


Choice Matters More with Others: Choosing to be with Other People is More Consequential to Well-Being than Choosing to be Alone

Choice Matters More with Others: Choosing to be with Other People is More Consequential to Well-Being than Choosing to be Alone. Liad Uziel & Tomer Schmidt-Barad. Journal of Happiness Studies, Mar 2 2022. https://link.springer.com/article/10.1007/s10902-022-00506-5

Abstract: Stable social relationships are conducive to well-being. However, similar effects are not reported consistently for daily social interactions in affecting episodic (experiential) subjective well-being (ESWB). The present investigation suggests that the choice of being in a social context plays an important moderating role, such that social interactions increase ESWB only if they take place by one's choice. Moreover, it is argued that choice matters more in a social context than in an alone context because experiences with others are amplified. These ideas were tested and supported in two studies: an experiment that manipulated social context and choice status, and a 10-day experience-sampling study, which explored these variables in real-life settings. Results showed that being with others by one’s choice had the strongest positive association with ESWB, sense of meaning, and control, whereas being with others not by one’s choice had the strongest negative association with ESWB. Effects of being alone on ESWB also varied by choice status, but to a lesser extent. The findings offer theoretical and practical insights into the effects of the social environment on well-being.
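For the methodologically curious, a context-by-choice interaction of the kind the abstract describes is commonly tested with a mixed-effects model on episode-level experience-sampling data (random intercepts per participant). A minimal sketch on simulated data; the authors' exact model and variable names may differ.

    # Sketch: episode-level ESWB regressed on social context x choice,
    # with random intercepts per participant. Simulated data only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_people, n_episodes = 50, 40
    df = pd.DataFrame({
        "pid": np.repeat(np.arange(n_people), n_episodes),
        "with_others": rng.integers(0, 2, n_people * n_episodes),
        "by_choice": rng.integers(0, 2, n_people * n_episodes),
    })
    # Build in the reported pattern: choice matters more 'with others'.
    df["eswb"] = (0.2 * df["by_choice"]
                  + 0.6 * df["with_others"] * df["by_choice"]
                  - 0.4 * df["with_others"] * (1 - df["by_choice"])
                  + rng.normal(0, 1, len(df)))

    model = smf.mixedlm("eswb ~ with_others * by_choice", df, groups=df["pid"])
    print(model.fit().summary())  # the interaction term carries the key effect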

Discussion

By studying participants’ experiences in their natural environment, this study affirmed our previous findings that ESWB is shaped by an interaction between the social context and choice of being in this context. Across the different expressions of ESWB, choice was more consequential 'with others' than alone, corroborating approaches that suggest that social contexts act to amplify and intensify experiences (e.g., Steinmetz et al., 2016).

The findings extended beyond ESWB, addressing some of the processes that could account for the observed differences in ESWB. Being with others by choice was also associated with an increase in sense of meaning and control. Our participants evaluated their activities and their level of agency more extremely during non-solitary experiences, and the choice of being in each social context moderated whether this would be for better or worse.

General Discussion

Being alone and socializing are fundamental bricks in the human experience. The mere being in one state (vs. the other) carries important short-term (Kahneman et al., 2004; Uziel, 2007) and long-term (Bowlby, 1973; Winnicott, 1958) implications in a wide range of domains—affective, cognitive, motivational, and behavioral. Crucially, both conditions are conducive to well-being (Uziel, 2021). Seminal studies documented the immense importance of meeting social needs and establishing sound social bonds on healthy development and personal well-being (Baumeister & Leary, 1995), and emerging literature recognizes the benefits of solitary living (DePaulo & Morris, 2005).

In the present investigation, we sought to add to this literature in several respects. First, much of our knowledge on the effects of social bonds or solitary living is based on these conditions as stable ways of living (e.g., being married vs. being single). These are important aspects of our social lives, but the knowledge acquired only via a 'stable relations' lens does not capture the dynamics of our social lives as they unfold across the scenes that comprise our daily experiences (Nezlek et al., 2002). Second, research generally does not compare these social conditions (alone/'with others') but studies each condition separately. And, importantly, research has yet to fully account for the substantial variability in ESWB in these two settings. To address these issues we conducted two studies, an experiment and an experience-sampling study, which provided initial answers to these questions.

Our experience-sampling study (Study 2), which sampled more than 4200 episodes across 10 days, uncovered some of the dynamics of (young) individuals’ daily social lives. Participants reported being with others in about 63% (and alone 37%) of the sampled episodes (which were throughout the day), and regardless of the social context, they were also in a setting of their choice in most (64%) of the episodes. These frequencies are consistent with findings reported in previous studies (e.g., Hudson et al., 2020; O'Connor & Rosenblood, 1996), and they imply that individuals (specifically, students) spend non-negligible periods—about a third of their time—in externally imposed social settings.

Do social interactions increase ESWB compared with periods of aloneness? The extant literature associates stable social relations with greater subjective well-being (Diener & Seligman, 2002), but findings are less conclusive for episodic social interactions (Uziel et al., 2020). The results of the present research coincide with the intricacy of the effect and provide directions toward understanding when and how episodic interactions affect well-being. First, being with others is associated with desirable effects if it reinforces one’s sense of agency, and it is detrimental in the absence of control. Supporting this account are our findings on the sense of control, which increased under chosen social settings, along with the increase in ESWB. These findings resonate with early models of the effects of social presence in the social facilitation literature, which emphasized the role of (un)certainty in shaping the reaction to others’ presence (Guerin & Innes, 1982; Zajonc, 1965).

Another path for constructive (vs. destructive) episodic social interactions that emerges from the present findings concerns the sense of meaning. Social contacts were constructive when they were experienced as meaningful. Interestingly, contacts low in meaning (which in an experiential sense are less impactful) were nonetheless associated with a relative reduction in ESWB, highlighting an often-neglected aspect of our daily social life. Furthermore, our findings imply that choosing (and perhaps initiating) social interactions is central in affecting ESWB, thus accounting both for why many people do not initiate such relations (because they generally expect to experience low ESWB in non-chosen settings), and for why they may gain if they act to initiate (i.e., choose to be in) such interactions (Epley & Schroeder, 2014).

In popular and academic writings, episodes of aloneness are often depicted as reflecting reduced subjective well-being compared to social engagement (Larson, 1990; Srivastava, 2008). Our data lend partial support to these findings. Study 2 (but not Study 1) found periods of aloneness to be less conducive to well-being than 'with others' contexts, averaged across the different measures. Differences between these conditions were especially notable for sense of meaning. People felt that their actions were more meaningful 'with others' than alone (with the interaction term significant, but weaker than for other measures). This, though, does not necessarily imply that the alone setting was less desirable, as it could reflect the sense that having others observe your actions makes them more consequential (Baumeister, 1982).

Aloneness (by choice and not) emerged as a setting of relative stability, with participants experiencing their different alone conditions quite similarly. Therefore, solitude might not present immediate benefits to well-being, but it does appear to offer a more predictable experience, and if utilized effectively could be a source of personal growth (Lay et al., 2018; Long & Averill, 2003; Uziel, 2021). A worthy direction for future research would be to compare the immediate and sustained implications of periods of aloneness. Moreover, these findings imply that internal (i.e., non-contextual) factors play a significant role in shaping the effects of aloneness. Indeed, the literature has begun identifying relevant factors, such as personality traits (Uziel, 2016; Uziel et al., 2020), preferences and desires (Coplan et al., 2019; Leary et al., 2003), and developmental periods (Larson et al., 1985).

The most robust effect that emerged from the present two studies is in the intersection of being with others, aloneness, and choice. Choice was substantially more important 'with others' (vs. alone) in determining ESWB, sense of meaning, and control. This finding emerged in controlled settings (Study 1) and in real-life data (Study 2). It is in line with approaches stemming from laboratory research, which associate social presence with polarizing effects (Blascovich et al., 1999; Uziel, 2007, 2015), greater intensity and arousal (Wilt & Revelle, 2019; Zajonc, 1965), and self-presentational concerns (Baumeister, 1982). It is further in line with cognitive approaches suggesting that experiences are amplified in social presence (Boothby et al., 2014; Steinmetz et al., 2016). Our data indicate that, for better or worse, experiences are more intense 'with others', and that the choice of being with others is more consequential to well-being than the choice (vs. not) to be alone.

Last, this study highlights a relatively neglected aspect of research in social psychology, which often applies an experimental approach to the study of social interactions, and consequently non-chosen social settings. The findings inform about the role that chosen social settings play in real-life dynamics, showing that individuals often manage to navigate their social lives by their choice. It is worthwhile to consider this aspect with greater attention in future research.

Limitations and Future Directions

The present research is not free from limitations. First, although sample composition varied between the two studies (e.g., by age and native language), participants were nonetheless from Western cultures (UK and Israel), and Study 2 participants were (mainly female) college students. Perceptions and experiences of aloneness and of being with others may differ by culture and over the lifespan. Marital status, family composition, and work status could affect not only the likelihood of being with others (or alone) by choice (or not) but also one’s experience in these conditions. Future research could extend the present findings beyond the sampled populations and systematically consider the role of different life conditions.

Second, the present studies were focused on transient situational variables, yet individual differences in personality may also affect the experiences in these settings. For example, being alone is experienced differently by individuals varying in neuroticism (Uziel et al., 2020) or in affinity for aloneness (Coplan et al., 2019). Seeking others' company is often affected by extraversion (Wilt & Revelle, 2019) and a range of additional personality traits (e.g., Uziel, 2015). Furthermore, locus of control and self-deception may moderate people's experience of situations as chosen or not.

A third issue concerns the scope of the experiences sampled. Our conclusions are bounded by sampling daily activities in the lives of normative populations. Questions of choice (or lack thereof) and solitude take different forms under extreme conditions, and this warrants separate investigations. Moreover, choice was considered in our study a (subjectively judged) dichotomy. It could be argued that situations are often a mix of choice and constraints. Future research could address this issue by considering different levels of experienced choice. In addition, although our Study 2 sampled a large number of episodes across and within days, it addressed experiences resulting from being in a given situation, but not dynamics resulting from these situations. Future research could address such dynamics by looking at situational contingencies (e.g., likelihood of being alone by choice after being with others), time spent in each situation, and variations in ESWB over extended periods. Additionally, we did not ask about the specific activities that participants were doing (nor about their level of engagement with other people in the ‘with others’ setting). Future research could extend the present findings by emphasizing the type of activities people pursue under these settings.

Relatedly, the present research was focused on self-related constructs. Future research could address implications associated with interpersonal variables (e.g., trust), and objective outcomes (e.g., physiological responses). An additional extension concerns intervention aiming to modify the perceptions of choice in (imposed) social contexts (e.g., while commuting) and their impact on ESWB.

Daughters are less likely than sons to take over their parents’ rightist positions, while parent-son transmission is equally large on the left and the right

Political socialization, political gender gaps, and the intergenerational transmission of left-right ideology. Mathilde M. van Ditmars. European Journal of Political Research, February 21 2022. https://doi.org/10.1111/1475-6765.12517

Abstract: While left and right are the main terms to distinguish political views in Western Europe, the family socialization of citizens has mainly been studied in terms of partisan preferences rather than identification with these ideological blocks. Therefore, this study investigates the intergenerational transmission of left-right ideological positions in two European multiparty systems. To investigate expectations regarding gendered patterns in political socialization, ideological transmission between mothers, fathers, daughters and sons are analysed, making use of German and Swiss household data. The results underline the relevance of the family in the transmission of political ideology in multiparty systems, showing high contemporary parent-child concordance in ideological positioning in line with classic work in political socialization. Moreover, the study demonstrates how the gender-generation gap in political ideology is consequential for this process. Young women consistently place themselves on the left of men across all combinations of parental ideology, which indicates that the gender-generation gap trumps other gendered patterns in intergenerational transmission. Consequently, daughters are less likely than sons to take over their parents’ rightist positions, while parent-son transmission is equally large on the left and the right. This also means that left-leaning parents have a general advantage over right-leaning parents in having their ideological identification reproduced by their daughters. The study highlights the importance of differentiating between the transmission of left- and right-wing ideology in political socialization processes. Moreover, it demonstrates that the distinction by offspring gender is imperative when studying the intergenerational transmission of traits that display gender differences within and between parental and offspring generations. The findings point at the active role of especially female offspring in the political socialization process, as they seem to be more strongly impacted by influences outside the family that sustain generational processes of further gender realignment.


Placebo effects are ubiquitous yet highly variable between individuals; a meta-analysis of 10 different personality traits shows no evidence of associations between them and the magnitude of placebo effects

Kang, Heemin, Miriam S. Miksche, and Dan-Mikael Ellingsen. 2022. “The Association Between Personality Traits and Placebo Effects: A Preregistered Systematic Review and Meta-analysis.” PsyArXiv. March 1. doi:10.31234/osf.io/tc9e8

Abstract: Placebo effects are ubiquitous yet highly variable between individuals, and therefore strongly impact clinical trial outcomes. It is unclear whether dispositional psychological traits influence responsiveness to placebo. This preregistered meta-analysis and systematic review synthesized the literature investigating the association between personality traits and placebo effects. Based on 19 studies with 712 participants, we performed formal meta-analyses for 10 different personality traits. We did not find evidence of associations between any of these traits and magnitude of placebo effects, which was supported by equivalence tests. Furthermore, we did not find evidence for moderating factors such as placebo manipulation type (Conditioning, non-conditioning) or condition (pain, non-pain). However, the current synthesis was not statistically powered for full inquiry into potential conditional or interactive associations between personality and situational variables. These findings challenge the notion that personality influences responsiveness to placebos and contradict its utility for identifying placebo “responders” and “non-responders”.
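For context, correlation meta-analyses of this kind typically pool Fisher-z-transformed correlations with inverse-variance weights; an equivalence test then asks whether the pooled confidence interval falls entirely inside a band of negligible effects. A minimal fixed-effect sketch with invented numbers; the authors' preregistered (likely random-effects) model may differ.

    # Sketch: inverse-variance pooling of correlations (Fisher z).
    # Invented per-study correlations and sample sizes, for illustration.
    import numpy as np

    r = np.array([0.05, -0.10, 0.12, 0.02])  # trait-placebo correlations
    n = np.array([40, 55, 80, 60])           # per-study sample sizes

    z = np.arctanh(r)            # Fisher z-transform
    w = n - 3.0                  # weights: var(z) = 1/(n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    print(f"pooled r = {np.tanh(z_bar):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    # Equivalence test: does the whole CI fall inside, e.g., |r| < 0.10?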


Evidence of a talisman effect of insurance—consumers who have an insurance policy feel that the covered mishap is less likely to occur

Anxiety, Cognitive Availability, and the Talisman Effect of Insurance. Robert M. Schindler et al. Personality and Social Psychology Bulletin, March 1, 2022. https://doi.org/10.1177/01461672221077791

Abstract: Across four experiments (N = 1,923), this research provides converging evidence of a talisman effect of insurance—consumers who have an insurance policy feel that the covered mishap is less likely to occur. Although such an effect has previously been proposed, empirical evidence for it is limited, in part because the talisman effect has often been conflated with a related but distinct magical-thinking phenomenon, the tempting-fate effect. By disentangling these two effects, we are better able to isolate the talisman effect and show that it is a robust phenomenon in its own right. We also provide support for a mechanism underlying the talisman effect: Insurance reduces anxiety and repetitious thoughts related to the mishap; with fewer thoughts about the mishap, its cognitive availability is lower and so it seems less likely to occur.


Keywords: insurance, magical thinking, tempting fate, availability, anxiety


Wednesday, March 2, 2022

In electronic interactions, on average, the public of all ethnic groups (except Blacks themselves) is less likely to respond to emails from people they believe to be Black (rather than White)

Are Americans less likely to reply to emails from Black people relative to White people? Ray Block Jr. et al. Proceedings of the National Academy of Sciences, December 20, 2021, Vol 118 (52) e2110347118. https://doi.org/10.1073/pnas.2110347118

Significance: Although previous attempts have been made to measure everyday discrimination against African Americans, these approaches have been constrained by distinct methodological challenges. We present the results from an audit or correspondence study of a large-scale, nationally representative pool of the American public. We provide evidence that in simple day-to-day interactions, such as sending and responding to emails, the public discriminates against Black people. This discrimination is present among all racial/ethnic groups (aside from among Black people) and all areas of the country. Our results provide a window into the discrimination that Black people in the United States face in day-to-day interactions with their fellow citizens.

Abstract: In this article, we present the results from a large-scale field experiment designed to measure racial discrimination among the American public. We conducted an audit study on the general public—sending correspondence to 250,000 citizens randomly drawn from public voter registration lists. Our within-subjects experimental design tested the public’s responsiveness to electronically delivered requests to volunteer their time to help with completing a simple task—taking a survey. We randomized whether the request came from either an ostensibly Black or an ostensibly White sender. We provide evidence that in electronic interactions, on average, the public is less likely to respond to emails from people they believe to be Black (rather than White). Our results give us a snapshot of a subtle form of racial bias that is systemic in the United States. What we term everyday or “paper cut” discrimination is exhibited by all racial/ethnic subgroups—outside of Black people themselves—and is present in all geographic regions in the United States. We benchmark paper cut discrimination among the public to estimates of discrimination among various groups of social elites. We show that discrimination among the public occurs more frequently than discrimination observed among elected officials and discrimination in higher education and the medical sector but simultaneously, less frequently than discrimination in housing and employment contexts. Our results provide a window into the discrimination that Black people in the United States face in day-to-day interactions with their fellow citizens.
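As an aside on the within-subjects design: when each subject yields a paired binary outcome (replying to the Black-signaling vs. the White-signaling email), one standard analysis is McNemar's test, an exact binomial test on the discordant pairs. A hedged sketch with invented counts, assuming paired responses per subject; the authors' actual estimator may differ.

    # Sketch: exact McNemar test on discordant pairs. Invented counts,
    # not the study's data. Requires scipy.
    from scipy.stats import binomtest

    replied_white_only = 1200  # hypothetical: replied to White sender only
    replied_black_only = 950   # hypothetical: replied to Black sender only

    result = binomtest(replied_black_only,
                       replied_black_only + replied_white_only, p=0.5)
    print(f"exact McNemar p = {result.pvalue:.2e}")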


People were better at detecting Twitter bots when they came from the opposite political camp, while greater experience with social media had a detrimental effect on the hit rate

Duped by Bots: Why Some are Better than Others at Detecting Fake Social Media Personas. Ryan Kenny et al. Human Factors: The Journal of the Human Factors and Ergonomics Society, February 24, 2022. https://doi.org/10.1177/00187208211072642

Abstract

Objective: We examine individuals’ ability to detect social bots among Twitter personas, along with participant and persona features associated with that ability.

Background: Social media users need to distinguish bots from human users. We develop and demonstrate a methodology for assessing those abilities, with a simulated social media task.

Method: We analyze performance from a signal detection theory perspective, using a task that asked lay participants whether each of 50 Twitter personas was a human or social bot. We used the agreement of two machine learning models to estimate the probability of each persona being a bot. We estimated the probability of participants indicating that a persona was a bot with a generalized linear mixed-effects model using participant characteristics (social media experience, analytical reasoning, and political views) and stimulus characteristics (bot indicator score and political tone) as regressors.

Results: On average, participants had modest sensitivity (d’) and a criterion that favored responding “human.” Exploratory analyses found greater sensitivity for participants (a) with less self-reported social media experience, (b) with greater analytical reasoning ability, and (c) who were evaluating personas with opposing political views. Some patterns varied with participants' political identity.
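For reference, the sensitivity (d') and criterion (c) reported here are simple functions of hit and false-alarm rates; a small sketch with invented rates, not the study's.

    # Sketch: signal detection measures from hit and false-alarm rates,
    # treating "bot" as the signal response. Rates below are invented.
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))
        return d_prime, criterion

    dp, c = sdt_measures(0.55, 0.35)  # 55% hits, 35% false alarms
    print(f"d' = {dp:.2f}, c = {c:.2f}")  # positive c: bias toward "human"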

Conclusions: Individuals have limited ability to detect social bots, with greater aversion to mistaking bots for humans than vice versa. Greater social media experience and myside bias appeared to reduce performance, as did less analytical reasoning ability.

Application: These patterns suggest the need for interventions, especially when users feel most familiar with social media.


Keywords: signal detection theory, social bots, social media, analytical reasoning, myside bias


Storytelling is, inter alia, an entertainment technology used as a recruitment tool: it allows story creators to attract and potentially cooperate with those who matter to them, by signaling their qualities & enhancing their reputation as cooperative partners

Why and How Did Narrative Fictions Evolve? Fictions as Entertainment Technologies. Edgar Dubourg and Nicolas Baumard. Front. Psychol., March 1 2022. https://doi.org/10.3389/fpsyg.2022.786770

Abstract: Narrative fictions have surely become the single most widespread source of entertainment in the world. In their free time, humans read novels and comics, watch movies and TV series, and play video games: they consume stories that they know to be false. Such behaviors are expanding at lightning speed in modern societies. Yet, the question of the origin of fictions has been an evolutionary puzzle for decades: Are fictions biological adaptations, or the by-products of cognitive mechanisms that evolved for another purpose? The absence of any consensus in cognitive science has made it difficult to explain how narrative fictions evolve culturally. We argue that current conflicting hypotheses are partly wrong, and partly right: narrative fictions are by-products of the human mind, because they obviously co-opt some pre-existing cognitive preferences and mechanisms, such as our interest in social information, and our abilities to do mindreading and to imagine counterfactuals. But humans reap some fitness benefits from producing and consuming such appealing cultural items, making fictions adaptive. To reconcile these two views, we put forward the hypothesis that narrative fictions are best seen as entertainment technologies, that is, as items crafted by some people with the proximate goal of grabbing the attention of other people, and with the ultimate goal of fulfilling other evolutionary-relevant functions that become easier once other people’s attention is caught. This hypothesis explains why fictions are filled with exaggerated and entertaining stimuli, why they fit so well the changing preferences of the audience they target, and why producers constantly make their fictions more attractive as time goes by, in a cumulative manner.

See also Why imaginary worlds? Exploratory preferences explain the cultural success of fictions with imaginary worlds in modern societies. Edgar Dubourg. Human Behavior & Evolution Society HBES 2021, Jun-Jul 2021. https://www.bipartisanalliance.com/2021/06/fictions-with-imaginary-worlds-should.html.


---

A Specific Kind of Technologies: Entertainment Technologies

The Centrality of Entertainment in Fictions

Literary theorists and historians have long noticed the cross-culturally recurrent and entertaining features of fictions (which have also been called “themes,” “tropes,” or “patterns”) such as adventures, conflicts, love stories, imaginary worlds, monsters, gossip, authority, success, and the search for social status (Kato and Saunders, 1985, p. 232; Pavel, 1986, pp. 147–148; Campbell, 1993; Schaeffer, 1999, p. 241; Huang, 2001, pp. 60–61; Hogan, 2003; Booker, 2004). Evolutionary critics in the humanities and evolutionary social scientists brought evidence that such universal fictional features are influenced by the evolutionary history of the human mind (Carroll, 1995; Gottschall, 2008; Fisher and Salmon, 2012; Saad, 2012; Grodal, 2017). More recently, as we have seen in section The By-product Hypothesis (and the Problem of Fitness Benefits), these cross-cultural features have been linked to specific cognitive preferences (Table 1). In all, there seems to be a large and interdisciplinary consensus that narrative fictions include attractive and entertaining features. The question therefore is: Why are such features attractive and entertaining to the human mind?

We contend that such pleasurable features of fictions are very close to what evolutionary biologists called superstimuli (Tinbergen, 1969; Barrett, 2010). Many studies show that some species, in the course of their evolutionary history, recycled pre-existing attractive traits for new evolutionarily relevant functions such as attracting mates (Lorenz, 1966; Krebs and Dawkins, 1978; Basolo, 1990; Ryan et al., 1990). For instance, because the female frog Physalaemus pustulosus had developed preferences for lower-frequency chuck sounds, males evolved the ability to produce such sounds to tap into this sensory preference (Ryan et al., 1990).

In nonhuman animals, this recycling of preexisting preferences usually emerges through biological selection. In humans, it can emerge through cultural evolution: producers use their expertise to target and refine stimuli that are already appealing to consumers (Lightner et al., 2022), so as to fulfill fitness relevant goals (Singh, 2020). We will explain what these goals are in the next sub-section.

We therefore argue that content features in fictions are superstimuli: they are crafted to resemble stimuli that were already appealing to the human mind, because of the natural selection of attention-orienting cognitive mechanisms, and of the pleasure systems rewarding the behavior of paying attention to such stimuli. This is a form of what psychologists have called “content-based attraction,” when the attraction and prevalence of a cultural item is favored by its content (Sperber, 1996; Claidière and Sperber, 2007; Scott-Phillips et al., 2018).

A question follows: Why are such stimuli attention-grabbing in the first place (in the real world)? This is where we fall back on the by-product hypothesis: such preferences for some stimuli (e.g., social information) evolved because humans endowed with them survived and reproduced better in the ancestral environments when the human cognition evolved.

In evolutionary and cognitive approaches to fictional content, superstimuli have already been studied in fictional texts (Jobling, 2001; Nettle, 2005a,b; Singh, 2019), in movies (Cutting et al., 2011; Andrews, 2012; Clasen, 2012; Cutting, 2016, 2021; Sobchuk and Tinits, 2020), in video games (Jansz and Tanis, 2007; Mendenhall et al., 2010), in artistic representations (Verpooten and Nelissen, 2010, 2012), and in cross-media approaches to fiction (Grodal, 2010; Barrett, 2016; Dubourg and Baumard, 2021). Let us note that such fictional superstimuli can be narrative superstimuli (e.g., how Marcel in In Search of Lost Time reaches prestige), visual superstimuli (e.g., the form of Mickey), auditory superstimuli (e.g., the terrifying sounds in horror films), and other sensory superstimuli (e.g., the sense of control in open-world video games or in virtual reality games). Producers of fictions use any means available to them to make the most attention-grabbing superstimuli and therefore the most entertaining fictions.

Of course, the pleasure-inducing effect elicited by superstimuli in fictions is also elicited by some other cultural behaviors and products, such as sport and news (Barrett, 2010, 2016). This is because the fiction industry is not the only one to target entertainment. However, the presence of superstimuli successfully isolates fiction from non-fiction, because superstimuli are never included in non-fictional narratives: the obligation to (try to) stick to real facts prevents, to a large extent, producers of non-fictional narratives from inventing and exaggerating any feature (or else their epistemic reputation might suffer, and the benefits of attracting other people’s attention would be outweighed by the reputational costs of having deceived their audience). We contend that such a distinction is intuitive to consumers: they will continue to consume and positively evaluate fictions that they take pleasure from, while they will either stop consuming or negatively evaluate fictions that betray the expectation to be entertained. Conversely, when they consume non-fictional narratives, such as a philosophical treatise, a political essay, or a history documentary, their primary goal is to learn things, so they will not stop consuming the non-fiction if they are not entertained, and they will not base their evaluation on this criterion.

The Fitness Consequences of Entertainment Technologies

Why would producing fictions be adaptive? With the entertainment hypothesis, this question is the same as the following one: Why would attracting the attention of other people by inventing entertaining cultural items bring any fitness benefit? We propose that, because they are highly attractive and entertaining, fictions can be used to fulfill any evolutionarily relevant goal that needs others’ attention to be caught, be it signaling one’s values to potential mates (Miller, 2001) or cooperative partners (Bourdieu, 2010; André et al., 2020; André and Baumard, 2020; Dubourg et al., 2021b; Lightner et al., 2022), transmitting knowledge (Schniter et al., 2018; Nakawake and Sato, 2019; Sugiyama, 2021b), communicating social norms (Mar and Oatley, 2008; Ferrara et al., 2019), or selling products (Saad and Gill, 2000; Saad, 2012).

Consistently, narrative fictions seem to have been used (1) as recruitment technologies: they allow the producers of fictions to attract and potentially cooperate with individuals that matter to them, by signaling their qualities (e.g., their competence, their moral sense, and their intelligence) and therefore enhancing their reputation as cooperative partners (Sperber and Baumard, 2012). For instance, in many countries at most times in history, cultural institutions and organizations aimed at spotlighting the producers of fictions, from the poetry contests (uta-awase) in Japan from the Heian period to the modern Nobel Prize in Literature and movie Academy Awards. Narrative fictions are also obviously used to (2) derive economic or material gains. This is clearly pictured in the form fiction production and fiction consumption took in large-scale societies, that of a massive (and highly lucrative) contract-based market.

Crucially, such adaptive goals need not be conscious or deliberate. They need not be the only motivations either: drawing on adaptive hypotheses that we reviewed in section State of the Current Hypotheses, producers of fictions can have other goals, such as transmitting knowledge (Sugiyama, 2021a). The association between the motivations of educating and entertaining people has produced a form of cultural device called “Edutainment” (Singhal, 2004; Anikina and Yakimenko, 2015), which we argue emerged far back in human cultural history, encompassing not only recent fictions (e.g., Dora the Explorer), but also ancient folktales (Sugiyama, 2021b) and other literary forms such as pre-17th century European fairy tales.

According to this hypothesis, narrative fictions are sustained because they confer fitness benefits to the consumers too. First, let us note that the opportunity costs of fiction consumption seem rather low because people do not seem to consume fictions at the expense of other more “evolutionary relevant” activities such as sleeping, eating, and parenting. On the other hand, consumers can use fictions they liked to signal their skills (Veblen, 1899; Bourdieu, 1979; Lizardo, 2006, 2013). They can also use more culturally successful fictions they liked to signal their personality traits (Dubourg et al., 2021a), or to share cultural focal points for social coordination (Dubourg et al., 2021b,c). Besides, human minds have evolved specialized cognitive mechanisms to detect and use social markers for coordination (Nettle and Dunbar, 1997; Boyer, 2018). We propose that preferences for fictions have become relatively important markers in the ecology of modern cultural diversity, because of their signaling potential.

Summary of the Hypothesis

In all, we propose that humans did not specifically evolve the capacity to tell fictional stories; rather, they produce fictions thanks to a range of other adaptations (e.g., language, the capacity to simulate, Theory of Mind, and communicative inferences; Zunshine, 2006; Mellmann, 2012; Wilson, 2018). Yet, we do not consider fictions as “by-products,” because they clearly confer fitness benefits on the producers (André et al., 2020). We argue that fictions are “entertainment technologies” (Dubourg and Baumard, 2021): they are crafted by storytellers to artificially attract the attention of other people and then fulfill evolutionarily relevant goals (Singh, 2020). Obviously, fictions are not the only example of entertainment technologies. Sport, TV shows (Barrett, 2010, 2016), music (Dubourg et al., 2021a), and the performing arts (Verpooten and Nelissen, 2010, 2012) are also entertainment technologies in the sense that they are created to capture people’s attention, and are consumed because they exaggerate the features of phenomena (e.g., the human voice and interindividual competition) that humans evolved to be interested in.