Wednesday, March 11, 2020

Prevention of Psychosis—Advances in Detection, Prognosis, & Intervention; but evidence favoring any particular preventive intervention over another remains highly uncertain

Prevention of Psychosis—Advances in Detection, Prognosis, and Intervention. Paolo Fusar-Poli et al. JAMA Psychiatry, March 11, 2020. doi:10.1001/jamapsychiatry.2019.4779

Key Points
Question  What is the status of current clinical knowledge in the detection, prognosis, and interventions for individuals at risk of psychosis?

Findings  In this review of 42 meta-analyses encompassing 81 outcomes, detecting individuals at risk for psychosis required knowledge of their specific sociodemographic, clinical, functional, cognitive, and neurobiological characteristics, and predicting outcomes was achieved with good accuracy provided that assessment tools were used in clinical samples. Evidence for specific effective interventions for this patient population is currently insufficient.

Meaning  Findings of this review suggest that, although clinical research knowledge for psychosis prevention is substantial and detecting and formulating a prognosis in individuals at risk for psychosis are possible, further research is needed to identify specific effective interventions in individuals with sufficient risk enrichment.


Abstract
Importance  Detection, prognosis, and indicated interventions in individuals at clinical high risk for psychosis (CHR-P) are key components of preventive psychiatry.

Objective  To provide a comprehensive, evidence-based systematic appraisal of the advancements and limitations of detection, prognosis, and interventions for CHR-P individuals and to formulate updated recommendations.

Evidence Review  Web of Science, Cochrane Central Register of Reviews, and Ovid/PsychINFO were searched for articles published from January 1, 2013, to June 30, 2019, to identify meta-analyses conducted in CHR-P individuals. MEDLINE was used to search the reference lists of retrieved articles. Data obtained from each article included first author, year of publication, topic investigated, type of publication, study design and number, sample size of CHR-P population and comparison group, type of comparison group, age and sex of CHR-P individuals, type of prognostic assessment, interventions, quality assessment (using AMSTAR [Assessing the Methodological Quality of Systematic Reviews]), and key findings with their effect sizes.

Findings  In total, 42 meta-analyses published in the past 6 years and encompassing 81 outcomes were included. For the detection component, CHR-P individuals were young (mean [SD] age, 20.6 [3.2] years), were more frequently male (58%), and predominantly presented with attenuated psychotic symptoms lasting for more than 1 year before their presentation at specialized services. CHR-P individuals accumulated several sociodemographic risk factors compared with control participants. Substance use (33% tobacco use and 27% cannabis use), comorbid mental disorders (41% with depressive disorders and 15% with anxiety disorders), suicidal ideation (66%), and self-harm (49%) were also frequently seen in CHR-P individuals. CHR-P individuals showed impairments in work (Cohen d = 0.57) or educational functioning (Cohen d = 0.21), social functioning (Cohen d = 1.25), and quality of life (Cohen d = 1.75). Several neurobiological and neurocognitive alterations were confirmed in this study. For the prognosis component, the prognostic accuracy of CHR-P instruments was good, provided they were used in clinical samples. Overall, risk of psychosis was 22% at 3 years, and the risk was the highest in the brief and limited intermittent psychotic symptoms subgroup (38%). Baseline severity of attenuated psychotic (Cohen d = 0.35) and negative symptoms (Cohen d = 0.39) as well as low functioning (Cohen d = 0.29) were associated with an increased risk of psychosis. Controlling risk enrichment and implementing sequential risk assessments can optimize prognostic accuracy. For the intervention component, no robust evidence yet exists to favor any indicated intervention over another (including needs-based interventions and control conditions) for preventing psychosis or ameliorating any other outcome in CHR-P individuals. However, because the uncertainty of this evidence is high, needs-based and psychological interventions should still be offered.

Conclusions and Relevance  This review confirmed recent substantial advancements in the detection and prognosis of CHR-P individuals while suggesting that effective indicated interventions need to be identified. This evidence suggests a need for specialized services to detect CHR-P individuals in primary and secondary care settings, to formulate a prognosis with validated psychometric instruments, and to offer needs-based and psychological interventions.



Introduction

Detection, assessment, and intervention before the onset of a first episode of the disorder in individuals at clinical high risk for psychosis (CHR-P) have the potential to maximize the benefits of early interventions in psychosis.1,2 The CHR-P paradigm originated in Australia 25 years ago3 and has since gained enough traction to stimulate hundreds of research publications. These published studies have been summarized in evidence synthesis studies spanning different topics and have influenced several national4 and international5 clinical guidelines and diagnostic manuals (eg, DSM-56,7). Overall, CHR-P represents the most established preventive approach in clinical psychiatry; therefore, periodically reviewing its progress and limitations is essential.

The rapid developments of detection, prognostic, and intervention-focused knowledge in the CHR-P field have not yet been integrated into a comprehensive, evidence-based summary since a 2013 publication in JAMA Psychiatry.8 Produced by the European College of Neuropsychopharmacology Network on the Prevention of Mental Disorders and Mental Health Promotion,9 the present study aimed to provide the first umbrella review summarizing the most recent evidence in the CHR-P field. An additional objective was to provide evidence-based recommendations for the 3 core components that are necessary to implement the CHR-P paradigm in clinical practice: detection, prognosis, and intervention.10

Methods
The protocol of this study was registered in PROSPERO (registration No. CRD42019135880). This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA) reporting guideline11 and the Reporting Items for Practice Guidelines in Healthcare (RIGHT) statement12 (eTable 1 in the Supplement).

Search Strategy and Selection Criteria
A multistep literature search was performed for articles published between January 1, 2013, and June 30, 2019 (eMethods 1 in the Supplement). Web of Science, Cochrane Central Register of Reviews, and Ovid/PsychINFO were searched for meta-analyses conducted in CHR-P individuals, and MEDLINE was used to search the reference lists of retrieved articles. The literature search, study selection, and data extraction were conducted independently by 2 of us (G.S.d.P., P.F.-P.), and consensus was reached through discussion.

Studies included were (1) meta-analyses (pairwise or network; aggregate or individual participant data) published as original investigations, reviews, research letters, or gray literature without restriction on the topic investigated13; (2) conducted in CHR-P individuals (ie, individuals meeting ultra-high-risk and/or basic symptoms criteria) as established by validated psychometric instruments8 (eMethods 2 in the Supplement) without restriction on the type of comparison group; and (3) published in the past 6 years.

Studies excluded (1) were original studies, study protocols, systematic reviews without quantitative analyses, and other non-meta-analytical studies; (2) did not formally assess and select participants with established CHR-P instruments; or (3) were abstracts and conference proceedings. Data obtained from each article included first author, year of publication, topic investigated, type of publication, study design and number, sample size of CHR-P population and comparison group, type of comparison group, age and sex of CHR-P individuals, type of prognostic assessment, interventions, quality assessment (using AMSTAR [Assessing the Methodological Quality of Systematic Reviews]), and key findings with their effect sizes.

To respect the hierarchy of the evidence (eMethods 3 in the Supplement), if 2 or more meta-analyses addressing the same topic were found, we gave preference to individual participant data meta-analyses over aggregate network meta-analyses and to network meta-analyses over pairwise meta-analyses. The most recent study was selected when the previous criteria did not apply. If, after applying the hierarchical criteria, 2 studies were similar, both were included.

Outcome Measures, Data Extraction, and Timing and Effect Measures
From each study, a predetermined set of outcome measures (eMethods 4 in the Supplement) was extracted. The results were then narratively reported in tables, clustered around 3 core domains: detection, prognosis, and intervention.

When feasible, effect size measures were estimated through Cohen d. Other effect size measures were converted to Cohen d.13 In case of meta-analyses reporting time-dependent risks or rates or descriptive data only, proportions (95% CIs) or means (SDs) were summarized.
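
As a rough guide (these are standard conversion formulas, not spelled out in the paper itself), other effect size metrics map onto Cohen d as follows:

d = 2r / √(1 − r²) for a correlation r (e.g., r = .30 gives d ≈ 0.63)
d = ln(OR) × √3 / π ≈ 0.551 × ln(OR) for an odds ratio (e.g., OR = 2 gives d ≈ 0.38)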

Quality Assessment
The quality of the included meta-analyses was assessed with the AMSTAR tool.14 Details of the meta-analyses and items evaluated are found in eMethods 5 in the Supplement.

Standards for Guidelines Development
To develop the recommendations, we followed the US Preventive Services Task Force (USPSTF) grading system15 (eTable 2 in the Supplement), which is suited explicitly for preventive approaches and has received extensive validation in articles published in several journals, including JAMA.16-21 Guideline development followed the JAMA Clinical Guidelines Synopsis, reaching consensus across the multidisciplinary European College of Neuropsychopharmacology Network on the Prevention of Mental Disorders and Mental Health Promotion.9 The rationale for the recommendations was provided. Conflicts of interest were fully detailed.

Results
The literature search yielded 886 citations, which were screened for eligibility, and 55 of them were considered. After checking the inclusion and exclusion criteria, we included 42 meta-analyses encompassing 81 outcomes in the final analysis (Figure 1; eTables 3 to 11 in the Supplement).

Detection

Characteristics
No meta-analysis focused on the basic symptoms criteria. Overall, 85% (95% CI, 79%-90%) of CHR-P individuals met the attenuated psychosis symptoms (APS) criteria,22 10% (95% CI, 6%-14%) met the brief limited intermittent psychotic symptoms (BLIPS) criteria,22 and 5% (95% CI, 3%-7%) met the genetic risk and deterioration (GRD) syndrome criteria.22 The mean (SD) age of CHR-P individuals across the included studies was 20.6 (3.2) years, with a range of 12 to 49 years.5,22-52 These individuals were predominantly male (58%)22-29,31,33,35-43,46-50,53,54 and had attenuated psychotic symptoms lasting for more than 1 year before their presentation to specialized services. Several studies included underage patients.5,22-30,32-35,39-50,52,55,56 No differences were observed across the APS, BLIPS, and GRD subgroups.22 However, the mean (SD) duration of untreated attenuated psychotic symptoms tended to be shorter in the BLIPS group (435.8 [456.4] days) compared with the GRD group (783.5 [798.6] days) and APS group (709.5 [518.5] days) (eTable 3 in the Supplement).

Genetic and Environmental Risk and Protective Factors
Individuals who met CHR-P criteria, compared with those who did not, were more likely to have olfactory dysfunction (Cohen d = 0.71),57 be physically inactive (Cohen d = 0.7), have obstetric complications (Cohen d = 0.62), be unemployed (Cohen d = 0.57), be single (Cohen d = 0.27), have a low educational level (Cohen d = 0.21), and be male (Cohen d = 0.18).55 Trauma, which encompassed childhood emotional abuse (Cohen d = 0.98),55 high perceived stress (Cohen d = 0.85),55 childhood physical neglect (Cohen d = 0.62),55 and being bullied (Cohen d = 0.62)56 (eTable 4 in the Supplement; Figure 2), was also more frequent (87% for overall trauma)23 and severe (Cohen d = 1.38)56 in CHR-P individuals compared with the control groups. No meta-analysis addressed the association between genetic factors and the CHR-P state.

Substance Use
A statistically significant association was found between the CHR-P state and tobacco use (Cohen d = 0.61).55 Altogether, 33% of CHR-P individuals smoked tobacco compared with 14% in the control groups.58 Those in the CHR-P group were also more likely to be current cannabis users than control participants (27% vs 17%).53 Current cannabis use disorder was associated with an increased risk of psychosis (Cohen d = 0.31), whereas lifetime cannabis use was not.24 Higher levels of unusual thought content (Cohen d = 0.27) and suspiciousness (Cohen d = 0.21) were found in CHR-P individuals who were cannabis users compared with non–cannabis users,53 but attenuated positive or negative psychotic symptoms did not differ between these 2 groups53 (eTable 5 in the Supplement).

Clinical Comorbidity
Depressive (41%) and anxiety (15%) disorders were frequent in the CHR-P state.25 Most CHR-P individuals presented with suicidal ideation (66%).26 The prevalence of self-harm was 49% and of suicide attempts was 18% in CHR-P individuals26 (eTable 6 in the Supplement).

Functioning and Quality of Life
CHR-P individuals had lower levels of adolescent (Cohen d = 0.96-1.03) and childhood (Cohen d = 1.0) functioning compared with control participants.55 Functional impairments in CHR-P individuals were as severe as impairments in other mental disorders and were more severe than in control participants (Cohen d = 3.01)27 but were less severe than in established psychosis (Cohen d = 0.34). The CHR-P status was also associated with significant social deficits (Cohen d = 1.25).55 Quality of life was worse in CHR-P individuals than in control individuals (Cohen d = 1.75),27 whereas no differences from individuals with psychosis27 were reported (eTable 7 in the Supplement).

Cognition
Visual learning (Cohen d = 0.27), processing speed (Cohen d = 0.42), and verbal learning (Cohen d = 0.42)54 were impaired in CHR-P individuals compared with control participants. CHR-P individuals who later developed psychosis showed poorer cognitive functioning (Cohen d = 0.24-0.54)54 compared with those who did not develop psychosis. However, no evidence of cognitive decline was found from baseline to follow-up in CHR-P individuals at any time.28 Although social cognition was impaired in CHR-P individuals compared with control individuals (Cohen d = 0.48),30 theory of mind was less impaired than in participants with first-episode psychosis (Cohen d = 0.45).31 CHR-P individuals showed more metacognitive dysfunctions (Cohen d = 0.57-1.09) than control participants but were similar to those with established psychosis29 (eTable 8 in the Supplement).

Neuroimaging and Biochemistry
CHR-P individuals had decreased blood interleukin 1β (IL-1β) levels33 (Cohen d = 0.66), increased salivary cortisol levels (Cohen d = 0.59),32 and increased blood IL-633 (Cohen d = 0.31) compared with control groups. The thalamus was smaller in CHR-P individuals than in control participants (Cohen d = 0.60),36 whereas no significant differences in the pituitary volume were found.37 The right hippocampal volume (unlike the left one) was also significantly smaller in CHR-P individuals38 compared with control participants (Cohen d = 0.24).38 Levels of glutamate and glutamine (measured together) were higher in the medial frontal cortex of CHR-P individuals than in control participants (Cohen d = 0.26).34

Compared with control individuals, CHR-P individuals showed decreased activations in the right inferior parietal lobule and left medial frontal gyrus and increased activations in the left superior temporal gyrus and right superior frontal gyrus35 (eTable 9 in the Supplement). As for neurophysiological processes, the mismatch negativity amplitude was reduced in CHR-P individuals compared with control participants (Cohen d = 0.4)39 and in CHR-P individuals who developed psychosis compared with those who did not (Cohen d = 0.71).59 A theoretical neurobiological model of the CHR-P state, which integrates these findings, is reported in Figure 3.60

Prognosis
Overall Prognostic Performance
Currently used semistructured interviews for psychosis prediction have an excellent overall prognostic performance (area under the curve [AUC] = 0.9).42 However, their sensitivity is high (96%) and specificity is low (47%),42 and they are not valid outside clinical samples that have undergone risk enrichment (ie, screening the general population is not useful)42 (Figure 2). The CAARMS (Comprehensive Assessment of At-Risk Mental States) instrument has an acceptable (AUC = 0.79) accuracy for predicting psychosis,43 and it has no substantial differences in prognostic accuracy from other CHR-P instruments,42 although the Structured Interview of Psychosis-Risk Syndromes has a slightly higher sensitivity (95%) than the CAARMS (86%).43 The reason for this lack of difference in prognostic accuracy is that most of the risk for psychosis (posttest risk) is accounted for by the way these individuals are recruited and sampled (pretest risk, independent from clinically verified CHR-P status) before the CHR-P test is administered.41 Pretest risk for psychosis is 15% at 3 years and is heterogeneous, ranging from 9% to 24%. Variability in pretest risk for psychosis is modulated by the type of sampling strategies,41 increasing if samples are recruited from secondary care and decreasing if samples are recruited from the community41 (Figure 2; eTable 10 in the Supplement).
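
To make the role of risk enrichment concrete, here is an illustrative back-of-the-envelope calculation (my own arithmetic, not reported in the review) using the figures above:

positive likelihood ratio = sensitivity / (1 − specificity) = 0.96 / 0.53 ≈ 1.8
pretest odds = 0.15 / 0.85 ≈ 0.18, so posttest odds ≈ 0.18 × 1.8 ≈ 0.32
posttest risk ≈ 0.32 / 1.32 ≈ 24%

This is close to the observed 22% risk of psychosis at 3 years. Applied to an unenriched community sample with, say, a 1% pretest risk, the same instrument would yield a posttest risk of only about 2%, which is why screening the general population is not useful.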

The proportion of CHR-P individuals who developed a psychotic disorder (positive posttest risk, updated in 2016) was 22% at 3 years (Figure 4).40 The speed of transition to psychosis was greatest in the first months after CHR-P individuals presented to clinical services (median time to psychosis = 8 months).61 Transition to schizophrenia-spectrum psychoses was more than 6 times more frequent (73%) than transition to affective psychoses (11%), whereas transition to other psychoses was 16%.40 The transition risk to psychosis was higher in the BLIPS subgroup (38%) than in the APS (24%) and GRD (8%) subgroups at the 48-month follow-up or later,22 whereas the GRD subgroup was not at higher risk compared with the help-seeking control participants (which represents the standard comparative group during CHR-P interviews).22 No prognostic difference in the risk of psychotic recurrence was found across different operationalizations of short-lived psychotic episodes, including acute and transient psychotic disorders and brief psychotic disorders, but this risk was lower than in patients with remitted first-episode schizophrenia62 (eTable 10 in the Supplement). In the BLIPS group, the 2-year risk of developing schizophrenia was 23% and affective psychoses was null.62 Conversely, the remission rate of the baseline CHR-P symptoms was 35% at 1.94 years’ follow-up.45 No data were available on the remission rates across the BLIPS, APS, and GRD subgroups.

Prediction of Outcomes
Among CHR-P individuals, transition to psychosis was associated with severity of negative symptoms (Cohen d = 0.39), right-handedness (Cohen d = 0.26), severity of attenuated positive psychotic symptoms (Cohen d = 0.35), disorganized and cognitive symptoms (Cohen d = 0.32), unemployment (Cohen d = 0.32), severity of total symptoms (Cohen d = 0.31), low functioning (Cohen d = 0.29), severity of general symptoms (Cohen d = 0.23), living alone (Cohen d = 0.16), male sex (Cohen d = 0.10), and lifetime stress or trauma (Cohen d = 0.08) (eTable 11 in the Supplement; Figure 2).63 However, only severity of attenuated psychotic symptoms and low functioning (highly suggestive level of evidence13) and negative symptoms (suggestive level of evidence13) were associated with psychosis onset after controlling for several biases.63 Comorbid anxiety and depressive disorders were not significantly associated with transition to psychosis.25 No data were available on the predictors of outcomes other than psychosis onset.

Prognostic accuracy may be optimized by controlling pretest risk enrichment55 and using sequential assessments that include a staged assessment based on clinical information, electroencephalogram, neuroimaging, and blood markers44 (eTable 11 in the Supplement).

Interventions
No evidence was found that favored any indicated intervention over another (including needs-based interventions or control conditions) for preventing the transition to psychosis.46 Likewise, no evidence supported the superior efficacy of any intervention over another for reducing attenuated positive psychotic symptoms47,48 (2 meta-analyses on the same topic were retained after the hierarchical criteria were applied) or negative symptoms,49 improving overall functioning5 or social functioning,50 alleviating depression,52 improving symptom-related distress or quality of life,51 or affecting acceptability46 in CHR-P individuals (eTable 12 in the Supplement).

Discussion
To our knowledge, this study is the first comprehensive review (42 meta-analyses with 81 outcomes) focusing on detection, prognosis, and intervention of CHR-P individuals. No meta-analyses had reported consistent results from well-designed, well-conducted studies related to detection, prognosis, or interventions in representative primary care populations (USPSTF criteria for high level of certainty).

The detection of CHR-P individuals has a moderate level of certainty (grade B; Table). Research in the past 6 years has revealed that detection of truly at-risk individuals may be the key rate-limiting step toward a successful implementation of the CHR-P paradigm at scale. Although the CHR-P group is heterogeneous, its baseline sociodemographic characteristics are now clearer; typically, these individuals were young (mean [SD] age, 20.6 [3.2] years) men (58%) who presented with APS and had associated impairments in global functioning (Cohen d = 3.01), social functioning (Cohen d = 1.25),55 and quality of life (Cohen d = 1.75)27; suicidal ideation (66%26); self-harm (49%26); and suicide attempts (18%26). Because of these problems, these individuals sought help at specialized clinics; however, typically, these problems remained undetected (and untreated) for 1 year or more.

Currently, detection of CHR-P individuals is entirely based on their referral on suspicion of psychosis risk and on the promotion of help-seeking behaviors. These detection strategies appear inefficient: only about 5%64 to 12%65 of first-episode cases were detected at the time of their CHR-P stage through stand-alone or youth mental health services. A further caveat is that approximately one-third of first-episode cases may not pass through a CHR-P stage before the onset of psychosis.66,67 Furthermore, at presentation, CHR-P individuals often had comorbid nonpsychotic mental disorders (41% depressive disorders and 15% anxiety disorders25) and substance use (33% tobacco use58 and 27% cannabis use53). Because of these limitations, the chain of evidence lacked coherence (per USPSTF grading system; eTable 2 in the Supplement). These issues could be addressed by integrated detection programs that leverage automatic detection tools for screening large clinical10,64,68 and nonclinical69 samples in a transdiagnostic70 fashion, encompassing primary and secondary care, the community,71 and youth mental health services.72 In addition, the detection of CHR-P individuals is currently based on the assessment of symptoms, but symptoms may be only the epiphenomena of underlying pathophysiological processes. CHR-P individuals often have several established sociodemographic, environmental, and other types of risk factors for psychosis,73 including male sex, unemployment, single status, low educational and functional level, obstetric complications, physical inactivity, olfactory dysfunction, and childhood trauma (Figure 2; eDiscussion 1 in the Supplement). Incorporating the assessment of these multiple factors with CHR-P symptoms, resulting in a Psychosis Polyrisk Score, may produce refined detection approaches74 that better map the etiopathological path of psychosis onset.

The prognosis of CHR-P individuals has a moderate level of certainty (grade B; Table).47,75 Converging evidence has demonstrated that CHR-P assessment instruments have good prognostic accuracy (AUC = 0.9)42 for the prediction of psychosis, comparable to the accuracy of clinical tools used in other areas of medicine.42 However, alternative instruments are needed to predict other nonpsychotic outcomes (eg, bipolar onset in those at risk76,77). No substantial prognostic accuracy differences were found across different CHR-P tools.42 The current CHR-P prediction instruments have high sensitivity (96%) but low specificity (47%) and are valid only if applied to clinical samples that have accumulated the above risk factors and have therefore already undergone substantial risk enrichment (Figure 2). In fact, it is not only CHR-P criteria that determine the probability of transition to psychosis but also the recruitment and selection of samples, which modulate enrichment in risk.47,78 The next generation of research should better deconstruct and control risk enrichment79 to maximize the scalability of the use of the CHR-P prediction instruments.71 The 3-year meta-analytic risk of psychosis onset in the entire CHR-P group has declined from 31.5% (estimated in 201280) to the current 22% (Figure 4), although not globally.81

Transition risk has decreased when recruitment strategies focused on the community as opposed to primary or secondary care (eDiscussion 2 in the Supplement). Risk was the highest in the BLIPS subgroup (38% at 4 years; 89% at 5 years if there were “seriously disorganising or dangerous”82 features), intermediate in the APS subgroup (24% at 4 years), and lowest in the GRD subgroup (8% at 4 years).22 Those in the GRD subgroup were not at higher risk than the help-seeking control individuals at up to 4 years of follow-up.22 A revised version of the CHR-P paradigm, which includes stratification across these 3 subgroups, has therefore been proposed.2,83 The BLIPS group also overlapped substantially with the acute and transient psychotic disorders in the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision.82 Therefore, current CHR-P instruments can allow only subgroup-level (ie, BLIPS>APS>GRD) but not participant-level prognosis (inconsistent evidence, USPSTF grading; eTable 2 in the Supplement).

To refine prognosis at the individual participant level, future research may consider specific risk factors (eg, sex, stress and trauma, employment, and living status63), biomarkers (eg, hippocampal volume38), or cognitive markers (eg, processing speed, verbal and visual memory, and attention84) in addition to the CHR-P subgroups22 and clinical symptoms (only severity of attenuated positive and negative symptoms and level of functioning are robust risk factors for psychosis63). The potential of this approach has been supported by the development and validation of individualized clinical prediction models that leverage multimodal risk profiling,64,85,86 including dynamic87 risk prediction models.88 Because these models tend to be more complex compared with standard symptomatic CHR-P assessments, they are more likely to enter clinical routine through a sequential testing framework44 (eDiscussion 3 in the Supplement). Good outcomes in CHR-P individuals have not been fully operationalized,89 and information is lacking on prediction of relevant clinical outcomes (USPSTF; eTable 2 in the Supplement), such as functional level and quality of life, with only approximately one-third of individuals remitting from their initial CHR-P state.45

The available evidence is insufficient (grade C/I; Table) to assess the effects of preventive interventions on health outcomes in CHR-P groups. Although earlier meta-analyses found advantages to cognitive behavioral therapy,90 which is currently recommended by clinical guidelines,4 the inclusion of new trials in recent meta-analyses has indicated no clear benefits to favor any available intervention over another intervention or any control condition, such as needs-based interventions. An independent pairwise meta-analysis published by the Cochrane Group after completion of the present study concluded that no convincing, unbiased, high-quality evidence exists that favors any type of intervention.91 Evidence is insufficient because these studies tended to report large CIs and therefore high uncertainty (USPSTF; eTable 2 in the Supplement) in the meta-analytic estimates, and significant implications of the interventions for specific subgroups may not have been detected. For example, the needs-based interventions that are typically used as control conditions may have diluted the comparative efficacy of experimental interventions. This nondifferential outcome could also be an effect of the sampling biases leading to too few CHR-P individuals in the intervention studies who were at true risk for psychosis, diluting the statistical power of current trials that may have not been able to detect small to modest effect sizes (USPSTF; eTable 2 in the Supplement).92 This lack of demonstrable advantages of specific interventions could also be the consequence of one-size-fits-all approaches in treating CHR-P individuals that go against the clinical, neurobiological, and prognostic heterogeneity of this group and against the recent calls for precision medicine. For example, CHR-P interventions to date have been developed largely for individuals with APS at the expense of those with BLIPS, who are often unwilling to receive the recommended interventions. Another explanation for the lack of comparative efficacy of preventive interventions is that they have largely targeted symptoms, as opposed to key neurobiological processes associated with the onset of psychosis (gaps in the chain of evidence, USPSTF; eTable 2 in the Supplement; Figure 3) or risk factors that could be modified (eg, physical inactivity; Figure 2). We believe that future experimental interventions should also better target relevant outcomes (USPSTF; eTable 2 in the Supplement) other than psychosis onset, including functioning, given the poor remission rates and low functioning of this population.93 As acknowledged by the USPSTF criteria (eTable 2 in the Supplement), in the case of uncertainty, new trials published in the near future may allow a more accurate estimation of the preventive implications for health outcomes.

Grading the recent meta-analytic evidence described in this review, the European College of Neuropsychopharmacology Network on the Prevention of Mental Disorders and Mental Health Promotion has recommended (Table) implementing specialized services to detect CHR-P individuals in primary and secondary care settings and to formulate a prognosis with the validated psychometric instruments.9 Owing to insufficient evidence favoring any particular preventive intervention over another (including control conditions) and considering the uncertainty of the current evidence, no firm conclusions can be made,91 and a cautious approach is required. This approach should involve offering the least onerous feasible primary indicated prevention based on needs-based interventions and psychotherapy (cognitive behavioral therapy or integrated psychological interventions), titrated in accordance with the patient characteristics and risk profile (CHR-P subgroup levels BLIPS>APS>GRD, severity of attenuated positive and negative symptoms, and level of functioning), values, and preferences of the individual.94,95 In addition, other comorbid psychiatric conditions should be treated according to available guidelines, with the aim of improving recovery, functional status, and quality of life beyond preventive aims.

Limitations
The main limitations of this study were that the meta-analyses had heterogeneous quality (eResults in the Supplement) and that the literature search approach may have favored the selection of more commonly and readily studied domains that are more likely to be included in a meta-analysis. We cannot exclude the possibility that some promising advancements in the CHR-P field, despite having sufficient data, do not (yet) have a corresponding eligible meta-analysis, such as polygenic risk scores.96 However, in the current era, this possibility is becoming increasingly less likely, with meta-analyses being performed frequently, to the point that multiple meta-analyses are available for the same topic.97-99 In any case, for most putative domains that are difficult to study (or uncommonly studied), the current grade of evidence is unlikely to be remarkable, given the limited data.

Conclusions
Over recent years, substantial advancements in the detection and prognosis of CHR-P individuals have been confirmed in several meta-analyses. However, further research is needed to optimize risk enrichment and stratification and to identify effective interventions that target quantitative individualized risk signatures for both poor and good outcomes.

Longer Religious Fasting Increases Support for Islamist Parties: Evidence from Ramadan

Aksoy, Ozan, and Diego Gambetta. 2020. “Longer Religious Fasting Increases Support for Islamist Parties: Evidence from Ramadan.” SocArXiv. March 10. doi:10.31235/osf.io/n3s8a

Abstract: Much scientific research shows that the sacrifices imposed by religious practices are positively associated with the success of religious organizations. We present the first evidence that this association is causal. We employ a natural experiment that rests on a peculiar time-shifting feature of Ramadan that makes the length of fasting time vary from year to year and by latitude. We find that an hour increase in fasting during the median Ramadan day increases the vote shares of Islamist political parties by about 6.5 percentage points in Turkey’s parliamentary elections between 1973 and 2018. This effect is weaker in provinces where the proportion of non-orthodox Muslims is higher, but stronger in provinces where the per capita number of mosques and religious personnel is higher. Further analyses suggest that the main mechanism underlying our findings is an increased commitment to religion induced by costlier practice. By showing that the success of religious organizations is causally related to the sacrifice demanded by religious practices, these results strengthen a key finding of the science of religion.

Check also Witnessing fewer credible cultural cues of religious commitment is the most potent predictor of religious disbelief, β=0.28, followed distantly by reflective cognitive style:
Gervais, Will M., Maxine B. Najle, Sarah R. Schiavone, and Nava Caluori. 2019. “The Origins of Religious Disbelief: A Dual Inheritance Approach.” PsyArXiv. December 8. https://www.bipartisanalliance.com/2019/12/witnessing-fewer-credible-cultural-cues.html

Bullshitting frequency was negatively associated with sincerity, honesty, cognitive ability, open-minded cognition, and self-regard and positively related to overclaiming

Littrell, Shane, Evan F. Risko, and Jonathan A. Fugelsang. 2019. “The Bullshitting Frequency Scale: Development and Psychometric Properties.” PsyArXiv. September 27. doi:10.31234/osf.io/dxzqh

Abstract: Recent psychological research has identified important individual differences associated with receptivity to bullshit, which has greatly enhanced our understanding of the processes behind susceptibility to pseudo-profound or otherwise misleading information. However, the bulk of this research attention has focused on cognitive and dispositional factors related to bullshit (the product), while largely overlooking the influences behind bullshitting (the act). Here, we present results from four studies focusing on the construction and validation of a new, reliable scale measuring the frequency with which individuals engage in two types of bullshitting (persuasive and evasive) in everyday situations. Overall, bullshitting frequency was negatively associated with sincerity, honesty, cognitive ability, open-minded cognition, and self-regard and positively related to overclaiming. Additionally, the Bullshitting Frequency Scale was found to reliably measure constructs distinct from lying. These results represent an important step forward by demonstrating the utility of the Bullshitting Frequency Scale as well as highlighting certain individual differences that may play important roles in the extent to which individuals engage in everyday bullshitting



Does Honesty Require Time? Shalvi, Eldar, and Bereby-Meyer (2012) may have overestimated the true effect of time pressure on cheating and the generality of the effect beyond the original context

Does Honesty Require Time? Two Preregistered Direct Replications of Experiment 2 of Shalvi, Eldar, and Bereby-Meyer (2012). Ine Van der Cruyssen et al. Psychological Science, March 10, 2020. https://doi.org/10.1177/0956797620903716

Abstract: Shalvi, Eldar, and Bereby-Meyer (2012) found across two studies (N = 72 for each) that time pressure increased cheating. These findings suggest that dishonesty comes naturally, whereas honesty requires overcoming the initial tendency to cheat. Although the study’s results were statistically significant, a Bayesian reanalysis indicates that they had low evidential strength. In a direct replication attempt of Shalvi et al.’s Experiment 2, we found that time pressure did not increase cheating, N = 428, point-biserial correlation (rpb) = .05, Bayes factor (BF01) = 16.06. One important deviation from the original procedure, however, was the use of mass testing. In a second direct replication with small groups of participants, we found that time pressure also did not increase cheating, N = 297, rpb = .03, BF01 = 9.59. These findings indicate that the original study may have overestimated the true effect of time pressure on cheating and the generality of the effect beyond the original context.

Keywords: intuition, cheating, lying, honesty, replication, moral decision making, time pressure, open data, open materials, preregistered
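
A quick orientation for readers less used to Bayes factors (my gloss, not part of the paper): BF01 is the ratio of how well the data are predicted by the null hypothesis (no time-pressure effect) relative to the alternative. Assuming equal prior odds, BF01 = 16.06 corresponds to a posterior probability of the null of 16.06 / (1 + 16.06) ≈ 0.94, and BF01 = 9.59 to about 0.91, i.e., moderate to strong evidence that time pressure did not increase cheating.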

What is people’s automatic tendency in a tempting situation? Shalvi et al. (2012) found that time pressure, a straightforward manipulation to spark “thinking fast” over “thinking slow,” provoked more cheating, and they concluded that people’s initial response is to serve their self-interest and cheat. We found no evidence that time pressure increased cheating in the die-roll paradigm. There are three possible reasons why replication studies do not produce the same results as the original study: (a) methodological problems in the replication study, (b) overestimation of the true effect size in the original study, or (c) differences between the studies that moderate the effect (Wicherts, 2018).
The first possibility is that methodological limitations in the replication study produced different results. In our first replication study, participants may not have fully appreciated the financial benefits of cheating. In our second replication study, relying on two test sites and offering the task in two languages may have increased error variance. But even for participants who performed the task in their native language, there was anecdotal support for the absence of a time-pressure effect (BF01 = 2.90).
The second possible explanation is that the original study overestimated the true effect size. The use of between-session rather than within-session randomization in the original study makes the experimenter aware of condition assignment and raises the possibility that the experimenter influenced the results (Rosenthal et al., 1963). Also, a single observation (in this case, a single reported die-roll outcome) per participant is likely to provide for a noisy measure. With low reliability, the results are more likely to vary per sample.
The third possible explanation is that the time-pressure effect on cheating is influenced by the context and that differences between the studies explain the different results. Our replications differed in several ways from the original, the most prominent being the country where the study was run, namely Israel in the original versus The Netherlands in the replications. The difference in test site raises the possibility of cross-cultural differences in intuitive dishonesty. Perceived country corruption, for instance, is related to the amount of cheating in the die-under-the-cup game (Gächter & Schulz, 2016). Then again, the large meta-analysis by Abeler, Nosenzo, & Raymond (2019) found that cheating behavior varies little by country. Still, it seems worthwhile to explore whether the automatic tendency to cheat may vary with culture.
In both our PDRs, people were predominantly honest, and we in fact found no evidence of cheating.4 Whereas Shalvi et al. (2012) originally reasoned that “time pressure evokes lying even in settings in which people typically refrain from lying” (p. 1268), our findings point to the possibility that the time-pressure effect is bound to settings that produce more pronounced cheating (e.g., when providing justifications for cheating).
In sum, our findings indicate that the original study by Shalvi et al. (2012) may have overestimated the true effect of time pressure on cheating or the generality of the effect beyond the original context. The vast majority of our participants were honest—even under time pressure. This finding casts doubt on whether people’s intuitive tendency is to cheat and fits better with a preference for honest behavior.


In the context of romantic attraction, beautification can increase assertiveness in women

In the context of romantic attraction, beautification can increase assertiveness in women. Khandis R. Blake, Robert Brooks, Lindsie C. Arthur, Thomas F. Denson. PLOS ONE, March 10, 2020. https://doi.org/10.1371/journal.pone.0229162

Abstract: Can beautification empower women to act assertively? Some women report that beautification is an agentic and assertive act, whereas others find beautification to be oppressive and disempowering. To disentangle these effects, we conducted the first experimental tests of the effects of beautification on psychological and behavioral assertiveness in the context of romantic attraction. Experiment 1 (N = 145) utilized a between-subjects design in which women used their own clothing, make-up, and accessories to adjust their appearance as they normally would for a “hot date” (beautification condition) or a casual day at home with friends (control condition). We measured implicit, explicit, and behavioral assertiveness, as well as positive affect and sexual motivation. Experiment 2 (N = 40) sought to conceptually replicate Experiment 1 using a within-subject design and different measures of assertiveness. Women completed measures of explicit assertiveness and assertive behavioral intentions in three domains, in whatever clothing they were wearing that day and then again after extensively beautifying their appearance. In Experiment 1, we found that women demonstrated higher psychological assertiveness after beautifying their appearance, and that high sexual motivation mediated the effect of beautification on assertive behavior. All effects were independent of positive affect. Experiment 2 partially replicated Experiment 1. These experiments provide novel insight into the effects of women’s appearance-enhancing behaviors on assertiveness by providing evidence that beautification may positively affect assertiveness in women under some circumstances.


Discussion

Using a within-subjects design, we found that beautification increased explicit assertiveness to the extent that beautification increased women’s sexual motivation. For assertive consumer behavioral intentions, findings were mixed. Beautification had a direct effect on increasing willingness to endorse public consumer assertiveness, but the effect did not reach conventional levels of statistical significance. Beautification also elevated endorsement of private consumer assertiveness, but the effect was moderated by trait self-objectification. The more women tended to self-objectify, the more they reported willingness to engage in private consumer assertiveness after beautification. This effect was also moderated by sexual motivation, showing the same pattern. We found no effect of beautification, self-objectification, or sexual motivation on consumer assertiveness unrelated to appearance.

General discussion

Research derived from objectification theory has emphasized the negative consequences of beautification and related practices, highlighting that they harm women and are derived from a cultural context that disempowers them [1,57,24]. An alternative perspective, derived from sociometer theory, holds that beautification can benefit women by raising their self-esteem in important domains [8,11,12]. We added clarity to this research area by experimentally manipulating beautification through within- and between-subject designs, and subsequently measuring multiple indicators of assertiveness, as well as positive mood, sexual motivation, and self-objectification. Our results suggest that beautification can increase assertiveness in women, but that the effect may be domain-specific. These findings shed light on a key tension in female psychology by challenging the notion that beautification and related appearance-enhancing phenomena are necessarily disempowering.
Many of our effects were dependent on beautification increasing sexual motivation, with beautification elevating assertiveness only when it also elevated sexual motivation. This finding is consistent with previous research [30], and suggests that the effect of beautification on assertiveness depends upon the degree to which beautification increases the subjective feeling of sexual attractiveness. By including measures of behavioral assertiveness (Experiment 1) and assertive behavioral intentions in three domains (Experiment 2), we intended to distinguish whether beautification-induced assertiveness was domain-specific or domain-general. Unfortunately, results were inconclusive: we did not find a significant effect of beautification in our appearance-unrelated consumer assertiveness vignette; however, we did find that beautification increased assertive behavior in the mock job interview in Experiment 1 (to the extent that it also increased sexual motivation). Future work teasing out these effects would help to clarify the conditions under which beautification can increase assertiveness, and whether that increase is specific to the appearance domain, or whether effects might transfer to unrelated domains.
Beautification interacted with sexual motivation to increase explicit assertiveness in women, regardless of trait self-objectification. Surprisingly, trait self-objectification was positively associated with a beautification-induced willingness to act assertively in one of our vignettes. This finding is supportive of parallel work showing that self-objectification and its antecedents can raise women’s self-esteem in particular contexts [11,12]. Though these effects warrant replication, they suggest that conceiving of self-objectification as an entirely deleterious phenomenon may mischaracterize its psychological effects. The degree to which self-objectification may translate into enhanced female empowerment in some conditions is perplexing, yet it is also a worthwhile topic for future research.

Implications for understanding self-objectification

These results provide further insight into understanding women’s motivation for appearance-modifying behaviors, including self-objectification and self-sexualization. Many of these phenomena are motivated by desires to elevate attractiveness to new or existing romantic partners [38,39]. However, our findings suggest that women may also engage in these behaviors to increase assertiveness as well as mood. Thus, a desire to feel empowered may partially account for women’s beautification practices and consumption of appearance-enhancing products. This conception offers a unique perspective on why women are more beauty-focused when the economy declines (the lipstick effect; [40]). Beautification may provide an affordable way to elevate the subjective experience of empowerment in ecological conditions that often constrain agentic action [41].
The negative effects of self-objectification—including usurping women’s attentional and cognitive resources and increasing the likelihood of mental health problems—usually result from intermediary processes, such as elevated body shame and body surveillance [3]. Our findings raise the possibility that beautification may not always elicit these intermediary processes, and our work suggests that beautification can elicit sexual motivation as well. Whether beautification and related phenomena empower or disempower women, then, may depend upon which intermediary processes are elicited. For example, if beautification elicits appearance anxiety or body shame, it may reduce assertiveness; if beautification elicits sexual motivation or high self-esteem, it may heighten assertiveness. Future investigation into the intermediary processes induced by appearance-relevant behaviors on positive and negative psychological outcomes would be a welcome contribution.
Contextual effects—such as the person a beautified woman believes is judging her [42]—are also likely to be important. We focused on beautification in one situation only, and it is unclear whether mandatory beautification in other contexts (e.g., stipulated by an employer for an important meeting) would show similar effects. Women often become targets of backlash when they act assertively, especially in domains that are stereotype-inconsistent [43], and attractive women may be especially likely to be targeted. Women who engage in beautification and appearance-enhancing phenomena can also become targets of aggression by others, men and women alike [33,44–47]. Thus, although increases in beautification may engender benefits to women, in certain contexts it may also engender costs. The contexts under which women may express assertiveness and beautify without suffering backlash effects, or the contexts under which women experience beautification as especially disempowering, are important future research topics.

The paradox of sexualized beautification and female agency

Although the current work provides evidence for conditional effects of beautification on female assertiveness, our findings appear to be inconsistent with work showing that men and women perceive that women in attractive, revealing clothing lack agency [33,48,49]. Why is it that people perceive that women in such clothing lack agency, whereas the women themselves may potentially feel and behave in a more assertive manner? Compelling evidence demonstrates that people derogate those who act counter to the status quo [50]. Perceptions that women who engage in beautification lack agency may thus function to penalize women who threaten notions of demure and passive femininity through asserting sexual power [43,51]. Perceiving that these women lack agency may also support male dominance by discrediting the agency that some women demonstrate via beautification.
Equating beautification or self-sexualization with low agency may also reflect the cultural suppression of female sexuality, an ever-present albeit culturally variable phenomenon that sanctions women’s sexual self-expression more heavily than men’s. Although the drivers of the cultural suppression of female sexuality remain controversial [52–54], evidence supports the idea that competition between women can encourage them to suppress the sexuality and attractiveness-enhancing efforts of other women. Derogating such women as cultural dupes, who misunderstand female agency and how they are perceived by others, may thus function to reduce the occurrence of competition amongst women by elevating anxiety in potential competitors. Ultimately, such a process may function to diminish the threat of another woman’s physical and sexual attractiveness.
Perceptions that sexualized women lack agency may also function to motivate sexual approach in men. Evidence suggests that some men find cues of sexual vulnerability and low agency in women to be alluring [55]. From a functional perspective, perceiving low agency in such women may be attractive to men because it reduces the threat of rejection, female infidelity, and paternity uncertainty associated with female sexual agency. It is also plausible that low-agency women are perceived as less likely to rebuff sexual advances and easier to monopolize [30]. For these reasons, men’s perceptions of low agency in women may be a cognitive bias that engenders sexual approach, akin to the robust bias men show toward overestimating women’s sexual intent [56,57]. Future work investigating these notions would provide valuable insight into the constancy of patriarchal culture over time. Research could also clarify the paradoxical nature of men’s views of women’s agency, and women’s views of their own agency.

Limitations and future directions

We aimed to provide a rigorous test of the effects of beautification on assertiveness by employing explicit, implicit, and behavioral indicators of assertiveness, ecologically valid designs, and by testing the importance of theoretically relevant mechanisms and confounds (i.e., sexual motivation, positive affect). That being said, our findings are limited in several ways. Although patterns of variation in Experiment 2 were generally consistent with Experiment 1, two effects from Experiment 2 did not reach conventional levels of statistical significance. Likewise, in Experiment 2, we failed to replicate the direct effect of beautification on explicit assertiveness, finding instead that the effect was dependent on beautification eliciting sexual motivation. This latter finding highlights the importance of sexual motivation to the beautification–assertiveness link, but it weakens our ability to draw conclusions about the overall relationship between the two phenomena. Likewise, whether assertiveness effects are domain-general, or specific to appearance-relevant domains only, was unresolved by the current work. Based on sociometer theory, we speculate that beautification-induced assertiveness may be strongest in appearance-related domains, and weaker, albeit present in other domains.
A further limitation is that the instructions in the beautification condition were multi-faceted. The instructions asked women to dress for a night out where they might meet someone they were romantically interested in, a hot date, and a party. We emphasized “hot date” in the verbal instructions most frequently both before and during the experimental sessions, and parties are locations where young people commonly meet romantic partners. We did so because the aim of our study was to focus on beautification in the context of romantic relationships, and attractiveness in romantic relationships is a domain where women are especially likely to derive self-esteem [8,11]. The multi-faceted nature of these instructions, however, may have introduced unnecessary noise in our experimental manipulation, weakening our ability to detect effects.
Another limitation is that design differences between Experiments 1 and 2 may account for some variability in our findings. Experiment 1 occurred in the laboratory, meaning that participants were seen by the experimenter after they changed their clothing and makeup. In contrast, Experiment 2 occurred online, and participants completed the experimental session in their home. We utilized this design difference so that participants in Experiment 2 had their entire wardrobe and all of their own beauty products at their disposal. Unfortunately, this distinction between public and private may have weakened findings in Experiment 2. It is possible that being seen in public after enhancing one’s sexual appearance strengthens the effect of beautification on female assertiveness, resulting in stronger effects in public versus private settings. Such an interpretation would account for weaker effects in Experiment 2 compared to Experiment 1.
A final limitation is that we only controlled for one individual difference in our analyses. Although trait self-objectification was highly relevant, many other individual differences affect women’s willingness to beautify, self-objectify, and self-sexualize. For example, recent work indicates that ideological components related to higher order personal values are especially relevant [58]. Testing whether findings reported here are sensitive to these differences, and the individual differences predictive of beautification, would strengthen our conclusions.

Weather and suicide: Association between meteorological variables and suicidal behavior—a systematic qualitative review article

Weather and suicide: Association between meteorological variables and suicidal behavior—a systematic qualitative review article. Charlotte Pervilhac M.Sc.-Psych., Kyrill Schoilew, Hansjörg Znoj & Thomas J. Müller. Der Nervenarzt, volume 91, pages 227–232 (2020). https://link.springer.com/article/10.1007/s00115-019-00795-x

Abstract
Background: The effects of current and expected future climate change on mental health outcomes are of increasing concern. In this context, the importance of meteorological factors on suicidal behavior is receiving growing attention in research.

Objective: Systematic review article with qualitative synthesis of the currently available literature, looking at the association between meteorological variables and attempted and completed suicide.

Material and methods: Criteria-based, systematic literature search according to the PRISMA criteria. Peer-reviewed original research studies were included without time limits.

Results and conclusion: A total of 99 studies were included and grouped according to whether their analyses were based on daily, weekly, monthly, or annual data. The majority of the studies reported a statistical association with at least one meteorological variable. The most consistent positive correlation was between temperature and suicidal behavior. However, the results are not conclusive and are in part contradictory. The included studies differed markedly in study design. Meteorological parameters may be associated with suicidal behavior, and future research in this area is needed to provide further clarity. Despite existing knowledge gaps, the current findings may have implications for suicide prevention plans.

Tuesday, March 10, 2020

Among high-ability female students, being assigned a female professor leads to substantial increases in the probability of working in a STEM occupation & the probability of receiving a STEM master’s degree

The Effects of Professor Gender on the Post-Graduation Outcomes of Female Students. Hani Mansour, Daniel I. Rees, Bryson M. Rintala, Nathan N. Wozny. NBER Working Paper No. 26822, March 2020. https://www.nber.org/papers/w26822

Abstract: Although women earn approximately 50 percent of science, technology, engineering and math (STEM) bachelor’s degrees, more than 70 percent of scientists and engineers are men. We explore a potential determinant of this STEM gender gap using newly collected data on the career trajectories of United States Air Force Academy students. Specifically, we examine the effects of being assigned female math and science professors on occupation choice and postgraduate education. We find that, among high-ability female students, being assigned a female professor leads to substantial increases in the probability of working in a STEM occupation and the probability of receiving a STEM master’s degree.


Males generally have a higher level of body appreciation than females

Meta-analysis of gender differences in body appreciation. Jinbo He et al. Body Image, Volume 33, June 2020, Pages 90-100. https://doi.org/10.1016/j.bodyim.2020.02.011

Highlights
•    Males had higher levels of body appreciation than females.
•    Gender differences in body appreciation could be moderated by survey method.
•    Gender differences in body appreciation could be moderated by type of samples.
•    Gender differences in body appreciation could be moderated by age.

Abstract: There are a number of studies that have conducted comparisons of body appreciation between males and females. However, findings are largely inconsistent, making it unclear whether there are actual gender differences in body appreciation. With a meta-analytic approach, the current study quantitatively reviewed and synthesized previous findings, published up to May 2019, on gender differences in body appreciation. After searching and screening potential studies in four databases (i.e., PubMed, PsycINFO, Web of Science, and ProQuest Dissertations & Theses Global), we identified 40 relevant articles published from 2008 to 2019. A random-effects model reveals an overall estimate of gender difference in body appreciation of d = 0.27 (95 % CI: 0.21, 0.33; p <  .001); that is, males generally have a higher level of body appreciation than females, with a small effect size. Survey method, type of sample (cohorts), and age were identified as significant moderators that have contributed to the variability in previous findings. Future research and interventions in body appreciation may consider gender differences in their designs.
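For readers unfamiliar with how a pooled effect of this kind is obtained, below is a minimal Python sketch of DerSimonian–Laird random-effects pooling of standardized mean differences (Cohen's d), the general approach named in the abstract. The three example studies and all numbers are invented for illustration; this is not the authors' code or data.

# Hypothetical random-effects meta-analysis of a gender difference in a scale score.
import math

# (male_mean, male_sd, male_n, female_mean, female_sd, female_n) per study -- made up
studies = [
    (3.9, 0.6, 120, 3.7, 0.7, 150),
    (4.1, 0.5,  80, 3.8, 0.6,  95),
    (3.8, 0.8, 200, 3.7, 0.8, 210),
]

d_vals, variances = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # approximate sampling variance of d
    d_vals.append(d)
    variances.append(v)

# Fixed-effect weights, Q statistic, and between-study variance tau^2 (DerSimonian-Laird)
w = [1 / v for v in variances]
d_fixed = sum(wi * di for wi, di in zip(w, d_vals)) / sum(w)
q = sum(wi * (di - d_fixed)**2 for wi, di in zip(w, d_vals))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval
w_star = [1 / (v + tau2) for v in variances]
d_pooled = sum(wi * di for wi, di in zip(w_star, d_vals)) / sum(w_star)
se = math.sqrt(1 / sum(w_star))
print(f"pooled d = {d_pooled:.2f}, 95% CI [{d_pooled - 1.96*se:.2f}, {d_pooled + 1.96*se:.2f}]")

A real meta-analysis would additionally report heterogeneity (Q, I²) and the moderator analyses described in the abstract.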


Blind at First Sight: On a first date, distinctive accuracy tends to be paired with lower romantic interest

Blind at First Sight: The Role of Distinctively Accurate and Positive First Impressions in Romantic Interest. Lauren Gazzard Kerr, Hasagani Tissera, M. Joy McClure, John E. Lydon, Mitja D. Back, & Lauren J. Human. https://osf.io/zyvj9/

Abstract: Viewing others with distinctive accuracy – the degree to which personality impressions correspond with targets’ unique characteristics – often predicts positive interpersonal experiences, including liking and relationship satisfaction. Does this hold in the context of first dates, or might distinctive accuracy have negative links with romantic interest in such evaluative settings? We examined this with two speed-dating samples (N1 = 172, Ndyad = 2407; N2 = 397, Ndyad = 1849). Not surprisingly, positive impressions of potential dating partners were strongly associated with greater romantic interest. In contrast, distinctively accurate impressions were associated with significantly less romantic interest. This association was even stronger for potential partners whose personalities were less romantically appealing, specifically those lower in extraversion. In sum, on a first date, distinctive accuracy tends to be paired with lower romantic interest. The potential implications of distinctive accuracy for romantic interest and of interest for distinctive accuracy are discussed.

Keywords: Distinctive accuracy, positivity, first impressions, speed dating, attraction

Contrary to widespread worries, health misinformation gains little traction on Facebook, and sex does not sell

Berriche, Manon, and Sacha Altay. 2019. “Internet Users Engage More with Phatic Posts Than with Health Misinformation on Facebook.” PsyArXiv. December 5. doi:10.31234/osf.io/nj2sr

Abstract: Social media like Facebook are harshly criticized for the propagation of health misinformation. Yet, little research has provided in-depth analysis of real-world data to measure the extent to which Internet users engage with it. This article examines 6.5 million interactions generated by 500 posts on an emblematic case of online health misinformation: the Facebook page Santé + Mag, which generates five times more interactions than the combination of the five best-established French media outlets.
Based on the literature on cultural evolution, we tested whether the presence of cognitive factors of attraction that tap into evolved cognitive preferences, such as information related to sexuality, social relations, threat, disgust or negative emotions, could explain the success of Santé + Mag’s posts. Drawing on media studies findings, we hypothesized that their popularity could be driven by Internet users’ desire to interact with their friends and family by sharing phatic posts (i.e., statements with no practical information that fulfill a social function, such as “hello” or “sister, I love you”).
We found that phatic posts were the strongest predictor of interactions, followed by posts with a positive emotional valence. While 50% of the posts were related to social relations, only 28% consisted of health misinformation. Despite its cognitive appeal, health misinformation was a negative predictor of interactions. Sexual content negatively predicted interactions, and other factors of attraction such as disgust, threat or negative emotions did not predict interactions.
These results strengthen the idea that Facebook is first and foremost a social network that people use to foster their social relations, not to spread online misinformation. We encourage researchers working on misinformation to conduct finer-grained analyses of online content and to adopt an interdisciplinary approach to studying the phatic dimension of communication, together with positive content, to better understand the cultural evolution dynamics of social media.

Monday, March 9, 2020

Aphantasia is associated with scientific & mathematical occupations; hyperphantasia with ‘creative’ professions; those with aphantasia report an elevated rate of difficulty with face recognition & autobiographical memory

Zeman, Adam, Fraser Milton, Sergio Della Sala, Michaela Dewar, Timothy Frayling, James Gaddum, Andrew Hattersley, et al. 2020. “Phantasia - the Psychological Significance of Lifelong Visual Imagery Vividness Extremes.” PsyArXiv. March 9. doi:10.31234/osf.io/sfn9w

Abstract: Visual imagery typically enables us to see absent items in the mind’s eye. It plays a role in memory, day-dreaming and creativity. Since coining the terms aphantasia and hyperphantasia to describe the absence and abundance of visual imagery, we have been contacted by many thousands of people with extreme imagery abilities. Questionnaire data from 2000 participants with aphantasia and 200 with hyperphantasia indicate that aphantasia is associated with scientific and mathematical occupations, whereas hyperphantasia is associated with ‘creative’ professions. Participants with aphantasia report an elevated rate of difficulty with face recognition and autobiographical memory, whereas participants with hyperphantasia report an elevated rate of synaesthesia. Around half those with aphantasia describe an absence of wakeful imagery in all sense modalities, while a majority dream visually. Aphantasia appears to run within families more often than would be expected by chance. Aphantasia and hyperphantasia appear to be widespread but neglected features of human experience with informative psychological associations.



Abortion attitudes have become less complex and more polarized over time, a trend largely driven by the pro-abortion camp

Abortion Complexity Scores from 1972 to 2018: A Cross-Sectional Time-Series Analysis Using Data from the General Social Survey. Kristen N. Jozkowski, Brandon L. Crawford & Malachi Willis. Sexuality Research and Social Policy, March 9 2020. https://rd.springer.com/article/10.1007/s13178-020-00439-9

Abstract
Introduction: According to data from the General Social Survey (GSS), abortion attitudes have remained relatively stable since 1972. Despite this apparent stability, some researchers argue abortion opinions have become increasingly polarized, particularly among certain subgroups. Others argue people’s attitudes toward abortion are complex and nuanced; that is, people may feel conflicted or ambivalent about abortion in certain contexts. To better understand this issue, we examined complexity and polarization in people’s attitudes toward abortion using GSS data from 1972 until 2018 (n = 44,302).

Methods: The GSS includes six items assessing whether it should be possible for “a pregnant woman to obtain a legal abortion” under specific circumstances. Using these items, we created an aggregate complexity measure. Negative binomial, Poisson, and logistic regression models were tested to assess potential changes in complexity and polarization over time among demographic subgroups.
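As a rough illustration of what an aggregate complexity measure built from six binary circumstance items could look like, here is a small hypothetical Python coding. The min(yes, no) operationalization is an assumption made for illustration and is not necessarily the authors' exact measure.

# Hypothetical "complexity" score from the six GSS abortion-circumstance items.
def complexity_score(responses):
    """responses: list of six booleans (supports legal abortion in that circumstance)."""
    yes = sum(responses)
    no = len(responses) - yes
    return min(yes, no)  # 0 = fully polarized (all yes or all no), 3 = maximally mixed

def is_polarized(responses):
    return complexity_score(responses) == 0

print(complexity_score([True] * 6), is_polarized([True] * 6))                  # 0 True
print(complexity_score([True, True, True, False, False, False]))              # 3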

Results: Findings indicate changes in complexity across political party affiliations, religious identity, and age groups. However, any significant differences among these demographic subgroups are lost once polarized scores are removed. That is, changes in complexity are driven largely by more people supporting access to abortion in all or no situations; among those who remain conflicted, there has been little change in complexity.

Discussion: These findings provide a more nuanced assessment of trends in abortion attitudes. Given the saliency of this issue, we recommend researchers consider alternative mechanisms to assess abortion attitudes.

Policy Implication: These nuanced assessments of abortion attitudes should be considered when determining the congruence between abortion legislation and public opinion.

Cognitive ability was related to more affective prejudice towards relatively conservative groups; people with higher levels of cognitive ability were more in favor of freedom of speech for all groups

De keersmaecker, Jonas, Dries H. Bostyn, Alain Van Hiel, and Arne Roets. 2020. “Disliked but Free to Speak: Cognitive Ability Is Related to Supporting Freedom of Speech for Groups Across the Ideological Spectrum.” PsyArXiv. March 9. doi:10.31234/osf.io/b7kty

Abstract: Freedom of speech for all citizens is often considered a cornerstone of democratic societies. In three studies, we examined the relationship between cognitive ability and support for freedom of speech for a variety of social groups across the ideological spectrum (N1 varies between 1373 and 18719, N2 = 298, N3 = 395). Corroborating our theoretical expectations, although cognitive ability was related to more affective prejudice towards relatively conservative groups and less affective prejudice towards relatively liberal groups (Study 2), people with higher levels of cognitive ability were more in favor of freedom of speech for all target groups (Studies 1–3). The relationship between cognitive ability and freedom of speech support was mediated by intellectual humility (pre-registered Study 3). These results indicate that cognitive ability contributes to support for the democratic right of freedom of speech for all social-ideological groups.

Consistent with Petry & others' work, nonmonetary outcomes are discounted more steeply than monetary outcomes; people who steeply discount monetary outcomes steeply discount nonmonetary outcomes as well

Delay discounting of different outcomes: Review and theory. Amy L. Odum et al. Journal of the Experimental Analysis of Behavior, March 8 2020. https://doi.org/10.1002/jeab.589

Abstract: Steep delay discounting is characterized by a preference for small immediate outcomes relative to larger delayed outcomes and is predictive of drug abuse, risky sexual behaviors, and other maladaptive behaviors. Nancy M. Petry was a pioneer in delay discounting research who demonstrated that people discount delayed monetary gains less steeply than they discount substances with abuse liability. Subsequent research found steep discounting for not only drugs, but other nonmonetary outcomes such as food, sex, and health. In this systematic review, we evaluate the hypotheses proposed to explain differences in discounting as a function of the type of outcome and explore the trait‐ and state‐like nature of delay discounting. We found overwhelming evidence for the state‐like quality of delay discounting: Consistent with Petry and others' work, nonmonetary outcomes are discounted more steeply than monetary outcomes. We propose two hypotheses that together may account for this effect: Decreasing Future Preference and Decreasing Future Worth. We also found clear evidence that delay discounting has trait‐like qualities: People who steeply discount monetary outcomes steeply discount nonmonetary outcomes as well. The implication is that changing delay discounting for one outcome could change discounting for other outcomes.
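The "steepness" at issue is conventionally summarized with Mazur's hyperbolic model, V = A / (1 + kD), where a larger k means steeper discounting of the delayed amount A at delay D. The Python sketch below fits k to invented indifference points with a crude grid search; it illustrates the construct, not the review's analytic methods.

# Illustrative fit of the hyperbolic discounting parameter k; larger k = steeper discounting.
def hyperbolic_value(amount, delay, k):
    return amount / (1.0 + k * delay)

# (delay in days, indifference point for a delayed reward of 100) -- invented values
indifference_points = [(7, 90.0), (30, 70.0), (90, 45.0), (180, 30.0)]

def sse(k):
    return sum((hyperbolic_value(100.0, d, k) - v) ** 2 for d, v in indifference_points)

# Crude grid search over k (a real analysis would use nonlinear least squares)
best_k = min((k / 10000.0 for k in range(1, 10001)), key=sse)
print(f"estimated k = {best_k:.4f}")

Comparing estimated k values for, say, money versus food in the same participants is one way the trait- and state-like patterns described above are quantified.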


Canada: Rank mobility increases as the percentage of mothers with a high school diploma increases; weaker evidence that mobility increases with the percentage of mothers with a university degree

Parental Education Mitigates the Rising Transmission of Income between Generations. Marie Connolly, Catherine Haeck, and Jean-William P. Laliberte. NBER, February 19, 2020. http://conference.nber.org/conf_papers/f129700.pdf

Abstract: This article provides evidence on the causal relationship between maternal education and the intergenerational transmission of income. Using a novel linkage between intergenerational income tax data and Census data for individuals born between 1963 and 1985 and their parents, we show that rank mobility has decreased over time, and that this decline was sharpest for children of mothers without a high school diploma. Using variation in compulsory schooling laws, we show that rank mobility increases as the percentage of mothers with a high school diploma increases. We find weaker evidence that mobility increases with the percentage of mothers with a university degree.

JEL codes: J62, D63
Keywords: social mobility, intergenerational income transmission, income inequality, education, Canada


Sunday, March 8, 2020

People have been accused of being excessively pessimistic about SARS-CoV-2's future consequences; but a large survey shows that the majority of respondents was actually overly optimistic

Raude, Jocelyn, Marion Debin, Cécile Souty, Caroline Guerrisi, Clement Turbelin, Alessandra Falchi, Isabelle Bonmarin, et al. 2020. “Are People Excessively Pessimistic About the Risk of Coronavirus Infection?.” PsyArXiv. March 8. doi:10.31234/osf.io/364qj

Abstract: The recent emergence of the SARS-CoV-2 in China has raised the spectre of a novel, potentially catastrophic pandemic in both scientific and lay communities throughout the world. In this particular context, people have been accused of being excessively pessimistic regarding the future consequences of this emerging health threat. However, consistent with previous research in social psychology, a large survey conducted in Europe in the early stage of the COVID-19 epidemic shows that the majority of respondents was actually overly optimistic about the risk of infection.



Links between spanking & delinquency, depression, & alcohol use are explained by moderate-to-large degrees of genetic covariation, & small-to-moderate degrees of nonshared environmental covariation

Barbaro, Nicole. 2020. “The Effects of Spanking on Psychosocial Outcomes: Revisiting Genetic and Environmental Covariation.” PsyArXiv. March 8. doi:10.31234/osf.io/zhgme

Abstract: A large body of work has investigated the associations between spanking and a wide range of psychosocial outcomes across development. A comparatively smaller subset of this literature, on a narrower range of psychosocial outcomes, has employed genetically-informative research designs capable of estimating the degree to which observed phenotypic effects are explained by genetic and environmental covariation. The current research analyzed data from the Children of the National Longitudinal Survey of Youth (CNLSY; Study 1) and conducted simulation models using input parameters from the existing literature (Study 2) to provide a summative evaluation of the psychosocial effects of spanking with regard to genetic and nonshared environmental covariation. Results of Study 1 replicated previous work showing that associations between spanking and outcomes such as delinquency, depression, and alcohol use were explained by moderate-to-large degrees of genetic covariation, and small-to-moderate degrees of nonshared environmental covariation. Estimates from the simulations of Study 2 suggest that, generally, genetic covariation could account for a substantial amount of the observed phenotypic effect between spanking and the psychosocial outcome of interest (≈ 60%-80%), with the remainder likely attributable to nonshared environmental covariation (≈ 0%-40%). Collectively the results of the current research indicate that continued work on the developmental effects of spanking is best served by genetically-informative research designs on a broader range of outcomes than what is currently available.

Are Humans Constantly but Subconsciously Smelling Themselves?

Perl, Ofer, Eva Mishor, Aharon Ravia, Inbal Ravreby, and Noam Sobel. 2020. “Are Humans Constantly but Subconsciously Smelling Themselves?” PsyArXiv. March 8. doi:10.1098/rstb.2019.0372

Abstract: All primates, including humans, engage in self-face-touching at very high frequency. The functional purpose or antecedents of this behaviour remain unclear. In this hybrid review we put forth the hypothesis that self-face-touching subserves self-smelling. We first review data implying that humans touch their own face at very high frequency. We then detail evidence from the one study that implicated an olfactory origin for this behaviour: This evidence consists of significantly increased nasal inhalation concurrent with self-face-touching, and predictable increases or decreases in self-face-touching as a function of subliminal odourant tainting. Although we speculate that self-smelling through self-face-touching is largely an unconscious act, we note that in addition, humans also consciously smell themselves at high frequency. To verify this added statement, we administered an online self-report questionnaire. Upon being asked, ~94% of ~400 respondents acknowledged engaging in smelling themselves. Paradoxically, we observe that although this very prevalent behaviour of self-smelling is of concern to individuals, especially to parents of children overtly exhibiting self-smelling, the behaviour has nearly no traction in the medical or psychological literature. We suggest psychological and cultural explanations for this paradox, and end in suggesting that human self-smelling become a formal topic of investigation in the study of human social olfaction.

Swedish data: Those gamers who spend more time engaging in their favorite pastime become less interested in sociopolitical issues and less prosocial than non-gamers from year to year

Gaming alone: Videogaming and sociopolitical attitudes. Pavel Bacovsky.  New Media & Society, March 7, 2020. https://doi.org/10.1177/1461444820910418

Abstract: What sustains prosocial attitudes and political engagement in the era of online connectivity? Scholars disagree on whether frequent consumers of virtual entertainment disconnect from sociopolitical life. Using the Swedish Political Socialization Panel dataset and partial-pool time series methodology, I investigate the relationship between playing videogames and adolescents’ political and social attitudes over time. I find that those gamers who spend more time engaging in their favorite pastime become less interested in sociopolitical issues and less prosocial than non-gamers from year to year. My findings tell a cautionary tale about the adverse effects of extensive gaming on the development of democratic attitudes among adolescents.

Keywords: Adolescents, political interest, prosocial attitudes, videogaming


Saturday, March 7, 2020

Free bathrooms in Starbucks: Cellphone location data allows to know of a 7.3% decline in store attendance; remaining customers spent 4.1% less time in Starbucks relative to nearby coffee shops

Gurun, Umit and Nickerson, Jordan and Solomon, David H., The Perils of Private Provision of Public Goods (January 31, 2020). SSRN: http://dx.doi.org/10.2139/ssrn.3531171

Abstract: In May 2018, in response to protests, Starbucks changed its policies nationwide to allow anybody to sit in their stores and use the bathroom without making a purchase. Using a large panel of anonymized cellphone location data, we estimate that the policy led to a 7.3% decline in store attendance at Starbucks locations relative to other nearby coffee shops and restaurants. This decline cannot be calculated from Starbucks’ public disclosures, which lack the comparison group of other coffee shops. The decline in visits is around 84% larger for stores located near homeless shelters. The policy also affected the intensive margin of demand: remaining customers spent 4.1% less time in Starbucks relative to nearby coffee shops after the policy enactment. Wealthier customers reduced their visits more, but black and white customers were equally deterred. The policy led to fewer citations for public urination near Starbucks locations, but had no effect on other similar public order crimes. These results show the difficulties of companies attempting to provide public goods, as potential customers are crowded out by non-paying members of the public.

Keywords: Public Good, Socially Responsible Investment, ESG investment, Homeless, Starbucks, Location data
JEL Classification: A11, A13, C55, D02, D22, D61, D62, D63, D64, H23, G30, L21, I15, G34
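The 7.3% figure reflects comparing the change in Starbucks visits before and after the policy with the change at nearby coffee shops over the same period, a difference-in-differences-style contrast. A minimal two-period Python sketch on log visit counts follows; the visit numbers are invented and this is not the authors' actual specification.

# Toy difference-in-differences on log visits: the estimate is the change at Starbucks
# minus the change at comparison coffee shops. Numbers are invented.
import math

starbucks_pre, starbucks_post = 1000.0, 910.0   # average visits per store
control_pre, control_post = 800.0, 785.0

did = (math.log(starbucks_post) - math.log(starbucks_pre)) - \
      (math.log(control_post) - math.log(control_pre))
print(f"approximate relative change: {100 * (math.exp(did) - 1):.1f}%")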


Aversion towards simple broken patterns predicts moral judgment

Aversion towards simple broken patterns predicts moral judgment. Anton Gollwitzer et al. Personality and Individual Differences, Volume 160, 1 July 2020, 109810. https://doi.org/10.1016/j.paid.2019.109810

Abstract: To what extent can simple, domain-general factors inform moral judgment? Here we examine whether a basic cognitive-affective factor predicts moral judgment. Given that most moral transgressions break the assumed pattern of behavior in society, we propose that people's domain-general aversion towards broken patterns – their negative affect in response to the distortion of repeated forms or models – may predict heightened moral sensitivity. In Study 1, participants’ nonsocial pattern deviancy aversion (e.g., aversion towards broken patterns of geometric shapes) predicted greater moral condemnation of harm and purity violations. This link was stronger for intuitive thinkers, suggesting that this link occurs via an intuitive rather than analytical pathway. Extending these results, in Study 2, pattern deviancy aversion predicted greater punishment of harm and purity violations. Finally, in Study 3, in line with pattern deviancy aversion predicting moral condemnation because moral violations break the pattern of behavior in society, pattern deviancy aversion predicted context-dependent morality. Participants higher in pattern deviancy aversion exhibited a greater shift towards tolerating moral violations when these violations were described as the pattern of behavior in an alternate society. Collectively, these results suggest that something as basic as people's aversion towards broken patterns is linked to moral judgment.

Keywords: Pattern deviancy aversion; Morality; Punishment; Moral judgment; Broken patterns

Polarization is increasing not only among political parties adherents, also intraparty polarization between ideologically extreme and ideologically moderate partisans is on the rise

Intraparty Polarization in American Politics. Eric Groenendyk, Michael W. Sances, and Kirill Zhirkov. The Journal of Politics, Aug 27 2019. https://www.journals.uchicago.edu/doi/abs/10.1086/708780. Free https://kirillzhirkovme.files.wordpress.com/2019/09/groenendyk_intraparty_polarization.pdf

Abstract: We know that elite polarization and mass sorting have led to an explosion of hostility between parties, but how do Republicans and Democrats feel toward their own respective parties? Have these trends led to more cohesion or more division within parties? Using the American National Election Studies (ANES) time series, we first show that intraparty polarization between ideologically extreme and ideologically moderate partisans is on the rise. Second, we demonstrate that this division within parties has important implications for how we think about affective polarization between parties. Specifically, the distribution of relative affect between parties has not become bimodal, but merely dispersed. Thus, while the mean partisan has become affectively polarized, the modal partisan has not. These results suggest polarization and sorting may be increasing the viability of third party candidates and making realignment more likely.

Keywords: Polarization, Party Coalitions, Realignment, Ideology


Friday, March 6, 2020

After Aesop's fable, the “sour-grape effect”: A systematic tendency to downplay the value of unattainable goals and rewards

Greener grass or sour grapes? How people value future goals after initial failure. Hallgeir Sjåstad, Roy F. Baumeister, Michael Ent. Journal of Experimental Social Psychology, Volume 88, May 2020, 103965. https://doi.org/10.1016/j.jesp.2020.103965

Abstract: Across six experiments (N = 1304), people dealt with failure by dismissing the value of future goals. Participants were randomly assigned to receive good or poor feedback on a practice trial of a cognitive test (Studies 1–3, 5–6) or their academic performance (Study 4). Those who received poor (vs. good) feedback predicted that they would feel less happy about a future top performance. However, when all participants received a top score on the actual test they became equally happy, regardless of initial feedback. That is, initial failure made people underestimate how good it would feel to succeed in the future. Inspired by Aesop's fable of the fox and the grapes, we term this phenomenon the “sour-grape effect”: A systematic tendency to downplay the value of unattainable goals and rewards. Mediation analyses suggest that the low happiness predictions were a self-protective maneuver, indicated by apparent denial of the personal and future relevance of their performance. Moderation analysis showed that people high in achievement motivation constituted the main exception, as they predicted (correctly) that a big improvement would bring them joy. In a final and high-powered experiment, the effect generalized from predicted happiness to predicted pride and gratitude. Crucially, the sour-grape effect was found repeatedly across two different countries (USA and Norway) and multiple settings (lab, field, online), including two pre-registered replications. In line with the principle of “adaptive preferences” from philosophy and cognitive dissonance theory from psychology, the results suggest that what people want is restricted by what they can get.

Keywords: Goals; Happiness; Cognitive dissonance; Sour grapes; Affective forecasting

Local sleep and wakefulness & insomnia disorder: Wake-like activations (‘islands of wakefulness’) can occur during both major sleep stages (NREM & REM)

Local sleep and wakefulness—the concept and its potential for the understanding and treatment of insomnia disorder. Lina Stålesen Ramfjord, Elisabeth Hertenstein, Kristoffer Fehér, Christian Mikutta, Carlotta Louisa Schneider, Christoph Nissen & Jonathan Gabriel Maier. Somnologie, March 6 2020. https://rd.springer.com/article/10.1007/s11818-020-00245-w

Abstract: In ancient mythology, sleep was often regarded as an inactive state, close to death. Research in the past century has, however, demonstrated that the brain is highly active and oscillates through well-defined stages during sleep. Yet it is only over the past decade that accumulating evidence has shown that sleep and wake processes can occur simultaneously, localized in distinct areas of the brain. The aim of this article is to review relevant aspects of the shift from global to local concepts of sleep–wake regulation and to further translate this perspective to the clinical problem of insomnia. Animal and human studies show that local wake-like activations (‘islands of wakefulness’) can occur during both major sleep stages, i.e. non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. Preliminary evidence suggests that higher levels of local wake-like activity, not captured in standard polysomnographic recordings, might underlie the perception of disrupted sleep or even wakefulness during polysomnographic epochs of sleep in patients with chronic insomnia. To further decipher the neural mechanisms, advanced techniques such as high-density electroencephalography (hdEEG) and non-invasive brain stimulation can be applied. Furthermore, translating the concept of local sleep and wakefulness to the prevalent health problem of chronic insomnia might help to reduce the current mismatch between subjective sleep–wake perception and standard recordings, and might inform the development of new treatments.



Substantial mobility in & out of poverty: 41 pct of those in poverty in 2007 were out of poverty in the following year; however, many of those who are poor spend multiple years in poverty or escape poverty only to fall back into it

Presence and Persistence of Poverty in U.S. Tax Data. Jeff Larrimore, Jacob Mortenson, David Splinter. Feb 2020. http://www.davidsplinter.com/LMS_PersistencePoverty_2020.pdf

Abstract: This paper presents new estimates of the level and persistence of poverty among U.S. households since the Great Recession. We build new annual household data files using U.S. income tax filings between 2007 and 2018. These data, which are constructed for the population of U.S. residents, allow us to track individuals over time and measure how tax policies affect poverty trends. Using an after-tax household income measure, we estimate that over 4 in 10 people spent at least one year in poverty between 2007 and 2018. Those that experienced at least one year of poverty spent an average of one-fourth of the 12-year period in poverty. There is substantial mobility in and out of poverty. For example, 41 percent of those in poverty in 2007 were out of poverty in the following year. However, many of those who are poor spend multiple years in poverty or escape poverty only to fall back into it. Of those who were in poverty in 2007, one-third are in poverty for at least half of the years through 2018. We also document substantial heterogeneity in these trends by age: younger individuals experience higher rates of poverty but less persistence; older individuals experience lower rates of poverty but more persistence.
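Exit-rate and persistence figures of this kind amount to simple tabulations over a panel of yearly poverty indicators. A toy Python sketch with invented data (three people, twelve years) shows the two calculations; it is illustrative only, not the authors' procedure.

# Toy tabulation of poverty exit and persistence from yearly poverty flags, 2007-2018.
panel = {
    "person_a": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "person_b": [1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0],
    "person_c": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
}

poor_2007 = {p: flags for p, flags in panel.items() if flags[0] == 1}

# Share of the 2007 poor who were out of poverty in 2008
exit_rate = sum(flags[1] == 0 for flags in poor_2007.values()) / len(poor_2007)

# Share of the 2007 poor spending at least half of the 12 years in poverty
persistent = sum(sum(flags) >= 6 for flags in poor_2007.values()) / len(poor_2007)

print(f"exit rate: {exit_rate:.0%}, persistently poor: {persistent:.0%}")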


New Frontiers in Irritability Research—From Cradle to Grave and Bench to Bedside

New Frontiers in Irritability Research—From Cradle to Grave and Bench to Bedside. Neir Eshel, Ellen Leibenluft. JAMA Psychiatry. 2020;77(3):227-228, December 4, 2019. doi:10.1001/jamapsychiatry.2019.3686

We all know what it’s like to be irritable. Our partners walk on eggshells around us. The slightest trigger sets us off. If there’s a punching bag nearby, it had better watch out. Irritability, defined as a low threshold for experiencing frustration or anger, is common. In the right context, irritability can be adaptive, motivating us to overcome barriers or dominate our environment. When prolonged or disproportionate, however, irritability can be counterproductive, causing us to waste our energy on maladaptive behavior.

In recent years, there has been an increase in research on irritability in childhood, with an emerging literature on its neurobiology, genetics, and epidemiology.1 There is even a new diagnosis focused on this symptom, disruptive mood dysregulation disorder (DMDD). However, there is a dearth of irritability research in adults. This is regrettable, because irritability is an important clinical symptom in multiple mental illnesses throughout the life span. From depression to posttraumatic stress disorder, dementia to premenstrual dysphoric disorder, traumatic brain injury to borderline personality disorder, irritability is associated with extensive burdens on individuals, their families, and the general public.

In this Viewpoint we suggest that studying the brain basis for irritability across development and disorder could have substantial clinical benefits. Furthermore, we propose that irritability, like addiction or anxiety, is an evolutionarily conserved focus ready for translational neuroscience.

Diagnosis and Treatment Across the Life Span

Despite its clinical toll, there are few evidence-based treatments for irritability. The only US Food and Drug Administration–approved medications for irritability are risperidone and aripiprazole, which are approved only in the context of autism and are associated with adverse effects that limit their utility. Stimulants, serotonin reuptake inhibitors, and variants of cognitive behavioral therapy and parent management training show promise for different populations, but overall there is a shortage of options, leading many health care professionals to try off-label drug cocktails with unclear efficacy. This situation results in part from our primitive understanding of the phenomenology and brain mechanisms of irritability throughout the life span.
An emerging body of work focuses on measuring irritability in children and adolescents, determining comorbid disorders, and tracking related functional impairment.1 Multiple studies, for example, report that chronically irritable youth are at elevated risk for suicidality, depression, and anxiety in adulthood.2,3 But what are the clinical characteristics and longitudinal course of irritability in adults? Irritability diminishes from toddlerhood through school age, but does it continue to decrease monotonically with age into adulthood? What about the end of life? Irritability and aggression are common in patients with neurodegenerative disorders, but are these symptoms similar to those in a child with DMDD? There has been limited systematic study of irritability in adulthood, and studies that mention irritability in adulthood operationalize the construct in different ways. One study counted 21 definitions and 11 measures of irritability in the psychiatric literature, all of which overlapped with anger and aggression.4 This lack of clarity diminishes our ability to identify biomarkers or track treatment success. Even studies that use childhood irritability to predict adult impairment do not typically measure irritability in adults, thereby obscuring the natural history of irritability as a symptom.5 For the field to progress, it will be crucial to establish standard definitions and measurements spanning childhood through adulthood.
Beyond phenomenology, we need to identify brain signatures associated with the emergence, recurrence, and remission of irritability across the life span and during treatment. Irritability is a prototypical transdiagnostic symptom, but it remains unclear to what extent its brain mechanisms overlap across disorders. For example, in children, data suggest that the brain mechanisms mediating irritability in DMDD, anxiety disorders, and attention-deficit/hyperactivity disorder are similar but differ from those mediating irritability in childhood bipolar disorder.1,6 The frequency of irritable outbursts appears to diminish in step with the maturity of prefrontal regions during childhood.1 Could degeneration in the same structures predict reemergence of irritable outbursts in patients with dementia? Could developmental differences in these regions increase the likelihood of irritability when individuals are sleep deprived or intoxicated later in adolescence or adulthood? Only through fine-grained neuroscientific studies can we disentangle what is unique to the symptom (ie, irritability) and to the disorder (eg, bipolar disorder vs DMDD vs dementia), and develop treatments tailored to an individual’s brain pathology.
 
Translational Neuroscience and Irritability

In addition to their clinical relevance, neuroscientific studies of irritability can address fundamental questions about brain dysfunction and recovery. Over the past 2 decades, studies have revealed the circuits underlying reward processing, and in particular prediction error, the mismatch between expected and actual reward.7 The neuroscience of aggression has also advanced through the discovery of cells in the amygdala and hypothalamus that form a final common pathway for aggressive behavior.8 Irritability and the concept of frustrative nonreward can tie these 2 fields together.
Frustrative nonreward is the behavioral and emotional state that occurs in response to a negative prediction error, ie, the failure to receive an expected reward. In the classic study by Azrin et al,9 pigeons were trained to peck a key for food reward. After pigeons learned the task, the experimenters removed the reward; then when the pigeons pecked, nothing happened. For the next several minutes, there were 2 changes in the pigeons’ behavior. First, they pecked the key at a higher rate. Second, they became unusually aggressive, damaging the cage and attacking another pigeon nearby. In other words, a negative prediction error led to a state of frustration, which then induced increased motor activity and aggression. Such responses to frustration have been replicated in many species, including chimpanzees, cockerels, salmon, and human children and adults.10 Frustrative nonreward therefore provides an evolutionarily conserved behavioral association between prediction error and aggression. Apart from studies in children,1,6 however, little has been done to probe the neural circuits of frustrative nonreward or of irritability, which can be defined as a low threshold for experiencing frustrative nonreward.
We know, for example, that negative prediction errors cause phasic decreases in dopamine neuron firing, which help mediate learning by reducing the valuation of a stimulus. Does this dip in dopamine level also increase the likelihood of aggression and if so how? The same optogenetic techniques that have demonstrated a causal role for dopamine prediction errors in reward learning could be used to test their role in aggressive behavior. Likewise, multiple nodes in the reward circuit encode the value of environmental stimuli. Could these values modulate the propensity for aggression? Environments of plenty, for instance, may protect against aggressive outbursts, because if there is always more reward available, the missing out factor may not be salient. Conversely, scarcity could make individuals more likely to be aggressive, because if there are few rewards to be had, achieving dominance may be necessary for survival.
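The negative prediction error invoked here is the standard Rescorla–Wagner/temporal-difference quantity, delta = reward − expected value: when an expected reward is withheld, delta turns negative and the learned value is revised downward, paralleling the phasic dopamine dip. The Python sketch below illustrates that dynamic in the spirit of the Azrin pigeon experiment; it is a textbook toy model, not a simulation from the Viewpoint.

# Toy Rescorla-Wagner simulation of frustrative nonreward: after the reward is
# removed, prediction errors turn negative and the learned value decays.
alpha = 0.2   # learning rate
value = 0.0   # learned value of the key-peck cue

for trial in range(1, 31):
    reward = 1.0 if trial <= 20 else 0.0   # reward withheld from trial 21 on
    delta = reward - value                 # prediction error
    value += alpha * delta
    if trial in (20, 21, 25, 30):
        print(f"trial {trial:2d}: prediction error = {delta:+.2f}, value = {value:.2f}")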
Exploring the bidirectional associations between the reward processing and aggression circuits would help us understand state changes in the brain and how environmental context determines our behavior. At the same time, understanding these circuits will lay the groundwork for mechanism-based treatments for irritability.
 
Conclusions
The neuroscience of irritability is in its infancy and research has focused almost exclusively on children. We now have an opportunity to expand this field to adults, across disorders, and to animal models for more precise mechanistic studies. Through better measurement, careful experimental design, input from theorists and computational psychiatrists, and coordinated efforts across experts in multiple disorders, we can guide the field to maturity.

Reduction of Facebook use longitudinally increased life satisfaction, enhanced the level of physical activity, & reduced depressive symptoms and smoking behavior

Less Facebook use – More well-being and a healthier lifestyle? An experimental intervention study. Julia Brailovskaia et al. Computers in Human Behavior, March 6 2020, 106332. https://doi.org/10.1016/j.chb.2020.106332

Highlights
• Experimental reduction of Facebook use longitudinally increased life satisfaction.
• Reduction of Facebook use longitudinally enhanced the level of physical activity.
• Reduction of Facebook use longitudinally reduced depressive symptoms and smoking behavior.
• Less time spent on Facebook leads to more well-being and a healthier lifestyle.

Abstract: Use of the social platform Facebook belongs to daily life, but may impair subjective well-being. The present experimental study investigated the potential beneficial impact of reducing daily Facebook use. Participants were Facebook users from Germany. While the experimental group (N = 140; mean age [SD] = 24.15 [5.06] years) reduced its Facebook use by 20 min daily for two weeks, the control group (N = 146; mean age [SD] = 25.39 [6.69] years) used Facebook as usual. Variables of Facebook use, life satisfaction, depressive symptoms, physical activity and smoking behavior were assessed via online surveys at five measurement time points (pre-measurement, day 0 = T1; between-measurement, day 7 = T2; post-measurement, day 15 = T3; follow-up 1, one month after post-measurement = T4; follow-up 2, three months after post-measurement = T5). The intervention reduced active and passive Facebook use, Facebook use intensity, and the level of Facebook Addiction Disorder. Life satisfaction significantly increased, and depressive symptoms significantly decreased. Moreover, the frequency of physical activity such as jogging or cycling significantly increased, and the number of daily smoked cigarettes decreased. Effects remained stable during follow-up (three months). Thus, less time spent on Facebook leads to more well-being and a healthier lifestyle.