Wednesday, November 24, 2021

Umbrella review: Most adolescents experienced no or negligible effects of social media use on mental health

Social media use and its impact on adolescent mental health: An umbrella review of the evidence. Patti M. Valkenburg, Adrian Meier, Ine Beyens. Current Opinion in Psychology, Volume 44, April 2022, Pages 58-68. https://doi.org/10.1016/j.copsyc.2021.08.017

Abstract: Literature reviews on how social media use affects adolescent mental health have accumulated at an unprecedented rate of late. Yet, a higher-level integration of the evidence is still lacking. We fill this gap with an up-to-date umbrella review, a review of reviews published between 2019 and mid-2021. Our search yielded 25 reviews: seven meta-analyses, nine systematic, and nine narrative reviews. Results showed that most reviews interpreted the associations between social media use and mental health as ‘weak’ or ‘inconsistent,’ whereas a few qualified the same associations as ‘substantial’ and ‘deleterious.’ We summarize the gaps identified in the reviews, provide an explanation for their diverging interpretations, and suggest several avenues for future research.

Keywords: Meta-review; Social networking sites; SNS; Facebook; Instagram; Well-being; Depression; Depressive symptoms

Discussion

In this umbrella review, we synthesized the results of 25 recent reviews of the effects of SMU on adolescent mental health. Given that adolescents’ SMU is continually changing, it is important to provide regular research updates on this use and its potential effects. In addition to the many important future directions raised in earlier reviews, we discuss three crucial avenues for future research.

Defining SMU, defining mental health

First, future research needs to consistently define the predictors and outcomes under investigation. Several reviews regularly switched between terms such as digital media use, technology use, and SMU without specifying to which media activities these terms refer. In some studies, emailing and gaming were part of the definitions of SMU, whereas others covered only time spent on SNSs. Such imprecise definitions may greatly hinder our understanding of the effects of SMU on mental health because different types of SMU may lead to different effects on mental health outcomes. For example, time spent on SNS is associated with higher levels of depression [17], whereas emotional connectedness to SNS (‘intensity of use’) [15] and the number of friends on SNS [16] are unrelated to depression. In the world of SM, everything is rapidly new and rapidly old, and, therefore, it is all the more important to define the specific types of SMU under investigation and to hypothesize how and why these types of SMU could affect mental health outcomes.

Likewise, in several reviews, both mental health and well-being were used as catchall terms that were left undefined, which sometimes led to the discussion of a potpourri of cognitive and affective outcomes that each deserve to be investigated in their own right. Our umbrella review confirmed that similar types of SMU can lead to opposite associations with different mental health outcomes [17]. Both SMU and mental health are highly complex constructs. Although most studies have focused on the associations of SMU with depression or depressive symptoms, all other constituent mental health outcomes, including their risk (e.g. loneliness) and resilience factors (e.g. self-esteem), also deserve our full research attention, provided that they are clearly defined and demarcated from other mental health outcomes.

Capturing the content and quality of SM interactions

Several reviews have pointed to the need to move away from possibly biased self-report measures toward more objective measures of SMU, such as log-based measures of time spent with SM. Indeed, self-report measures of time spent with SM correlate only moderately with comparable log-based measures [42,43]. However, although log-based measures are often seen as the gold standard, they have their own validity threats, such as technical errors and the erroneous tracing of SM apps running in the background when the screen is turned off [42,43]. This means that the modest correlations between self-reports and log-based measures could be due to validity issues of self-reports but also of the objective measures themselves. More importantly, though, most log-based measures only capture time spent with SM apps, which is just as crude a predictor of mental health as comparable self-report measures. If logging measures only reiterate the ‘screen time’ approach of most self-report research, they provide only a limited way forward.

To arrive at a true understanding of the effects of SMU on mental health, future research needs to adopt measures that capture adolescents' responses to specific content or qualities of SM interactions. In experimental settings, this can be realized by using mock SM sites, such as the Truman Platform (https://socialmedialab.cornell.edu/) or the mock SM site developed by Shaw et al. [44]. In non-experimental settings, there are three approaches that can be combined with survey or experience sampling studies: (1) The ‘Screenomics’ approach developed by Reeves et al. [45], which entails end-to-end software that randomly collects screenshots of adolescents’ smartphones, and extracts text and images; (2) phone-based mobile sensing [46], which captures sound via the microphone and text entered via the keyboard; and (3) analysis of SM ‘data download packages’ [47], the archives of SM interactions that each SM user is allowed to download. While each of these methods is promising, they require sophisticated technical skills and specific expertise. Therefore, they can best be achieved in collaborative interdisciplinary projects, which are also better equipped to realize larger samples.

Understanding inconsistent interpretations

Although the majority of the reviews concluded that the reported associations of SMU with mental health were small to moderate, some others interpreted these associations as serious [30], substantial [48] or detrimental [25]. Such diverging interpretations can also be witnessed in three recent publications on SMU and mental health by Twenge et al. [49], Orben and Przybylski [3], and Kreski et al. [50], all relying on the same UK-based data set. Such divides in the interpretation of the same modest effect sizes are certainly not new in the media effects field. For example, since the 1980s there has been a fierce debate among scholars about the effects of game violence on aggression (e.g. see the dispute in Psychological Bulletin about whether this effect is trivial or meaningful [51,52]). Oftentimes, the scholars involved do not disagree that much about the size of the reported effects but rather about how to interpret them.

What has often been ignored in such debates is that the effect sizes are just what they are: statistics observed at the aggregate level. Such statistics are typically derived from heterogeneous samples of adolescents who may differ greatly in their susceptibilities to the effects of environmental influences in general [53] and media influences in particular [54]. After all, each adolescent is subject to unique dispositional, social-context, and situational factors that guide their SMU and moderate its effects [55]. Such person-specific antecedents and effects of SMU cannot be captured by the aggregate-level statistics that have been reported in the majority of empirical studies and reviews, including the current one.

If we accept the propositions of media-specific susceptibility theories [54], it is plausible to assume that both optimistic and pessimistic conclusions about the effects of SMU are valid — they just refer to different adolescents. In fact, recent studies that have adopted an idiographic (i.e. N = 1 or person-specific) media effects paradigm [56] have found that a small group of adolescents experienced negative effects of SMU on well-being (around 10–15%) and another small group experienced positive effects (also around 10–15%). Reassuringly though, most adolescents experienced no or negligible effects [57].

A person-specific approach to media effects requires a large number of respondents and a large number of within-person observations per respondent. Indeed, statistical power is expensive. However, due to rapidly advancing technological (e.g. phone-based experience sampling methods) and methodological developments (e.g. N = 1 time series analyses), such approaches are increasingly within everyone's reach, especially when researchers pool resources in interdisciplinary teams. A person-specific media effects paradigm may not only help academics resolve controversies between optimistic and pessimistic interpretations of aggregate-level effect sizes, but it may also help us understand when, why, and for whom SMU can lead to positive or negative effects on mental health. And above all, it may help us facilitate personalized prevention and intervention strategies to help adolescents maintain or improve their mental health.
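The contrast drawn above between aggregate-level statistics and person-specific effects can be illustrated with a small simulation. This is a toy sketch, not data from the reviewed studies: the effect sizes, sample sizes, and the roughly 15% negative / 70% negligible / 15% positive split are assumptions loosely based on the idiographic findings cited in the text.

```python
# Toy simulation of heterogeneous, person-specific effects of social media
# use (SMU) on well-being. All parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_obs = 200, 100          # respondents x within-person observations

# Assumed split: ~15% negative effect, ~70% negligible, ~15% positive effect
groups = rng.choice(["negative", "null", "positive"], size=n_persons,
                    p=[0.15, 0.70, 0.15])
slopes = np.where(groups == "negative", -0.4,
         np.where(groups == "positive", 0.4, 0.0))

# Simulate repeated measures and estimate each person's slope by OLS
est = np.empty(n_persons)
for i in range(n_persons):
    smu = rng.normal(size=n_obs)                   # centered SMU per occasion
    wb = slopes[i] * smu + rng.normal(size=n_obs)  # well-being plus noise
    est[i] = np.polyfit(smu, wb, 1)[0]             # person-specific slope

print("aggregate (mean) effect:", round(est.mean(), 3))  # near zero
print("share clearly negative: ", np.mean(est < -0.2))
print("share clearly positive: ", np.mean(est > 0.2))
```

The aggregate effect is close to zero even though sizeable minorities experience real negative or positive effects, which is exactly why aggregate-level effect sizes can support both optimistic and pessimistic readings.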

Politically-motivated reasoning is similar for both men and women, but only men find it particularly attractive to believe that they outperform others

Gender differences in motivated reasoning. Michael Thaler. Journal of Economic Behavior & Organization, Volume 191, November 2021, Pages 501-518. https://doi.org/10.1016/j.jebo.2021.09.016

Highlights

• I experimentally study whether there are gender differences in motivated reasoning.

• I find significant gender differences in motivated reasoning about performance on a knowledge task: men systematically engage in motivated reasoning to believe they outperformed others, while women do not.

• I also find that there are sizeable gender differences in overconfidence in the same direction as motivated reasoning.

• I find that gender differences in motivated reasoning are observed only for the question about performance; there are no gender differences in politically-motivated reasoning.

• Results suggest that men and women are both susceptible to motivated reasoning in general, but that only men find it particularly attractive to believe that they outperform others.

Abstract: Men and women systematically differ in their beliefs about their performance relative to others; in particular, men tend to be more overconfident. This paper provides support for one explanation for gender differences in overconfidence, performance-motivated reasoning, in which people distort how they process new information in ways that make them believe they outperformed others. Using a large online experiment, I find that male subjects distort information processing in ways that favor their performance, while female subjects do not systematically distort information processing in either direction. These statistically-significant gender differences in performance-motivated reasoning mimic gender differences in overconfidence; beliefs of male subjects are systematically overconfident, while beliefs of female subjects are well-calibrated on average. The experiment also includes political questions, and finds that politically-motivated reasoning is similar for both men and women. These results suggest that, while men and women are both susceptible to motivated reasoning in general, men find it particularly attractive to believe that they outperformed others.

Keywords: Motivated reasoning; Overconfidence; Gender differences; Experimental economics

JEL: J16; D83; C91; D91


Three key ways people/organisms can differ in a trait: their mean level (personality), how variable the trait is within an individual over time/contexts (predictability), and how responsive the trait is across differing ecologies/contexts (plasticity)

Unifying individual differences in personality, predictability and plasticity: A practical guide. Rose E. O'Dea, Daniel W. A. Noble, Shinichi Nakagawa. Methods in Ecology and Evolution, November 1, 2021. https://doi.org/10.1111/2041-210X.13755

Abstract

Organisms use labile traits to respond to different conditions over short time-scales. When a population experiences the same conditions, we might expect all individuals to adjust their trait expression to the same, optimal, value, thereby minimising phenotypic variation. Instead, variation abounds. Individuals substantially differ not only from each other, but also from their former selves, with the expression of labile traits varying both predictably and unpredictably over time.

A powerful tool for studying the evolution of phenotypic variation in labile traits is the mixed model. Here, we review how mixed models are used to quantify individual differences in both means and variability, and their between-individual correlations. Individuals can differ in their average phenotypes (e.g. behavioural personalities), their variability (known as ‘predictability’ or intra-individual variability), and their plastic response to different contexts.

We provide detailed descriptions and resources for simultaneously modelling individual differences in averages, plasticity and predictability. Empiricists can use these methods to quantify how traits covary across individuals and test theoretical ideas about phenotypic integration. These methods can be extended to incorporate plastic changes in predictability (termed ‘stochastic malleability’).

Overall, we showcase the unfulfilled potential of existing statistical tools to test more holistic and nuanced questions about the evolution, function, and maintenance of phenotypic variation, for any trait that is repeatedly expressed.
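The three axes of individual variation in the abstract (personality, plasticity, predictability) can be sketched with simulated data. The authors' guide works with hierarchical mixed models (in R); the following is only a minimal Python illustration using per-individual regressions on made-up data, without the shrinkage a proper mixed model would apply.

```python
# Toy sketch (assumed parameter values, not the authors' tutorial) of three
# axes of individual variation in a labile trait:
#   personality    - an individual's mean trait level
#   plasticity     - the individual's slope on an environmental gradient
#   predictability - the individual's residual (within-individual) noise
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_rep = 50, 60                     # individuals x repeated measures

mu = rng.normal(0.0, 1.0, n_ind)          # true personality (mean level)
beta = rng.normal(0.5, 0.3, n_ind)        # true plasticity (slope)
sigma = rng.lognormal(-0.5, 0.4, n_ind)   # true residual SD (unpredictability)

env = rng.uniform(-1, 1, (n_ind, n_rep))  # environmental context per observation
y = mu[:, None] + beta[:, None] * env + rng.normal(0, sigma[:, None], (n_ind, n_rep))

# Per-individual estimates of all three quantities
est_mu = np.empty(n_ind); est_beta = np.empty(n_ind); est_sigma = np.empty(n_ind)
for i in range(n_ind):
    b, a = np.polyfit(env[i], y[i], 1)    # per-individual regression
    est_beta[i], est_mu[i] = b, a
    est_sigma[i] = np.std(y[i] - (a + b * env[i]))  # residual SD

print("spread in means (personality):      ", round(est_mu.std(), 2))
print("spread in slopes (plasticity):      ", round(est_beta.std(), 2))
print("spread in residual SDs (predictability):", round(est_sigma.std(), 2))
```

With enough repeated measures per individual, the three sets of estimates recover the true individual differences; a mixed model does the same job jointly while partially pooling individuals, which matters when observations per individual are few.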


Successful blinding is an important feature of double-blind randomized controlled trials, & ensures that the safety and efficacy of treatments are accurately appraised; but blinding is not successful among either patients or investigators

A systematic review and meta-analysis of the success of blinding in antidepressant RCTs. Amelia J Scott, Louise Sharpe, Ben Colagiuri. Psychiatry Research, November 24 2021, 114297. https://doi.org/10.1016/j.psychres.2021.114297

Highlights

• Successful blinding is an important feature of double-blind randomized controlled trials, and ensures that the safety and efficacy of treatments are accurately appraised.

• In a range of fields (e.g. chronic pain, general medicine), few trials report assessing the success of blinding.

• We do not know the frequency or success of blinding assessment among antidepressant RCTs for depression.

• Only 4.7% of RCTs examining antidepressants in depression assess blinding.

• Overall, blinding is not successful among either patients or investigators.

Abstract: Successful blinding in double-blind RCTs is crucial for minimizing bias; however, studies rarely report information about blinding. Among RCTs for depression, the rates of testing and the success of blinding are unknown. We conducted a systematic review and meta-analysis of the rates of testing, predictors, and success of blinding in RCTs of antidepressants for depression. Following systematic search, further information about blinding assessment was requested from corresponding authors of the included studies. We reported the frequency of blinding assessment across all RCTs, and conducted logistic regression analyses to assess predictors of blinding reporting. Participant and/or investigator guesses about treatment allocation were used to calculate Bang's Blinding Index (BI). The BI between RCT arms was compared using meta-analysis. Across the 295 included trials, only 4.7% of studies assessed blinding. Pharmaceutical company sponsorship predicted blinding assessment; unsponsored trials were more likely to assess blinding. Meta-analysis suggested that blinding was unsuccessful among participants and investigators. Results suggest that blinding is rarely assessed, and often fails, among RCTs of antidepressants. This is concerning considering controversy around the efficacy of antidepressant medication. Blinding should be routinely assessed and reported in RCTs of antidepressants, and trial outcomes should be considered in light of blinding success or failure.
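For reference, Bang's Blinding Index for a single trial arm is computed from counts of correct guesses, incorrect guesses, and "don't know" responses, and simplifies to (correct − incorrect) / N. It ranges from −1 to 1: 0 is consistent with successful blinding, values near 1 indicate unblinding, and negative values indicate opposite guessing. The counts below are hypothetical, chosen only to illustrate the calculation.

```python
# Bang's Blinding Index (Bang et al., 2004) for one trial arm.
# BI = (correct - incorrect) / N, where "don't know" responses count
# toward N but not toward either guess category.
def bang_blinding_index(correct, incorrect, dont_know):
    n = correct + incorrect + dont_know
    return (correct - incorrect) / n

# Hypothetical active-drug arm: many participants correctly guess their
# allocation (e.g. via side effects) -> positive BI, blinding likely failed.
print(bang_blinding_index(correct=60, incorrect=20, dont_know=20))  # 0.4

# Hypothetical well-blinded arm: guesses balance out -> BI near 0.
print(bang_blinding_index(correct=35, incorrect=35, dont_know=30))  # 0.0
```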

Keywords: Randomized controlled trials; Blinding; Depression; Antidepressants