Tuesday, March 16, 2021

There is a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation

The Psychology of Fake News. Gordon Pennycook, David G. Rand. Trends in Cognitive Sciences, March 15, 2021. https://doi.org/10.1016/j.tics.2021.02.007

Highlights

Recent evidence contradicts the common narrative that partisanship and politically motivated reasoning explain why people fall for 'fake news'.

Poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, as well as to the use of familiarity and source heuristics.

There is also a large disconnect between what people believe and what they will share on social media, and this is largely driven by inattention rather than by purposeful sharing of misinformation.

Effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.


Abstract: We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.

Keywords: fake news; misinformation; social media; news media; motivated reasoning; dual process theory; crowdsourcing; attention; information sharing

What Can Be Done? Interventions To Fight Fake News

We now turn to the implications of these findings for interventions intended to decrease the spread and impact of online misinformation.

Current Approaches for Fighting Misinformation

As social media companies are, first and foremost, technology companies, a common approach is the automated detection of problematic news via machine learning, natural language processing, and network analysis [74,75,76]. Content classified as problematic is then down-ranked by the ranking algorithm such that users are less likely to see it. However, creating an effective misinformation classifier faces two fundamental challenges. First, truth is not a black-and-white, clearly defined property: even professional fact-checkers often disagree on how exactly to classify content [77,78]. Thus, it is difficult to decide what content and features should be included in training sets, and artificial intelligence approaches run the risk of false positives and, therefore, of unjustified censorship [79]. Second, there is the problem of nonstationarity: misinformation content tends to evolve rapidly, and therefore the features which are effective at identifying misinformation today may not be effective tomorrow. Consider, for example, the rise of COVID-19 misinformation in 2020 – classifiers trained to detect largely political content were likely unequipped to be effective for novel false and misleading claims relating to health.
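The detect-and-down-rank pipeline described above is easy to picture with a toy sketch. The snippet below is a minimal illustration, not any platform's actual system: the headlines, labels, and decision threshold are invented, and real classifiers also use network and behavioral signals. The comments mark where the two challenges (false positives and nonstationarity) enter.

```python
# Minimal sketch of a text-based misinformation classifier (illustration only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: invented headlines, labels 1 = problematic, 0 = reliable.
headlines = [
    "Miracle cure hidden by doctors, share before it is deleted",
    "Senate passes infrastructure bill after lengthy debate",
    "Secret memo proves election was decided in advance",
    "Local hospital expands vaccination clinic hours",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Down-ranking decision: content scored above a threshold is shown less often.
# This is where both problems bite: an aggressive threshold risks false
# positives (unjustified censorship), and a model trained mostly on political
# content may score novel topics (e.g., health misinformation) poorly.
new_headline = ["Scientists warn new virus spreads through 5G towers"]
prob_problematic = model.predict_proba(new_headline)[0, 1]
demote = prob_problematic > 0.5
print(f"score={prob_problematic:.2f}, demote={demote}")
```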

Another commonly used approach involves attaching warnings to content that professional fact-checkers have found to be false (reviewed in [80,81]). A great deal of evidence indicates that corrections and warnings do successfully reduce misperceptions [41,81,82,83] and sharing [49,84,85]. Despite some early evidence that corrections could backfire and increase belief in false content [86], recent work has shown that these backfire effects are extremely uncommon and are not a cause for serious concern [87,88].

There are, however, other reasons to be cautious about the sufficiency of professional fact-checking. Most importantly, fact-checking is simply not scalable – it typically requires substantial time and effort to investigate whether a particular claim is false or misleading. Thus, many (if not most) false claims never get fact-checked. Even for those claims that do eventually get flagged, the process is often slow, such that warnings are likely to be absent during the claim's period of peak viral spreading. Furthermore, warnings are typically only attached to blatantly false news, and not to extremely misleading or biased coverage of events that actually occurred. In addition to straightforwardly undermining the reach of fact-checks, this sparse application of warnings could lead to an 'implied truth' effect where users may assume that (false or misleading) headlines without warnings have actually been verified [84]. Fact-checks often also fail to reach their intended audience [89], and may fade over time [90], provide incomplete protection against familiarity effects [49], and cause corrected users to subsequently share more low-quality and partisan content [91].

Another commonly referenced approach is to emphasize the publishers of news articles, seeking to leverage the reliance on source cues described earlier. This could, in theory, be effective because people (at least in the USA) are actually fairly good at distinguishing between low- and high-quality publishers [92]. However, experimental evidence on emphasizing news publishers is not very encouraging: numerous studies find that making source information more salient (or removing it entirely) has little impact on whether people judge headlines to be accurate or inaccurate [37,93,94,95,96,97] (although see [98,99]).

New Approaches for Fighting Misinformation

One potentially promising alternative class of interventions involves a more proactive 'inoculation' or 'prebunking' against misinformation [8,100]. For example, the 'Bad News Game' uses a 10–20 minute interactive tutorial to teach people how to identify fake news in an engaging way [101]. An important limitation of such approaches is that they are 'opt in' – that is, people have to actively choose to engage with the inoculation technique (often for a fairly substantial amount of time – at least in terms of the internet attention span [102]). This is particularly problematic given that those most in need of 'inoculation' against misinformation (e.g., people who are low on cognitive reflection) may be the least likely to seek out and participate in lengthy inoculations. Lighter-touch forms of inoculation that simply present people with information that helps them to identify misinformation (e.g., in the context of climate change [103]) may be more scalable. For example, presenting a simple list of 12 digital media literacy tips improved people's capacity to discern between true and false news in the USA and India [104].

Both fact-checking and inoculation approaches are fundamentally directed toward improving people's underlying knowledge or skills. However, as noted earlier, recent evidence indicates that misinformation may spread on social media not only because people are confused or lack the competency to recognize fake news, but also (or even mostly) because people fail to consider accuracy at all when they make choices about what to share online [21,44]. In addition, as mentioned, people who are more intuitive tend to be worse at distinguishing between true and false news content, both in terms of belief (Figure 1A) and sharing [35,71]. This work suggests that interventions aimed at getting people to slow down and reflect about the accuracy of what they see on social media may be effective in slowing the spread of misinformation.

Indeed, recent research shows that a simple accuracy prompt – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) before making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments [21,44]. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a politically neutral news headline were sent to thousands of accounts that had recently shared links to misinformation sites [21]. This subtle prompt significantly increased the quality of the news they subsequently shared (Figure 2B). Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment [105], and having participants rate accuracy at the time of encoding protects against familiarity effects [106]. Relatedly, metacognitive prompts – probing questions that make people reflect – increase resistance to inaccurate information [107].
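For concreteness, 'sharing discernment' in these studies is the gap between how often true versus false headlines get shared, compared across conditions. The sketch below uses invented numbers solely to show how an accuracy prompt is expected to move that gap (reducing false-news sharing while leaving true-news sharing roughly intact); it is not data from the cited experiments.

```python
# Sketch of how sharing discernment is typically scored: the difference between
# the sharing rates for true and for false headlines, per condition.

def discernment(shares):
    """shares: list of (is_true, was_shared) pairs for one condition."""
    true_shares = [s for t, s in shares if t]
    false_shares = [s for t, s in shares if not t]
    return sum(true_shares) / len(true_shares) - sum(false_shares) / len(false_shares)

# Hypothetical data: the prompt cuts false-news sharing, so discernment rises.
control = [(True, 1), (True, 1), (True, 0), (False, 1), (False, 1), (False, 0)]
prompted = [(True, 1), (True, 1), (True, 0), (False, 0), (False, 1), (False, 0)]

print("control discernment:", round(discernment(control), 2))
print("prompted discernment:", round(discernment(prompted), 2))
```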

A major advantage of such accuracy prompts is that they are readily scalable. There are many ways that social media companies, or other interested parties such as governments or civil society organizations, could shift people's attention to accuracy (e.g., through ads, by asking about the accuracy of content that is shared, or via public service announcements). In addition to scalability, accuracy prompts also have the normative advantage of not relying on a centralized arbiter to determine truth versus falsehood. Instead, they leverage users' own (often latent) ability to make such determinations themselves, preserving user autonomy. Naturally, this will not be effective for everyone all of the time, but it could have a positive effect in the aggregate as one of the various tools used to combat misinformation.

Finally, platforms could also harness the power of human reasoning and the 'wisdom of crowds' to improve the performance of machine-learning approaches. While professional fact-checking is not easily scalable, it is much more tractable for platforms to have large numbers of non-experts rate news content. Despite potential concerns about political bias or lack of knowledge, recent work has found high agreement between layperson crowds and fact-checkers when evaluating the trustworthiness of news publishers: the average Democrat, Republican, and fact-checker all gave fake news and hyperpartisan sites very low trust ratings [92] (Figure 3A). This remained true even when layperson raters were told that their responses would influence social media ranking algorithms, creating an incentive to 'game the system' [108]. However, these studies also revealed a weakness of publisher-based crowd ratings: familiarity with a publisher was necessary (although not sufficient) for trust, meaning that new or niche publishers are unfairly punished by such a rating scheme. One solution to this problem is to have laypeople rate the accuracy of individual articles or headlines (rather than publishers), and to then aggregate these item-level ratings to create average scores for each publisher (Figure 3B). Furthermore, the layperson ratings of the articles themselves are also useful. An analysis of headlines flagged for fact-checking by an internal Facebook algorithm found that the average accuracy rating of fairly small layperson crowds correlated as well with the ratings of professional fact-checkers as the fact-checkers correlated with each other [77]. Thus, using crowdsourcing to add a 'human in the loop' element to misinformation detection algorithms is promising.
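As a back-of-the-envelope illustration of the item-level crowdsourcing idea, the sketch below averages small crowds of layperson ratings per headline and correlates the crowd means with fact-checker ratings. All ratings are invented and the 1–7 scale is an assumption for illustration; the point is only the shape of the computation, not the published results.

```python
# Sketch: average small layperson crowds per headline, then compare the crowd
# means with fact-checker ratings on the same (assumed 1-7) accuracy scale.
from statistics import mean

layperson_ratings = {
    "headline_a": [2, 1, 3, 2, 2],
    "headline_b": [6, 7, 5, 6, 6],
    "headline_c": [4, 3, 5, 4, 4],
}
fact_checker = {"headline_a": 1, "headline_b": 7, "headline_c": 4}

crowd_means = {h: mean(r) for h, r in layperson_ratings.items()}

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

items = sorted(crowd_means)
r = pearson([crowd_means[h] for h in items], [fact_checker[h] for h in items])
print("crowd vs fact-checker correlation:", round(r, 2))

# Item-level means can also be rolled up into a per-publisher score, which
# avoids penalizing unfamiliar publishers the way publisher-level trust
# ratings do.
```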

These observations about the utility of layperson ratings have a strong synergy with the aforementioned idea of prompts that shift users' attention to accuracy: periodically asking social media users to rate the accuracy of random headlines both (i) shifts attention to accuracy and thus induces the users to be more discerning in their subsequent sharing, and (ii) generates useful ratings to help inform ranking algorithms.
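The two pieces could plausibly fit together in something like the following sketch, where an occasional accuracy prompt both nudges the prompted user and feeds a crowd score into the feed ranking. Everything here (the prompt rate, the 0–1 rating scale, the blending weight, and the simulated user response) is a hypothetical assumption, not a description of any platform's implementation.

```python
# Sketch: accuracy prompts that double as a crowdsourced ranking signal.
import random
from collections import defaultdict

crowd_scores = defaultdict(list)  # item id -> accuracy ratings collected via prompts

def ask_user_to_rate(item):
    # Stand-in for a UI prompt; here we simply simulate a 0-1 accuracy rating.
    return random.random()

def maybe_prompt(user_feed, prompt_rate=0.05):
    """Occasionally ask the user to rate a random headline's accuracy."""
    if user_feed and random.random() < prompt_rate:
        item = random.choice(user_feed)
        rating = ask_user_to_rate(item)
        crowd_scores[item["id"]].append(rating)  # the rating also informs ranking

def rank_feed(items, weight=0.5):
    """Blend engagement with the crowd accuracy signal (unrated items get a neutral 0.5)."""
    def score(item):
        ratings = crowd_scores.get(item["id"], [])
        accuracy = sum(ratings) / len(ratings) if ratings else 0.5
        return (1 - weight) * item["engagement"] + weight * accuracy
    return sorted(items, key=score, reverse=True)

# Tiny simulation: a five-item feed, prompted over many sessions.
feed = [{"id": i, "engagement": random.random()} for i in range(5)]
for _ in range(200):
    maybe_prompt(feed)
print([item["id"] for item in rank_feed(feed)])
```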
