Thursday, August 25, 2022

Rolf Degen summarizing... Deepfakes are much less impactful than the scaremongering in the media might lead you to believe

You Won't Believe What They Just Said! The Effects of Political Deepfakes Embedded as Vox Populi on Social Media. Michael Hameleers, Toni G. L. A. van der Meer, Tom Dobber. Social Media + Society, August 25, 2022. https://doi.org/10.1177/20563051221116346

Abstract: Disinformation has been regarded as a key threat to democracy. Yet, we know little about the effects of different modalities of disinformation, or the impact of disinformation disseminated through (inauthentic) social media accounts of ordinary citizens. To test the effects of different forms of disinformation and their embedding, we conducted an experimental study in the Netherlands (N = 1,244). In this experiment, we investigated the effects of disinformation (contrasted with both similar and dissimilar authentic political speeches), the role of modality (textual manipulation versus a deepfake), and the disinformation’s embedding on social media (absent, endorsed, or discredited by an (in)authentic citizen). Our main findings indicate that deepfakes are less credible than authentic news on the same topic. Deepfakes are not more persuasive than textual disinformation. Although we did find that disinformation affects the perceived credibility and source evaluations of people who tend to agree with the stance of the disinformation’s arguments, our findings suggest that the strong societal concerns about deepfakes’ destabilizing impact on democracy are not completely justified.

Keywords: deepfakes, disinformation, endorsement, misinformation


Although previous research has experimentally tested the impact of political deepfakes (Dobber et al., 2020; Vaccari & Chadwick, 2020), important questions remain: Do deepfakes in the political realm have a persuasive advantage over textual modes of deception, and are their effects contingent upon endorsement by (fake) social media accounts in a “participatory” disinformation order (Lukito et al., 2020; Starbird, 2019)? Our main findings indicate that disinformation was rated as substantially less credible than an unrelated authentic speech. However, disinformation was not rated as substantially less credible than malinformation based on an authentic speech of the depicted political actor. Finally, exposure to a deepfake did not yield stronger effects than exposure to textual disinformation.

These findings contradict the ubiquitous concerns about deepfakes in the current digital information age and are not in line with the literature on the persuasiveness of multimodal framing (Powell et al., 2018) or deepfakes (Lee & Shin, 2021). To some extent, the lack of effects is in line with previous research indicating that deepfakes do not directly mislead news users (Dobber et al., 2020; Vaccari & Chadwick, 2020). Rather, deepfakes may have a more indirect effect by making recipients unsure about what or whom to believe, which, in turn, reduces people’s trust in (online) news (Vaccari & Chadwick, 2020).

Regarding the embedding of disinformation on social media, we found that discrediting the fabricated statements can make disinformation appear more credible; the social endorsement cue does not have this effect. However, endorsing a disinformation message on social media resulted in a more positive evaluation of the depicted politician compared to the absence of such an endorsement. This can be understood as an in-group-serving bias: The presence of a source cue similar to the recipient may enhance credibility. These findings show that disinformation agents employing trolls or bots with inauthentic social media profiles increase a message’s credibility only when the fake message is discredited. The democratic implications of these findings are optimistic: Deepfakes, at least given the current state of the art, do not seem to be as dangerous for society as assumed (Paris & Donovan, 2020). While concerns about information pollution and eroding public trust remain, deepfakes’ ability to destabilize democracy should not be overstated.

We did find some support for the conditional effects of deepfakes and textual disinformation. Overall, disinformation is more effective in shifting the credibility ratings and positive evaluations of the depicted politician among news users already inclined to support the attitudinal stance of the disinformation’s statements, which is in line with extant research on the indirect impact of disinformation campaigns (e.g., Schaewitz et al., 2020). We can understand this as a confirmation bias (e.g., Knobloch-Westerwick et al., 2017): Disinformation that is congruent with people’s prior beliefs may reinforce those existing beliefs. Yet, we did not find support for a moderating role of prior levels of issue agreement on disinformation’s impact on message-congruent beliefs. We can explain this as a ceiling effect: People already inclined to support the attitudinal stance of the disinformation do not further bolster their beliefs on the basis of a single message that reaffirms the attitudes they already hold.

Despite contributing to our understanding of the political consequences of exposure to socially endorsed deepfakes, this article has a number of limitations. As the deepfake was almost as credible as an authentic decontextualized video (malinformation), we believe that the lack of effects was due not only to technological shortcomings of the deepfake itself but also to the implausibility of the extremist political statements attributed to the more moderate political actor. Yet, we aimed to strike a balance between statements actually voiced by the political actor and a delegitimizing narrative that would make him look bad. To achieve this, some deviation from the truth and from familiar statements was needed. That people do not clearly differentiate between fabricated disinformation and decontextualized malinformation is an important finding in its own right: In times when the truth has become more relative (e.g., Van Aelst et al., 2017), people may also distrust authentic information when it triggers suspicion due to its unusual nature.

This specific trade-off between audiovisual credibility and argumentative discrepancies needs to be teased out further in future work: How far can a deepfake deviate from a political actor’s profile and still be perceived as credible, and what persuasive techniques can be used to make inauthentic arguments seem real? Future research may therefore experiment with conditions that are more or less plausible and more or less distant from the everyday communication of a known target, and may more centrally take into account people’s existing knowledge and beliefs about the depicted politician’s issue positions. If deepfakes are no longer credible when they deviate too much from reality, this may have positive implications for democracy: There are limits to the “fake reality” shown in synthetic videos, and deepfakes cannot make everyone say anything while remaining credible.

We should also note that the construct of perceived credibility we used may mean different things to different participants. While some may interpret the statements as referring to the authenticity of the presented materials, others may see them as referring to the “truth value” of the statements themselves (Lewandowsky, 2021). Against this backdrop, some participants may find a deepfake not credible because the statements lack truth value, whereas others detect deception in the presentation of the video. Although robustness checks distinguishing between these drivers of credibility do not point to substantial differences, we suggest that future research rely on a more comprehensive, multidimensional measure of credibility that distinguishes between these interpretations. In addition, although exposure to one short deepfake on its own may not affect polarization or political evaluations, cumulative (targeted or algorithmic) exposure to attitude-consistent disinformation may, over time, exert a stronger influence on people’s beliefs and behaviors. Finally, future research may also need to take into account individual-level differences that could predict susceptibility and resilience to disinformation, such as people’s trust in social versus mainstream media and in formats more likely to contain disinformation.

As a key take-away, we stress that although the disruptive impact of deepfakes on democracy should not be overstated, deepfakes’ ability to become part of native online political discussions may confer a persuasive advantage when they are deployed in nuanced ways to delegitimize political actors or to amplify the political beliefs of targeted groups in society via social media.
