Monday, June 6, 2022

Liberals were more prone than conservatives to perceive a persona with the opposing political view as a bot

Why Some Are Better than Others at Detecting Social Bots: Comparing Baseline Performance to Performance with Aids and Training. Ryan John Kenny. Carnegie Mellon University PhD Dissertations, May 2022. https://www.proquest.com/openview/a907b0780af36f04b33cb1477c8aadc9/1?pq-origsite=gscholar&cbl=18750&diss=y

Abstract: Social bots have infiltrated many social media platforms, sowing misinformation and disinformation. The harm caused by social bots depends on their ability to avoid detection by credibly impersonating human users. These three studies use a signal detection task to compare human detection of Twitter social bot personas with that of machine learning assessments. Across these studies, we find that sensitivity was (1a) minimal without training or aid, (1b) people were hesitant to respond ‘bot,’ and (1c) people were prone to “myside bias,” judging personas less critically when they shared political views. We also observed (1d) sensitivity improved when a bot detection aid was provided and (1e) when users received training focused on the objectives of social bot creators: to amplify narratives to an extensive social network. When participants labeled a persona a social bot, (2) the probability of their willingness to share its content dropped dramatically. We investigated the relationships between users’ attributes and social bot detection performance and found, (3a) social media experience did not improve detection and at times impaired it; (3b) myside bias affected the sensitivity and criterion used by liberals and conservatives differently; and (3c) analytical reasoning did not improve social bot detection, nor did it mitigate observed myside bias effects, but increased them slightly. We found that (4) people were more concerned about social bots influencing others’ online behaviors than being influenced themselves. Additionally, users’ willingness to pay for a social bot detection aid increased (5a) the more they were concerned about social bots, (5b) the greater their social media experience, (5c) the greater their sensitivity, and (5d) the higher their threshold for responding ‘bot.’ These findings demonstrate the threat posed by social bots and two interventions that may reduce them.

---
Political Values and Political Differences

As participants' political difference (PD) from the persona increased, they were more likely to judge it a 'bot,' consistent with myside bias: a one-standard-deviation increase in PD shifted the intercept by 0.19. The post-hoc model adds participants' self-reported political values (PV) as main effects and interactions. In the post-hoc model, both liberal and conservative participants had a greater probability of responding 'bot' when viewing a persona of an opposing political view; however, liberals had a greater probability of responding 'bot' than conservatives.
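The reported 0.19 shift is on the logit (log-odds) scale. A minimal sketch of how such a shift moves the probability of responding 'bot' (the baseline logit here is hypothetical, not a value from the dissertation):

```python
import math

def p_bot(base_logit, pd_sd_units, shift_per_sd=0.19):
    """Probability of responding 'bot' under a logistic model.

    base_logit: hypothetical intercept for a persona at PD = 0 (illustrative).
    pd_sd_units: political difference in standard-deviation units.
    shift_per_sd: the reported 0.19 intercept shift per SD of PD.
    """
    return 1.0 / (1.0 + math.exp(-(base_logit + shift_per_sd * pd_sd_units)))

# At a neutral baseline, one SD of political difference raises
# P('bot') from 0.500 to about 0.547.
print(p_bot(0.0, 0.0), p_bot(0.0, 1.0))
```

On the probability scale the effect of the same logit shift depends on the baseline, which is why effects like this are usually reported as intercept changes rather than fixed percentage-point changes.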

The interaction between the bot indicator and political difference (BI x PD) found in both models is explained by adding political values in the post-hoc model. The significant three-way interaction (BI x PV x PD, p < 0.001) reveals an asymmetric pattern of sensitivity related to participants' political views. Figure 2 shows the relationship between PV and BI, with PD divided into five levels. In the upper left, when judging personas with similar political views, liberals were very sensitive to the bot indicator score (red line), while conservatives were not (purple line). At the other extreme, when judging personas with opposite political views, liberals were insensitive to bot indicator scores, while conservatives showed modest sensitivity.
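The sensitivity and criterion measures discussed throughout come from signal detection theory, where a 'hit' is correctly labeling a bot persona 'bot' and a 'false alarm' is labeling a human persona 'bot.' A minimal sketch of the standard computation (illustrative counts; this is not the dissertation's code, and the log-linear correction shown is one common convention for handling rates of 0 or 1):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c).

    Uses the log-linear correction (add 0.5 to each count) so that
    extreme rates of 0 or 1 do not produce infinite z-scores.
    A positive criterion means the respondent is hesitant to say 'bot.'
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A hesitant rater: rarely says 'bot,' so few hits and few false alarms.
d_hesitant, c_hesitant = dprime_and_criterion(10, 40, 2, 48)

# A balanced rater with the same proportion correct on each class.
d_balanced, c_balanced = dprime_and_criterion(40, 10, 10, 40)
```

Framing findings like (1b) this way separates two things the raw accuracy conflates: how well a rater discriminates bots from humans (d') and how willing they are to respond 'bot' at all (c).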
