Wednesday, November 23, 2022

We know little about where users draw the line when it comes to offensive language, and what measures they wish to see implemented when content crosses the boundary of what is deemed acceptable.

Pradel, Franziska, Jan Zilinsky, Spyros Kosmidis, and Yannis Theocharis. 2022. "Do Users Ever Draw a Line? Offensiveness and Content Moderation Preferences on Social Media." OSF Preprints, November 22. doi:10.31219/osf.io/y4xft.

Abstract: When is content on social media offensive enough to warrant content moderation? While social media platforms impose limits on what can be posted, we know little about where users draw the line when it comes to offensive language, and what measures they wish to see implemented when content crosses the boundary of what is deemed acceptable. Conducting randomized experiments with over 5,000 participants, we study how different types of offensive language causally affect users' content moderation preferences. We quantify the causal effects of uncivil, intolerant, and threatening language by randomly introducing these aspects into fictitious social media posts targeting various social groups. While overall there is limited demand for action against offensive behavior, the severity of the attack matters to the average participant. Among our treatments, violent threats cause the greatest support for content moderation of various types, including punishments that would be viewed as censorship in some contexts, such as taking down content or suspending accounts.
