Tuesday, August 27, 2019

Out of all the participating users who post comments in a particular shaming event, the majority are likely to shame the victim; moreover, shamers' follower counts grow faster than those of nonshamers on Twitter.

Online Public Shaming on Twitter: Detection, Analysis, and Mitigation. Rajesh Basak; Shamik Sural; Niloy Ganguly; Soumya K. Ghosh. IEEE Transactions on Computational Social Systems, Volume 6, Issue 2, April 2019, pp. 208–220, DOI: 10.1109/TCSS.2019.2895734

Abstract: Public shaming in online social networks and related online public forums like Twitter has been increasing in recent years. These events are known to have a devastating impact on the victim's social, political, and financial life. Notwithstanding its known ill effects, little has been done in popular online social media to remedy this, often with the excuse of the large volume and diversity of such comments and, therefore, the infeasible number of human moderators required to achieve the task. In this paper, we automate the task of public shaming detection in Twitter from the perspective of victims and explore primarily two aspects, namely, events and shamers. Shaming tweets are categorized into six types: abusive, comparison, passing judgment, religious/ethnic, sarcasm/joke, and whataboutery, and each tweet is classified into one of these types or as nonshaming. It is observed that out of all the participating users who post comments in a particular shaming event, the majority of them are likely to shame the victim. Interestingly, it is also the shamers whose follower counts increase faster than those of the nonshamers on Twitter. Finally, based on the categorization and classification of shaming tweets, a web application called BlockShame has been designed and deployed for on-the-fly muting/blocking of shamers attacking a victim on Twitter.
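The abstract does not specify the features or model the authors use, so the following is only a minimal sketch of the kind of multi-class tweet classification the paper describes: mapping each tweet to one of the six shaming types or to nonshaming. It assumes scikit-learn, a TF-IDF bag-of-words representation, a linear SVM, and a hypothetical labeled corpus (train_tweets, train_labels); none of these are claimed to match the authors' actual pipeline.

# Sketch only: seven-way tweet classification (six shaming types + nonshaming),
# not the method from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

LABELS = ["abusive", "comparison", "passing_judgment",
          "religious_ethnic", "sarcasm_joke", "whataboutery", "nonshaming"]

# Hypothetical placeholder training data; a real system would use a labeled tweet corpus.
train_tweets = ["you should be ashamed of yourself ...", "interesting article, thanks for sharing"]
train_labels = ["abusive", "nonshaming"]

# TF-IDF over word unigrams/bigrams feeding a linear SVM, a common text-classification baseline.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(C=1.0),
)
clf.fit(train_tweets, train_labels)

print(clf.predict(["another tweet to categorize"]))

In a deployment like the BlockShame application described above, predictions of this kind would presumably drive the decision to mute or block an account, but the actual mechanism is not detailed in the abstract.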
