Friday, November 20, 2020

Neither non-algorithmic nor algorithmically determined news contributes to higher levels of partisan polarization; & getting news from sites with algorithm-generated content corresponds with more political participation

Exploring the Effects of Algorithm-Driven News Sources on Political Behavior and Polarization. Jessica T. Feezell, John K. Wagner, Meredith Conroy. Computers in Human Behavior, Nov 19 2020, 106626, https://doi.org/10.1016/j.chb.2020.106626.

Abstract: Do algorithm-driven news sources have different effects on political behavior when compared to non-algorithmic news sources? Media companies compete for our scarce time and attention; one way they do this is by leveraging algorithms to select the most appealing content for each user. While algorithm-driven sites are increasingly popular sources of information, we know very little about the effects of algorithmically determined news at the individual level. The objective of this paper is to define and measure the effects of algorithmically generated news. We begin by developing a taxonomy of news delivery by distinguishing between two types of algorithmically generated news, socially driven and user-driven, and contrasting these with non-algorithmic news. We follow with an exploratory analysis of the effects of these news delivery modes on political behavior, specifically political participation and polarization. Using two nationally representative surveys, one of young adults and one of the general population, we find that getting news from sites that use socially driven or user-driven algorithms to generate content corresponds with higher levels of political participation, but that getting news from non-algorithmic sources does not. We also find that neither non-algorithmic nor algorithmically determined news contributes to higher levels of partisan polarization. This research helps identify important variation in the consequences of news consumption contingent on the mode of delivery.

Keywords: Algorithms; YouTube; Social Media; Political Behavior; Polarization



Individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views

All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Sarah Kreps, R. Miles McCain and Miles Brundage. Journal of Experimental Political Science, Nov 20 2020. https://doi.org/10.1017/XPS.2020.37

Rolf Degen's take: https://twitter.com/DegenRolf/status/1329736218721050624

Abstract: Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy. The first evaluated human perceptions of AI-generated text relative to an original story. The second investigated the interaction between partisanship and AI-generated news. The third examined the distributions of perceived credibility across different AI model sizes. We find that individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views. The findings have important implications in understanding AI in online misinformation campaigns.


Note:

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/1XVYU3. This research was conducted using Sarah Kreps’ personal research funds. Early access to GPT-2 was provided in-kind by OpenAI under a non-disclosure agreement. Sarah Kreps and Miles McCain otherwise have no relationships with interested parties. Miles Brundage is employed by OpenAI.

Men in same-sex couples are 12 percentage points less likely to have completed a bachelor’s degree in a STEM field compared to men in different-sex couples, a larger gap than the STEM degree gap between all white & black men

Turing’s children: Representation of sexual minorities in STEM. Dario Sansone, Christopher S. Carpenter. PLoS One, Nov 18 2020. https://journals.plos.org/plosone/article/comments?id=10.1371/journal.pone.0241596

Abstract: We provide nationally representative estimates of sexual minority representation in STEM fields by studying 142,641 men and women in same-sex couples from the 2009-2018 American Community Surveys. These data indicate that men in same-sex couples are 12 percentage points less likely to have completed a bachelor’s degree in a STEM field compared to men in different-sex couples. On the other hand, there is no gap observed for women in same-sex couples compared to women in different-sex couples. The STEM degree gap between men in same-sex and different-sex couples is larger than the STEM degree gap between all white and black men but is smaller than the gender gap in STEM degrees. We also document a smaller but statistically significant gap in STEM occupations between men in same-sex and different-sex couples, and we replicate this finding by comparing heterosexual and gay men using independently drawn data from the 2013-2018 National Health Interview Surveys. These differences persist after controlling for demographic characteristics, location, and fertility. Finally, we document that gay male representation in STEM fields (measured using either degrees or occupations) is systematically and positively associated with female representation in those same STEM fields.

Keywords: sexual minorities; representation; LGBTQ; STEM