Saturday, November 9, 2019

The study found auto-written news stories were rated as more objective, more credible (both message and medium credibility), and less biased.

Is Automated Journalistic Writing Less Biased? An Experimental Test of Auto-Written and Human-Written News Stories. Yanfang Wu. Journalism Practice, Oct 29 2019. https://doi.org/10.1080/17512786.2019.1682940

ABSTRACT: By administering an online experiment, this study examined how source and journalistic domains affect the perceived objectivity, message credibility, medium credibility, bias, and overall journalistic quality of news stories among an adult sample (N = 370) recruited using Amazon’s Mechanical Turk (MTurk) service. Within the framework of cognitive authority theory, the study found auto-written news stories were rated as more objective, credible (both message and medium credibility), and less biased. However, a significant difference was found between a combined assessment condition (news stories with source and author information) and a message-only assessment condition (news stories without source and author information) in the ratings of objectivity and credibility, but not bias. Moreover, significant differences were found in the objectivity and credibility ratings of auto-written and human-written news stories across the journalistic domains of politics, finance, and sports. Among auto-written news stories, sports stories were rated as more objective and credible, while financial stories were rated as more biased. Among human-written stories, financial stories were rated as more objective and credible. However, political news stories were rated as more biased among human-written news stories, and in cases where auto-written and human-written stories were combined.

KEYWORDS: Cognitive authority, auto-written, human-written, automated journalistic writing, experimental test, objectivity, credibility, bias

Discussion
The study found that auto-written news stories were rated as significantly more objective than human-written news stories. This finding is in line with previous researchers’ assumptions about the inherent objectivity of algorithms, the limits of humans’ subjective “gut feeling” in the evaluation of newsworthiness and news inclusion, and the advantages of algorithms in overcoming inherent human biases and limitations (Carlson 2018; Toraman and Can 2015). The results also corroborate Cleary and coauthors’ (2011) conclusion that Natural Language Generation (NLG) software can augment accuracy, Clerwall’s (2014) finding that text generated by algorithms is perceived as more informative, accurate, and objective, and Melin and coauthors’ (2018) finding that auto-written content tends to be rated as more accurate, trustworthy, and objective. Additionally, these findings echo Thurman and coauthors’ (2017) conclusion on automation and the increase of objectivity in news stories. One reason automated journalistic writing was rated as more objective could be that readers favor stories that distinguish facts from opinions clearly; NLG, the generation method used for the texts chosen in this study, was recognized by Melin and coauthors (2018) as being able to generate stories of this nature. Further, Graefe and coauthors’ (2018) recent studies concluded that algorithms such as NLG are more accurate on factual information.

Moreover, choosing the right vocabulary to represent the information contained in numbers is a major task for journalists. Word choice is always influenced by the journalist’s personal interpretation, which may reduce a news story’s objectivity. For example, Carlson (2018, 1764) argued that journalists have inherent human subjectivity because they apply learned knowledge to professionally valid interpretations. In contrast, algorithms have the unthinking objectivity of computer programs and are the apotheosis of journalistic knowledge production: objectivity.
Furthermore, specialized algorithms have a narrow domain focus that reduces the options for word choice, thereby increasing objectivity (McDonald 2010). Gillespie (2014) used the term “algorithmic objectivity” to describe the power of algorithms in strengthening objectivity. Additionally, the integration of automation and datafication in news reporting may increase objectivity. For example, web analytics have been found to be useful tools for increasing the precision of journalists’ news judgement (Wu, Tandoc, and Salmon 2018). Data-driven journalism not only empowers journalists to report stories in new and creative ways (Anderson 2018); it is also believed to increase objectivity (Parasie and Dagiral 2013).

Another interesting finding is that auto-written stories were perceived as even more credible (both message and medium credibility) than human-written news stories. This finding aligns with Haim and Graefe’s (2017), Van der Kaa and Krahmer’s (2014), Clerwall’s (2014), and Melin’s (2018) conclusions that readers tend to perceive auto-written news as more credible than human-written stories. The finding also aligns with Wolker and Powell’s (2018) claim that auto-written and combined auto- and human-written content are perceived as equally credible. Although auto-writing algorithms lack skill with the nuances of language, human reporters may produce less credible news stories by failing to express views, or to distinguish facts from opinions, clearly (Meyer, Marchionni, and Thorson 2010). However, an algorithm is viewed as a “credible knowledge logic” (Gillespie 2014, 191) because it is considered a force that could eliminate human biases or errors (Linden 2017a). Algorithms also create many more possibilities for detecting falsehood (such as bias or inaccuracy) automatically and verifying truth more effectively (Kaczmarek 2008).
Stories developed by programmers from multi-sourced data can fulfill the functions of professional journalism and may even align with more traditional journalistic standards (Parasie and Dagiral 2013; Dörr 2016). Furthermore, algorithms may perform better than human reporters at data verification, as they restrict themselves to a specialized area with clearly stipulated content (Perloff 1993; Hassan and Azmi 2018).
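The paper does not describe the internals of the generation systems whose output it tested, but automated journalistic writing of this era commonly relied on template-based NLG over structured data. The following is a minimal sketch of that general technique, not the study's actual software; all field names, thresholds, and wording rules are hypothetical:

```python
# Minimal template-based NLG sketch for an automated earnings brief.
# All field names, thresholds, and vocabulary are hypothetical
# illustrations, not taken from any system used in the study.

def describe_change(pct: float) -> str:
    """Map a numeric change onto a fixed, pre-approved vocabulary.
    Restricting word choice to a small lookup like this is one reason
    auto-written copy may read as 'objective': there is no room for
    ad-hoc interpretation by a writer."""
    if pct >= 5:
        return "surged"
    if pct > 0:
        return "rose"
    if pct == 0:
        return "was unchanged"
    if pct > -5:
        return "fell"
    return "plunged"

def write_earnings_brief(company: str, quarter: str,
                         revenue_m: float, pct_change: float) -> str:
    """Fill a fixed sentence template from structured data."""
    verb = describe_change(pct_change)
    return (f"{company} reported revenue of ${revenue_m:.1f} million "
            f"for {quarter}, as revenue {verb} {abs(pct_change):.1f}% "
            f"from the prior quarter.")

print(write_earnings_brief("Acme Corp", "Q3 2019", 120.4, 6.2))
```

The narrow-domain point made above is visible here: the system can only ever emit one of five verbs, chosen by a fixed rule, whereas a human writer choosing between, say, "surged" and "rose" injects interpretation.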

This experimental design distinguished the effect of source and journalists’ authorship from that of the text itself on readers’ ratings of journalistic quality, which is one of the major contributions of this study. The study further verified that readers consider automated journalistic writing more objective when source and journalists’ affiliation information are not disclosed. Moreover, the message and medium credibility of automated journalistic writing were rated significantly higher without source and authorship information. These findings align with Graefe et al.’s (2018) results on the confounding effect of source on readers’ credibility ratings: declaring that an article was written by a journalist substantially increases its readability ratings. The findings also reflect a decline of trust in traditional media and human writers to produce good journalism, as documented in a recent Knight Foundation and Gallup survey in which a majority of surveyed Americans reported they had lost trust in American media due to inaccuracy, bias, fake news, alternative facts, a general lack of credibility, and reporting “based on opinions or emotions” (Ingram 2018). Automated journalistic writing, however, showed strength in objectivity and credibility.

The confounding effect of source and authorship was also identified in the ratings of bias. In particular, whether a journalist’s political beliefs affect the presentation of the subject is one of the important indexes of message credibility in this study. Human-written stories were rated as more biased than auto-written news stories. However, when source and authorship information were included, human-written stories were rated as less biased. Although impartiality is recognized as one of the important journalistic ideologies, human journalists have been tagged as partisan actors whose political beliefs affect their news decisions, even though they define themselves as news professionals committed to a form of journalism marked by objectivity and neutrality (Patterson and Donsbach 1996; DellaVigna and Kaplan 2007; Oosterhoff, Shook, and Ford 2018; Linden 2017a). Gaziano and McGrath (1986) identified political bias as an important factor affecting credibility in news reporting, particularly accuracy, fairness, responsibility, and the role of criticizing government. With the proliferation of data, human-written news stories may contextualize automated-generated content by using it as one source among many, drawn from different perspectives (Carlson 2015; Thurman, Dörr, and Kunert 2017). Carlson (2018) described this as “a visible incursion of algorithmic judgment in the space of human editorial judgment,” and Wolker and Powell (2018) as the well-rounded future of journalism. Such applications could feasibly reduce human bias in journalism.

Participants in the message-only assessment condition rated auto-written news stories as both more objective and more credible (both message credibility and medium credibility) than human-written news stories, compared to participants in the combined assessment condition. According to the hypothesized news assessment model based on cognitive authority theory, readers rely on textual authority (intrinsic plausibility), i.e., whether the content is “accurate, authentic and believable” (Appelman and Sundar 2016, 73), to make evaluations when the affiliation of the news organization and the journalist’s name are removed from the stories. However, when institutional authority and personal authority are combined with textual authority, readers combine textual authority with whether the source of the message is “authoritative, reliable, reputable and trustworthy” (Appelman and Sundar 2016, 74) to make judgments. The results showed that this combined assessment process reduced news stories’ perceived objectivity and credibility. In the internet age, readers may assess news stories with greater emphasis on textual authority due to insufficient bandwidth for storing information (Sundar 2008). The findings from this study corroborate Wilson’s (1983) cognitive authority theory: when source, that is, personal authority and institutional authority, is not revealed, readers have to use textual-type authority (document type) and intrinsic-plausibility authority (content of the text) to evaluate the credibility of a news story. This may change how news quality is assessed in digital journalism. The findings further verified that auto-written news content is perceived as more objective and credible than human-written news stories.

Although automated journalistic writing received higher credibility ratings, it is also more likely to distribute fake news because of its dependence on data for sourcing, processing, and output. Ethical issues may arise when data are used without proper verification or without transparency about the source and content of generation algorithms (Graefe 2016). In addition, whether the programmer, reporter, or editor is responsible for the collection, analysis, and presentation of data, and who should be held accountable for automated journalistic writing in which human reporters contextualize generated content and algorithms act as intelligence augmenters, remain controversial questions in the field of automated journalistic writing.

Journalistic domains were found to affect readers’ evaluations of objectivity, message credibility, and medium credibility, but not bias. Among auto-written news stories, sports stories were rated as more objective and credible (both message and medium credibility) than finance and political stories. Among human-written stories, financial stories were rated as more objective and credible than sports and political stories. Financial stories were rated as more biased than sports and political stories among auto-written news stories. However, political stories were rated as more biased than financial and sports stories among human-written news stories, and also when auto-written and human-written stories were combined. Multiple business motivations may explain why journalistic domains affect readers’ assessments of news stories. First, a publisher’s or news organization’s political stance was found to greatly affect that of its reporters. For example, news stories from ABC, CBS, NBC, and Fox were believed to exhibit political leanings. Subsequently, political bias may be more salient in story constructions or patterns of bias in news stories (Groeling 2008). Similarly, participants’ political ideology may affect their views toward The Associated Press, The New York Times, and The Washington Post. Secondly, social identity, the sense of whether one belongs to a perceived in-group or out-group, may affect bias in sports news. Previous studies found that American sportscasters tended to report favorably on athletes from the United States and to highlight them more frequently; this in-group favoritism was more pronounced when the participants’ team won (Bryant and Oliver 2009; Wann 2006).
However, financial news stories, which are mostly free of political stances and social identity, were rated the least biased compared to politics and sports stories in the human-written group. Conversely, when neither political stance nor social identity plays a role, financial news stories were rated the most biased in the auto-written group.
