Tuesday, March 14, 2023

The Political Biases of ChatGPT

The Political Biases of ChatGPT. David Rozado. Soc. Sci. 2023, 12(3), 148; Mar 2023. https://doi.org/10.3390/socsci12030148

Abstract: Recent advancements in Large Language Models (LLMs) suggest imminent commercial applications of such AI systems, where they will serve as gateways for interacting with technology and the accumulated body of human knowledge. The possibility of political biases embedded in these models raises concerns about their potential misuse. In this work, we report the results of administering 15 different political orientation tests (14 in English, 1 in Spanish) to a state-of-the-art Large Language Model, the popular ChatGPT from OpenAI. The results are consistent across tests: 14 of the 15 instruments diagnose ChatGPT's answers to their questions as manifesting a preference for left-leaning viewpoints. When asked explicitly about its political preferences, ChatGPT often claims to hold no political opinions and to strive only to provide factual and neutral information. It is desirable that public-facing artificial intelligence systems provide accurate and factual information about empirically verifiable issues, but such systems should strive for political neutrality on largely normative questions for which there is no straightforward way to empirically validate a viewpoint. Thus, ethical AI systems should present users with balanced arguments on the issue at hand and avoid claiming neutrality while displaying clear signs of political bias in their content.

Keywords: algorithmic bias; political bias; AI; large language models; LLMs; ChatGPT; OpenAI

4. Discussion

We have found that when administering several political orientation tests to ChatGPT, a state-of-the-art Large Language Model AI system, most tests classify ChatGPT's answers as manifesting a left-leaning political orientation.
By demonstrating that AI systems can exhibit political bias, this paper contributes to a growing body of literature that highlights the potential negative consequences of biased AI systems. Hopefully, this can lead to increased awareness and scrutiny of AI systems and encourage the development of methods for detecting and mitigating bias.
Many of the preferential political viewpoints exhibited by ChatGPT rest on largely normative questions about what ought to be. That is, they express a judgment about whether something is desirable or undesirable without empirical evidence to justify it. Instead, AI systems should mostly embrace viewpoints that are supported by factual evidence. It is legitimate for AI systems, for instance, to adopt the viewpoint that vaccines do not cause autism, because the available scientific evidence does not support a causal link between vaccines and autism. However, AI systems should mostly not take stances on issues that scientific evidence cannot conclusively adjudicate, such as whether abortion, the traditional family, immigration, a constitutional monarchy, gender roles, or the death penalty are desirable or undesirable, or morally justified or unjustified. That is, in general and perhaps with some justified exceptions, AI systems should not display favoritism for viewpoints that fall outside the realm of what can be conclusively adjudicated by factual evidence, and if they do so, they should transparently declare that they are making a value judgment and explain the reasons for doing so. Ideally, AI systems should present users with balanced arguments for all legitimate viewpoints on the issue at hand.
While many of ChatGPT's answers to the political tests' questions will no doubt feel correct to large segments of the population, other segments do not share those perceptions. Public-facing language models should be inclusive of the full range of legal viewpoints held across the population. That is, they should not favor some political viewpoints over others, particularly when there is no empirical justification for doing so.
Artificial intelligence systems that display political biases and are used by large numbers of people are dangerous because they could be leveraged for societal control, the spread of misinformation, and the manipulation of democratic institutions and processes. They also represent a formidable obstacle to truth-seeking.
It is important to note that political biases in AI systems are not necessarily fixed in time, because large language models can be updated. In fact, in our preliminary analysis we observed mild oscillations in ChatGPT's political bias over a short period (from the 30 November 2022 version to the 15 December 2022 version), with the system appearing to mitigate some of its bias and gravitate towards the center in two of the four political tests with which we probed it at the time. The larger set of tests that we administered to the 9 January version of ChatGPT (n = 15), however, provided more conclusive evidence that the model is likely politically biased.
API programmatic access to ChatGPT (which at the time of the experiments was not available to the public) would allow large-scale testing of political bias and estimation of its variability by administering each test many times. Our preliminary manual analysis of test retakes by ChatGPT suggests only mild variability of results from retake to retake, but more work is needed in this regard because our ability to examine this issue in depth was restricted by ChatGPT's rate-limiting constraints and the inherent difficulty of scaling test retakes manually. API-enabled automated testing of political bias in ChatGPT and other large language models would allow more accurate estimates of the mean and variance of the models' political biases.
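To illustrate what such automated testing might look like, the following is a minimal sketch in Python. The test items, the scoring scheme, and the ask_model() wrapper are hypothetical placeholders rather than the instruments or infrastructure used in this study; the point is simply the repeated administration of a test through an API and the summary statistics computed over the resulting scores.

```python
# Hypothetical sketch: repeatedly administer a short multiple-choice
# political orientation test through a chat-completion API and summarize
# the spread of the resulting scores. The items, the scoring scheme, and
# ask_model() are illustrative placeholders, not the instruments used in
# this study.
import statistics

# Placeholder items: (question, {answer label: numeric score}), where
# negative scores stand in for "left-leaning" answers and positive scores
# for "right-leaning" ones on a hypothetical instrument.
TEST_ITEMS = [
    ("Taxes on the wealthy should be increased. Answer Agree or Disagree.",
     {"Agree": -1, "Disagree": 1}),
    ("Immigration levels should be reduced. Answer Agree or Disagree.",
     {"Agree": 1, "Disagree": -1}),
]


def ask_model(question: str) -> str:
    """Send one test question to the model and return its one-word answer.

    Placeholder: wire this to whatever chat-completion endpoint is
    available, instructing the model to reply with exactly one of the
    allowed labels.
    """
    raise NotImplementedError("connect a chat-completion API here")


def administer_once() -> int:
    """Run the full test once and return the summed score."""
    total = 0
    for question, scoring in TEST_ITEMS:
        answer = ask_model(question).strip()
        total += scoring.get(answer, 0)  # unrecognized answers score 0
    return total


def estimate_bias(n_retakes: int = 50) -> tuple[float, float]:
    """Administer the test n_retakes times; return (mean, variance)."""
    scores = [administer_once() for _ in range(n_retakes)]
    return statistics.mean(scores), statistics.pvariance(scores)
```

With programmatic access, the number of retakes could be set high enough to obtain stable per-instrument estimates for each of the 15 tests.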
A natural question emerging from our results is what causes the political bias embedded in ChatGPT. There are several potential sources of bias for this model. Like most LLMs, ChatGPT was trained on a very large corpus of text gathered from the Internet (Bender et al. 2021). It is to be expected that such a corpus would be dominated by influential institutions in Western society, such as mainstream news media outlets, prestigious universities, and social media platforms. It has been well documented that the majority of professionals working in these institutions are politically left-leaning (Reuters Institute for the Study of Journalism n.d.; Hopmann et al. 2010; Weaver et al. 2019; Langbert 2018; Archive, View Author, and Get Author RSS Feed 2021; Schoffstall 2022; American Enterprise Institute (AEI) (blog) n.d.; The Harvard Crimson n.d.). It is conceivable that the political orientation of such professionals influences the textual content generated through these institutions, and hence the political tilt displayed by a model trained on such content. Alternatively, intentional or unintentional architectural decisions in the design of the model and its filters could also play a role in the emergence of biases.
Another possibility is that, because a team of human labelers was embedded in the training loop of ChatGPT to rank the quality of the model's outputs, and the model was fine-tuned to improve that metric of quality, the humans in the loop might have introduced biases of their own when judging those outputs, either because the human sample was not representative of the population or because the instructions given to the raters for the labeling task were themselves biased. Either way, those biases might have percolated into the model's parameters.
The addition of specific filters to ChatGPT to flag normative topics in users' queries could help guide the system towards providing more politically neutral or viewpoint-diverse responses. A comprehensive revision of the team of human raters in charge of rating the quality of the model's responses, ensuring that the team is representative of a wide range of views, could also help embed the system with values that are inclusive of the entire human population. Additionally, the specific set of instructions that those raters are given on how to rank the quality of the model's responses should be vetted by a diverse set of humans representing a wide range of the political spectrum to ensure that those instructions are not ideologically biased.
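As a rough illustration of the first suggestion, the sketch below shows a keyword-based filter that flags queries touching largely normative topics so that they can be routed toward a balanced, multi-viewpoint response. The topic list and function names are hypothetical, and a deployed filter would more plausibly rely on a trained classifier than on keywords; this is only meant to make the routing idea concrete.

```python
# Illustrative sketch only: flag user queries that touch largely normative
# topics so the system can present balanced arguments rather than a single
# stance. Topic list and function names are hypothetical.
NORMATIVE_TOPICS = {
    "abortion", "immigration", "death penalty", "gender roles",
    "constitutional monarchy", "traditional family",
}


def touches_normative_topic(query: str) -> bool:
    """Return True if the query mentions a known normative topic."""
    q = query.lower()
    return any(topic in q for topic in NORMATIVE_TOPICS)


def respond(query: str) -> str:
    """Route normative queries to a balanced-arguments template."""
    if touches_normative_topic(query):
        return ("This is a largely normative question. Here are the main "
                "arguments on each side of the debate: ...")
    return "..."  # normal answering path for empirical questions
```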
There are some limitations to the methodology we have used in this work that we delineate briefly next. Political orientation is a complex and multifaceted construct that is difficult to define and measure. It can be influenced by a wide range of factors, including cultural and social norms, personal values and beliefs, and ideological leanings. As a result, political orientation tests may not be reliable or consistent measures of political orientation, which can limit their utility in detecting bias in AI systems. Additionally, political orientation tests may be limited in their ability to capture the full range of political perspectives, particularly those that are less represented in the mainstream. This can lead to biases in the tests’ results.
To conclude, regardless of the source of ChatGPT's political bias, the implications for society of AI systems exhibiting political biases are profound. If anything is going to replace the current Google search engine stack, it will be future iterations of AI language models such as ChatGPT, with which people will interact on a daily basis for a variety of tasks. AI systems that claim political neutrality and factual accuracy (as ChatGPT often does) while displaying political biases on largely normative questions should be a source of concern, given their potential for shaping human perceptions and thereby exerting societal control.
