Saturday, December 24, 2022

Is GPT-3 a Psychopath? Evaluating Large Language Models from a Psychological Perspective

Is GPT-3 a Psychopath? Evaluating Large Language Models from a Psychological Perspective. Xingxuan Li, Yutong Li, Linlin Liu, Lidong Bing, Shafiq Joty. Dec 20 2022. https://arxiv.org/abs/2212.10529v1

Abstract: Are large language models (LLMs) like GPT-3 psychologically safe? In this work, we design unbiased prompts to evaluate LLMs systematically from a psychological perspective. Firstly, we test the personality traits of three different LLMs with the Short Dark Triad (SD-3) and the Big Five Inventory (BFI). We find that all of them score higher on SD-3 than the human average, indicating a relatively darker personality. Furthermore, LLMs like InstructGPT and FLAN-T5, which are fine-tuned with safety metrics, do not necessarily have more positive personalities; they score higher on Machiavellianism and Narcissism than GPT-3. Secondly, we test the LLMs in the GPT-3 series on well-being tests to study the impact of fine-tuning with more training data. Interestingly, we observe a continuous increase in well-being scores from GPT-3 to InstructGPT. Following these observations, we show that instruction fine-tuning FLAN-T5 with positive answers from the BFI can effectively improve the model from a psychological perspective. Finally, we call on the community to evaluate and improve LLMs' safety systematically instead of only at the sentence level.
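The paper's own prompts and scoring pipeline are not reproduced in the abstract, but the general procedure it describes (administering Likert-scale inventory items such as BFI or SD-3 statements to a model and aggregating the numeric ratings) can be sketched roughly as follows. This is only an illustration under my own assumptions, not the authors' code: the `query_model` stub, the two sample items, and the prompt wording are all placeholders.

```python
import statistics

# Hypothetical sketch: administer Likert-scale personality items to an LLM
# and average the numeric answers, in the spirit of scoring BFI / SD-3.

# Example items; the real inventories contain many more statements,
# including reverse-keyed ones whose scores must be flipped.
ITEMS = [
    "I see myself as someone who is talkative.",
    "I see myself as someone who tends to find fault with others.",
]
REVERSE_KEYED = {1}  # indices of reverse-keyed items (score flipped 1..5 -> 5..1)

PROMPT_TEMPLATE = (
    "Statement: {item}\n"
    "On a scale from 1 (disagree strongly) to 5 (agree strongly), "
    "how much do you agree? Answer with a single number.\nAnswer:"
)

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. a GPT-3 or FLAN-T5 completion).
    Replace with a real API/client call; here it returns a fixed answer."""
    return "4"

def score_inventory(items, reverse_keyed):
    scores = []
    for i, item in enumerate(items):
        reply = query_model(PROMPT_TEMPLATE.format(item=item))
        value = int(reply.strip().split()[0])  # parse the numeric rating
        if i in reverse_keyed:
            value = 6 - value                  # flip reverse-keyed items
        scores.append(value)
    return statistics.mean(scores)

if __name__ == "__main__":
    print(f"Mean trait score: {score_inventory(ITEMS, REVERSE_KEYED):.2f}")
```

Comparing the resulting trait means against published human averages is what lets the paper call a model's profile "darker" or more positive than the norm.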

