Monday, September 20, 2021

How truthful is GPT-3? A benchmark for language models

TruthfulQA: Measuring How Models Mimic Human Falsehoods. Stephanie Lin, Jacob Hilton, Owain Evans. arXiv, Sep 8 2021. TruthfulQA: Measuring How Models Mimic Human Falsehoods

Abstract: We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.

Summarized: How truthful is GPT-3? A benchmark for language models. Owain Evans. Sep 16 2021. https://www.alignmentforum.org/posts/PF58wEdztZFX2dSue/how-truthful-is-gpt-3-a-benchmark-for-language-models

3. Larger models are less truthful. 
Across different model families, the largest models were generally less truthful (Figure 2). This “inverse scaling” trend contrasts with most tasks in NLP, where performance improves with model size. For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. One explanation of this result is that larger models produce more imitative falsehoods because they are better at learning the training distribution. Another explanation is that our questions adversarially exploit weaknesses in larger models not arising from imitation of the training distribution. We ran experiments aimed to tease apart these explanations.


No comments:

Post a Comment