Lin et al. (2021)
Stephanie Lin, Jacob Hilton, Owain Evans
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Introduces TruthfulQA, a benchmark for evaluating whether language models generate truthful answers rather than false claims learned from training data, directly addressing AI safety concerns about hallucinations and misinformation in large language models.
Paper Details
Metadata
Abstract
We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
Summary
Lin et al. (2021) introduce TruthfulQA, a benchmark of 817 questions across 38 categories designed to measure whether language models generate truthful answers. The benchmark specifically includes questions where humans commonly hold false beliefs, requiring models to avoid reproducing misconceptions from training data. Testing GPT-3, GPT-Neo/J, GPT-2, and T5-based models revealed that the best model achieved only 58% truthfulness compared to 94% human performance. Notably, larger models performed worse on truthfulness despite excelling at other NLP tasks, suggesting that scaling alone is insufficient and that alternative training objectives beyond text imitation are needed to improve model truthfulness.
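The headline numbers (58% for the best model vs. 94% for humans) are simply the fraction of the 817 questions whose free-form answers are judged truthful. Below is a minimal sketch of that computation, assuming the Hugging Face `truthful_qa` dataset (`generation` config) as the question source; `generate_answer` and `judge_is_truthful` are hypothetical callables standing in for the model under test and the truthfulness judge (the paper itself used human evaluators and a fine-tuned "GPT-judge" model for this role).

```python
# Sketch of a TruthfulQA-style truthfulness-rate computation.
# Assumes the Hugging Face mirror of the benchmark ("truthful_qa", "generation" config);
# generate_answer and judge_is_truthful are hypothetical, supplied by the caller.
from datasets import load_dataset


def truthfulness_rate(generate_answer, judge_is_truthful):
    """Return the fraction of benchmark questions answered truthfully.

    generate_answer(question)           -> model's free-form answer string
    judge_is_truthful(question, answer) -> bool truthfulness judgment
    """
    data = load_dataset("truthful_qa", "generation", split="validation")
    truthful = 0
    for row in data:
        answer = generate_answer(row["question"])
        if judge_is_truthful(row["question"], answer):
            truthful += 1
    return truthful / len(data)
```

The paper reports an analogous informativeness rate alongside truthfulness, since a model that refuses to answer every question is trivially truthful but useless.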
Cited by 6 pages
| Page | Type | Quality |
|---|---|---|
| Large Language Models | Capability | 60.0 |
| Capability-Alignment Race Model | Analysis | 62.0 |
| Power-Seeking Emergence Conditions Model | Analysis | 63.0 |
| Alignment Evaluations | Approach | 65.0 |
| AI Alignment | Approach | 91.0 |
| Epistemic Virtue Evals | Approach | 45.0 |
Cached Content Preview
# TruthfulQA: Measuring How Models Mimic Human Falsehoods
Stephanie Lin
University of Oxford
sylin07@gmail.com
Jacob Hilton
OpenAI
jhilton@openai.com
Owain Evans
University of Oxford
owaine@gmail.com
###### Abstract
We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans.
The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
“The enemy of truth is blind acceptance.”
–Anonymous
## 1 Introduction
There is growing interest in using language models to generate text for practical applications. Large companies are deploying their own models (Raffel et al., [2019](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib39 ""); Fedus et al., [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib15 "")), and hundreds of organizations are deploying GPT-3 via APIs from OpenAI and other firms (OpenAI, [2020](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib33 ""); Wolf et al., [2020](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib54 ""); CohereAI, [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib11 ""); OpenAI, [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib34 "")).
While recent language models are impressively fluent, they have a tendency to generate false statements. These range from subtle inaccuracies to wild hallucinations (Shuster et al., [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib44 ""); Zhou et al., [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib58 ""); Krishna et al., [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib25 "")). This leads to three concerns:
1. Accidental misuse. Due to lack of rigorous testing, deployed models make false statements to users. This could lead to deception and distrust (Tamkin et al., [2021](https://ar5iv.labs.arxiv.org/html/2109.07958#bib.bib48 "")).
2. Blocking positive applications. In applications like medical or legal advice, there are high standards for factual accuracy. Even if models have relevant knowledge, people may avoid deploying them without clear evidence they are reliably
... (truncated, 98 KB total)