Longterm Wiki

Reducing Hallucinations in AI-Generated Wiki Content - Footnote 3

Partial · 85% confidence

1 evidence check

Last checked: 4/3/2026

The claim states "GPT-4 shows a hallucination rate of approximately 3% according to recent benchmarks", but the source only says that "OpenAI notes that GPT-4 is 40 percent more likely to produce factual responses than its predecessor." The claim also states that "general chatbots exhibit rates between 3-27% when summarizing documents", while the source attributes that range to a single estimate: "One estimate from Vectara, an AI startup, suggests chatbots hallucinate anywhere between 3 percent and 27 percent of the time, according to Vectara's public hallucination leaderboard on GitHub, which tracks the frequency of hallucinations among popular chatbots when summarizing documents."
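The percentages disputed above are simple fractions: the number of summaries judged to contain a hallucination divided by the total summaries scored. As a rough illustrative sketch (not Vectara's actual leaderboard methodology, whose judging pipeline is more involved):

```python
def hallucination_rate(judgements: list[bool]) -> float:
    """judgements[i] is True if summary i was judged to contain a hallucination.

    Returns the fraction of hallucinated summaries, i.e. the kind of
    percentage quoted in the 3%-27% range above.
    """
    if not judgements:
        raise ValueError("no judgements to score")
    return sum(judgements) / len(judgements)

# e.g. 3 hallucinated summaries out of 100 scored
rate = hallucination_rate([True] * 3 + [False] * 97)
print(f"{rate:.0%}")  # 3%
```

The point of the check above is that such a rate describes a specific measurement setup (document summarization, a particular judge), so quoting it as a general "hallucination rate of GPT-4" overstates what the source supports.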

Evidence — 1 source, 1 check

Partial · 85% · Haiku 4.5 · 4/3/2026
Found: For context, GPT-4 shows a hallucination rate of approximately 3% according to recent benchmarks, while general chatbots exhibit rates between 3-27% when summarizing documents. However, these rates ca…


Debug info

Record type: citation

Record ID: page:reducing-hallucinations:fn3