Reducing Hallucinations in AI-Generated Wiki Content - Footnote 4
1 evidence check
Last checked: 4/3/2026
The claim states hallucination rates of 2-18% for RAG-based systems; the source reports 2%-18% versus 39%, finding that the nonparametric-memory LLMs (CIS Chatbot/Google chatbot) had fewer hallucinations than the parametric-memory LLM (conventional chatbot). The claim states 19-40% for conventional approaches, but the source states approximately 40%.
Evidence — 1 source, 1 check
Debug info
Record type: citation
Record ID: page:reducing-hallucinations:fn4