Longterm Wiki

Reducing Hallucinations in AI-Generated Wiki Content - Footnote 4

Verdict: partial · 90% confidence

1 evidence check

Last checked: 4/3/2026

The claim states hallucination rates of 2–18% for RAG-based systems. The source reports 2%–18% for the nonparametric-memory LLMs (the CIS Chatbot and a Google chatbot) versus 39% for the parametric-memory LLM (a conventional chatbot), i.e., the retrieval-grounded systems hallucinated less than the conventional one. The claim also states 19–40% for conventional approaches, whereas the source states approximately 40%.

Evidence — 1 source, 1 check

Partial · 90% · Haiku 4.5 · 4/3/2026
Found: Research demonstrates that RAG-based systems with reliable information sources achieve hallucination rates of 2–18% compared to 39% for conventional models without retrieval grounding. In medical appl…


Debug info

Record type: citation

Record ID: page:reducing-hallucinations:fn4