All Source Checks
Citation
Reducing Hallucinations in AI-Generated Wiki Content - Footnote 43
Confirmed — 100% confidence
1 evidence check
Last checked: 4/3/2026
Migrated from citation_quotes. Original verdict: accurate
Evidence — 1 source, 1 check
hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries (1 check)
Confirmed — 100% · Haiku 4.5 · 4/3/2026
Found: Research from Stanford HAI examining legal AI models found hallucination rates of 58–82% on legal queries for general-purpose chatbots. Even RAG-based legal tools designed specifically for legal research cont…
Note: Migrated from citation_quotes accuracy check. Original verdict: accurate
Debug info
Record type: citation
Record ID: page:reducing-hallucinations:fn43