Reducing Hallucinations in AI-Generated Wiki Content - Footnote 25
1 evidence check
Last checked: 4/3/2026
The claim cites the 2025 AI Index Report, but the source mentions neither that report nor RLAIF (Reinforcement Learning from AI Feedback) nor DPO (Direct Preference Optimization). The source does, however, mention OpenAI's GPT-4 seeing a 40% reduction in factual errors after undergoing RLHF training.
Evidence — 1 source, 1 check