
arXiv: Survey of Hallucination Mitigation

paper

Authors

S.M. Towhidul Islam Tonmoy·S M Mehedi Zaman·Vinija Jain·Anku Rani·Vipula Rawte·Aman Chadha·Amitava Das

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A comprehensive survey cataloging 32+ mitigation techniques for hallucination in LLMs, directly addressing a critical safety challenge that impedes the safe real-world deployment of large language models.

Paper Details

Citations
2 (18 influential)
Year
2024
Methodology
survey

Metadata

arxiv preprint · analysis

Summary

This comprehensive survey examines hallucination in Large Language Models (LLMs)—the generation of factually incorrect but plausible-sounding content—which represents a critical barrier to safe real-world deployment. The paper catalogs over 32 mitigation techniques, including Retrieval-Augmented Generation (RAG), Knowledge Retrieval, CoNLI, and CoVe, and introduces a detailed taxonomy organizing these methods by parameters such as dataset utilization, task types, feedback mechanisms, and retriever types. The authors analyze the challenges and limitations of existing approaches, providing a foundation for future research on addressing hallucinations in LLMs, particularly for sensitive applications like medical summarization, financial analysis, and legal advice.
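To make the grounding idea behind RAG concrete, here is a minimal sketch of a retrieve-then-generate pipeline. It is illustrative only and not taken from the paper: the toy word-overlap retriever stands in for a real dense-vector index, and `llm` is a hypothetical placeholder for any text-generation client.

```python
# Minimal RAG sketch (illustrative; not the paper's implementation).
# Core idea: retrieve evidence first, then force the model to answer
# from that evidence rather than from parametric memory alone.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[Passage]:
    """Toy retriever: rank passages by word overlap with the query.
    A production system would use a dense embedding index instead."""
    q_words = set(query.lower().split())
    scored = [Passage(p, float(len(q_words & set(p.lower().split()))))
              for p in corpus]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:k]

def build_prompt(query: str, passages: list[Passage]) -> str:
    """Prepend retrieved evidence and instruct the model to abstain
    when the context does not contain the answer."""
    context = "\n".join(f"- {p.text}" for p in passages)
    return (
        "Answer using ONLY the context below; reply 'unknown' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical usage, assuming `llm(prompt) -> str` wraps some LLM API:
#   passages = retrieve("What causes hallucination in LLMs?", corpus)
#   answer = llm(build_prompt("What causes hallucination in LLMs?", passages))
```

The abstention instruction in the prompt is the main mitigation lever in this sketch: when retrieval returns nothing relevant, the model is steered toward "unknown" rather than a fabricated answer.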

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 80 KB
[2401.01313] A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models 

S.M. Towhidul Islam Tonmoy (1), S M Mehedi Zaman (1), Vinija Jain (3,4)*, Anku Rani (2), Vipula Rawte (2), Aman Chadha (3,4)*, Amitava Das (2)
(1) Islamic University of Technology, Bangladesh
(2) AI Institute, University of South Carolina, USA
(3) Stanford University, USA
(4) Amazon AI, USA
towhidulislam@iut-dhaka.edu

* Work does not relate to position at Amazon.

 
 Abstract

As Large Language Models (LLMs) continue to advance in their ability to write human-like text, a key challenge remains around their tendency to “hallucinate” – generating content that appears factual but is ungrounded. This issue of hallucination is arguably the biggest hindrance to safely deploying these powerful LLMs into real-world production systems that impact people’s lives Jain (2023). The journey toward widespread adoption of LLMs in practical settings heavily relies on addressing and mitigating hallucinations. Unlike traditional AI systems focused on limited tasks, LLMs have been exposed to vast amounts of online text data during training. While this allows them to display impressive language fluency, it also means they are capable of extrapolating information from the biases in training data, misinterpreting ambiguous prompts, or modifying the information to align superficially with the input. This becomes hugely alarming when we rely on language generation capabilities for sensitive applications, such as summarizing medical records, handling customer support conversations, drafting financial analysis reports, and providing legal advice. Small errors could lead to harm, revealing the LLMs’ lack of actual comprehension despite advances in self-learning. This paper presents a comprehensive survey of over thirty-two techniques developed to mitigate hallucination in LLMs. Notable among these are Retrieval-Augmented Generation (RAG) Lewis et al. (2021), Knowledge Retrieval Varshney et al. (2023), CoNLI Lei et al. (2023), and CoVe Dhuliawala et al. (2023). Furthermore, we introduce a detailed taxonomy categorizing these methods based on various parameters, such as dataset utilization, common tasks, feedback mechanisms, and retriever types. This classification helps distinguish the diverse approaches specifically designed to tackle hallucination issues in LLMs. Additionally, we analyze the challenges and limitations inherent in these techniques, providing a solid foundation for future research in addressing hallucinations and related phenomena within the realm of LLMs.
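For orientation, the CoVe (Chain-of-Verification) method cited above can be sketched as a draft, verify, revise loop. The sketch below is a loose paraphrase under stated assumptions, not the authors' implementation: `llm` is a hypothetical callable that takes a prompt string and returns a completion string.

```python
# Chain-of-Verification (CoVe) sketch -- illustrative only.
# Assumes `llm(prompt: str) -> str` wraps some chat/completion model.
# Stages follow the high-level recipe of Dhuliawala et al. (2023):
# draft -> plan verification questions -> answer them independently -> revise.

def cove(question: str, llm) -> str:
    # 1) Draft an initial (possibly hallucinated) answer.
    draft = llm(f"Answer the question:\n{question}")

    # 2) Plan: derive fact-checking questions targeting claims in the draft.
    plan = llm(
        "List short verification questions, one per line, "
        f"for the claims in this answer:\n{draft}"
    )
    checks = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3) Execute: answer each check WITHOUT showing the draft, so any
    #    hallucination in the draft is not simply repeated.
    evidence = [(q, llm(f"Answer concisely:\n{q}")) for q in checks]

    # 4) Revise the draft against the independently obtained answers.
    facts = "\n".join(f"Q: {q}\nA: {a}" for q, a in evidence)
    return llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verified facts:\n{facts}\n"
        "Write a corrected final answer consistent with the verified facts."
    )
```

Answering the verification questions in isolation (step 3) is the design choice that does the work: it decorrelates the checks from the draft, so errors in the draft can be caught rather than echoed.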

... (truncated, 80 KB total)
Resource ID: 5ecee4eb7e22949a | Stable ID: MDU4YzUyOT