[2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Foundational RAG paper from Facebook AI Research; highly relevant to understanding how modern LLMs are augmented with external knowledge, which has implications for factuality, hallucination reduction, and agentic system design in AI safety contexts.
Paper Details
Metadata
Abstract
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
Summary
This paper introduces Retrieval-Augmented Generation (RAG), a framework that combines parametric memory (pre-trained language models) with non-parametric memory (a dense vector index of Wikipedia) to improve performance on knowledge-intensive NLP tasks. RAG models retrieve relevant documents at inference time and condition text generation on them, achieving state-of-the-art results on open-domain QA benchmarks while enabling knowledge updates without full retraining.
Key Points
- RAG combines a pre-trained seq2seq generator with a dense retrieval component (DPR) to ground generation in retrieved external documents.
- Two variants are proposed: RAG-Sequence (conditions on the same retrieved document for the whole output) and RAG-Token (can use a different document per generated token).
- Outperforms parametric-only models on open-domain QA tasks and reduces hallucination by grounding outputs in retrieved evidence.
- Non-parametric memory can be updated independently, offering a practical alternative to expensive full model retraining for knowledge updates.
- Foundational architecture enabling modern agentic and retrieval-augmented LLM systems widely used in AI safety-relevant deployment contexts.
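The retrieve-then-marginalize idea behind the key points above can be illustrated with a toy sketch. Everything here is a stand-in: the paper uses DPR for retrieval and BART as the generator, whereas this sketch uses word overlap as a mock retrieval score and a mock generator likelihood, purely to show the shape of RAG-Sequence scoring (a retrieval prior p(z|x) times a generation likelihood p(y|x,z), summed over top-k documents).

```python
import math

# Toy corpus standing in for the Wikipedia dense vector index.
DOCS = [
    "The Eiffel Tower is in Paris.",
    "The Colosseum is in Rome.",
    "Paris is the capital of France.",
]

def tokenize(text):
    """Lowercase and strip simple punctuation (stand-in for real tokenization)."""
    return set(text.lower().replace(".", "").replace("?", "").split())

def retrieval_scores(query, docs):
    """Score each doc by word overlap with the query (stand-in for DPR)."""
    q = tokenize(query)
    return [len(q & tokenize(d)) for d in docs]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generator_likelihood(answer, doc):
    """Stand-in p(y | x, z): higher when the answer's words appear in the doc."""
    a, d = tokenize(answer), tokenize(doc)
    overlap = len(a & d) / max(len(a), 1)
    return 0.1 + 0.9 * overlap  # keep strictly positive

def rag_sequence_prob(query, answer, docs, k=2):
    """RAG-Sequence: p(y|x) ~= sum over top-k docs of p(z|x) * p(y|x,z)."""
    scores = retrieval_scores(query, docs)
    top = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:k]
    priors = softmax([scores[i] for i in top])
    return sum(p * generator_likelihood(answer, docs[i])
               for p, i in zip(priors, top))

p_paris = rag_sequence_prob("Where is the Eiffel Tower?", "Paris", DOCS)
p_rome = rag_sequence_prob("Where is the Eiffel Tower?", "Rome", DOCS)
print(p_paris > p_rome)
```

Because the retrieval prior is a differentiable softmax over document scores, a real implementation can backpropagate through the marginalization and fine-tune retriever and generator jointly, which is the "differentiable access mechanism" the abstract refers to.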
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Long-Horizon Autonomous Tasks | Capability | 65.0 |
Cached Content Preview
# Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Patrick Lewis†‡, Ethan Perez⋆, Aleksandra Piktus†, Fabio Petroni†, Vladimir Karpukhin†, Naman Goyal†, Heinrich Küttler†, Mike Lewis†, Wen-tau Yih†, Tim Rocktäschel†‡, Sebastian Riedel†‡, Douwe Kiela†
†Facebook AI Research; ‡University College London; ⋆New York University;
plewis@fb.com
###### Abstract
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.
However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures.
Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems.
Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks.
We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation.
We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.
We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token.
We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures.
For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
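The two formulations compared in the abstract can be written as marginalizations over retrieved documents z. These equations follow the paper's description (retriever p_η, generator p_θ, top-k approximation assumed); notation here is a paraphrase rather than a verbatim quote:

```latex
% RAG-Sequence: one retrieved document conditions the entire output sequence
p_{\text{RAG-Seq}}(y \mid x) \approx
  \sum_{z \in \text{top-}k(p_\eta(\cdot \mid x))} p_\eta(z \mid x)
  \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})

% RAG-Token: a different document may condition each generated token
p_{\text{RAG-Tok}}(y \mid x) \approx
  \prod_{i=1}^{N} \sum_{z \in \text{top-}k(p_\eta(\cdot \mid x))}
  p_\eta(z \mid x)\, p_\theta(y_i \mid x, z, y_{1:i-1})
```

The only structural difference is the order of sum and product: RAG-Sequence commits to one document per answer, while RAG-Token re-marginalizes over documents at every token.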
## 1 Introduction
Pre-trained neural language models have been shown to learn a substantial amount of in-depth knowledge from data \[ [47](https://ar5iv.labs.arxiv.org/html/2005.11401#bib.bib47 "")\].
They can do so without any access to an external memory, as a parameterized implicit knowledge base \[ [51](https://ar5iv.labs.arxiv.org/html/2005.11401#bib.bib51 ""), [52](https://ar5iv.labs.arxiv.org/html/2005.11401#bib.bib52 "")\].
While this development is exciting, such models do have downsides:
They cannot easily expand or revise their memory, can’t straightforwardly provide insight into their predictions, and may produce “hallucinations” \[ [38](https://ar5iv.labs.arxiv.org/html/2005.11401#bib.bib38 "")\].
Hybrid models that combine parametric memory with non-parametric (i.e., retrieval-based) memories \[ [20](https://ar5iv.labs.arxiv.org/html/2005.11401#bib.bib20 ""), [26](https://ar5iv.labs.arxiv.org/html/2005.114
... (truncated, 88 KB total)