
Attention Is All You Need

paper

Authors

Ashish Vaswani · Noam Shazeer · Niki Parmar · Jakob Uszkoreit · Llion Jones · Aidan N. Gomez · Lukasz Kaiser · Illia Polosukhin

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational paper introducing the Transformer architecture, which became the basis for large language models like GPT and Claude. Critical for understanding modern AI systems that are central to AI safety research.

Paper Details

Citations
171,168 (19,524 influential)
Year
2017

Metadata

arXiv preprint · primary source

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
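
The "attention mechanisms" the abstract refers to are, concretely, scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, as defined in the paper. Below is a minimal NumPy sketch of that formula; the toy shapes and random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the paper."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    # to keep the softmax well-behaved for large d_k (the paper's motivation).
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy example: 4 positions, d_k = d_v = 8 (illustrative sizes only).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```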

Summary

This paper introduces the Transformer, a novel neural network architecture that relies entirely on attention mechanisms, eliminating the need for recurrence and convolutions used in previous sequence-to-sequence models. The Transformer achieves state-of-the-art results on machine translation benchmarks (28.4 BLEU on WMT 2014 English-to-German and 41.8 BLEU on English-to-French) while being significantly more parallelizable and requiring substantially less training time than existing models. The authors demonstrate the architecture's generalizability by successfully applying it to English constituency parsing tasks.
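
Because the architecture drops recurrence entirely, the paper injects token order through fixed sinusoidal positional encodings added to the input embeddings: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A minimal sketch of those formulas follows; the sequence length and model width are illustrative, not the paper's training configuration.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
       PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), per the paper."""
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]  # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions get cosine
    return pe

# Illustrative sizes: 10 positions, model width 16.
print(sinusoidal_positional_encoding(10, 16).shape)  # (10, 16)
```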

Cited by 2 pages

Page | Type | Quality
Large Language Models | Concept | 62.0
Dense Transformers | Concept | 58.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 6 KB
[1706.03762] Attention Is All You Need

Computer Science > Computation and Language

 

 
arXiv:1706.03762 (cs)

[Submitted on 12 Jun 2017 (v1), last revised 2 Aug 2023 (this version, v7)]

Title: Attention Is All You Need
Authors: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin

 
 

 
 
 
Comments: 15 pages, 5 figures
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:1706.03762 [cs.CL] (or arXiv:1706.03762v7 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.1706.03762 (arXiv-issued DOI via DataCite)
 
 
 
 
 
 
 
Submission history

From: Llion Jones [view email]
[v1] Mon, 12 Jun 2017 17:57:34 UTC (1,102 KB)
[v2] Mon, 19 Jun 2017 16:49:45 UTC (1,125 KB)
[v3] Tue, 20 Jun 2017 05:20:02 UTC (1,125 KB)
[v4] Fri, 30 Jun 2017 17:29:30 UTC (1,124 KB)
[v5] Wed, 6 Dec 2017 03:30:32 UTC (1,124 KB)
[v6] Mon, 24 Jul 2023 00:48:54 UTC (1,124 KB)
[v7] Wed, 2 Aug 2023 00:41:18 UTC (1,124 KB)

 
 
 
 
 

... (truncated, 6 KB total)
Resource ID: a7468c6851652691 | Stable ID: YmIxNjlhMj