[1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
BERT is a foundational language model architecture that established bidirectional transformer pre-training; it is relevant to AI safety as background for understanding the capabilities, alignment challenges, and behavioral properties of modern LLMs.
Paper Details
Metadata
Summary
BERT (Bidirectional Encoder Representations from Transformers) introduces a pre-training approach for language models that conditions on both left and right context in all layers, producing deep bidirectional representations from unlabeled text. Unlike earlier language models, BERT can be fine-tuned with minimal task-specific modifications to achieve state-of-the-art results across diverse NLP tasks. The model demonstrates significant empirical improvements on eleven benchmark tasks, including GLUE (80.5% score), MultiNLI (86.7% accuracy), and SQuAD question answering (Test F1 of 93.2 on v1.1 and 83.1 on v2.0).
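The bidirectional, fill-in-the-blank behaviour the summary describes can be illustrated with a short sketch. This is a minimal illustration, assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint (neither is referenced by this page); it exercises BERT's masked-language-model head, which predicts a masked token from context on both sides.

```python
# Minimal sketch: masked-token prediction with a pre-trained BERT checkpoint.
# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` weights; neither is part of the source page itself.
from transformers import pipeline

# The fill-mask pipeline wraps BERT's masked-language-model head: the model
# predicts the [MASK] token using context on both its left and its right.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("The capital of France is [MASK]."):
    # Each candidate carries a predicted token string and its softmax score.
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```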
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Deep Learning Revolution Era | Historical | 44.0 |
Cached Content Preview
[1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Computer Science > Computation and Language
arXiv:1810.04805 (cs)
[Submitted on 11 Oct 2018 (v1), last revised 24 May 2019 (this version, v2)]
Title: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Authors: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
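The abstract's claim that BERT needs "just one additional output layer" can be made concrete with a short sketch. This is a minimal illustration, assuming PyTorch and the Hugging Face `transformers` library; the original BERT release used TensorFlow, so this is not the authors' code, only an example of the same recipe, with hypothetical toy inputs and labels.

```python
# Minimal sketch of the fine-tuning recipe from the abstract: reuse the
# pre-trained encoder and add a single task-specific output layer.
# Assumes PyTorch and the Hugging Face `transformers` library; the original
# BERT release used TensorFlow, so this is an illustration, not that code.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification puts one linear layer over the pooled [CLS]
# representation; only that layer is newly initialized, the encoder is reused.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy two-example batch with hypothetical sentiment labels, for illustration.
batch = tokenizer(
    ["a delightful film", "a tedious slog"], padding=True, return_tensors="pt"
)
labels = torch.tensor([1, 0])

# One training step: the encoder weights and the new head are updated jointly.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```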
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1810.04805 [cs.CL] (or arXiv:1810.04805v2 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.1810.04805 (arXiv-issued DOI via DataCite)
Submission history
From: Ming-Wei Chang
[v1] Thu, 11 Oct 2018 00:50:01 UTC (227 KB)
[v2] Fri, 24 May 2019 20:37:26 UTC (309 KB)
Full-text links:
Access Paper: View PDF | TeX Source | view license
References & Citations: NASA ADS | Google Scholar | Semantic Scholar | 109 blog links
DBLP - CS Bibliography:
Jacob Devlin
Ming-Wei Chang
Kenton Lee
Kristina Toutanova
... (truncated, 5 KB total)
32f3c7d144f036a0 | Stable ID: NGUwZjI4Y2