Longterm Wiki

Kaplan et al. (2020)

paper

Authors

Jared Kaplan·Sam McCandlish·Tom Henighan·Tom B. Brown·Benjamin Chess·Rewon Child·Scott Gray·Alec Radford·Jeffrey Wu·Dario Amodei

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational empirical study establishing power-law scaling relationships for language model loss across model size, dataset size, and compute, which is critical for understanding AI capability development and resource requirements in AI safety research.

Paper Details

Citations
7,388
546 influential
Year
2020

Metadata

arXiv preprint · primary source

Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.

Summary

Kaplan et al. (2020) empirically characterize scaling laws for language model performance, demonstrating that cross-entropy loss follows power-law relationships with model size, dataset size, and compute budget, with some trends spanning more than seven orders of magnitude. The study finds that architectural details such as width and depth have minimal impact within a wide range, while overfitting and training speed follow predictable patterns. Crucially, the findings show that larger models are significantly more sample-efficient, implying that optimal compute-efficient training involves training very large models on relatively modest datasets and stopping significantly before convergence.
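The core empirical move in the paper is that a power law $L(N) = (N_c/N)^{\alpha_N}$ becomes a straight line in log-log space, so its exponent can be read off with a linear fit. A minimal sketch of that procedure on synthetic data (the constants here are only of roughly the magnitude reported in the paper, and the data is generated, not measured):

```python
import numpy as np

# Synthetic (model_size, loss) points following an assumed power law
# L(N) = (Nc / N) ** alpha, with illustrative constants, plus small noise.
rng = np.random.default_rng(0)
alpha_true, nc_true = 0.076, 8.8e13        # rough magnitudes; treat as placeholders
sizes = np.logspace(6, 11, 20)             # 1M to 100B non-embedding parameters
loss = (nc_true / sizes) ** alpha_true * np.exp(rng.normal(0, 0.01, sizes.size))

# A power law is linear in log-log space:
# log L = alpha * log Nc - alpha * log N, so regress log L on log N.
slope, intercept = np.polyfit(np.log(sizes), np.log(loss), 1)
alpha_fit = -slope
nc_fit = np.exp(intercept / alpha_fit)

print(f"fitted exponent alpha ~= {alpha_fit:.3f}")
```

The same log-log regression applies to the dataset-size and compute trends; only the x-axis variable changes.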

Cited by 8 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# Scaling Laws for Neural Language Models

Jared Kaplan

Johns Hopkins University, OpenAI

jaredk@jhu.edu

Sam McCandlish∗

OpenAI

sam@openai.com

Tom Henighan

OpenAI

henighan@openai.com

Tom B. Brown

OpenAI

tom@openai.com

Benjamin Chess

OpenAI

bchess@openai.com

Rewon Child

OpenAI

rewon@openai.com

Scott Gray

OpenAI

scott@openai.com

Alec Radford

OpenAI

alec@openai.com

Jeffrey Wu

OpenAI

jeffwu@openai.com

Dario Amodei

OpenAI

damodei@openai.com

Equal contribution.

Contributions: Jared Kaplan and Sam McCandlish led the research. Tom Henighan contributed the LSTM experiments. Tom Brown, Rewon Child, Scott Gray, and Alec Radford developed the optimized Transformer implementation. Jeff Wu, Benjamin Chess, and Alec Radford developed the text datasets. Dario Amodei provided guidance throughout the project.

###### Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss.
The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude.
Other architectural details such as network width or depth have minimal effects within a wide range.
Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size.
These relationships allow us to determine the optimal allocation of a fixed compute budget.
Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
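The sample-efficiency claim in the abstract follows from the joint size/data form the paper fits, $L(N, D) = \left[ (N_c/N)^{\alpha_N/\alpha_D} + D_c/D \right]^{\alpha_D}$: at a fixed data budget, the size-dependent term shrinks with $N$, so a larger model sits at strictly lower loss. A numeric sketch, with constants of only roughly the reported magnitude (treat them as illustrative, not the paper's exact fitted values):

```python
# Joint size/data scaling form fitted in the paper; constants are
# illustrative placeholders, not the exact published fit.
aN, aD = 0.076, 0.095        # size and data exponents (rough magnitudes)
Nc, Dc = 8.8e13, 5.4e13      # critical size (params) and data (tokens)

def loss(n_params: float, n_tokens: float) -> float:
    """L(N, D) = [ (Nc/N)**(aN/aD) + Dc/D ]**aD."""
    return ((Nc / n_params) ** (aN / aD) + Dc / n_tokens) ** aD

# Sample efficiency: with the same token budget, the larger model
# reaches a lower loss than the smaller one.
small, large, tokens = 1e8, 1e10, 1e9
print(loss(small, tokens) > loss(large, tokens))   # larger model wins
```

This is also why compute-optimal training stops early: past a point, spending compute on more parameters reduces loss faster than spending it on more tokens for a smaller model.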

## 1 Introduction

Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in language modeling, with state of the art models \[ [RNSS18](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx30 ""), [DCLT18](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx7 ""), [YDY+19](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx47 ""), [LOG+19](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx24 ""), [RSR+19](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx33 "")\] approaching human-level performance on many specific tasks \[ [WPN+19](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx44 "")\], including the composition of coherent multi-paragraph prompted text samples \[ [RWC+19](https://ar5iv.labs.arxiv.org/html/2001.08361#bib.bibx34 "")\].

One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these f

... (truncated, 98 KB total)