
[2202.05924] Compute Trends Across Three Eras of Machine Learning

paper

Authors

Jaime Sevilla·Lennart Heim·Anson Ho·Tamay Besiroglu·Marius Hobbhahn·Pablo Villalobos

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Empirical analysis of compute scaling trends in machine learning across three eras, documenting the acceleration of training compute requirements since deep learning's emergence. Relevant to AI safety discussions about resource requirements, scaling laws, and the trajectory of AI development.

Paper Details

Citations
388 (10 influential)
Year
2022

Metadata

arXiv preprint · analysis

Abstract

Compute, data, and algorithmic advances are the three fundamental factors that guide the progress of modern Machine Learning (ML). In this paper we study trends in the most readily quantified factor - compute. We show that before 2010 training compute grew in line with Moore's law, doubling roughly every 20 months. Since the advent of Deep Learning in the early 2010s, the scaling of training compute has accelerated, doubling approximately every 6 months. In late 2015, a new trend emerged as firms developed large-scale ML models with 10 to 100-fold larger requirements in training compute. Based on these observations we split the history of compute in ML into three eras: the Pre Deep Learning Era, the Deep Learning Era and the Large-Scale Era. Overall, our work highlights the fast-growing compute requirements for training advanced ML systems.

Summary

This paper by Sevilla et al. analyzes historical trends in computational requirements for machine learning training. The authors identify three distinct eras: the Pre-Deep Learning Era (where compute doubled every ~20 months following Moore's law), the Deep Learning Era (beginning in the early 2010s with compute doubling every ~6 months), and the Large-Scale Era (starting in late 2015 with 10-100x jumps in compute requirements). The work demonstrates that compute scaling has dramatically accelerated since deep learning's emergence, with significant implications for the resources needed to train state-of-the-art ML systems.
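To make the reported growth rates concrete, here is a minimal sketch (not from the paper) of the standard way a doubling time is estimated: fit a log-linear trend to (date, training compute) observations and invert the slope. The data points below are synthetic, chosen only to illustrate the arithmetic.

```python
# Illustrative sketch: estimating a compute doubling time from a
# log-linear fit, the style of trend analysis Sevilla et al. apply
# to their dataset. The (year, FLOP) pairs here are made up.
import numpy as np

# Hypothetical training-compute observations.
years = np.array([2012.0, 2014.0, 2016.0, 2018.0, 2020.0])
flops = np.array([1e17, 1e19, 1e21, 1e23, 1e25])

# Ordinary least squares on log2(compute) vs. time:
# log2(C) ~ a * t + b, so compute doubles every 1/a years.
slope, _intercept = np.polyfit(years, np.log2(flops), deg=1)
doubling_time_months = 12.0 / slope

print(f"Doubling time: {doubling_time_months:.1f} months")
# These synthetic points grow ~100x every 2 years, giving roughly a
# 3.6-month doubling time -- faster than the ~6-month Deep Learning
# Era trend the paper reports, and far faster than the ~20-month
# Moore's-law pace of the Pre-Deep Learning Era.
```

A 20-month doubling time corresponds to about a 1.5x increase per year, while a 6-month doubling time corresponds to about 4x per year, which is why the paper's era boundaries mark such a sharp change in trajectory.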

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Epoch AI | Organization | 51.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 1 KB

Conversion to HTML encountered a fatal error and exited abruptly; the cached document may be truncated or damaged.

[View original on arXiv](https://arxiv.org/abs/2202.05924)