Longterm Wiki

Epoch AI - AI Research and Forecasting Organization

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Epoch AI

Epoch AI is a key reference organization for empirical data on AI scaling trends; their compute and training run databases are widely cited in AI safety and governance discussions.

Metadata

Importance: 72/100 · tool page · homepage

Summary

Epoch AI is a research organization focused on investigating and forecasting trends in artificial intelligence, particularly around compute, training data, and algorithmic progress. They produce empirical analyses and datasets to inform understanding of AI development trajectories and support better decision-making in AI governance and safety.

Key Points

  • Tracks and analyzes trends in AI compute, dataset sizes, and algorithmic efficiency over time
  • Produces publicly available datasets and forecasts on AI progress to inform researchers and policymakers
  • Focuses on empirical, data-driven approaches to understanding the pace and direction of AI development
  • Research supports AI governance, prioritization, and resource allocation decisions
  • Bridges technical AI capabilities research with broader safety and policy communities

Review

Epoch AI represents a critical effort to systematically document and analyze the trajectory of artificial intelligence, focusing on quantitative metrics of computational scaling. Their research provides unique insight into the exponential growth of AI training compute, showing that training compute for frontier language models has grown approximately 5× per year since 2020, with significant implications for understanding technological progress.

The project's key contributions include tracking trends in computational performance, training costs, and model complexity across domains. By maintaining detailed databases of AI models, computing power, and hardware developments, Epoch AI offers a data-driven perspective on AI's rapid evolution. Their work is particularly valuable for researchers, policymakers, and industry professionals seeking to understand the technical and economic dynamics driving AI advancement.
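The 5× per year figure compounds quickly; a minimal sketch of the cumulative growth it implies (the growth rate is Epoch AI's reported trend, the time horizons are illustrative):

```python
# Cumulative growth implied by a 5x/year trend in frontier training compute.
# The 5x/year rate is Epoch AI's reported figure; the horizons below are
# illustrative, not claims about specific models.
rate = 5.0
for years in (1, 2, 4, 6):
    print(f"{years} yr: {rate**years:,.0f}x total growth")
# 4 years at 5x/year already implies a 625x increase in training compute.
```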

Cited by 15 pages

Cached Content Preview

HTTP 200 · Fetched Apr 4, 2026 · 7 KB
Epoch AI

Investigating the trajectory of AI for the benefit of society.

Homepage trend dashboard:

  • LLM inference prices: 40×/year (halving every 2 months; 2 OOM/year). The cost to run inference on an LLM at a fixed level of performance has fallen rapidly, but unevenly across tasks. Model Performance, 90% CI: 10× to 900× per year.
  • Compute stock growth: 2.3×/year (doubling every 10 months; 0.36 OOM/year). The total computing power of the stock of NVIDIA chips. AI Companies, 90% CI: 2.2× to 2.5× per year.
  • Training compute: 5×/year (doubling every 5.2 months; 0.7 OOM/year). Training compute for frontier language models, growing since 2020. Training Runs, 90% CI: 4× to 6× per year.
  • Algorithmic progress: 3.0×/year (doubling every 7.6 months; 0.5 OOM/year). Pre-training compute efficiency. Training Runs, 90% CI: 2.8× to 4.4× per year.
  • Largest AI data center: 600,000 H100e. The largest known AI data center has computing power equivalent to 600,000 NVIDIA H100 chips. Data Centers, 90% CI: 400k to 900k H100e.
  • FLOP/s per dollar: 1.37×/year (doubling every 2.2 years; 0.14 OOM/year). AI chip performance per dollar has improved by 37% per year. Hardware, 90% CI: 1.30× to 1.45× per year.

See all AI trends & statistics
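Each dashboard card states one growth rate three ways: as an annual factor, a doubling time, and orders of magnitude (OOM) per year. The three are mathematically equivalent; a short sketch of the conversion:

```python
import math

def growth_stats(factor_per_year):
    """Convert an annual growth factor into its equivalent
    doubling time (in months) and orders of magnitude per year."""
    doubling_months = 12 * math.log(2) / math.log(factor_per_year)
    oom_per_year = math.log10(factor_per_year)
    return doubling_months, oom_per_year

# Training compute for frontier language models: 5x/year
months, oom = growth_stats(5)
print(f"doubling every {months:.1f} months, {oom:.2f} OOM/year")
# -> doubling every 5.2 months, 0.70 OOM/year
```

Running the same conversion on the other cards reproduces their figures (2.3×/year → 10 months, 0.36 OOM/year; 3.0×/year → 7.6 months; 1.37×/year → 2.2 years, 0.14 OOM/year), so the dashboard's three phrasings are internally consistent.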
 
 
 
 
 
Trusted by leading publications and institutions

Reliable data and independent research

  • Data (updated Apr. 2, 2026): Frontier Data Centers. Open database of AI data centers using satellite and permit data to show compute, power use, and construction timelines.
  • Data Insight (Feb. 4, 2026): Compute accounts for the majority of expenses of AI companies
  • Report (Aug. 20, 2024): Can AI scaling continue through 2030?
  • FrontierMath
  • Data (updated Apr. 4, 2026): AI Capabilities

Stay informed! Get the latest updates on our research and weekly commentary on AI news and developments. Subscribe.

Our latest work: data-insight, Mar. 24, 2026, Total AI ch
... (truncated, 7 KB total)
Resource ID: 120adc539e2fa558 | Stable ID: YjQ3MzM4Zj