Longterm Wiki

Scaling Up AI: Trends, Capabilities, and Implications — Our World in Data


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Our World in Data

A data-journalism explainer from Our World in Data on AI scaling trends; useful for grounding discussions of AI progress in empirical data, though content specifics are unavailable for deeper verification.

Metadata

Importance: 52/100 · blog post · educational

Summary

An Our World in Data article examining the scaling of AI systems, covering historical trends in compute, model size, and capabilities growth. It provides data-driven visualizations and analysis of how AI has advanced and what continued scaling may mean for society and safety.

Key Points

  • Documents exponential growth in AI training compute over decades, drawing on empirical data and historical benchmarks.
  • Visualizes trends in model parameters, dataset sizes, and benchmark performance to contextualize the pace of AI development.
  • Discusses implications of scaling for both beneficial applications and potential risks, framing AI progress in a broad societal context.
  • Provides accessible, evidence-based explainers suited for policymakers, researchers, and general audiences.
  • Situates AI scaling within broader technological and economic trends tracked by Our World in Data.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Compute Scaling Metrics | Analysis | 78.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 19 KB
[Home](https://ourworldindata.org/) [Artificial Intelligence](https://ourworldindata.org/artificial-intelligence)

For most of Artificial Intelligence’s (AI’s) history, many researchers expected that building truly capable systems would require a long series of scientific breakthroughs: revolutionary algorithms, deep insights into human cognition, or fundamental advances in our understanding of the brain. While scientific advances have played a role, recent AI progress has revealed an unexpected insight: much of the recent improvement in AI capabilities has come simply from scaling up existing AI systems. [1](https://ourworldindata.org/scaling-up-ai#note-1)

Here, scaling means deploying more computational power, using larger datasets, and building bigger models. This approach has worked surprisingly well so far. [2](https://ourworldindata.org/scaling-up-ai#note-2) Just a few years ago, state-of-the-art AI systems struggled with basic tasks like counting. [3](https://ourworldindata.org/scaling-up-ai#note-3) [4](https://ourworldindata.org/scaling-up-ai#note-4) Today, they can [solve complex math problems,](https://ourworldindata.org/grapher/test-scores-ai-capabilities-relative-human-performance) write software, create extremely realistic images and videos, and discuss academic topics.

This article provides a brief overview of scaling in AI over recent years. The data comes from [Epoch](https://epochai.org/), an organization that analyzes trends in computing, data, and investments to understand where AI might be headed. [5](https://ourworldindata.org/scaling-up-ai#note-5) Epoch maintains the most extensive dataset on AI models and regularly publishes [key figures](https://epochai.org/trends) on AI growth and change.

# What is scaling in AI models?

Let’s briefly break down what scaling means in AI. Scaling is about increasing three main things during training, which typically need to grow together:

- The amount of data used for training the AI;
- The model’s size, measured in “parameters”;
- Computational resources, often called “compute” in AI.
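These three quantities are linked: for dense transformer models, a common rule of thumb from the training literature (not stated in this article) estimates training compute as roughly six floating-point operations per parameter per training token. A minimal sketch, with a hypothetical model size and token count chosen purely for illustration:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute estimate: about 6 FLOP per
    parameter per token (forward plus backward pass for a dense
    transformer). A rough approximation, not an exact count."""
    return 6.0 * params * tokens

# Hypothetical example: a 70-billion-parameter model
# trained on 1.4 trillion tokens.
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e} FLOP")  # 5.88e+23 FLOP
```

This kind of estimate is how datasets such as Epoch’s approximate the compute used to train models whose exact training details were never published.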

The idea is simple but powerful: bigger AI systems, trained on more data and using more computational resources, [tend to perform better](https://ourworldindata.org/grapher/ai-performance-knowledge-tests-vs-training-computation). Even without substantial changes to the algorithms, this approach often leads to better performance across many tasks. [6](https://ourworldindata.org/scaling-up-ai#note-6)
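The scaling-laws literature often formalizes this relationship as a power law: loss falls predictably as parameters and data grow, down toward an irreducible floor. A sketch below uses the published Chinchilla-style parametric fit (Hoffmann et al.); the coefficients and the two example training runs are illustrative assumptions, not figures from this article:

```python
def scaling_loss(n_params: float, n_tokens: float,
                 E: float = 1.69, A: float = 406.4, B: float = 410.7,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style parametric loss curve: an irreducible loss E
    plus power-law terms that shrink as parameter count (N) and
    training tokens (D) grow. Coefficients are the published fit,
    used here purely for illustration."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss falls as scale increases along either axis:
small = scaling_loss(1e9, 20e9)     # hypothetical small run
large = scaling_loss(70e9, 1.4e12)  # hypothetical large run
print(small, large)  # the larger run has lower predicted loss
```

Curves like this are why scaling has been treated as a bet worth making: performance improvements from more compute, data, and parameters have so far been smooth and roughly predictable.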

Here is another reason why this is important: as researchers scale up these AI systems, they not only [improve](https://theaidigest.org/progress-and-dangers) at the tasks they were trained on but can sometimes develop new abilities that they did not have at a smaller scale. [7](https://ourworldindata.org/scaling-up-ai#note-7) For example, language models initially struggled with simple arithmetic tests like three-digit addition, but larger models could handle these easily once they reached a

... (truncated, 19 KB total)
Resource ID: cb1cd9e4d736df7f | Stable ID: NjBlZGI3Yz