Longterm Wiki

Emerging Technology Observatory - State of Global AI Safety Research


Published by Georgetown's Emerging Technology Observatory, this report provides an empirical, data-driven overview of the global AI safety research ecosystem, useful for policymakers and researchers assessing where investment and collaboration efforts are needed.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

An analysis by the Emerging Technology Observatory (ETO) examining the global landscape of AI safety research, including which countries, institutions, and researchers are contributing to the field. The report maps publication trends, funding, and collaboration patterns to assess the current state and distribution of AI safety work worldwide.

Key Points

  • Maps the global distribution of AI safety research output across countries and institutions
  • Analyzes publication trends and growth in AI safety as a research field over time
  • Identifies leading organizations and researchers contributing to AI safety globally
  • Examines international collaboration patterns and potential gaps in AI safety research coverage
  • Provides data-driven context for policy decisions about AI safety investment and coordination

Review

The Emerging Technology Observatory's report provides a comprehensive overview of the current state of global AI safety research, highlighting its rapid but still nascent development. The study finds that while AI safety research grew by 315% between 2017 and 2022, it remains a small fraction of overall AI research, comprising just 2% of AI-related publications. The report also highlights American dominance in the field: 40% of AI safety articles and 58% of highly cited papers had American authors. Notably, the research is not only growing but also highly impactful, with AI safety articles receiving an average of 33 citations, compared with 16 for the average article across all AI fields. Finally, the analysis identifies notable research clusters, including data poisoning, algorithmic fairness, explainable machine learning, gender bias, and out-of-distribution detection, suggesting a multifaceted approach to addressing potential risks in AI development.

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 5 KB
The state of global AI safety research – Emerging Technology Observatory

 Originally published 2024-04-03

Topics: Map of Science, Research Almanac, artificial intelligence, AI safety

Insights from ETO's Research Almanac and Map of Science

 
Over the past several months, we've been busy improving and updating the research data that powers many of our tools, such as the Map of Science and Research Almanac. Today, we're launching a new series featuring topic-by-topic insights from the data, beginning with the increasingly high-profile field of AI safety research.

 Key findings

 
  • AI safety research is growing fast, but is still a drop in the bucket of AI research overall.
  • American schools and companies lead the field, with Chinese organizations less prevalent than in other AI-related research domains.
  • Notable clusters of AI safety research from ETO's Map of Science covered themes including data poisoning, algorithmic fairness, explainable machine learning, gender bias, and out-of-distribution detection.

 
 Overall trends

 
According to the latest estimates from the Research Almanac, about 30,000 AI safety-related articles were released between 2017 and 2022. (This total, and the other Research Almanac-derived findings in this post, are based on articles with English titles or abstracts in our Merged Academic Corpus; they omit articles published solely in Chinese and non-public research. For further details and caveats, see the Almanac documentation.)

 AI safety research grew 315% between 2017 and 2022.

 
 
 
 Despite this rapid growth, we estimate AI safety research comprises only 2% of all research into AI.

 
 
 
Pound for pound, AI safety research is highly cited: the average AI safety-related research article has been cited 33 times, compared with 16 times for the average article across all AI fields.

 
 Country trends

 
40% of the AI safety-related articles in the Research Almanac dataset had American authors. 12% had Chinese authors, and 19% had European authors. (Note that some articles lack information about author nationality, and articles without English titles or abstracts are omitted, which could affect the numbers for Chinese authors.)

 Looking only at highly cited articles, America continues to lead in research production. 58% of top-cited AI safety articles (defined as the 10% of articles in each publication year with the most citations) had American authors, compared to 20% with Chinese authors and 15% with European authors.
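The "top-cited" definition above (the 10% of articles in each publication year with the most citations) can be sketched in a few lines of code. This is an illustrative implementation only, assuming a simple list of article records; the field names (`id`, `year`, `citations`) and any sample data are hypothetical, not ETO's actual schema or pipeline:

```python
from collections import defaultdict

def top_cited(articles, fraction=0.10):
    """Return the IDs of the top `fraction` most-cited articles
    within each publication year (at least one per year)."""
    by_year = defaultdict(list)
    for art in articles:
        by_year[art["year"]].append(art)

    top = set()
    for year, group in by_year.items():
        # Rank the year's articles by citation count, descending.
        group.sort(key=lambda a: a["citations"], reverse=True)
        k = max(1, round(len(group) * fraction))
        top.update(a["id"] for a in group[:k])
    return top
```

Computing the cutoff per publication year, rather than over the whole corpus, avoids biasing the "top-cited" set toward older articles, which have had more time to accumulate citations.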

 
 
 
 Compared to the U.S., Chinese authors tend to be less prevalent in AI safety research than in AI research overall, or research in other AI-related subfields (in all cases, looking at researc

... (truncated, 5 KB total)
Resource ID: 09909a27d1bb2f61 | Stable ID: YmY4MWJhYz