An Overview of the AI Safety Funding Situation
Author
Stephen McAleese
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
A useful landscape overview for those interested in the organizational and financial infrastructure of the AI safety field, particularly relevant for researchers, funders, and newcomers evaluating career or grant opportunities.
Forum Post Details
Karma
74
Comments
10
Forum
lesswrong
Forum Tags
Community, AI
Metadata
Importance: 55/100 · blog post · analysis
Summary
This post provides a comprehensive analysis of the global AI safety funding landscape, mapping major funders such as Open Philanthropy and the Survival and Flourishing Fund. It estimates total annual spending on AI safety research and examines whether the field is constrained primarily by funding availability or by a shortage of qualified talent.
Key Points
- Maps major AI safety funders including Open Philanthropy, Survival and Flourishing Fund, and academic institutions, estimating total global spending.
- Explores the talent vs. funding bottleneck debate: whether more money or more skilled researchers is the limiting factor for AI safety progress.
- Identifies gaps and concentration risks in the funding ecosystem, including heavy reliance on a small number of philanthropic sources.
- Provides context for researchers and organizations seeking funding, clarifying where resources are flowing and where needs remain unmet.
- Useful reference for understanding the resource constraints shaping the pace and direction of AI safety research.
Review
This detailed analysis offers a nuanced examination of the AI safety funding landscape, revealing the complex ecosystem of financial support for preventing potential negative AI outcomes. The research tracks funding from philanthropic organizations, government grants, academic research, and for-profit companies, documenting a growing financial commitment to AI safety research. The methodology involves aggregating grant databases, constructing Fermi estimates, and analyzing spending across different organizational types. Key findings include an estimated $32 million contribution from for-profit AI companies, approximately $11 million from academic research in 2023, and significant contributions from organizations like Open Philanthropy. The analysis goes beyond financial tracking, exploring whether the field is more constrained by talent or funding and suggesting a complex interdependence between financial resources and human capital.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Risk Critical Uncertainties Model | Crux | 71.0 |
| Technical AI Safety Research | Crux | 66.0 |
4 FactBase facts citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| Coefficient Giving | Total Funding Raised | 336,000,000 | 2024 |
| Coefficient Giving | Description | Cumulative AI safety grants (2017-2024): $336M (~12% of total giving) | — |
| Coefficient Giving | Total Funding Raised | 46,000,000 | 2023 |
| Coefficient Giving | Description | AI safety spending (2023): $46M | — |
Cached Content Preview
HTTP 200 · Fetched Feb 22, 2026 · 36 KB
An Overview of the AI Safety Funding Situation
by Stephen McAleese · 12th Jul 2023 · 18 min read
Note: this post was updated in January 2025 to reflect all available data from 2024.
Introduction
AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field researches problems such as AI alignment: the problem of designing AI systems that follow user intentions and behave in a desirable and beneficial way.
Understanding and solving AI safety problems may involve reading past research, producing new research in the form of papers or posts, running experiments with ML models, and so on. Producing research typically involves many different inputs such as research staff, compute, equipment, and office space.
These inputs all require funding and therefore funding is a crucial input for enabling or accelerating AI safety research. Securing funding is usually a prerequisite for starting or continuing AI safety research in industry, in an academic setting, or independently.
There are many barriers that could prevent people from working on AI safety. Funding is one of them. Even if someone is working on AI safety, a lack of funding may prevent them from continuing to work on it.
It’s not clear how hard AI safety problems like AI alignment are. But in any case, humanity is more likely to solve them if there are hundreds or thousands of brilliant minds working on them rather than one guy. I would like there to be a large and thriving community of people working on AI safety and I think funding is an important prerequisite for enabling that.
The goal of this post is to give the reader a better understanding of funding opportunities in AI safety so that hopefully funding will be less of a barrier if they want to work on AI safety. The post starts with a high-level overview of the AI safety funding situation followed by a more in-depth description of various funding opportunities.
Past work
To get an overview of AI safety spending, we first need to find out how much is spent on it per year. We can use past work as a prior and then use grant data to find a more accurate estimate.
Changes in funding in the AI safety field (2017) by the Center for Effective Altruism estimated the change in AI safety funding between 2014 and 2017. In 2017, the post estimated that total AI safety spending was about $9 million.
How are resources in effective altruism allocated across issues? (2020) by 80,000 Hours estimated the amount of money spent by EA on AI safety in 2019. Using data from the Open Philanthropy grants database, the post says that EA spent about $40 million on AI safety globally in 2019.
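The approach described above, summing grant-database entries by year to estimate total annual spending, can be sketched as follows. This is a minimal illustration; the funder names and dollar amounts below are hypothetical placeholders, not figures from the post or from any actual grants database.

```python
from collections import defaultdict

# Illustrative placeholder records, not real grant data.
grants = [
    {"funder": "Funder A", "year": 2019, "amount": 30_000_000},
    {"funder": "Funder B", "year": 2019, "amount": 8_000_000},
    {"funder": "Funder C", "year": 2019, "amount": 2_000_000},
]

def yearly_totals(records):
    """Sum grant amounts per year to estimate total annual spending."""
    totals = defaultdict(int)
    for grant in records:
        totals[grant["year"]] += grant["amount"]
    return dict(totals)

print(yearly_totals(grants))  # {2019: 40000000}
```

In practice, such aggregation requires deduplicating grants that appear in multiple databases and deciding what fraction of each grant counts as "AI safety," which is where the Fermi-estimate judgment calls come in.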
In The Precipice (2020), To
... (truncated, 36 KB total)
Resource ID:
b1ab921f9cbae109 | Stable ID: NmEzYWI0MW