Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

Published by Coefficient Giving, a philanthropic advisory organization; useful for funders and researchers interested in understanding the funding landscape and gaps in AI safety and security.

Metadata

Importance: 52/100 · organizational report · analysis

Summary

This research piece from Coefficient Giving argues that AI safety and security research is significantly underfunded relative to the risks involved, and makes the case for philanthropists and funders to increase financial support for the field. It examines funding gaps, highlights promising organizations and research areas, and encourages diversification of the funder base beyond a few major donors.

Key Points

  • The AI safety and security field is critically underfunded compared to the scale of potential risks from advanced AI systems.
  • Current funding is concentrated among a small number of major philanthropists, creating fragility and gaps in the ecosystem.
  • Diversifying the funder base would improve field resilience and allow more research directions to be explored.
  • There are numerous high-impact organizations and programs in AI safety that could absorb additional philanthropic capital effectively.
  • Coefficient Giving positions this as a high-leverage opportunity for donors seeking to reduce catastrophic and existential risks.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Feb 27, 2026 · 22 KB

October 2, 2025

# AI Safety and Security Need More Funders

_**Editor’s note:** This article was published under our former name, Open Philanthropy._

## Introduction

Leading AI systems [outperform human experts](https://time.com/7279010/ai-virus-lab-biohazard-study/) in virology tasks relevant to creating novel pathogens and [show signs](https://www.antischeming.ai/) of deceptive behavior. Many experts predict that these systems will become smarter than humans in the next decade.\[1\] But efforts to mitigate the risks remain profoundly underfunded. **In this post, we argue that now is a uniquely high-impact moment for new philanthropic funders to enter the field of AI safety and security.**

We cover:

- **Why more philanthropic funders are needed now:** Additional funders can help build a more effective coalition behind AI safety and security; back areas and organizations that Good Ventures (our largest funding partner) is not well-positioned to support; and increase the total amount of funding in this space, which we think is still too low. Because of these factors, we are typically able to recommend giving opportunities to external funders that are 2-5x as cost-effective as Good Ventures’ marginal AI safety funding. ([More](https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/#whymorephilanthropic))
- **Examples of previous philanthropic wins in AI safety and security:** Our experience over 10 years of grantmaking in this space shows that well-targeted philanthropy can meaningfully reduce worst-case risks from advanced AI. We discuss several examples across our three investment pillars: visibility, safeguards, and capacity. ([More](https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/#philanthropycanachieve))
- **How other funders can get involved:** We help new funders reduce the time required to find high-impact philanthropic opportunities in the field by developing custom portfolios of grant recommendations that fit each donor’s interests and preferences. This includes connecting them with other leading experts and advisors, offering support to evaluate giving opportunities, and sourcing co-funding opportunities. ([More](https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/#howtoget))

This is the third in a three-part series on our approach to safety, progress, and AI. The [first](https://www.coefficientgiving.org/research/why-we-fund-both-progress-and-safety/) covered why we fund scientific and technological progress while also funding work to reduce risks from emerging technologies like AI. The [second](https://www.coefficientgiving.org/research/our-approach-to-ai-safety-and-security) described our grantmaking approach for AI safety and security.

## Why more philanthropic funders are needed

We’ve long thought that AI could bring enormous benefit

... (truncated, 22 KB total)