Longterm Wiki

Open Philanthropy Request for Proposals: Technical AI Safety Research


Credibility Rating

4/5 — High

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This Open Philanthropy RFP is a key funding opportunity document that shaped the direction of technical AI safety research by publicly identifying priority areas; useful context for understanding how philanthropic funding influences the field.

Metadata

Importance: 62/100 · Tags: press release, reference

Summary

Open Philanthropy issued a request for proposals seeking technical AI safety research projects, signaling funding priorities and research directions the organization considers most valuable. The RFP outlines areas of interest including interpretability, scalable oversight, and related alignment challenges, aiming to grow the field by supporting researchers and organizations working on these problems.

Key Points

  • Open Philanthropy seeks proposals for technical AI safety research across multiple focus areas including interpretability and scalable oversight.
  • The RFP serves as a field-building mechanism by directing philanthropic funding toward high-priority alignment research gaps.
  • Reflects Open Philanthropy's broader strategy of proactively shaping the AI safety research landscape through targeted grants.
  • Eligibility and submission criteria are provided to help individual researchers and organizations apply for funding.
  • Signals which technical safety problem areas are considered most promising and underfunded by a major AI safety funder.

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 38 KB

# Request for Proposals: Technical AI Safety Research

## A call for AI safety research

2025, like 2024, will see the release of the most capable AI system in history. In fact, we may see it happen multiple times, each a few weeks or months apart. This won’t require any spectacular breakthroughs — just the same steady progress we’ve seen for the last few years. No one knows how long this trend will last, but many AI researchers and developers now expect we’ll have human-level AI within a [decade](https://x.com/ylecun/status/1846574605894340950), and that it will be radically [transformative](https://darioamodei.com/machines-of-loving-grace).

At Open Philanthropy, we think the possibility of [transformative AI](https://coefficientgiving.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#0-1-defining-transformative-artificial-intelligence-transformative-ai) is worth taking seriously and planning for right now. In particular, we should prepare for the risk that AI systems could be misaligned — that they might pursue goals that no one gave them and harm people in the process. We think that ML research today can help to clarify this failure mode and reduce its likelihood.

Since 2014, Open Philanthropy has put [hundreds of millions of dollars](https://coefficientgiving.org/focus/scientific-research/) toward scientific research. We’ve funded groundbreaking work on [computational protein design](https://coefficientgiving.org/research/how-neil-king-and-david-baker-are-using-ai-to-create-more-effective-vaccines/), novel methods for [malaria eradication](https://coefficientgiving.org/funds/science-and-global-health-rd/?search=%22target+malaria%22#grants-science-and-global-health-randd), and cutting-edge strategies for [pandemic](https://coefficientgiving.org/funds/biosecurity-pandemic-preparedness/?search=far-UVC+sterilization#grants-biosecurity-and-pandemic-preparedness) [prevention](https://naobservatory.org/blog/fund-the-nao-to-detect-pandemics). With transformative AI on the horizon, we see another opportunity for our funding to accelerate highly impactful technical research. In consultation with our technical advisors, we’ve generated a list of research areas that we think offer high leverage for improving our understanding and control of AI.

**We expect to spend roughly $40M on this RFP over the next 5 months**, and we have funding available to spend substantially more depending on the quality of applications received. We’re open to proposals for grants of many sizes and purposes, ranging from rapid funding for API credits all the way to seed funding for new research organizations.

Whether you’re an expert on one of these research topics or you’ve barely thought about them, we encourage you to apply. Over the last few years, we’ve seen many researchers switch i

... (truncated, 38 KB total)
Resource ID: 913cb820e5769c0b | Stable ID: MWE1NzgyMW