Longterm Wiki

Request for Proposals: Technical AI Safety Research

web

Credibility Rating

4/5 — High

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This RFP is relevant for AI safety researchers seeking funding; it reflects philanthropic prioritization of technical safety work and may signal which research directions funders consider neglected or high-impact.

Metadata

Importance: 45/100 · press release · reference

Summary

Coefficient Giving's Navigating Transformative AI fund solicits proposals for technical AI safety research projects, outlining funding priorities, eligibility criteria, and submission guidelines for researchers seeking grants. The RFP focuses on advancing safety-relevant technical work to reduce risks from transformative AI systems. It serves as a funding opportunity announcement for independent researchers and institutions working on AI safety.

Key Points

  • Open call for technical AI safety research proposals through Coefficient Giving's Navigating Transformative AI fund.
  • Targets safety-relevant technical work including areas like alignment, interpretability, and robustness of AI systems.
  • Provides funding criteria, eligibility requirements, and submission process for prospective grantees.
  • Aimed at supporting independent researchers and organizations working outside large well-funded institutions.
  • Part of a broader philanthropic effort to direct capital toward neglected technical AI safety problems.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Coefficient Giving | Organization | 55.0 |

2 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 38 KB

# Request for Proposals: Technical AI Safety Research

## A call for AI safety research

2025, like 2024, will see the release of the most capable AI system in history. In fact, we may see it happen multiple times, each a few weeks or months apart. This won’t require any spectacular breakthroughs — just the same steady progress we’ve seen for the last few years. No one knows how long this trend will last, but many AI researchers and developers now expect we’ll have human-level AI within a [decade](https://x.com/ylecun/status/1846574605894340950), and that it will be radically [transformative](https://darioamodei.com/machines-of-loving-grace).

At Open Philanthropy, we think the possibility of [transformative AI](https://coefficientgiving.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/#0-1-defining-transformative-artificial-intelligence-transformative-ai) is worth taking seriously and planning for right now. In particular, we should prepare for the risk that AI systems could be misaligned — that they might pursue goals that no one gave them and harm people in the process. We think that ML research today can help clarify this failure mode and reduce its likelihood.

Since 2014, Open Philanthropy has put [hundreds of millions of dollars](https://coefficientgiving.org/focus/scientific-research/) toward scientific research. We’ve funded groundbreaking work on [computational protein design](https://coefficientgiving.org/research/how-neil-king-and-david-baker-are-using-ai-to-create-more-effective-vaccines/), novel methods for [malaria eradication](https://coefficientgiving.org/funds/science-and-global-health-rd/?search=%22target+malaria%22#grants-science-and-global-health-randd), and cutting-edge strategies for [pandemic](https://coefficientgiving.org/funds/biosecurity-pandemic-preparedness/?search=far-UVC+sterilization#grants-biosecurity-and-pandemic-preparedness) [prevention](https://naobservatory.org/blog/fund-the-nao-to-detect-pandemics). With transformative AI on the horizon, we see another opportunity for our funding to accelerate highly impactful technical research. In consultation with our technical advisors, we’ve generated a list of research areas that we think offer high leverage for improving our understanding and control of AI.

**We expect to spend roughly $40M on this RFP over the next 5 months**, and we have funding available to spend substantially more depending on the quality of applications received. We’re open to proposals for grants of many sizes and purposes, ranging from rapid funding for API credits all the way to seed funding for new research organizations.

Whether you’re an expert on one of these research topics or you’ve barely thought about them, we encourage you to apply. Over the last few years, we’ve seen many researchers switch i

... (truncated, 38 KB total)
Resource ID: kb-fd5a7d6cfe6e7e1d | Stable ID: MzFlMTMzZj