Coefficient Giving - AI Governance RFP
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Coefficient Giving
This RFP page is relevant for researchers or organizations seeking funding for AI governance work; it reflects growing philanthropic interest in supporting policy and coordination responses to transformative AI risks.
Metadata
Importance: 35/100 · press release · reference
Summary
Coefficient Giving's Navigating Transformative AI fund issues a Request for Proposals focused on AI governance initiatives. The RFP seeks to fund projects that address policy, coordination, and oversight challenges posed by advanced AI systems. It represents a philanthropic mechanism for channeling resources toward governance-focused AI safety work.
Key Points
- RFP from Coefficient Giving's 'Navigating Transformative AI' fund targeting AI governance projects
- Seeks to fund work addressing policy and oversight challenges from advanced/transformative AI systems
- Represents a philanthropic funding opportunity for researchers and organizations working on AI governance
- Focuses on coordination and institutional responses to transformative AI risks
- Part of a broader ecosystem of funders supporting AI safety and governance work
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Giving Pledge | Organization | 68.0 |
| AI Chip Governance Supply Chain | Concept | -- |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 45 KB
# Request for Proposals: AI Governance
_**This RFP closed on January 25, 2026.**_
_We’d like to thank everyone who submitted applications during this RFP round. We’re excited about the applications we’ve funded thus far, and we expect to continue growing our overall giving toward AI Governance in 2026. However, we’ve decided to allocate a larger portion of our staff capacity toward proactively identifying and addressing gaps we believe may be critical to successfully [navigate transformative AI](https://coefficientgiving.org/funds/navigating-transformative-ai/), and away from this particular RFP. We are likely to open more AI governance RFPs in the future, but we haven’t yet decided when._
* * *
AI has [enormous](https://epochai.org/blog/explosive-growth-from-ai-a-review-of-the-arguments) beneficial [potential](https://www.planned-obsolescence.org/the-costs-of-caution/) if it is governed well. However, in line with a [growing](https://www.science.org/doi/10.1126/science.adn0117) [contingent](https://www.safe.ai/work/statement-on-ai-risk) of [AI](https://blog.aiimpacts.org/p/2023-ai-survey-of-2778-six-things) (and [other](https://futureoflife.org/open-letter/long-view-leadership-on-existential-threats/)) [experts](https://idais.ai/) from academia, industry, government, and civil society, we also think that AI systems could [soon](https://www.cold-takes.com/where-ai-forecasting-stands-today/) (e.g. in the next 15 years) cause [catastrophic harm](https://80000hours.org/problem-profiles/artificial-intelligence/). For example, this could happen if malicious human actors [deliberately misuse](https://assets.publishing.service.gov.uk/media/6655982fdc15efdddf1a842f/international_scientific_report_on_the_safety_of_advanced_ai_interim_report.pdf#page=41) advanced AI systems, or if we [lose control](https://www.cold-takes.com/without-specific-countermeasures-the-easiest-path-to-transformative-ai-likely-leads-to-ai-takeover/) of future powerful systems designed to take autonomous actions.[\[1\]](https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-ai-governance/#f+20996+2+1) Though the nature and extent of these risks are still an ongoing area of scientific inquiry and debate, we think it is plausible that rapid progress in AI could lead to extreme risks, including (but not limited to):
The permanent disempowerment or extinction of humanity by misaligned AI … [Continue reading](https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-ai-governance/#f+20996+2+1)
To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below.
**Strong applications might be funded by Good Ventures (Coefficient Giving’s [partner organization](https://www.goo
... (truncated, 45 KB total)
Resource ID: 98043288a980ebe7 | Stable ID: OTdhYzI3Zj