over $9.4 million from Open Philanthropy
Web Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Coefficient Giving
This grant page documents Open Philanthropy's substantial funding of Redwood Research, useful for understanding the organizational and financial landscape of the AI safety field.
Metadata
Importance: 42/100 | press release | reference
Summary
Open Philanthropy awarded over $9.4 million in general support funding to Redwood Research, an AI safety organization focused on applied alignment research. This grant reflects Open Philanthropy's commitment to supporting technical AI safety work, particularly Redwood's efforts in areas like adversarial training, interpretability, and reducing risks from advanced AI systems.
Key Points
- Open Philanthropy provided $9.4M+ in general support to Redwood Research, one of the major AI safety organizations.
- Redwood Research focuses on applied technical safety research, including adversarial robustness and interpretability.
- General support grants indicate high donor confidence in an organization's mission and operational capacity.
- This funding helps sustain a team dedicated to empirical alignment research and practical safety interventions.
- The grant reflects broader philanthropic investment in organizations working on near-term and long-term AI risk reduction.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Redwood Research | Organization | 78.0 |
| AI Alignment Research Agendas | Crux | 69.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 20, 2026 | 13 KB

# Navigating Transformative AI
Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.
- 480+ grants made
## Contents
- [About the Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/#about-the-fund)
- [Funding Opportunities](https://coefficientgiving.org/funds/navigating-transformative-ai/#funding-opportunities)
- [Research & Updates](https://coefficientgiving.org/funds/navigating-transformative-ai/#research-and-updates)
- [Featured Grants](https://coefficientgiving.org/funds/navigating-transformative-ai/#featured-grants)
## About the Fund
### Program Leads
- [Claire Zabel](https://coefficientgiving.org/team/claire-zabel/)
Managing Director, Short Timelines Special Projects
- [Luke Muehlhauser](https://coefficientgiving.org/team/luke-muehlhauser/)
Managing Director, AI Governance & Policy
- [Peter Favaloro](https://coefficientgiving.org/team/peter-favaloro/)
Program Director, Technical AI Safety
- [Eli Rose](https://coefficientgiving.org/team/eli-rose/)
Program Director, Global Catastrophic Risks Capacity Building
### Partners
- **[Good Ventures](https://www.goodventures.org/)**
Interested in providing funding within this space? Reach out to [partnerwithus@coefficientgiving.org](mailto:partnerwithus@coefficientgiving.org).
In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.
We think AI could be the most important technological development in human history. If handled well, it could [accelerate scientific discovery](https://www.coefficientgiving.org/research/how-neil-king-and-david-baker-are-using-ai-to-create-more-effective-vaccines/), improve [health outcomes](https://www.planned-obsolescence.org/the-costs-of-caution/), and create [unprecedented prosperity](https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments). If handled poorly, it could lead to catastrophic consequences: [many](https://aistatement.com/) [experts](https://red-lines.ai/) [think](https://idais.ai/dialogue/idais-shanghai/) that risks from AI-related misuse, loss of control, or drastic societal change could endanger human
... (truncated, 13 KB total)

Resource ID: 8c79e00bab007a63 | Stable ID: MTFiNDI4Mj