Longterm Wiki

Open Philanthropy: Potential Risks from Advanced Artificial Intelligence

web

Credibility Rating

4/5 — High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

Open Philanthropy is a major philanthropic funder whose AI safety grantmaking strategy has significantly shaped the field; this page documents their rationale and scope for those seeking context on funding priorities and institutional perspectives.

Metadata

Importance: 62/100 · organizational report · homepage

Summary

Open Philanthropy's focus area page on potential risks from advanced AI outlines their strategic grantmaking approach to reducing catastrophic and existential risks from transformative AI systems. It explains their reasoning for prioritizing AI safety research, policy work, and field-building as among the most important philanthropic opportunities of our time.

Key Points

  • Open Philanthropy treats potential risks from advanced AI as one of its highest-priority cause areas due to the scale and severity of potential harms.
  • Funding is directed toward technical AI safety research, governance, policy, and efforts to build a robust AI safety field.
  • The page reflects a longtermist framework, emphasizing risks from AI systems that could be transformative within decades.
  • Open Philanthropy has become one of the largest funders of AI safety work, shaping research agendas at major labs and universities.
  • The focus area encompasses both near-term and speculative long-term risks, including misalignment and misuse scenarios.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Risk Portfolio Analysis | Analysis | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 16 KB


# Navigating Transformative AI

Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.

- 480+ grants made


## Contents

- [About the Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/#about-the-fund)
- [Funding Opportunities](https://coefficientgiving.org/funds/navigating-transformative-ai/#funding-opportunities)
- [Research & Updates](https://coefficientgiving.org/funds/navigating-transformative-ai/#research-and-updates)
- [Featured Grants](https://coefficientgiving.org/funds/navigating-transformative-ai/#featured-grants)

## About the Fund

### Program Leads

- [![](https://coefficientgiving.org/wp-content/uploads/claire-zabel-1.png)](https://coefficientgiving.org/team/claire-zabel/)

[Claire Zabel](https://coefficientgiving.org/team/claire-zabel/)

Managing Director, Short Timelines Special Projects

- [![](https://coefficientgiving.org/wp-content/uploads/luke-muehlhauser1.png)](https://coefficientgiving.org/team/luke-muehlhauser/)

[Luke Muehlhauser](https://coefficientgiving.org/team/luke-muehlhauser/)

Managing Director, AI Governance & Policy

- [![](https://coefficientgiving.org/wp-content/uploads/peter-favaloro.png)](https://coefficientgiving.org/team/peter-favaloro/)

[Peter Favaloro](https://coefficientgiving.org/team/peter-favaloro/)

Program Director, Technical AI Safety

- [![](https://coefficientgiving.org/wp-content/uploads/DZ9_5521-3-1-scaled.jpg)](https://coefficientgiving.org/team/eli-rose/)

[Eli Rose](https://coefficientgiving.org/team/eli-rose/)

Program Director, Global Catastrophic Risks Capacity Building


### Partners

- **[Good Ventures](https://www.goodventures.org/)**

Interested in providing funding within this space? Reach out to [partnerwithus@coefficientgiving.org](mailto:partnerwithus@coefficientgiving.org).

In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.

We think AI could be the most important technological development in human history. If handled well, it could [accelerate scientific discovery](https://www.coefficientgiving.org/research/how-neil-king-and-david-baker-are-using-ai-to-create-more-effective-vaccines/), improve [health outcomes](https://www.planned-obsolescence.org/the-costs-of-caution/), and create [unprecedented prosperity](https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments). If handled poorly, it could lead to catastrophic consequences: [many](https://aistatement.com/) [experts](https://red-lines.ai/) [think](https://idais.ai/dialogue/idais-shanghai/) that risks from AI-related misuse, loss of control, or drastic societal change could endanger human

... (truncated, 16 KB total)
Resource ID: f8f6f3ee55c2babe | Stable ID: YmQ1OGExOG