Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This Open Philanthropy grants page documents funding allocated to AI alignment projects focused on deep learning. It is useful for understanding the funding landscape and which research directions receive philanthropic support in the AI safety field.

Metadata

Importance: 45/100 · organizational report · reference

Summary

Open Philanthropy funding announcement for AI alignment research projects focused on working with deep learning systems. These grants support technical alignment research aimed at making advanced AI systems safer and more reliably aligned with human values.

Key Points

  • Open Philanthropy is funding multiple AI alignment projects specifically targeting deep learning systems
  • Grants reflect a strategic focus on near-term technical alignment work relevant to current ML paradigms
  • Funding supports the broader ecosystem of researchers working on making powerful AI systems safe
  • Represents Open Philanthropy's ongoing commitment to technical AI safety as a priority cause area

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Alignment Research Agendas | Crux | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB


# Navigating Transformative AI

Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.

- 480+ grants made


## Contents

- [About the Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/#about-the-fund)
- [Funding Opportunities](https://coefficientgiving.org/funds/navigating-transformative-ai/#funding-opportunities)
- [Research & Updates](https://coefficientgiving.org/funds/navigating-transformative-ai/#research-and-updates)
- [Featured Grants](https://coefficientgiving.org/funds/navigating-transformative-ai/#featured-grants)

## About the Fund

### Program Leads

- [![](https://coefficientgiving.org/wp-content/uploads/claire-zabel-1.png)](https://coefficientgiving.org/team/claire-zabel/)

[Claire Zabel](https://coefficientgiving.org/team/claire-zabel/)

Managing Director, Short Timelines Special Projects

- [![](https://coefficientgiving.org/wp-content/uploads/luke-muehlhauser1.png)](https://coefficientgiving.org/team/luke-muehlhauser/)

[Luke Muehlhauser](https://coefficientgiving.org/team/luke-muehlhauser/)

Managing Director, AI Governance & Policy

- [![](https://coefficientgiving.org/wp-content/uploads/peter-favaloro.png)](https://coefficientgiving.org/team/peter-favaloro/)

[Peter Favaloro](https://coefficientgiving.org/team/peter-favaloro/)

Program Director, Technical AI Safety

- [![](https://coefficientgiving.org/wp-content/uploads/DZ9_5521-3-1-scaled.jpg)](https://coefficientgiving.org/team/eli-rose/)

[Eli Rose](https://coefficientgiving.org/team/eli-rose/)

Program Director, Global Catastrophic Risks Capacity Building


### Partners

- **[Good Ventures](https://www.goodventures.org/)**

Interested in providing funding within this space? Reach out to [partnerwithus@coefficientgiving.org](mailto:partnerwithus@coefficientgiving.org).

In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.

We think AI could be the most important technological development in human history. If handled well, it could [accelerate scientific discovery](https://www.coefficientgiving.org/research/how-neil-king-and-david-baker-are-using-ai-to-create-more-effective-vaccines/), improve [health outcomes](https://www.planned-obsolescence.org/the-costs-of-caution/), and create [unprecedented prosperity](https://epoch.ai/blog/explosive-growth-from-ai-a-review-of-the-arguments). If handled poorly, it could lead to catastrophic consequences: [many](https://aistatement.com/) [experts](https://red-lines.ai/) [think](https://idais.ai/dialogue/idais-shanghai/) that risks from AI-related misuse, loss of control, or drastic societal change could endanger human

... (truncated, 13 KB total)
Resource ID: e7f61a6aa8370b8c | Stable ID: YWFmYjFkOT