Longterm Wiki

Open Philanthropy: Progress in 2024 and Plans for 2025


Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

Open Philanthropy is one of the largest funders in the AI safety space; their annual progress reports are useful for understanding funding priorities, institutional strategy, and which organizations and research directions receive major philanthropic backing.

Metadata

Importance: 52/100 · Tags: organizational report, news

Summary

Open Philanthropy reviews its 2024 philanthropic activities and outlines priorities for 2025, with emphasis on AI safety research funding, strategic partnerships, and grants spanning global health and catastrophic risk reduction. The report provides transparency into one of the field's largest funders and signals where major resources will flow in the AI safety ecosystem.

Key Points

  • Open Philanthropy expanded partnerships and increased strategic grant-making across AI safety and global catastrophic risk domains in 2024.
  • The report signals continued prioritization of AI safety research as a core funding area alongside global health initiatives.
  • Plans for 2025 reflect evolving funder strategy in response to rapid AI development and emerging safety challenges.
  • The update offers rare transparency into how a major philanthropic actor allocates resources across the AI safety field-building ecosystem.
  • Field-building and training programs remain key pillars of Open Philanthropy's approach to developing AI safety talent and community.

Review

Open Philanthropy's 2024 report demonstrates a strategic evolution in its philanthropic approach, emphasizing collaborative funding and targeted investments in critical global challenges. The organization significantly expanded its work in AI safety, committing approximately $50 million to technical research and developing new frameworks for understanding potential risks from advanced AI systems. Its methodology continues to prioritize causes that are important, neglected, and tractable, with a growing focus on building external partnerships and pooled funds. Notable achievements include launching the Lead Exposure Action Fund (LEAF), supporting AI safety research infrastructure, and developing new approaches to tracking and mitigating global catastrophic risks. The work reflects a nuanced understanding of emerging technological challenges, particularly in AI, while maintaining a broad portfolio of global health, development, and risk mitigation initiatives.

Cited by 4 pages

6 FactBase facts citing this source

| Entity | Property | Value | As Of |
| --- | --- | --- | --- |
| Coefficient Giving | Total Funding Raised | 63,600,000 | 2024 |
| Coefficient Giving | Total Funding Raised | 2,800,000,000 | 2024 |
| Coefficient Giving | Total Funding Raised | 650,000,000 | 2024 |
| Coefficient Giving | Total Funding Raised | $650 million | 2024 |
| Coefficient Giving | Description | AI safety spending (2024): $63.6M | |
| Coefficient Giving | Total Funding Raised | $2.8 billion | 2024 |
Resource ID: 7ca35422b79c3ac9 | Stable ID: OWFiYTY5Yj