Longterm Wiki

Coefficient Giving

coefficient-giving (E521)
Path: /knowledge-base/organizations/coefficient-giving/
Page Metadata
{
  "id": "coefficient-giving",
  "numericId": null,
  "path": "/knowledge-base/organizations/coefficient-giving/",
  "filePath": "knowledge-base/organizations/coefficient-giving.mdx",
  "title": "Coefficient Giving",
  "quality": 55,
  "importance": 62,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-01-29",
  "llmSummary": "Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.",
  "structuredSummary": null,
  "description": "Coefficient Giving (formerly Open Philanthropy) is a major philanthropic organization that has directed over $4 billion in grants since 2014, including $336+ million to AI safety. In November 2025, Open Philanthropy rebranded to Coefficient Giving and restructured into 13 cause-specific funds open to multiple donors. The Navigating Transformative AI Fund supports technical safety research, AI governance, and capacity building, with a $40M Technical AI Safety RFP in 2025. Key grantees include Center for AI Safety ($8.5M in 2024), Redwood Research ($6.2M), and MIRI ($4.1M).",
  "ratings": {
    "novelty": 2.5,
    "rigor": 5,
    "actionability": 6.5,
    "completeness": 6.5
  },
  "category": "organizations",
  "subcategory": "funders",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 3602,
    "tableCount": 21,
    "diagramCount": 2,
    "internalLinks": 9,
    "externalLinks": 50,
    "footnoteCount": 0,
    "bulletRatio": 0.1,
    "sectionCount": 38,
    "hasOverview": true,
    "structuralScore": 15
  },
  "suggestedQuality": 100,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 3602,
  "unconvertedLinks": [
    {
      "text": "Center for Human-Compatible AI",
      "url": "https://humancompatible.ai/",
      "resourceId": "9c4106b68045dbd6",
      "resourceTitle": "Center for Human-Compatible AI"
    },
    {
      "text": "Future of Humanity Institute",
      "url": "https://www.fhi.ox.ac.uk/",
      "resourceId": "1593095c92d34ed8",
      "resourceTitle": "**Future of Humanity Institute**"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "EA Funds",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "S-process rounds",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "overview of AI safety funding",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "An Overview of the AI Safety Funding Situation",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    }
  ],
  "unconvertedLinkCount": 12,
  "convertedLinkCount": 0,
  "backlinkCount": 4,
  "redundancy": {
    "maxSimilarity": 17,
    "similarPages": [
      {
        "id": "ltff",
        "title": "Long-Term Future Fund (LTFF)",
        "path": "/knowledge-base/organizations/ltff/",
        "similarity": 17
      },
      {
        "id": "sff",
        "title": "Survival and Flourishing Fund (SFF)",
        "path": "/knowledge-base/organizations/sff/",
        "similarity": 17
      },
      {
        "id": "longview-philanthropy",
        "title": "Longview Philanthropy",
        "path": "/knowledge-base/organizations/longview-philanthropy/",
        "similarity": 16
      },
      {
        "id": "manifund",
        "title": "Manifund",
        "path": "/knowledge-base/organizations/manifund/",
        "similarity": 16
      },
      {
        "id": "rethink-priorities",
        "title": "Rethink Priorities",
        "path": "/knowledge-base/organizations/rethink-priorities/",
        "similarity": 16
      }
    ]
  }
}
Entity Data
{
  "id": "coefficient-giving",
  "type": "organization",
  "title": "Coefficient Giving",
  "description": "Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering",
  "tags": [],
  "relatedEntries": [],
  "sources": [],
  "lastUpdated": "2026-02",
  "website": "https://coefficientgiving.org",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (4)
| id | title | type/relationship |
|----|-------|--------------------|
| dustin-moskovitz | Dustin Moskovitz (AI Safety Funder) | researcher |
| intervention-portfolio | AI Safety Intervention Portfolio | approach |
| training-programs | AI Safety Training Programs | approach |
| field-building-analysis | AI Safety Field Building Analysis | approach |
Frontmatter
{
  "title": "Coefficient Giving",
  "description": "Coefficient Giving (formerly Open Philanthropy) is a major philanthropic organization that has directed over $4 billion in grants since 2014, including $336+ million to AI safety. In November 2025, Open Philanthropy rebranded to Coefficient Giving and restructured into 13 cause-specific funds open to multiple donors. The Navigating Transformative AI Fund supports technical safety research, AI governance, and capacity building, with a $40M Technical AI Safety RFP in 2025. Key grantees include Center for AI Safety ($8.5M in 2024), Redwood Research ($6.2M), and MIRI ($4.1M).",
  "sidebar": {
    "order": 5
  },
  "quality": 55,
  "llmSummary": "Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.",
  "lastEdited": "2026-01-29",
  "importance": 62,
  "update_frequency": 45,
  "ratings": {
    "novelty": 2.5,
    "rigor": 5,
    "actionability": 6.5,
    "completeness": 6.5
  },
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "subcategory": "funders",
  "entityType": "organization"
}
Raw MDX Source
---
title: Coefficient Giving
description: Coefficient Giving (formerly Open Philanthropy) is a major philanthropic organization that has directed over $4 billion in grants since 2014, including $336+ million to AI safety. In November 2025, Open Philanthropy rebranded to Coefficient Giving and restructured into 13 cause-specific funds open to multiple donors. The Navigating Transformative AI Fund supports technical safety research, AI governance, and capacity building, with a $40M Technical AI Safety RFP in 2025. Key grantees include Center for AI Safety ($8.5M in 2024), Redwood Research ($6.2M), and MIRI ($4.1M).
sidebar:
  order: 5
quality: 55
llmSummary: Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.
lastEdited: "2026-01-29"
importance: 62
update_frequency: 45
ratings:
  novelty: 2.5
  rigor: 5
  actionability: 6.5
  completeness: 6.5
clusters:
  - community
  - ai-safety
  - governance
subcategory: funders
entityType: organization
---
import {DataInfoBox, Mermaid, EntityLink} from '@components/wiki';

## Quick Assessment

| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Scale** | Dominant | \$4B+ total grants; ≈\$46M AI safety in 2023 |
| **Structure** | 13 cause-specific funds | Multi-donor pooled funds since Nov 2025 rebrand |
| **AI Safety Focus** | Leading funder | \$336M+ to AI safety since 2014; ≈60% of external AI safety funding |
| **Application Model** | Rolling RFPs + regranting | 300-word EOI, 2-week response; supports platforms like <EntityLink id="E547">Manifund</EntityLink> |
| **Transparency** | High | Public grants database, annual progress reports |
| **Key Funders** | Good Ventures (primary) | <EntityLink id="E436">Dustin Moskovitz</EntityLink> & Cari Tuna; expanding to multi-donor model |

## Organization Details

| Attribute | Details |
|-----------|---------|
| **Full Name** | Coefficient Giving (formerly <EntityLink id="E552">Open Philanthropy</EntityLink>) |
| **Type** | Philanthropic Advising and Funding Organization |
| **Legal Structure** | LLC (independent since 2017) |
| **Founded** | 2014 (as GiveWell outgrowth); 2017 (independent); 2025 (rebranded) |
| **Total Grants** | \$4+ billion (as of June 2025) |
| **AI Safety Grants** | \$336+ million (≈12% of total) |
| **2024 AI Safety Spend** | ≈\$50 million committed |
| **Leadership** | Alexander Berger (CEO), <EntityLink id="E156">Holden Karnofsky</EntityLink> (Board) |
| **Location** | San Francisco, California |
| **Website** | [coefficientgiving.org](https://coefficientgiving.org/) |
| **Grants Database** | [coefficientgiving.org/grants](https://coefficientgiving.org/grants/) |

## Overview

[Coefficient Giving](https://coefficientgiving.org/) is a major philanthropic organization that has directed over \$4 billion in grants since 2014 across global health, AI safety, pandemic preparedness, farm animal welfare, and other cause areas. In November 2025, the organization [rebranded from Open Philanthropy to Coefficient Giving](https://coefficientgiving.org/research/open-philanthropy-is-now-coefficient-giving/), signaling an expansion from serving primarily one anchor donor (Good Ventures, the foundation of Dustin Moskovitz and Cari Tuna) to operating 13 cause-specific funds open to multiple philanthropists. The name "Coefficient" reflects the organization's goal of multiplying impact through research, grantmaking, and partnerships—with "co" nodding to collaboration and "efficient" reflecting their unusual focus on cost-effectiveness.

Coefficient Giving is widely considered the largest funder of AI safety work globally. Since 2014, approximately \$336 million (12% of total grants) has gone to AI safety research and governance, with roughly \$46 million deployed in 2023 alone—making it the dominant external funder in a field where most safety research happens inside frontier AI labs. The organization's [Navigating Transformative AI Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/) supports <EntityLink id="E297">technical AI safety research</EntityLink>, <EntityLink id="E154">AI governance and policy</EntityLink> work, and capacity building, with a [\$40 million Technical AI Safety RFP](https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/) launched in 2025 covering 21 research areas.

The organization distinguishes itself through its strategic cause selection methodology—identifying problems that are large, tractable, and neglected relative to their size. This approach, combined with a willingness to fund speculative research and support multiple funding mechanisms (direct grants, regranting programs, pooled funds), has made Coefficient Giving central to the effective altruism funding ecosystem. However, critics have noted concerns about funding concentration, the slow pace of spending relative to the scale of AI risks, and heavy focus on evaluations over alignment research in recent <EntityLink id="E631">technical AI safety</EntityLink> grants.

## History and Evolution

### Origins (2011-2017)

Coefficient Giving traces its origins to 2011, when GiveWell, the charity evaluator founded by Holden Karnofsky and Elie Hassenfeld, began advising Good Ventures on how to deploy Dustin Moskovitz's philanthropic capital effectively. Good Ventures was established by Moskovitz (Facebook co-founder, net worth ≈\$12 billion) and Cari Tuna in 2011. By 2014, this advising relationship had formalized into the Open Philanthropy Project, a distinct project within GiveWell focused on identifying high-impact giving opportunities across a broader range of cause areas than GiveWell's traditional global health focus.

In 2017, Coefficient Giving [spun off from GiveWell](https://www.openphilanthropy.org/research/open-philanthropy-is-now-an-independent-organization/) as an independent LLC, enabling it to pursue its own strategic priorities while GiveWell continued focusing on evidence-backed global health interventions. The separation reflected diverging methodologies: GiveWell prioritizes robust evidence of effectiveness, while Coefficient Giving embraced "hits-based giving"—funding speculative, high-variance projects where a few major successes could justify many failures.

### Growth and AI Safety Focus (2015-2024)

Coefficient Giving began supporting AI safety work in 2015, when the field was nascent and institutional support was minimal. Early grants supported foundational organizations including the [Machine Intelligence Research Institute](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-ai-safety-retraining-program/) (<EntityLink id="E202">MIRI</EntityLink>), the [Center for Human-Compatible AI](https://humancompatible.ai/) at UC Berkeley, and the [Future of Humanity Institute](https://www.fhi.ox.ac.uk/) at Oxford. By 2023, AI safety had become Coefficient Giving's largest longtermist cause area, reflecting growing concern about advanced AI risks among the leadership team.

| Year | AI Safety Milestone |
|------|-------------------|
| 2015 | First AI safety grants; field had ≈10 full-time researchers |
| 2017 | Independent organization; Holden Karnofsky publishes AI concerns |
| 2019 | AI safety spending exceeds \$20M annually |
| 2022 | \$150M Regranting Challenge launched (not AI-specific) |
| 2023 | ≈\$46M AI safety spending; largest funder in the field |
| 2024 | ≈\$50M committed; 68% to evaluations/benchmarking |
| 2025 | Rebrand to Coefficient Giving; \$40M Technical AI Safety RFP |

### The November 2025 Rebrand

On November 18, 2025, Open Philanthropy [announced its rebranding to Coefficient Giving](https://coefficientgiving.org/research/open-philanthropy-is-now-coefficient-giving/). The change reflected several strategic shifts:

**Multi-Donor Expansion**: The organization moved from primarily serving Good Ventures to operating pooled funds open to any philanthropist. In 2024, Coefficient directed over \$100 million from donors besides Good Ventures; by 2025, non-Good Ventures funding had more than doubled.

**Brand Clarity**: The "Open Philanthropy" name created confusion; journalists mistook the organization for <EntityLink id="E218">OpenAI</EntityLink>, and potential grantees confused it with the Open Society Foundations. "Coefficient" provided a distinctive identity.

**Structural Reorganization**: The organization restructured from program areas to 13 distinct funds, each with dedicated leadership and transparent goals, allowing donors to support specific causes at scale.

## Organizational Structure

<Mermaid chart={`
flowchart TD
    GV[Good Ventures<br/>Moskovitz & Tuna] --> CG[Coefficient Giving<br/>\$4B+ directed]
    OD[Other Donors<br/>\$100M+ in 2024] --> CG

    CG --> NTAI[Navigating<br/>Transformative AI]
    CG --> BPP[Biosecurity &<br/>Pandemic Prep]
    CG --> GCR[Global Catastrophic<br/>Risks Opportunities]
    CG --> GH[Global Health<br/>& Wellbeing]
    CG --> OTHER[10 Other Funds]

    NTAI --> TAIS[Technical AI<br/>Safety Research]
    NTAI --> GOV[AI Governance<br/>& Policy]
    NTAI --> CAP[Capacity Building<br/>& Field Growth]

    TAIS --> CAIS[Center for AI Safety]
    TAIS --> REDWOOD[Redwood Research]
    TAIS --> MIRI[MIRI]

    style GV fill:#e6f3ff
    style CG fill:#ccffcc
    style NTAI fill:#ffffcc
    style TAIS fill:#ffcccc
`} />

### The 13 Funds Model

Since the November 2025 rebrand, Coefficient Giving operates through 13 cause-specific funds, each pooling money from multiple donors:

| Fund | Focus | Key Activities |
|------|-------|----------------|
| **Navigating Transformative AI** | AI safety & governance | Technical research, policy, capacity building |
| **Biosecurity & Pandemic Preparedness** | Catastrophic bio risks | Research, policy, infrastructure |
| **Global Catastrophic Risks Opportunities** | Cross-cutting x-risk work | Ecosystem support, foundational work |
| **Science and Global Health R&D** | Neglected disease research | TB, malaria, high-risk transformational science |
| **Global Health Policy** | Policy for health impact | Lead exposure, air pollution |
| **Global Aid Policy** | Development effectiveness | Evidence-based aid policy |
| **Farm Animal Welfare** | Factory farming reform | Welfare reforms, alternative proteins |
| **Effective Giving and Careers** | EA movement building | Giving What We Can, 80,000 Hours |
| **Abundance & Growth** | Economic prosperity | \$120M launched 2025 for scientific progress |
| **Criminal Justice Reform** | US criminal justice | Bail reform, prosecutorial accountability |
| **Land Use Reform** | Housing and development | YIMBY policy, zoning reform |
| **Immigration Policy** | Immigration reform | Policy research and advocacy |
| **Other Global Health** | Remaining health causes | Malaria, deworming, direct cash transfers |

### Navigating Transformative AI Fund

The [Navigating Transformative AI Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/) is Coefficient's primary vehicle for AI-related grantmaking, supporting:

**Technical AI Safety Research**: Work aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned. This includes interpretability research, robustness to adversarial inputs, scalable oversight methods, and understanding emergent capabilities.

**AI Governance and Policy**: Frameworks for safe, secure, and responsibly managed AI development, including export controls, compute governance, international coordination, and corporate governance mechanisms.

**Capacity Building**: Growing and strengthening the field of researchers and practitioners working on AI challenges, including training programs, career development, and institutional infrastructure.

**Short-Timeline Projects**: New projects expected to be particularly impactful if timelines to transformative AI are short, reflecting Coefficient's view that advanced AI could emerge within the next 5-15 years.

### Regrantor Model

| Component | Description |
|-----------|-------------|
| **Selection** | OP identifies trusted individuals with relevant expertise |
| **Budget** | Each regrantor receives \$200K - \$2M to distribute |
| **Autonomy** | Regrantors make independent decisions within guidelines |
| **Reporting** | Regrantors document grants, OP maintains oversight |
| **Renewal** | Strong performers may receive additional budgets |

### Regrantor Criteria

| Criterion | Description |
|-----------|-------------|
| **Domain Expertise** | Deep knowledge in cause area |
| **Community Connections** | Know who does good work |
| **Judgment** | Track record of good decisions |
| **Capacity** | Time to evaluate and make grants |
| **Values Alignment** | Share EA/longtermist priorities |

## AI Safety Grantmaking

### Major AI Safety Grantees (2024)

Coefficient's largest 2024 AI safety grants reflect priorities across evaluations, interpretability, and theoretical alignment work:

| Grantee | Amount | Focus | Notes |
|---------|--------|-------|-------|
| **Center for AI Safety** | \$8.5M | Field building, research | Training programs, compute grants, advocacy |
| **Redwood Research** | \$6.2M | Alignment research | Interpretability, control research; \$21M+ total from OP |
| **MIRI** | \$4.1M | Theoretical alignment | Agent foundations, deceptive alignment |
| **Epoch AI** | ≈\$3M | AI forecasting | Compute trends, capability timelines |
| **METR (formerly ARC Evals)** | ≈\$3M | Capability evaluations | Model evaluations used by labs and governments |
| **AI Safety Camp** | ≈\$500K | Talent pipeline | Intensive research programs |
| **Various Individuals** | ≈\$10M | Researchers, fellowships | PhDs, postdocs, independent researchers |

### 2024 Technical AI Safety Funding Breakdown

An [analysis of Coefficient Giving's Technical AI Safety funding](https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding) revealed the following distribution of the \$28M recorded in their database:

| Research Area | Percentage | Amount (~) | Assessment |
|---------------|------------|------------|------------|
| **Evaluations/Benchmarking** | 68% | \$19M | Primary focus; critics note AI Safety Institutes already well-resourced |
| **Interpretability** | ≈10% | ≈\$3M | Mechanistic interpretability, circuit analysis |
| **Robustness** | ≈5% | ≈\$1.5M | Adversarial robustness, red-teaming |
| **Value Alignment** | ≈5% | ≈\$1.5M | RLHF alternatives, preference learning |
| **Field Building** | ≈5% | ≈\$1.5M | Training programs, community |
| **Forecasting** | ≈3% | ≈\$1M | Timelines, capabilities |
| **Other** | ≈4% | ≈\$1M | Governance research, miscellaneous |

Note: The \$28M figure underestimates total 2024 spending as some approved grants had not been posted to the database at time of analysis. Coefficient acknowledged spending "roughly \$50 million" on technical AI safety in 2024.
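
The approximate dollar figures in the table follow directly from applying each share to the ~\$28M recorded total. A minimal sketch (plain Python, using only the figures from the table) reproduces them:

```python
# Rough reconstruction of the 2024 technical AI safety breakdown:
# apply each category's share to the ~$28M recorded in the public database.
recorded_total_m = 28  # $M recorded at the time of the LessWrong analysis

shares = {
    "Evaluations/Benchmarking": 0.68,
    "Interpretability": 0.10,
    "Robustness": 0.05,
    "Value Alignment": 0.05,
    "Field Building": 0.05,
    "Forecasting": 0.03,
    "Other": 0.04,
}

for area, share in shares.items():
    print(f"{area:26s} ~${recorded_total_m * share:5.1f}M")

# Shares sum to 1.0; the ~$50M Coefficient cites for 2024 additionally
# includes approved grants not yet posted to the database.
assert abs(sum(shares.values()) - 1.0) < 1e-9
```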

### Historical Major AI Safety Grants

| Grantee | Total (All Years) | Period | Notable Impact |
|---------|-------------------|--------|----------------|
| **MIRI** | \$14M+ | 2014-2024 | Agent foundations, embedded agency |
| **Redwood Research** | \$21M+ | 2021-2024 | Interpretability methods, control research |
| **Center for AI Safety** | \$15M+ | 2022-2024 | Compute cluster, training programs |
| **Future of Humanity Institute** | \$10M+ | 2015-2024 | Strategic analysis (closed 2024) |
| **Center for Human-Compatible AI** | \$8M+ | 2016-2024 | Stuart Russell's CHAI lab |
| **Anthropic** | \$0 directly | N/A | VC-funded; OP staff invested personally |
| **Long-Term Future Fund** | \$3.15M | 2019-2024 | Regranting to LTFF for distribution |

### 2025 Technical AI Safety RFP

In early 2025, Coefficient launched a [\$40 million Request for Proposals](https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/) across 21 research areas, with funding available for substantially more based on application quality. Key features:

**Priority Research Areas** (starred items are especially prioritized):

| Category | Research Areas |
|----------|---------------|
| **Alignment Foundations** | Alternatives to adversarial training*, alignment faking*, scalable oversight* |
| **Interpretability** | Mechanistic interpretability*, representation engineering, probing |
| **Evaluation** | Dangerous capability evaluations*, propensity evaluations*, automated red-teaming |
| **Robustness** | Adversarial robustness, distribution shift, specification gaming |
| **Governance-Adjacent** | AI governance research, responsible scaling policies |

**Grant Characteristics**:

| Aspect | Details |
|--------|---------|
| **Size Range** | API credits (\$1-10K) to seed funding for new orgs (\$1M+) |
| **Application** | 300-word expression of interest (EOI) |
| **Response Time** | Within 2 weeks of EOI submission |
| **Decision Timeline** | 4-8 weeks for full proposals |
| **Eligibility** | Academic researchers, nonprofits, independent researchers, new orgs |

## Regranting Ecosystem

Coefficient Giving supports multiple regranting platforms and mechanisms to achieve faster, more distributed funding decisions. This represents a deliberate strategy to complement slower direct grantmaking with nimble, expert-driven allocation.

### Funding Flow Through Regranting

<Mermaid chart={`
flowchart LR
    CG[Coefficient Giving] --> LTFF[Long-Term<br/>Future Fund]
    CG --> MF[Manifund<br/>Regrants]
    CG --> SFF[Survival &<br/>Flourishing Fund]

    LTFF --> I1[Individual<br/>Researchers]
    LTFF --> O1[Small Orgs]

    MF --> I2[Early-Stage<br/>Projects]
    MF --> I3[Emerging<br/>Talent]

    SFF --> O2[Established<br/>Orgs]

    style CG fill:#e6f3ff
    style LTFF fill:#ffffcc
    style MF fill:#ccffcc
    style SFF fill:#ffcccc
`} />

### Long-Term Future Fund (LTFF)

The [Long-Term Future Fund](https://funds.effectivealtruism.org/funds/far-future) is a committee-based grantmaking fund that receives significant support from Coefficient. About half of LTFF funding historically comes from Coefficient Giving donations.

| Aspect | Details |
|--------|---------|
| **Annual Volume** | ≈\$6.7M (2023) |
| **AI Safety Portion** | ≈\$4.3M (≈65% of grants) |
| **Grant Count** | ≈200 grants per year |
| **Median Grant** | ≈\$15-30K |
| **Decision Model** | Committee of fund managers |
| **Transparency** | High (public grant reports) |

LTFF grants tend toward smaller, faster decisions than direct Coefficient grants, serving researchers and projects that may not yet warrant Coefficient's full evaluation process.

### Manifund AI Safety Regranting

[Manifund](https://manifund.org/about/regranting) operates a distinct regranting model where individual experts receive budgets to make independent funding decisions. For 2025, Manifund raised \$2.25 million and announced their first 10 regrantors.

**Named 2025 Regrantors**:

| Regrantor | Budget | Background | Focus |
|-----------|--------|------------|-------|
| **Evan Hubinger** | \$450K | Anthropic AGI Safety Researcher, former LTFF manager | Technical AI safety |
| **Ryan Kidd** | ≈\$100K+ | Co-director of SERI MATS | Emerging talent |
| **Marius Hobbhahn** | ≈\$100K+ | CEO of Apollo Research | Evaluations, scheming |
| **Lisa Thiergart** | ≈\$100K+ | Director at SL5 Task Force, former MIRI | Governance |
| **Gavin Leech** | ≈\$100K+ | Cofounder Arb Research | Research reviews |
| **Dan Hendrycks** | ≈\$100K+ | Director of CAIS | Safety research |
| **Adam Gleave** | ≈\$100K+ | CEO of FAR AI | Adversarial robustness |

**Manifund Regranting Characteristics**:

| Feature | Details |
|---------|---------|
| **Speed** | Grant to bank account in under 1 week |
| **Typical Grant Size** | \$5K-\$50K |
| **Decision Authority** | Solo regrantor decisions |
| **Oversight** | Manifund reviews but doesn't approve |
| **Risk Tolerance** | High (encourages speculative grants) |

**Notable Manifund Grants**:

| Project | Amount | Regrantors | Impact |
|---------|--------|-----------|--------|
| Timaeus (DevInterp) | \$143,200 | Evan Hubinger, Rachel Weinberg, Marcus Abramovitch, Ryan Kidd | First funding; accelerated research months |
| ChinaTalk | \$37,000 | Joel Becker, Evan Hubinger | Coverage of China/AI, including DeepSeek |
| Shallow Review 2024 | \$9,000 | Neel Nanda, Ryan Kidd | Induced further \$5K from OpenPhil |

### Survival and Flourishing Fund (SFF)

The [Survival and Flourishing Fund](https://survivalandflourishing.fund/) uses a unique "S-process" algorithm for grant allocation, primarily funded by Jaan Tallinn (Skype co-founder). While Coefficient and SFF are independent, they share many grantees and strategic priorities.
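
SFF has published descriptions of the S-process; roughly, evaluators express how valuable marginal dollars are to each organization, and an algorithm allocates the pooled budget along those marginal-value curves. The snippet below is a deliberately simplified toy of that marginal-value idea, with hypothetical organizations and curves; it is not SFF's actual algorithm or code.

```python
# Toy illustration of marginal-value-based allocation (not SFF's S-process code).
# Each $10K increment goes to whichever hypothetical grantee has the highest
# remaining marginal value under a simple diminishing-returns curve.

def marginal_value(allocated_k: float, scale: float) -> float:
    """Value of the next $1K to a grantee, declining as its allocation grows."""
    return scale / (1.0 + allocated_k / 100.0)

value_scales = {"Org A": 3.0, "Org B": 2.0, "Org C": 1.5}  # hypothetical
allocations = {name: 0.0 for name in value_scales}

budget_k, step_k = 1_000.0, 10.0  # $1M budget, allocated in $10K increments
while budget_k > 0:
    best = max(value_scales,
               key=lambda n: marginal_value(allocations[n], value_scales[n]))
    allocations[best] += step_k
    budget_k -= step_k

print(allocations)  # grantees with higher value curves end up with larger shares
```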

| Aspect | Coefficient | SFF |
|--------|-------------|-----|
| **2024 Volume** | ≈\$650M total | ≈\$24M |
| **AI Safety %** | ≈12% | ≈86% (\$20M) |
| **Decision Model** | Staff + regrantors | S-process algorithm |
| **Speed** | Rolling | Twice yearly rounds |
| **Overlap** | High | High |

## How to Apply for Funding

### Direct Application to Coefficient

The most straightforward path for substantial funding requests:

| Step | Details | Timeline |
|------|---------|----------|
| **1. Check RFPs** | Review active [Requests for Proposals](https://coefficientgiving.org/apply-for-funding/) | Ongoing |
| **2. Submit EOI** | 300-word expression of interest describing project | N/A |
| **3. Initial Response** | Coefficient responds with interest level | 2 weeks |
| **4. Full Proposal** | If invited, submit detailed proposal with budget | 2-4 weeks to prepare |
| **5. Due Diligence** | Coefficient evaluates organization and proposal | 4-8 weeks |
| **6. Decision** | Grant approval or rejection | Total: 2-4 months |

**Tips for Applicants** (from Coefficient's guidance):

The bar is intentionally low for submitting expressions of interest. Key failure modes to avoid include not demonstrating understanding of prior work (read papers linked in relevant RFP sections) and not demonstrating that your team has prior experience with ML projects. Even uncertain proposals are worth submitting as the RFP is partly an experiment to understand funding demand.

### Via Regranting Platforms

Faster and more accessible for smaller grants:

| Platform | Best For | How to Apply |
|----------|----------|--------------|
| **Manifund** | \$5-50K projects, emerging researchers | Create project on [manifund.org](https://manifund.org), contact regrantors directly |
| **LTFF** | \$10-100K, established track record | Apply via [EA Funds](https://funds.effectivealtruism.org/funds/far-future) |
| **SFF** | \$100K+, established organizations | Apply during [S-process rounds](https://survivalandflourishing.fund/) |

### Finding Regrantors

Many regrantors are reachable through:

- **Direct outreach**: Email or social media (many are publicly active on Twitter/X, LessWrong)
- **EA communities**: EA Forum, Alignment Forum, local EA groups
- **Professional networks**: AI safety conferences (NeurIPS safety track, ICML), SERI MATS alumni
- **Manifund platform**: Create project and regrantors may proactively reach out

## Comparison with Other AI Safety Funders

| Aspect | Coefficient Giving | LTFF | SFF | Manifund |
|--------|-------------------|------|-----|----------|
| **2024 AI Safety Volume** | ≈\$50M | ≈\$4.3M | ≈\$20M | ≈\$2M |
| **Total Assets** | Good Ventures (\$12B+) | Pool of donors | Jaan Tallinn | Donors |
| **Decision Model** | Staff + regrantors | Committee | S-process algorithm | Individual regrantors |
| **Typical Grant Size** | \$100K-\$5M | \$15-100K | \$100K-\$2M | \$5-50K |
| **Speed (EOI to decision)** | 2-4 months | 1-3 months | 6 months (rounds) | Under 2 weeks |
| **Transparency** | Medium (public database) | High (detailed reports) | High (S-process public) | Very high (live on platform) |
| **Risk Tolerance** | Medium | Medium-High | Medium | High |
| **Best For** | Major grants, established orgs | Growing researchers | Established orgs | Early-stage, speculative |

### Funding Gap Analysis

According to an [overview of AI safety funding](https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation), total external philanthropic AI safety funding (≈\$100M annually) is dwarfed by:

| Comparison | Amount | Ratio to Safety Funding |
|------------|--------|------------------------|
| **Generative AI Investment (2023)** | ≈\$24B | 240:1 |
| **Frontier Lab Safety Budgets** | ≈\$500M+ combined | 5:1 |
| **US Government AI R&D** | ≈\$3B annually | 30:1 |

This funding gap is a persistent concern in the AI safety community, though Coefficient and other funders argue that talent constraints, not funding, are often the binding limitation.
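
The ratios above are simple quotients against the ~\$100M annual external safety total; a quick check using the table's figures:

```python
# Ratio of each comparison budget to annual external AI safety philanthropy (~$100M).
safety_funding_m = 100  # ~$100M/year, per the LessWrong funding overview

comparisons_m = {
    "Generative AI investment (2023)": 24_000,  # ~$24B
    "Frontier lab safety budgets": 500,         # ~$500M+ combined
    "US government AI R&D": 3_000,              # ~$3B annually
}

for name, amount_m in comparisons_m.items():
    print(f"{name:33s} {round(amount_m / safety_funding_m)}:1")
```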

## Critical Assessment

### Strengths

**Scale and Stability**: With Good Ventures' multi-billion dollar backing, Coefficient can make commitments that smaller funders cannot. This enables multi-year organizational support, compute grants, and substantial research programs.

**Strategic Sophistication**: The organization's cause selection methodology and research depth (public writeups, shallow investigations, deep dives) provides unusually transparent reasoning for grant decisions.

**Ecosystem Building**: By funding LTFF, Manifund, and other regranting mechanisms, Coefficient amplifies its reach while maintaining quality through trusted intermediaries.

**Hits-Based Giving**: Willingness to fund speculative research acknowledges that transformative progress often comes from unexpected directions, though this increases variance in outcomes.

### Limitations and Criticisms

**Funding Concentration**: With Coefficient representing ~60% of external AI safety funding, the field is heavily dependent on one organization's worldview and priorities. Critics note this could lead to "possible solutions being overlooked or assumptions no longer being questioned."

**Evaluation Focus**: The heavy focus on evaluations/benchmarking (68% of 2024 technical grants) has drawn criticism. As one researcher noted, "This looks much worse than I thought it would, both in terms of funding underdeployment, and in terms of overfocusing on evals." Critics argue AI Safety Institutes are already well-resourced for evaluation work.

**Alignment Neglect**: Some researchers express disappointment that "there's so little emphasis in this RFP about alignment, i.e. research on how to build an AI system that is doing what its developer intended it to do."

**Slow Spending**: Coefficient has [acknowledged](https://coefficientgiving.org/research/our-progress-in-2024-and-plans-for-2025/) that "in retrospect, our rate of spending was too slow, and we should have been more aggressively expanding support for technical AI safety work earlier." Key reasons cited include difficulty making qualified senior hires and disappointment with returns to past spending.

**Grants Database Limitations**: The public grants database "offers an increasingly inaccurate picture" of Coefficient's work, as it generally excludes funding advised from non-Good Ventures donors. Coefficient is considering deprecating it.

### Strategic Questions

| Question | Context |
|----------|---------|
| **Funding deployment rate** | Is \$50M/year appropriate given AI development pace? |
| **Evaluation vs alignment balance** | Should more funding go to core alignment research? |
| **Lab relationships** | How to maintain independence while funding lab-adjacent work? |
| **Multi-donor model** | Will expanding beyond Good Ventures change priorities? |
| **Talent vs funding constraint** | Is the field truly talent-constrained, or is this justifying underspending? |

## Sources and Citations

### Primary Sources
- [Coefficient Giving Official Website](https://coefficientgiving.org/)
- [Open Philanthropy Is Now Coefficient Giving (Nov 2025)](https://coefficientgiving.org/research/open-philanthropy-is-now-coefficient-giving/)
- [The Story Behind Our New Name](https://coefficientgiving.org/research/the-story-behind-our-new-name/)
- [Our Progress in 2024 and Plans for 2025](https://coefficientgiving.org/research/our-progress-in-2024-and-plans-for-2025/)
- [Navigating Transformative AI Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/)
- [Technical AI Safety Research RFP](https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/)

### Analysis and Commentary
- [An Overview of the AI Safety Funding Situation](https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation) - LessWrong
- [Brief Analysis of OP Technical AI Safety Funding](https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding) - LessWrong
- [Open Philanthropy Is Now Coefficient Giving - Inside Philanthropy](https://www.insidephilanthropy.com/home/open-philanthropy-is-now-coefficient-giving-heres-what-has-and-hasnt-changed)
- [How to Get a Grant from Coefficient Giving](https://www.insidephilanthropy.com/find-a-grant/grants-c/coefficient-giving) - Inside Philanthropy
- [Coefficient Giving - Wikipedia](https://en.wikipedia.org/wiki/Coefficient_Giving)

### Regranting Programs
- [Manifund AI Safety Regranting](https://manifund.org/about/regranting)
- [Manifund 2025 Regrants Announcement](https://forum.effectivealtruism.org/posts/fFEkKdoAKchSZm2RA/manifund-2025-regrants) - EA Forum
- [What Makes a Good Regrant?](https://manifund.substack.com/p/reviewing-our-ai-safety-regrants) - Manifund Substack
- [Long-Term Future Fund](https://funds.effectivealtruism.org/funds/far-future) - EA Funds
- [Survival and Flourishing Fund](https://survivalandflourishing.fund/)

### Grantee Information
- [Redwood Research Grant Page](https://www.openphilanthropy.org/grants/redwood-research-ai-safety-research-collaborations/) - Coefficient Giving
- [MIRI AI Safety Retraining Program Grant](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-ai-safety-retraining-program/) - Coefficient Giving
- [Center for AI Safety General Support](https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/) - Coefficient Giving

## External Links

- [Coefficient Giving Website](https://coefficientgiving.org/)
- [Coefficient Giving Grants Database](https://coefficientgiving.org/grants/)
- [Apply for Funding](https://coefficientgiving.org/apply-for-funding/)
- [Navigating Transformative AI Fund](https://coefficientgiving.org/funds/navigating-transformative-ai/)
- [EA Forum Topic: Coefficient Giving](https://forum.effectivealtruism.org/topics/coefficient-giving)
- [Manifund Regrants](https://manifund.org/rounds/regrants)
- [Long-Term Future Fund](https://funds.effectivealtruism.org/funds/far-future)
- [Survival and Flourishing Fund](https://survivalandflourishing.fund/)