Longterm Wiki

Manifund Regranting Program


Manifund's regranting model is relevant to AI safety funding infrastructure, offering an alternative funding pathway for researchers and projects outside traditional grant cycles.

Metadata

Importance: 42/100 | homepage | reference

Summary

Manifund's regranting program allows vetted individuals (regrantors) to distribute funding to AI safety and other cause area projects on behalf of donors. Regrantors have discretion over how to allocate their grant budgets, enabling faster and more flexible funding decisions than traditional grant processes.

Key Points

  • Regrantors receive a budget from donors and have discretion to fund projects they find promising without lengthy approval processes
  • The model aims to leverage domain experts' knowledge to identify high-impact opportunities that centralized funders might miss
  • Regrantors are publicly accountable, with their funding decisions and rationales visible on the platform
  • The program is particularly relevant to AI safety funding, enabling faster deployment of capital to emerging research
  • Manifund acts as a fiscal sponsor, handling legal and administrative overhead for grantees

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Coefficient Giving | Organization | 55.0 |
| Manifund | Organization | 50.0 |

2 FactBase facts citing this source

| Entity | Property | Value | As Of |
| --- | --- | --- | --- |
| Manifund | Total Funding Raised | 2,250,000 | 2025 |
| Manifund | Total Funding Raised | $2.3 million | 2025 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB
# AI Safety Regranting

We partner with _regrantors_: experts in the field of AI safety, each given an independent budget. Regrantors recommend grants based on their personal expertise; Manifund reviews these recommendations and distributes the funds.

[Donate to AI Safety Regranting](mailto:austin@manifund.org)

### Our regrantors

2023 · 2024 · 2025

$350K

[Neel Nanda](https://manifund.org/NeelNanda): Lead of mech interp team at Google DeepMind

$200K

[Joel Becker](https://manifund.org/joel_bkr): Member of Technical Staff at METR; CEO at Qally's

$150K

[Gavin Leech](https://manifund.org/gleech): Cofounder of Arb Research; fellow at Cosmos, LCFI, and Foresight.

$125K

[Richard Ngo](https://manifund.org/Richard): AI safety and governance researcher

$125K

[Ethan Josean Perez](https://manifund.org/ethanjperez): I lead the adversarial robustness team at Anthropic, where I'm hoping to reduce existential risks from AI systems. I helped to develop Retrieval-Augmented Generation (RAG), a widely used approach for augmenting large language models with other sources of information. I also helped to demonstrate that state-of-the-art AI safety training techniques do not ensure safety against sleeper agents. I received a best paper award at ICML 2024 for my work showing that debating with more persuasive LLMs leads to more truthful answers. I received my PhD from NYU under the supervision of Kyunghyun Cho and Douwe Kiela, funded by NSF and Open Philanthropy. Previously, I've spent time at DeepMind, Facebook AI Research, Montreal Institute for Learning Algorithms, and Google. I was also named one of Forbes's 30 Under 30 in AI.

$125K

alexandraabates (entry truncated)

... (truncated, 13 KB total)
Resource ID: kb-0c3cd3534fa36003 | Stable ID: NTM2M2YzYz