Longterm Wiki

Manifund: The AI Safety Research Fund


An unfunded Manifund grant proposal (as of the available snapshot) to create a new AI safety grantmaking nonprofit; notable as an attempt to diversify funding sources beyond traditional EA infrastructure.

Metadata

Importance: 18/100

Summary

A proposed 501(c)(3) nonprofit grantmaking fund focused on AI safety, aiming to expand funding beyond traditional EA/longtermist circles through transparent, accessible grant rounds. The project seeks initial seed funding via Manifund to establish operations under fiscal sponsorship by Anti Entropy. It targets early-stage researchers, interpretability work, and unconventional alignment approaches.

Key Points

  • Aims to broaden AI safety donor base beyond EA circles with accessible campaigns from $10/month to major gifts
  • Targets early-stage and unconventional AI safety projects, including interpretability, developmental engineering, and moonshot alignment approaches
  • Secured fiscal sponsorship acceptance from Anti Entropy (501(c)(3)) contingent on raising initial funding
  • Proposes regular, transparent grant rounds with guaranteed decision timelines and public impact reports
  • Led by Jaeson Booker (software engineer/AI safety researcher) with advisors from AI Plans and independent alignment research

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| FTX Future Fund | Organization | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 30 KB

## The AI Safety Research Fund

[Technical AI safety](https://manifund.org/causes/tais)

![JaesonB avatar](https://manifund.org/_next/image?url=https%3A%2F%2Ffkousziwzbnkdkldjper.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Favatars%2Fc20e335a-c987-4310-993b-859ac25cd110%2F25f7ad99-83e1-2f51-438b-50adcbf5a85c&w=96&q=75)[Jaeson Booker](https://manifund.org/JaesonB)

Not funded · Grant

$0 raised

### **Project Summary**

The **AI Safety Research Fund** is a proposed 501(c)(3) nonprofit dedicated exclusively to advancing AI safety through targeted, transparent, and independent grantmaking. We have already been accepted for fiscal sponsorship by Anti Entropy, a 501(c)(3), contingent on raising initial funding. **This Manifund project is intended as a catalyst for securing initial donations so that we can set up operations and activate our fiscal sponsorship.** Our goal is to fix critical gaps in the current funding landscape by fundraising outside of traditional EA and longtermist circles. Our operations will be guided by responsiveness, accessibility to average donors, and a commitment to broadening the AI safety ecosystem.

### **What are this project's goals? How will you achieve them?**

**Goals:**

1. Increase the total amount of funding going to AI safety.

2. Streamline the process of applying for and distributing safety-focused grants.

3. Provide early-stage support to promising but underfunded or unconventional projects.

4. Expand the donor base beyond traditional Effective Altruism circles.


**How we'll achieve them:**

- Build a fully operational grantmaking nonprofit with fiscal sponsorship for immediate 501(c)(3) status.

- Run regular, predictable grant rounds with transparent criteria and guaranteed decision timelines.

- Offer seed funding and small grants to new organizations and individuals, especially early-stage and experimental projects.

- Engage a wide range of donors through accessible fundraising campaigns, ranging from $10/month contributions to major gifts.

- Operate with full transparency, publishing updates and impact reports for donors and the public.


### **How will this funding be used?**

Initial funding will go to paying a fiscal sponsor fee, setting up operations, and issuing the first grants.

**What we will immediately fund**

Many people who have completed upskilling programs are ready to contribute to AI safety but lack the funds. Some have received grants, but many others have not. With the starting donations, many of these promising new researchers can receive funding immediately. We have already made connections with people in the ecosystem who are ready to recommend highly skilled researchers, so we can begin issuing grants and building a track record right away. Any donation will make a difference, since every new, highly skilled researcher could have a substantial long-term impact.

Other promising avenues we wish to pursue are new forms of interpretability, such as developmental

... (truncated, 30 KB total)
Resource ID: 3ecd9cd31ff6a659 | Stable ID: MDExZDZjY2