
MATS Funding - Manifund


This is the Manifund crowdfunding page for MATS, a key AI safety talent pipeline program based in Berkeley. It is useful for understanding the funding landscape and the scale of safety research training efforts.

Metadata

Importance: 45/100 · homepage

Summary

The MATS Program is an AI safety talent development initiative that pairs scholars with experienced researchers, providing stipends, housing, office space, and curriculum support. Having scaled from 30 to 60 scholars across five cohorts, MATS seeks $1M in general support funding via Manifund, with an estimated cost of $35,000 per scholar.
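
A back-of-the-envelope check on these figures (an illustrative calculation, not from the source page; it assumes the $35,000 per-scholar cost scales linearly and, as stated, excludes staff time):

$$
\frac{\$1{,}000{,}000}{\$35{,}000 \text{ per scholar}} \approx 28.6 \text{ scholars}
$$

That is, the full funding goal would cover the direct costs of roughly half of a 60-scholar cohort.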

Key Points

  • MATS connects promising scholars with AI safety research mentors, providing comprehensive support including housing, office space, seminars, and research coaching in Berkeley.
  • The program has scaled from 30 scholars and 5 mentors to 60 scholars and 15 mentors across five cohorts over two years.
  • Cost per scholar is approximately $35,000 for the full program duration, not including staff time.
  • Scholars are developed via a 'T-model' (depth, breadth, and research taste) with opt-in curriculum including workshops, peer groups, and networking.
  • Alumni feed into existing AI safety research teams, found new teams, or pursue independent research, expanding the overall talent pipeline.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 23 KB

## MATS Program

[Technical AI safety](https://manifund.org/causes/tais) [AI governance](https://manifund.org/causes/ai-gov) [EA Community Choice](https://manifund.org/causes/ea-community-choice) [Long-Term Future Fund](https://manifund.org/causes/ltff) [Global catastrophic risks](https://manifund.org/causes/gcr)

[Ryan Kidd](https://manifund.org/RyanKidd)

Active grant

$290,193 raised

$1,000,000 funding goal


### Project summary

The [ML Alignment & Theory Scholars (MATS) Program](https://matsprogram.org/) is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of [AI alignment](https://en.wikipedia.org/wiki/AI_alignment) and connect them with the Berkeley AI safety research community.

MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research coaching, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and complementary scholar support systems, greatly reducing the barriers to research mentorship.

### What are this project's goals and how will you achieve them?

- Find + accelerate high-impact [research scholars](https://matsprogram.org/alumni):

  - Pair scholars with research mentors via specialized mentor-generated selection questions;

  - Provide a thriving academic community for research collaboration, peer feedback, and social networking;

  - Develop scholars according to the “T-model of research” (depth/breadth/taste);

  - Offer opt-in curriculum elements, including seminars, research strategy workshops, 1-1 research coaching, peer study groups, and networking events.
- Support high-impact [research mentors](https://matsprogram.org/mentors):

  - Scholars are often good research assistants and future hires;

  - Scholars can offer substantive new critiques of alignment proposals;

  - Our operations and community free up valuable mentor time and increase scholar output.
- Help parallelize high-impact AI alignment research:

  - Find, develop, and refer scholars with strong research ability, value alignment, and epistemics;

  - Use alumni for peer mentoring in later cohorts;

  - Update mentor list and curriculum as the needs of the field change.

### How will this funding be used?

We are seeking general support funds for MATS, including future

... (truncated, 23 KB total)
Resource ID: 22ea79916902c60b | Stable ID: NWUzODMwYz