Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Centre for Effective Altruism

MATS is a well-known AI safety fellowship program; this EA listing serves as an opportunity announcement for those seeking structured entry into alignment research careers.

Metadata

Importance: 55/100 · homepage · reference

Summary

The MATS (ML Alignment Theory Scholars) Program is a fellowship opportunity listed on the Effective Altruism opportunities board, designed to support researchers working on AI alignment and safety. It connects promising scholars with mentors and resources to accelerate technical AI safety research.

Key Points

  • MATS is a fellowship program focused on developing talent in AI alignment and safety research
  • Listed on the EA opportunities board, indicating it is considered high-impact for those seeking careers in AI safety
  • Program connects early-career researchers with experienced mentors in the field
  • Targets individuals interested in technical AI safety and alignment work
  • Represents a key pathway into the AI safety research community

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 2 KB
[Opportunities board](https://www.effectivealtruism.org/opportunities)

![MATS](https://v5.airtableusercontent.com/v3/u/51/51/1773979200000/KxI_3xl0YkA0dqZSJYyXOw/phoN-JU7FaLF0Xm-GZdBXOCUqqgYBZwSxKE3kA6lN-B0KReOkDOJ72PhkXRL-NVIEpRiKZ-ee1WFAavBoWQp9naeDSq2NGauYOniXqDT5FiIh_2P-_20uGO02S3sxjJwWuraw5412AjHfb_7y3nJzSbyda6hYJWC-njXO7mIqDc/9SbDEsx49NkTHDTAbrPIC455XtV3auESuA7rry76r3g)

## MATS

[MATS Program](https://www.matsprogram.org/)

[View](https://www.matsprogram.org/?utm_source=ea-opps)

Opportunity type

Independent project

Advising

Cause areas

AI safety & policy

Routes to impact

Skill-building & building career capital

Learning about important cause areas

Testing your fit for a certain career path

Direct high impact on an important cause

Skill set

Research

Deadline

2025-09-12

Deadline soon

Location

Berkeley, USA

Description

The ML Alignment & Theory Scholars (MATS) Program is a 12-week research and educational seminar in Berkeley, CA, connecting scholars with top mentors in AI alignment, governance, and security.

- Receive mentorship from world-class researchers, a dedicated research manager, and access to seminars, workshops, and networking events.
- Scholars are provided with a $14.4k stipend, $12k compute budget, travel, housing, office space, and catered meals.
- Conduct independent research, culminating in a symposium, with opportunities to join a 6-12 month extension program in London, UK.
- Alumni have gone on to leading organizations like Anthropic, OpenAI, DeepMind, and more; 80% now work in AI alignment.

For more details or to apply, visit [https://www.matsprogram.org/](https://www.matsprogram.org/).

_This text was generated by AI. If you notice any inconsistencies, please let us know using this [form](https://airtable.com/appngKrXQ99QHsAo4/shrrnPHkHnhTQNA4u)_

[View opportunity](https://www.matsprogram.org/?utm_source=ea-opps)


Resource ID: a143c02ae4cceeb9 | Stable ID: YTE4NjhlYW