Longterm Wiki

MATS is a well-documented 12-week fellowship program that has trained 213 AI safety researchers, with strong career outcomes (80% remain in alignment work) and research impact (160+ publications, 8,000+ citations). The program provides comprehensive support ($27k per scholar), and its alumni have made notable contributions to alignment research.

Related Wiki Pages

Top Related Pages

Approaches

Representation Engineering

Analysis

Short AI Timeline Policy Implications

Organizations

Anthropic · OpenAI · Apollo Research · Survival and Flourishing Fund · Alignment Research Center · Coefficient Giving

Concepts

Situational Awareness · Safety Orgs Overview

Other

Scalable Oversight · Interpretability · Ajeya Cotra · Evan Hubinger