MATS (ML Alignment Theory Scholars Program)

Type: Safety Org

MATS is a well-documented 12-week fellowship program that has trained 213 AI safety researchers (as of May 2024), with strong career outcomes (80% of alumni working in alignment) and substantial research impact (160+ publications, 8,000+ citations). The program provides comprehensive support (roughly $27k in stipend and compute per scholar) and has produced notable alumni who have founded organizations such as Apollo Research or joined major AI labs.

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Program Scale | High | 98 scholars and 57 mentors in most recent cohort (MATS 8.0, Summer 2025)1 |
| Research Output | Strong | 160+ publications, 8,000+ citations, h-index of 40 over 4 years2 |
| Career Impact | Very High | 80% of alumni work in AI alignment; placements at Anthropic, OpenAI, DeepMind3 |
| Funding per Scholar | $27k | $15k stipend + $12k compute resources, plus housing and meals4 |
| Selectivity | Very Competitive | ≈15% acceptance rate; 40+ mentors with independent selection5 |
| Source | Link |
|---|---|
| Official Website | matsprogram.org |
| LessWrong | lesswrong.com |
| EA Forum | forum.effectivealtruism.org |

Overview

The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, transparency, and security, connecting them with the Berkeley AI safety research community.6 Founded in late 2021 and initially run as SERI MATS under the Stanford Existential Risks Initiative, the program later became independent and now operates 12-week in-person cohorts in Berkeley, California and London, United Kingdom.7

MATS pairs scholars with leading researchers in AI safety for approximately 1-2 hours of mentorship per week, supplemented by seminars, workshops, guest lectures, and dedicated research manager support.8 The program provides comprehensive support including a $15,000 living stipend, $12,000 in compute resources, private housing, catered meals, and office space.9 Scholars develop independent research projects that culminate in presentations at a Scholar Symposium, with selected fellows invited to continue for 6-12 month extensions.

Since its founding, MATS has trained over 446 researchers.10 The program has generated over 160 research publications with more than 8,000 citations, advancing agendas in mechanistic interpretability, sparse feature analysis, activation engineering, and AI safety evaluation.11 Alumni have gone on to leading organizations such as Anthropic, OpenAI, and Google DeepMind, and have founded new AI safety organizations such as Apollo Research; 80% of alumni now work in AI alignment, transparency, and security.12

History

Founding and Early Development

MATS originated as SERI MATS, an initiative under the Stanford Existential Risks Initiative (SERI) focused on AI safety research training.13 The program structure included a 4-week online upskilling phase (10 hours per week), a 2-week research sprint, and an 8-week intensive in-person program in Berkeley, California.14 Early mentors included Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao, and Stuart Armstrong.15

The program's core mission from inception was to train talented individuals for AI alignment research by addressing risks from unaligned AI through mentorship, training, logistics, and community access.16 The program eventually evolved into an independent organization, maintaining hubs in both Berkeley and London.17

Program Evolution and Growth

Over its first four years, MATS iterated significantly on its structure and curriculum:

Summer 2022: The first cohort produced notable outcomes, including scholars like Johannes Treutlein working under Evan Hubinger, who co-authored papers on predictive models that were later published at the UAI 2023 conference.18

Summer 2023 (4th Iteration): This cohort expanded to 60 scholars and 15 mentors, with 461 applicants (15% acceptance rate for the Training Phase).19 The program introduced the Scholar Research Plan (SRP) requiring a threat model, theory of change, and SMART plan, and implemented distinct phases: Training (Alignment 201), Research (Berkeley), and Extension (London/Berkeley).

Winter 2023-24 (5th Iteration): Further growth to 63 scholars and 20 mentors, with a significant curriculum change replacing Alignment 201 with custom curricula due to feedback.20 This included Neel Nanda's remote mechanistic interpretability curriculum (November 20-December 22) and AI Safety Strategy Discussions.

MATS 8.0 (Summer 2025): The program reached 98 scholars and 57 mentors, concluding with a symposium on August 22, 2025 featuring 10 spotlight talks and a poster session.21

By May 2024, MATS had supported 213 scholars and 47 mentors across five seasonal programs, presenting insights on talent selection and development at the TAIS 2024 conference.22

Program Structure and Support

Core Components

MATS operates as a 12-week in-person fellowship with several key elements:

Mentorship: Scholars spend approximately 1-2 hours per week working with their mentor, with more frequent communication via Slack.23 Each mentor conducts their own selection process, with some using work tasks and others conducting interviews. Interview topics varied among mentors but commonly included research ideas, career plans, technical machine learning questions, and prior experience, rather than behavioral or mathematical questions.24

Research Development: Scholars develop a Research Plan approximately one month into the program, outlining their threat model, theory of change, and specific deliverables.25 Dedicated research managers provide support for scoping projects, maintaining progress, and removing obstacles throughout the fellowship.26

Educational Programming: The program includes seminars and workshops 2-3 times per week, featuring speakers from organizations like Redwood Research, FAR AI, OpenAI, CHAI, and GovAI.27 Past speakers have included Buck Shlegeris, Adam Gleave, Neel Nanda, William Saunders, Andrew Critch, Lennart Heim, and Ajeya Cotra.

Research Tracks: MATS offers multiple specialization areas including technical governance, empirical research, policy & strategy, theory, and compute governance.28

Financial and Logistical Support

The program provides comprehensive material support, at an estimated total cost of approximately $35,000 per scholar:29

  • $15,000 stipend for living expenses (provided by AI Safety Support)30
  • $12,000 compute budget for experiments and evaluations31
  • Private housing for the full program duration in Berkeley or London32
  • Office space access and catered meals33
  • Travel reimbursement where applicable

Extension Opportunities

Selected scholars may continue for an additional 6 or 12 months through extension programs, with London as the main hub, though scholars can also participate from Berkeley, Boston, or Washington D.C. MATS arranges funding to cover monthly stipends and compute resources, and for scholars participating from an AI safety hub, funding also covers housing and office rent.34 To be considered, scholars need a strong research project and an endorsement from their mentors, after which an extension selection committee makes final selections.

Research Impact and Outcomes

Publications and Citations

Over four years, MATS has produced significant research output, with alumni generating over 160 publications that have received more than 8,000 citations, yielding an organizational h-index of 40.35 Notable publications include:

  • Steering Llama 2 via Contrastive Activation Addition (Outstanding Paper Award at ACL 2024)36
  • Conditioning Predictive Models: Risks and Strategies (published at UAI 2023)37
  • Incentivizing Honest Performative Predictions with Proper Scoring Rules (UAI 2023)
  • Neural Networks Learn Statistics of Increasing Complexity
  • Copy Suppression, Inverse Scaling, and The Reasons That Agents Act

In a survey of alumni from the first four programs (46% response rate), 78% reported their key publication "possibly" or "probably" would not have happened without MATS, while 10% said MATS accelerated their key publication by more than 6 months and 14% by 1-6 months.38

Research Agendas Developed

MATS scholars have advanced numerous technical agendas in AI safety:

  • Sparse autoencoders for AI interpretability39
  • Activation and representation engineering
  • Emergent misalignment detection
  • Inoculation prompting techniques
  • Developmental interpretability
  • Computational mechanics applications
  • Glitch token analysis
  • Situational awareness evaluations
  • Gradient routing methods
  • Externalized reasoning oversight
  • Formalizing natural abstractions

These research directions span mechanistic interpretability, sparse feature analysis, and studies of latent representations in AI systems.40

Career Outcomes

MATS has achieved strong career placement results for alumni:

Employment: 49% of surveyed alumni reported working or interning on AI alignment or control, and another 29% were conducting independent alignment research.41 Among earlier cohorts, 39% were hired by research organizations post-MATS, with 50% indicating MATS made them "much more likely" to be hired.42 An additional 22% pursued Master's or PhD programs.

Organizational Placements: Alumni have joined nearly every major AI safety initiative, including Anthropic, OpenAI, DeepMind, CHAI, and Redwood Research.43 Notable examples include:

  • Nina (Summer 2023, mentored by Evan Hubinger): Joined Anthropic as a research scientist; won ACL 2024 Outstanding Paper Award; later mentored SPAR and MATS cohorts44
  • Marius Hobbhahn (Winter 2022/23, mentored by Evan Hubinger): Founded and became CEO of Apollo Research, a London-based technical alignment organization focused on scheming evaluations and AI control45
  • Johannes Treutlein (Summer 2022, mentored by Evan Hubinger): Pursued PhD at CHAI; joined Anthropic in 2024 for alignment stress-testing46

New Organizations: Alumni have founded new AI safety initiatives including Apollo Research, Cadenza Labs, PRISM Eval, and have organized conferences on singular learning theory and developmental interpretability.47

Skill Development: 49% of alumni reported MATS increased their research or technical skills, while 38% gained legible career capital.48

Key People

Leadership

Ryan Kidd serves as Co-Executive Director of MATS and Co-Founder of the London Initiative for Safe AI (LISA).49 He was a scholar in MATS's first iteration (which had only 5 scholars total) and has since become a Manifund Regrantor and advisor to organizations including Halcyon Futures, Catalyze Impact, AI Safety ANZ, and Pivotal Research.

Christian Smith serves as Co-Executive Director and Co-Founder of LISA.50 He brings a background in particle physics and pedagogy from Stanford University, having conducted research at CERN and organized educational programs like the Uncommon Sense Seminar.

Laura Vaughan, a Thiel Fellow (2017) who studied physics at the University of Waterloo, brings experience in ML model dataset creation and training, management, entrepreneurship, full-stack software engineering, and biomedical research.51 She co-founded a stem cell cryogenics startup before joining MATS.

Notable Mentors

MATS mentors come from leading organizations including Anthropic, Google DeepMind, Redwood Research, OpenAI, MIRI, ARC (Alignment Research Center), CHAI, CAIS, and the Centre on Long-Term Risk.52 Selected examples include:

  • Marius Hobbhahn: CEO of Apollo Research, where he also leads the evals team; Apollo focuses on scheming, evals, and control; PhD in Bayesian ML; formerly worked on AI forecasting at Epoch53
  • Sam Bowman: Leads a research group working on AI alignment and welfare at Anthropic, with a particular focus on evaluation; Associate Professor of Computer Science and Data Science at NYU (on leave); has been studying neural network language models since 201254
  • Joe Benton: Member of the Alignment Science team at Anthropic, working on scalable oversight with interests in control, chain-of-thought monitoring, and alignment evaluations55
  • Arthur Conmy: Senior Research Engineer at Google DeepMind on the Language Model Interpretability team with Neel Nanda; focus on practically useful interpretability and related AI safety research; previously did early influential work on automating interpretability and finding circuits; formerly at Redwood Research56
  • Evan Hubinger: Provided mentorship for early SERI MATS trials and multiple cohorts; formerly at MIRI, now at Anthropic57
  • Neel Nanda: Senior Research Scientist leading the mechanistic interpretability team at Google DeepMind; a returning MATS mentor who has run Training Phases for scholars, including live research sessions, lectures on mechanistic interpretability and sparse autoencoders, and reading groups on papers such as Toy Models of Superposition; has mentored approximately 50 MATS scholars58

Funding

MATS receives grants from partner organizations to support its fellowship program.59 Financial support for scholars is coordinated through partner organizations rather than directly by MATS:

Primary Funding Sources:

  • AI Safety Support: Provides the $15,000 stipend for each fellow completing the full program (prorated for partial participation)60
  • MATS-arranged funding: Covers extension program costs including monthly stipends, compute, housing, and office rent for 6-12 month extensions61
  • Coefficient Giving: Provided grants to support the early SERI MATS trial program, including grants of $1,008,127 (April 2022), $1,538,000 (November 2022), and $428,942 (June 2023)62
  • Other supporters (2024): Foresight Institute, Survival and Flourishing Fund, Long-Term Future Fund, Craig Falls, and several donors via Manifund63

Per-Scholar Investment: The total cost per scholar is approximately $35,000 for the full program, based on recent cohorts of 60 scholars and 15 mentors.64 This includes the $15k stipend, compute resources, and costs for housing, meals, office space, and program administration.

Historical Funding: In the 2022 SERI MATS program, scholars received $6,000 after completing the training and research sprint phase and $16,000 at program completion, with all accommodation, office space, and event expenses covered. Ongoing discretionary funding was also available to promising scholars at the discretion of research mentors.65

Criticisms and Concerns

While MATS has achieved strong outcomes, program organizers and alumni have identified several concerns and limitations:

Field Growth Risks

Program organizers acknowledge concerns that MATS's appeal—particularly access to scaling lab mentors—could attract aspiring AI researchers not primarily focused on existential risk reduction, potentially introducing viewpoints that dilute the field's epistemic rigor.66 While organizers maintain high selection pressure to prioritize x-risk-motivated scholars, they recognize this tension between growth and field quality as they plan broader advertising.

Mentorship Dependency and Deference

Critics note that scholars might overly defer to mentors, failing to critically analyze assumptions and reducing independent thinking or new viewpoints in the field.67 This concern exists in tension with the opposite problem: insufficient mentorship could lead to excessive peer reliance among inexperienced researchers. MATS rarely accepts scholars without mentors, viewing mentorship as essential for knowledge transfer, which limits scalability and raises barriers since mentors have high entry requirements and capacity constraints.68

Opportunity Costs for Participants

Alumni feedback highlights specific challenges reported by MATS participants:69

  • Time allocation: Non-research tasks like writing proposals and preparing talks divert effort from core research
  • Career uncertainty: One alumnus noted MATS pushed them into technical research with less than 70% confidence it was positive; another preferred their prior ML engineering role for deeper technical challenges
  • Relationship strain: Some scholars reported impacts on prior commitments, such as strained relationships with PhD supervisors when pausing unrelated work
  • Emotional fit: Some felt out of place in the AI safety community or experienced slowed involvement
  • Grant stress: Short-term funding uncertainty led some to doubt their counterfactual impact when applying to AI safety roles

Selection Challenges

With approximately 15% acceptance rates and 40+ mentors conducting independent selection, even proficient researchers and engineers with AI safety experience frequently receive rejections due to mentor capacity limits rather than candidate quality.70 Application processes involve mentor-specific interviews on ML experience, research proposals, conceptual questions, and experiments, with rejections common even after strong interviews.

Alumni feedback indicates that scholars with prior research experience often rate MATS superior to alternatives like independent research or "hub-hopping," though some note they would have preferred later participation after building more ML skills through programs like ARENA.71

Key Uncertainties

  • Scalability: Can MATS maintain research quality while expanding beyond current mentor capacity constraints, given the program's emphasis on apprenticeship-style learning?
  • Counterfactual Impact: What proportion of alumni would have entered AI safety careers through alternative pathways, and how much does MATS accelerate versus redirect talent?
  • Optimal Program Length: Is the 12-week duration optimal for research skill development, or would longer or shorter programs better serve different scholar populations?
  • Field Dilution Risks: As MATS expands and advertises more broadly, how can the program maintain epistemic standards while increasing accessibility?
  • Extension Selection: With ~70% of scholars historically advancing to extensions, what criteria best predict long-term research impact?
  • Mentor-Scholar Matching: How can the program optimize matching between mentors and scholars to balance deference concerns against knowledge transfer benefits?

Sources

Footnotes

  1. MATS 8.0 Research Projects (Summer 2025)
  2. MATS Program Homepage
  3. MATS Program Homepage
  4. MATS Program Homepage
  5. MATS Summer 2023 Retrospective
  6. MATS Program - LessWrong
  7. MATS Summer 2023 Retrospective
  8. Machine Learning Alignment Theory Scholars - Idealist
  9. MATS Program Homepage
  10. MATS: A talk on talent selection and development
  11. MATS Program Homepage
  12. MATS Program - Effective Altruism
  13. MATS Summer 2023 Retrospective
  14. SERI ML Alignment Theory Scholars Program 2022
  15. SERI ML Alignment Theory Scholars Program 2022
  16. Machine Learning Alignment Theory Scholars - Idealist
  17. MATS Summer 2023 Retrospective
  18. MATS Alumni
  19. MATS Summer 2023 Retrospective
  20. MATS Winter 2023-24 Retrospective
  21. MATS 8.0 Research Projects
  22. MATS: A talk on talent selection and development
  23. Machine Learning Alignment Theory Scholars - Idealist
  24. My experience applying to MATS 6.0
  25. MATS Program Homepage
  26. MATS Program Homepage
  27. Machine Learning Alignment Theory Scholars - Idealist
  28. MATS Summer 2026 Program
  29. MATS Funding - Manifund
  30. MATS Program Homepage
  31. MATS Program Homepage
  32. MATS Program Homepage
  33. MATS Program Homepage
  34. MATS FAQ
  35. MATS Program Homepage
  36. MATS Alumni Impact Analysis
  37. MATS Alumni Impact Analysis
  38. MATS Alumni Impact Analysis
  39. MATS Program Homepage
  40. MATS Program Homepage
  41. MATS Alumni Impact Analysis
  42. MATS: A talk on talent selection and development
  43. MATS: A talk on talent selection and development
  44. MATS Alumni
  45. MATS Alumni
  46. MATS Alumni
  47. MATS Alumni Impact Analysis
  48. MATS Alumni Impact Analysis
  49. Ryan Kidd - TAIS 2024
  50. MATS Team
  51. MATS Winter 2023-24 Retrospective
  52. MATS Mentors
  53. MATS Mentors
  54. MATS Mentors
  55. MATS Mentors
  56. MATS Mentors
  57. ML Alignment Theory Program under Evan Hubinger
  58. MATS Winter 2023-24 Retrospective
  59. MATS Funding - Extruct
  60. MATS Program Homepage
  61. MATS FAQ
  62. ML Alignment Theory Program under Evan Hubinger
  63. MATS Team
  64. MATS Funding - Manifund
  65. SERI ML Alignment Theory Scholars Program 2022
  66. How MATS addresses mass movement building concerns
  67. How MATS addresses mass movement building concerns
  68. How MATS addresses mass movement building concerns
  69. MATS Alumni Impact Analysis - EA Forum
  70. My experience applying to MATS 6.0
  71. MATS Alumni Impact Analysis - EA Forum

References

Frequently asked questions page for the MATS Summer research program, a 12-week in-person AI safety research fellowship in Berkeley, California. The program provides mentorship from leading AI safety researchers, a $15,000 stipend plus housing and meals, and optional 6-12 month extensions for select fellows.

Claims (2)
MATS arranges funding to cover monthly stipends and compute resources, and for scholars participating from an AI safety hub, funding also covers housing and office rent. To be considered, scholars need a strong research project and an endorsement from their mentors, after which an extension selection committee makes final selections.
Accurate100%Feb 22, 2026
To get into the 6-12 month extension program, MATS scholars need a strong research project and an endorsement from their mentors. Based on these and other supplementary information available, scholars are selected by an extension selection committee for the first six months.
- MATS-arranged funding: Covers extension program costs including monthly stipends, compute, housing, and office rent for 6-12 month extensions
Accurate100%Feb 22, 2026
No, MATS will arrange funding for scholars for the extension program covering a monthly stipend and compute. For participation from an AI safety hub, funding also covers housing and office rent.

The MATS Program is an AI safety talent development initiative that pairs scholars with experienced researchers, providing stipends, housing, office space, and curriculum support. Having scaled from 30 to 60 scholars across five cohorts, MATS seeks $1M in general support funding via Manifund, with an estimated cost of $35,000 per scholar.

Claims (2)
The program provides comprehensive material support valued at approximately \$35,000 per scholar:
Per-Scholar Investment: The total cost per scholar is approximately \$35,000 for the full program, based on recent cohorts of 60 scholars and 15 mentors. This includes the \$15k stipend, compute resources, and costs for housing, meals, office space, and program administration.
3MATS Summer 2026 Programmatsprogram.org

MATS (Machine Learning Alignment Theory Scholars) Summer 2026 is a fellowship program running June-August 2026, connecting 120 fellows with 100 mentors from leading AI safety organizations including Anthropic, UK AISI, Redwood Research, and ARC. Fellows collaborate on AI safety research across streams including empirical alignment, interpretability, policy & strategy, technical governance, and compute infrastructure, with potential 6+ month extensions.

Claims (2)
Research Tracks: MATS offers multiple specialization areas including technical governance, empirical research, policy & strategy, theory, and compute governance.
Accurate100%Feb 22, 2026
MATS supports researchers in a variety of research tracks, which includes technical governance, empirical, policy & strategy, theory, and compute governance.
(footnote definition only, no inline reference found)
4SERI ML Alignment Theory Scholars Program 2022LessWrong·Ryan Kidd, Victor Warlop & ozhang·2022

Announcement for the second iteration of the SERI ML Alignment Theory Scholars (MATS) Program, a structured summer initiative pairing aspiring AI alignment researchers with established mentors including Evan Hubinger, Beth Barnes, and John Wentworth. The program runs in Berkeley with a four-week upskilling phase, two-week research sprint, and eight-week intensive research period, offering substantial funding and full expense coverage.

★★★☆☆
Claims (2)
MATS originated as SERI MATS, an initiative under the Stanford Existential Risks Initiative (SERI) focused on AI safety research training. The program structure included a 4-week online upskilling phase (10 hours per week), a 2-week research sprint, and an 8-week intensive in-person program in Berkeley, California. Early mentors included Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao, and Stuart Armstrong.
Accurate100%Feb 22, 2026
The Stanford Existential Risks Initiative ( SERI ) recently opened applications for the second iteration of the ML Alignment Theory Scholars (MATS) Program , which aims to help aspiring alignment researchers enter the field by pairing them with established research mentors and fostering an academic community in Berkeley, California over the summer. Current mentors include Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao and Stuart Armstrong. Over four weeks, the participants will develop an understanding of a research agenda at the forefront of AI alignment through online readings and cohort discussions, averaging 10 h/week from Jun 6 to Jul 1. After this initial upskilling period, the scholars will be paired with an established AI alignment researcher for a two-week “research sprint” to test fit from Jul 4 to Jul 15. Assuming all goes well, scholars will be accepted into an eight-week intensive research program in Berkeley, California over the US summer break (Jul 25 to Sep 16).
Ongoing discretionary funding was also available to promising scholars at the discretion of research mentors.
Accurate100%Feb 22, 2026
We are happy to continue providing funding after the two month period to promising scholars, at the discretion of our research mentors.
5MATS Winter 2023-24 RetrospectiveLessWrong·utilistrutil et al.·2024

Detailed retrospective on the fifth iteration of the ML Alignment & Theory Scholars (MATS) program, covering 63 scholars and 20 mentors. Reports high scholar satisfaction (9.2/10 NPS), strong mentor assessments of scholar capabilities, and measurable skill development in technical depth, research taste, and theory of change. Documents operational changes and reduced career obstacles for participants post-program.

★★★☆☆
Claims (1)
- Neel Nanda: Senior Research Scientist leading the mechanistic interpretability team at Google DeepMind; a returning MATS mentor who has run Training Phases for scholars including live research sessions, lectures on mechanistic interpretability and sparse autoencoders, and reading groups on papers such as Toy Models of Superposition; has approximately 50 MATS alumni
Minor issues85%Feb 22, 2026
Winter Program Overview Schedule The Winter 2023-24 Program had three phases. Training Phase : Neel Nanda’s scholars participated in a month-long remote curriculum, culminating in a Research Sprint, which informed acceptance decisions for the Research Phase.

The source does not explicitly state that Neel Nanda is a Senior Research Scientist at Google DeepMind, although it does mention his name in the context of the MATS program. The source does not explicitly state that Neel Nanda has approximately 50 MATS alumni.

6MATS Alumni Impact AnalysisLessWrong·utilistrutil et al.·2024

A survey-based impact analysis of 72 alumni (46% response rate) from MATS program cohorts Winter 2021-22 through Summer 2023, showing strong alignment field engagement: 78% working on AI alignment, 68% publishing alignment research, and 63% meeting research collaborators through the program. The report provides evidence that MATS effectively builds career capital and facilitates research collaboration for early-career AI safety professionals.

★★★☆☆
Claims (6)
- Steering Llama 2 via Contrastive Activation Addition (Outstanding Paper Award at ACL 2024)
Unsupported0%Feb 22, 2026
Other publications included: Towards a Situational Awareness Benchmark for LLMs ; Steering Llama 2 via Contrastive Activation Addition ; Invulnerable Incomplete Preferences: A Formal Statement ; Representation Engineering: A Top-Down Approach to AI Transparency ; The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning ; Linear Representations of Sentiment ; Limitations of Agents Simulated by Predictive Models .
Employment: 49% of surveyed alumni reported working or interning on AI alignment or control, with 29% conducting independent alignment research. Among earlier cohorts, 39% were hired by research organizations post-MATS, with 50% indicating MATS made them "much more likely" to be hired. An additional 22% pursued Master's or PhD programs.
Minor issues85%Feb 22, 2026
49% are "Working/interning on AI alignment/control." 29% are "Conducting alignment research independently."

The claim states that 49% of surveyed alumni reported working or interning on AI alignment or control, which is accurate. However, it also mentions that 29% conducted independent alignment research. The source states that 78% of respondents described their current work as either "Working/interning on AI alignment/control" or "Conducting alignment research independently." The claim mentions that among earlier cohorts, 39% were hired by research organizations post-MATS, with 50% indicating MATS made them "much more likely" to be hired. This information is not directly supported by the provided source text. The source mentions that 46% of alumni would benefit from job recommendations, but it does not specify the percentage hired by research organizations or the impact of MATS on their likelihood of being hired. The claim states that an additional 22% pursued Master's or PhD programs. This information is not directly supported by the provided source text. The source mentions the highest academic degree of the respondents, but it does not specify the percentage who pursued Master's or PhD programs after MATS.

New Organizations: Alumni have founded new AI safety initiatives including Apollo Research, Cadenza Labs, PRISM Eval, and have organized conferences on singular learning theory and developmental interpretability.
Accurate100%Feb 22, 2026
Multiple alumni mentioned starting new research organizations to tackle a specific AI safety research agenda. Here is a selection of responses, and how MATS influenced them: “ Apollo Research would counterfactually not exist without MATS” Timaeus Outcome: “Founding a research org [ Timaeus ] based on the above research agenda.” Influence: “Very hard to say. Something like this agenda would have probably come into existence, but we probably accelerated it by more than a year.” Cadenza Labs Outcome: “Founding new AI alignment org ( Cadenza Labs )” Influence: "Probably no version of this would have happened otherwise." PRISM Eval Outcome: “Founding a new AI alignment org!” [ PRISM Eval ] Influence: "Sped up outcome by >6 months, Quality of outcome much higher, Possibly no version of this would have happened otherwise." SLT and alignment conferences Outcome: “Organizing two conferences on singular learning theory and alignment.” Influence: "Probably no version of this would have happened otherwise, Sped up outcome by <6 months, Sped up outcome by >6 months."
+3 more claims

This page lists the mentors affiliated with the ML Alignment Theory Scholars (MATS) program, a research training initiative connecting emerging AI safety researchers with experienced mentors. The mentors work across AI alignment, interpretability, transparency, and AI security. The page also invites applications from prospective mentors.

Claims (5)
MATS mentors come from leading organizations including Anthropic, Google DeepMind, Redwood Research, OpenAI, MIRI, ARC (Alignment Research Center), CHAI, CAIS, and the Centre on Long-Term Risk. Selected examples include:
Minor issues85%Feb 22, 2026
MATS mentors are advancing the frontiers of AI alignment, transparency, and security

The claim mentions CAIS, but the source does not. The claim mentions the Centre on Long-Term Risk, but the source mentions the Center for the Governance of AI (GovAI). The claim mentions ARC (Alignment Research Center), but the source mentions Apollo Research.

- Marius Hobbhahn: CEO of Apollo Research, where he also leads the evals team; Apollo focuses on scheming, evals, and control; PhD in Bayesian ML; formerly worked on AI forecasting at Epoch
Accurate100%Feb 22, 2026
Marius Hobbhahn Apollo Research , CEO — Marius Hobbhahn is the CEO of Apollo Research, where he also leads the evals team. Apollo is an evals research organization focused on scheming, evals and control. Prior to starting Apollo, he did a PhD in Bayesian ML and worked on AI forecasting at Epoch.
- Sam Bowman: Leads a research group working on AI alignment and welfare at Anthropic, with a particular focus on evaluation; Associate Professor of Computer Science and Data Science at NYU (on leave); has been studying neural network language models since 2012
Accurate100%Feb 22, 2026
Sam Bowman Anthropic , Member of Technical Staff — Sam Bowman leads a research group working on AI alignment and welfare at Anthropic, with a particular focus on evaluation. Sam is also on leave from NYU as an Associate Prof. of Computer Science and Data Science. He has been studying neural network language models since 2012.
+2 more claims
8MATS Program - LessWrongLessWrong·Blog post

The MATS Program is an independent AI safety research and education initiative that mentors emerging researchers through workshops, talks, and connections to the SF Bay Area and London AI safety communities. It aggregates outputs from multiple cohorts, including notable work on mechanistic interpretability, sparse autoencoders, and alignment theory. MATS serves as a key pipeline for developing the next generation of alignment researchers.

★★★☆☆
Claims (1)
The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, transparency, and security, connecting them with the Berkeley AI safety research community. Founded in late 2021 and initially run as SERI MATS under the Stanford Existential Risks Initiative, the program later became independent and now operates 12-week in-person cohorts in Berkeley, California and London, United Kingdom.
Minor issues85%Feb 22, 2026
ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment , and connect them with the Berkeley AI safety research community.

The claim mentions the program covers AI alignment, transparency, and security, but the source only mentions AI alignment. The claim states the program was founded in late 2021, but the source does not provide this information. The claim states the program operates 12-week in-person cohorts in Berkeley, California and London, United Kingdom, but the source does not provide this information.

9MATS Alumni Impact Analysis - EA ForumEA Forum·utilistrutil·2024·Blog post

An impact evaluation of 72 alumni from the first four MATS (Machine Learning for Alignment Taskforce) cohorts (2021-2023), finding that 78% work on AI alignment research, 68% published alignment research, and 63% found research collaborators through the program. The report demonstrates MATS's effectiveness as a pipeline for building AI safety research careers and career capital.

★★★☆☆
Claims (2)
Alumni feedback highlights specific challenges reported by MATS participants:
Unsupported0%Feb 22, 2026
Others offered useful criticism: “If not for SERI MATS, I would have probably spent more time upskilling in coding. It wouldn't be useful for ARC, but plausibly it would be useful later. On the other hand, if not for SERI MATS, I would have likely spent some time on some agent foundations work later, so it's probably about the same time I would have spent on programming overall.” “The intermediate steps of MATS of writing a research proposal and also preparing a 5 min talk took quite a bit of time away from research.” “It slightly impacted my relationship with my PhD supervisor as I left for a bit to do stuff completely unrelated to my phd” “If not for MATS I likely wouldn&#x27;t have gone into technical research, as it now seems I will. It&#x27;s unclear (<70%) if the sign here is positive.” “I plausibly would have improved more as an Engineer (i.e. not research) had I stayed at my job as a Machine Learning Engineer as the work I used to work was deeper down the eng stack / had harder technical problems vs the faster paced / higher level work that I did at Mats.” “If having done MATS already decreases my chance of doing it again this summer, it might have [the] same negative effect. I think if i could choose between only attending MATS 23 or 24, I would have chosen 24 because now I got more ML skills through ARENA.” “The overall MATS experience was slightly negative, making me feel like I didn&#x27;t have my place in the AIS community (but maybe a realization for the better?), and slowed down my interest and involvement (also not only MATS&#x27; fault)”

The source does not contain any alumni feedback highlighting specific challenges reported by MATS participants.

Alumni feedback indicates that scholars with prior research experience often rate MATS superior to alternatives like independent research or "hub-hopping," though some note they would have preferred later participation after building more ML skills through programs like ARENA.
Minor issues85%Feb 22, 2026
“If having done MATS already decreases my chance of doing it again this summer, it might have [the] same negative effect. I think if i could choose between only attending MATS 23 or 24, I would have chosen 24 because now I got more ML skills through ARENA.”

The source does not explicitly mention 'hub-hopping' as an alternative to MATS. The source does not explicitly state that alumni with prior research experience rate MATS superior to alternatives.

The MATS (Machine Learning Alignment Theory Scholars) alumni page showcases researchers who have gone through the program and are now contributing to AI safety research globally. It highlights career trajectories and research outputs of alumni who have joined organizations like Anthropic, MIRI, AI Futures Project, and others. The page demonstrates MATS's role as a key pipeline for developing AI safety talent.

Claims (4)
Summer 2022: The first cohort produced notable outcomes, including scholars like Johannes Treutlein working under Evan Hubinger, who co-authored papers on predictive models that were later published at the UAI 2023 conference.
Accurate100%Feb 22, 2026
Johannes completed the MATS Summer 2022 Cohort under the mentorship of Evan Hubinger (then a Research Fellow at MIRI ). As a result of MATS, Johannes co-authored the paper Conditioning Predictive Models: Risks and Strategies with Evan as a lead author. He also published a follow-up paper on Incentivizing honest performative predictions with proper scoring rules at the UAI 2023 conference.
- Nina (Summer 2023, mentored by Evan Hubinger): Joined Anthropic as a research scientist; won ACL 2024 Outstanding Paper Award; later mentored SPAR and MATS cohorts
Accurate100%Feb 22, 2026
Nina participated in the MATS summer 2023 cohort under the mentorship of Evan Hubinger. As a result of MATS, she published the paper Steering Llama 2 via Contrastive Activation Addition which won an Outstanding Paper Award at ACL 2024. After MATS, Nina joined Anthropic as a research scientist, and has mentored a number of SPAR and MATS cohorts working on LLM alignment projects.
- Marius Hobbhahn (Winter 2022/23, mentored by Evan Hubinger): Founded and became CEO of Apollo Research, a London-based technical alignment organization focused on scheming evaluations and AI control
Accurate100%Feb 22, 2026
Marius took part in MATS Winter 2022/23 Cohort under the mentorship of Evan Hubinger (Anthropic). He published multiple pieces on mechanistic interpretability on LessWrong including work on maximum data dimension and double descent. He is currently the CEO and Director of Apollo Research , a new London-based technical alignment organization.
+1 more claims
11MATS Winter 2023-24 RetrospectiveEA Forum·utilistrutil·2024

Comprehensive retrospective on the fifth iteration of the ML Alignment & Theory Scholars (MATS) program, covering 63 scholars mentored by 20 AI safety researchers. The report evaluates program outcomes, scholar satisfaction, and operational changes, while identifying improvements for future cohorts including advisory boards for mentor selection and expanded AI governance mentorship.

★★★☆☆
Claims (2)
Winter 2023-24 (5th Iteration): Further growth to 63 scholars and 20 mentors, with a significant curriculum change replacing Alignment 201 with custom curricula due to feedback. This included Neel Nanda's remote mechanistic interpretability curriculum (November 20-December 22) and AI Safety Strategy Discussions.
Accurate100%Feb 22, 2026
In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs. Summary Key details about the Winter Program: The four main changes we made after our Summer program were: Reducing our scholar stipend from $40/h to $30/h based on alumni feedback; Transitioning Scholar Support to Research Management ; Using the full Lighthaven campus for office space as well as housing; Replacing Alignment 201 with AI Strategy Discussions .
Laura Vaughan, a Thiel Fellow (2017) who studied physics at the University of Waterloo, brings experience in ML model dataset creation and training, management, entrepreneurship, full-stack software engineering, and biomedical research. She co-founded a stem cell cryogenics startup before joining MATS.

This is the Idealist nonprofit directory listing for Machine Learning Alignment Theory Scholars (MATS), a Berkeley-based organization that runs a research training program for aspiring AI safety researchers. The page serves as an organizational profile on the Idealist job and volunteer platform, providing basic nonprofit information for those seeking to engage with or apply to MATS.

Claims (4)
The program's core mission from inception was to train talented individuals for AI alignment research by addressing risks from unaligned AI through mentorship, training, logistics, and community access. The program eventually evolved into an independent organization, maintaining hubs in both Berkeley and London.
Minor issues85%Feb 22, 2026
MATS aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence (AI). We believe that ambitious researchers from a variety of backgrounds have the potential to meaningfully contribute to the field of alignment research. We aim to provide the training, logistics, and community necessary to aid this transition.

The source states the organization joined Idealist in September 2025, which is in the future. This could be a typo or an error in the source itself. The claim that the organization maintains hubs in both Berkeley and London is not explicitly stated in the source. The source only mentions Berkeley, CA as the location.

Mentorship: Scholars receive approximately 1-2 hours per week of working with their mentor, with more frequent communication via Slack. Each mentor conducts their own selection process, with some using work tasks and others conducting interviews.
Minor issues90%Feb 22, 2026
During the Research phase, each scholar spends ~1-2 hours/week working with their mentor, with more frequent communication via Slack.

The source does not mention that each mentor conducts their own selection process, with some using work tasks and others conducting interviews. The source only mentions that the extent of mentor support will vary depending on the project and the mentor.

Educational Programming: The program includes seminars and workshops 2-3 times per week, featuring speakers from organizations like Redwood Research, FAR AI, OpenAI, CHAI, and GovAI. Past speakers have included Buck Shlegeris, Adam Gleave, Neel Nanda, William Saunders, Andrew Critch, Lennart Heim, and Ajeya Cotra.
Accurate100%Feb 22, 2026
Educational seminars and workshops will be held 2-3 times per week. Previously, speakers have included Buck Shlegeris from Redwood Research , Adam Gleave from FAR AI , Neel Nanda from Google DeepMind , William Saunders from OpenAI , Andrew Critch from CHAI , Lennart Heim from GovAI , Ajeya Cotra from Open Philanthropy , and more.
+1 more claims
13MATS Program - Effective AltruismCentre for Effective Altruism

The MATS (ML Alignment Theory Scholars) Program is a fellowship opportunity listed on the Effective Altruism opportunities board, designed to support researchers working on AI alignment and safety. It connects promising scholars with mentors and resources to accelerate technical AI safety research.

★★★☆☆
Claims (1)
Since its founding, MATS has trained over 446 researchers. The program has generated over 160 research publications with more than 9,000 citations, advancing agendas in mechanistic interpretability, sparse feature analysis, activation engineering, and AI safety evaluation. Alumni have gone on to leading organizations like Anthropic, OpenAI, and Google DeepMind, as well as founded new AI safety organizations like Apollo Research, with 80% of alumni now working in AI alignment, transparency, and security.
Minor issues85%Feb 22, 2026
Alumni have gone on to leading organizations like Anthropic, OpenAI, DeepMind, and more; 80% now work in AI alignment.

The source does not mention the exact number of researchers trained (446), the number of research publications (160), or the number of citations (9,000). The source mentions 'DeepMind' instead of 'Google DeepMind'. The source does not mention 'Apollo Research'.

14MATS Summer 2023 RetrospectiveLessWrong·utilistrutil et al.·2023

A retrospective evaluation of the fourth iteration of the MATS (ML Alignment & Theory Scholars) program, which supported 60 emerging AI safety researchers under 15 mentors in Summer 2023. The report assesses program outcomes including scholar satisfaction (8.9/10 recommendation likelihood), technical skill development, research independence, and career trajectory impacts. It highlights both successes in networking and professional growth, and challenges such as publication barriers for scholars pursuing alignment careers.

★★★☆☆
Claims (5)
| Selectivity | Very Competitive | ≈15% acceptance rate; 40+ mentors with independent selection |
Minor issues90%Feb 22, 2026
Our initial application process for scholars was highly competitive. Of 461 applicants, 69 were accepted for the Training Phase (acceptance rate ≈ 15%).

The claim states '40+ mentors with independent selection', but the source says '15 research mentors'. The claim says '≈15% acceptance rate', but the source says 'Of 461 applicants, 69 were accepted for the Training Phase (acceptance rate ≈ 15%)'.

The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, transparency, and security, connecting them with the Berkeley AI safety research community. Founded in late 2021 and initially run as SERI MATS under the Stanford Existential Risks Initiative, the program later became independent and now operates 12-week in-person cohorts in Berkeley, California and London, United Kingdom.
Minor issues85%Feb 22, 2026
The ML Alignment & Theory Scholars program (MATS, formerly SERI MATS) is an education and research mentorship program for emerging AI safety researchers.

The claim that the program connects scholars with the Berkeley AI safety research community is not explicitly stated in the source, although it is implied. The claim that the program was founded in late 2021 is not explicitly stated in the source. The claim that the program operates 12-week in-person cohorts is not explicitly stated in the source, although the summer program ran from June to September.

MATS originated as SERI MATS, an initiative under the Stanford Existential Risks Initiative (SERI) focused on AI safety research training. The program structure included a 4-week online upskilling phase (10 hours per week), a 2-week research sprint, and an 8-week intensive in-person program in Berkeley, California. Early mentors included Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao, and Stuart Armstrong.
Minor issues85%Feb 22, 2026
The ML Alignment & Theory Scholars program (MATS, formerly SERI MATS) is an education and research mentorship program for emerging AI safety researchers.

The claim mentions a 2-week research sprint, but the source refers to a 'Research Phase' that lasted longer. The claim mentions an 8-week in-person program, but the source indicates the 'Research Phase' was held from July 10 to September 1, which is closer to 7 weeks. The claim lists early mentors, but the source only lists mentors from the Summer 2023 program.

+2 more claims

This talk discusses the MATS (ML Alignment Theory Scholars) program's approach to identifying, selecting, and developing talent for AI safety research. It covers the program's philosophy on what makes promising AI safety researchers and how structured mentorship and training can accelerate their development.

★★☆☆☆
Claims (4)
Since its founding, MATS has trained over 446 researchers. The program has generated over 160 research publications with more than 9,000 citations, advancing agendas in mechanistic interpretability, sparse feature analysis, activation engineering, and AI safety evaluation. Alumni have gone on to leading organizations like Anthropic, OpenAI, and Google DeepMind, as well as founded new AI safety organizations like Apollo Research, with 80% of alumni now working in AI alignment, transparency, and security.
By May 2024, MATS had supported 213 scholars and 47 mentors across five seasonal programs, presenting insights on talent selection and development at the TAIS 2024 conference.
Employment: 49% of surveyed alumni reported working or interning on AI alignment or control, with 29% conducting independent alignment research. Among earlier cohorts, 39% were hired by research organizations post-MATS, with 50% indicating MATS made them "much more likely" to be hired. An additional 22% pursued Master's or PhD programs.
+1 more claims

This is a speaker profile page for Ryan Kidd at the Technical AI Safety (TAIS) 2024 conference. The page likely contains information about Kidd's talk, research focus, and background in AI safety. Without content available, details about his specific contributions to the conference agenda are limited.

Claims (1)
Ryan Kidd serves as Co-Executive Director of MATS and Co-Founder of the London Initiative for Safe AI (LISA). He was a scholar in MATS's first iteration (which had only 5 scholars total) and has since become a Manifund Regrantor and advisor to organizations including Halcyon Futures, Catalyze Impact, AI Safety ANZ, and Pivotal Research.
Minor issues80%Feb 22, 2026
Ryan is Co-Director of the ML Alignment & Theory Scholars Program (since early 2022) and a Board Member and Co-Founder of the London Initiative for Safe AI (since early 2023).

The source does not mention Ryan Kidd being a Manifund Regrantor or advisor to Halcyon Futures, Catalyze Impact, AI Safety ANZ, and Pivotal Research. The source does not mention that MATS's first iteration had only 5 scholars total.

17MATS 8.0 Research ProjectsSubstack·Blog post

This resource likely summarizes the research projects undertaken by scholars during the 8th cohort of the ML Alignment Theory Scholars (MATS) program. MATS is a structured research training program pairing emerging AI safety researchers with experienced mentors to produce original alignment research. The post provides an overview of the diverse technical and governance projects emerging from this cohort.

★★☆☆☆
Claims (1)
MATS 8.0 (Summer 2025): The program reached 98 scholars and 57 mentors, concluding with a symposium on August 22, 2025 featuring 10 spotlight talks and a poster session.

This post defends MATS (Machine Learning Alignment and Theory Scholars) against criticisms that AI safety movement-building programs grow the field too rapidly, risk oversupply of researchers, or inadvertently accelerate AI capabilities. MATS argues its recruitment targets already safety-motivated individuals, its scholars would enter AI/ML regardless, and the marginal safety researcher provides significant net benefit over working in capabilities.

★★★☆☆
Claims (3)
Program organizers acknowledge concerns that MATS's appeal—particularly access to scaling lab mentors—could attract aspiring AI researchers not primarily focused on existential risk reduction, potentially introducing viewpoints that dilute the field's epistemic rigor. While organizers maintain high selection pressure to prioritize x-risk-motivated scholars, they recognize this tension between growth and field quality as they plan broader advertising.
Accurate90%Feb 22, 2026
However, because we provide a relatively simple opportunity to gain access to mentorship from scientists at scaling labs, we believe that our program might seem attractive to aspiring AI researchers who are not fundamentally directed toward reducing x-risk.
Critics note that scholars might overly defer to mentors, failing to critically analyze assumptions and reducing independent thinking or new viewpoints in the field. This concern exists in tension with the opposite problem: insufficient mentorship could lead to excessive peer reliance among inexperienced researchers.
Accurate · 100% · Feb 22, 2026
Scholars might defer to their mentors and fail to critically analyze important assumptions, decreasing the average epistemic integrity of the field
MATS rarely accepts scholars without mentors, viewing mentorship as essential for knowledge transfer, which limits scalability and raises barriers since mentors have high entry requirements and capacity constraints.
Accurate · 90% · Feb 22, 2026
Mentorship is critical to MATS. We generally haven't accepted mentorless scholars because we believe that mentors' accumulated knowledge is extremely useful for bootstrapping strong, original researchers.

A first-person account of applying to the ML Alignment & Theory Scholars (MATS) 6.0 program, documenting 12 interview invitations and 5 acceptances. The post reveals what mentors prioritize in selection—particularly live research brainstorming and prior AI safety experience—over written application quality. Offers practical guidance for future applicants to competitive AI safety research training programs.

★★★☆☆
Claims (2)
With approximately 15% acceptance rates and 40+ mentors conducting independent selection, even proficient researchers and engineers with AI safety experience frequently receive rejections due to mentor capacity limits rather than candidate quality. Application processes involve mentor-specific interviews on ML experience, research proposals, conceptual questions, and experiments, with rejections common even after strong interviews.
Minor issues · 85% · Feb 22, 2026
It was pretty normal to receive a rejection email following an interview even when I felt I did reasonably well, given that each mentor could only accept a handful of candidates.

The source does not mention the acceptance rate of 15%. The source does not explicitly state that proficient researchers and engineers with AI safety experience frequently receive rejections due to mentor capacity limits rather than candidate quality, although it does imply that mentor capacity is a factor. The source does not explicitly mention that application processes involve mentor-specific interviews on ML experience, research proposals, conceptual questions, and experiments, but it does mention that these topics were discussed during interviews.

Interview topics varied among mentors but commonly included research ideas, career plans, technical machine learning questions, and prior experience, rather than behavioral or mathematical questions.
Accurate · 100% · Feb 22, 2026
During the interviews, we discussed some of the following things, roughly sorted from most to least common:
- Research ideas for a specific question they’re planning to research (e.g., related to deception, honesty, robustness). I was often asked follow-up questions to make these ideas more concrete.
- My career plans
- Logistics (e.g., “are you interested in the MATS extension program?”)
- Questions I had about the research project
- Technical machine learning questions
- My prior experience
Unlike other technical interviews I’ve had before, in MATS I was not asked questions like:
- Behavioral questions (e.g., “Why are you interested in my stream?” “Tell me about a time when you overcame a challenge.”)
- Mathematical questions (e.g., “What’s the formula for KL divergence?”)

This page provides information about funding available through the ML Alignment Theory Scholars (MATS) program, which supports researchers working on AI safety and alignment. It likely outlines stipends, grants, or financial support structures for program participants pursuing technical AI safety research.

Claims (1)
MATS receives grants from partner organizations to support its fellowship program. Financial support for scholars is coordinated through partner organizations rather than directly by MATS.
Unsupported · 20% · Feb 22, 2026
Financial support for scholars.

The source mentions financial support for scholars but does not specify that it is coordinated through partner organizations rather than directly by MATS.

SERI launched the ML Alignment Theory Scholars (MATS) program in partnership with Evan Hubinger to grow the pipeline of alignment researchers. Scholars receive funding, mentorship, and community support, beginning with distilling existing alignment research before advancing to novel research projects. The program represents a structured pathway for onboarding new talent into technical AI alignment work.

★★★☆☆
Claims (2)
- Evan Hubinger: Provided mentorship for early SERI MATS trials and multiple cohorts; formerly at MIRI, now at Anthropic
- <EntityLink id="open-philanthropy">Open Philanthropy</EntityLink>: Provided grants to support the early SERI MATS trial program, including grants of \$1,008,127 (April 2022), \$1,538,000 (November 2022), and \$428,942 (June 2023)
22. MATS Program Team (matsprogram.org)

This page lists the team members of the ML Alignment Theory Scholars (MATS) program, an organization that supports AI safety researchers through mentorship and training. It provides an overview of the staff and leadership behind one of the key talent development pipelines in the AI safety field.

Claims (2)
Christian Smith serves as Co-Executive Director of MATS and Co-Founder of LISA. He brings a background in particle physics and pedagogy from Stanford University, having conducted research at CERN and organized educational programs like the Uncommon Sense Seminar.
Accurate · 100% · Feb 22, 2026
Christian Smith Co-Executive Director Christian is Co-Executive Director of MATS and Co-Founder of the London Initiative for Safe AI (LISA) . Previously, he studied particle physics and pedagogy at Stanford University, worked in operations at multiple organizations, performed research at CERN, and organized educational programs like the Uncommon Sense Seminar.
- Other supporters (2024): Foresight Institute, Survival and Flourishing Fund, Long-Term Future Fund, Craig Falls, and several donors via Manifund

Announces the completion of MATS 8.0, a structured AI safety research program involving 98 scholars and 57 mentors working on alignment, interpretability, and security projects during Summer 2025. The cohort culminated in a symposium featuring spotlight talks and poster sessions. This post serves as a directory linking to detailed descriptions of all research projects produced.

★★★☆☆
Claims (1)
| Program Scale | High | 98 scholars and 57 mentors in most recent cohort (MATS 8.0, Summer 2025) |
Accurate · 100% · Feb 22, 2026
This cohort had 98 scholars who conducted research with 57 top mentors in the fields of AI alignment, transparency, and security.
24. MATS Research Program (matsprogram.org)

MATS is an intensive fellowship program designed to help researchers transition into AI safety careers, offering structured mentorship from leading researchers, stipends, and community integration. Since 2021, it has trained over 446 researchers who have collectively produced 150+ research papers and gone on to work at top AI safety organizations.

Claims (14)
| Research Output | Strong | 160+ publications, 8,000+ citations, h-index of 40 over 4 years |
Minor issues · 90% · Feb 22, 2026
In the past 4 years, we have helped produce more than 170 research publications with over 9,000 collective citations; our organizational h-index is 43.

The claim states "h-index of 40 over 4 years", but the source says "h-index is 43".

| Funding per Scholar | \$27k | \$15k stipend + \$12k compute resources, plus housing and meals |
Minor issues · 90% · Feb 22, 2026
Fellows receive a $15k stipend from AI Safety Support to cover living expenses. Compute budget: Fellows are provided with $12k of compute resources to support experiments and evaluations.

The source states that fellows receive a $15k stipend from AI Safety Support, not a $15k stipend in general. The source mentions housing and meals, but does not specify that they are included in the $27k funding per scholar.

MATS pairs scholars with leading researchers in AI safety for approximately 1-2 hours of mentorship per week, supplemented by seminars, workshops, guest lectures, and dedicated research manager support. The program provides comprehensive support including a \$15,000 living stipend, \$12,000 in compute resources, private housing, catered meals, and office space. Scholars develop independent research projects that culminate in presentations at a Scholar Symposium, with selected fellows invited to continue for 6-12 month extensions.
Minor issues · 85% · Feb 22, 2026
MATS provides mentorship, research funding, housing, and community so researchers can devote their energy to solving the world’s most important problem.

The claim states "approximately 1-2 hours of mentorship per week", but the source does not specify the exact amount of mentorship hours per week. The claim mentions "private housing", but the source only states "housing".

+11 more claims
Citation verification: 34 verified, 15 unchecked of 72 total

Related Wiki Pages

Top Related Pages

Approaches

Representation Engineering

Analysis

Short AI Timeline Policy Implications

Organizations

Anthropic · OpenAI · Apollo Research · Survival and Flourishing Fund · Alignment Research Center · Coefficient Giving

Concepts

Situational Awareness · Safety Orgs Overview

Other

Scalable Oversight · Interpretability · Ajeya Cotra · Evan Hubinger