Longterm Wiki

Authors

utilistrutil · Juan Gil · Ryan Kidd · Christian Smith · deus_ex_maki · LauraVaughan

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

This retrospective is useful for those evaluating AI safety talent pipelines, fellowship program design, or the effectiveness of mentorship-based research training initiatives like MATS.

Forum Post Details

Karma
78
Comments
34
Forum
lesswrong
Forum Tags
MATS Program · AI Alignment Fieldbuilding · Postmortems & Retrospectives · AI

Metadata

Importance: 38/100 · blog post · analysis

Summary

A retrospective evaluation of the fourth iteration of the MATS (ML Alignment & Theory Scholars) program, which supported 60 emerging AI safety researchers under 15 mentors in Summer 2023. The report assesses program outcomes including scholar satisfaction (8.9/10 recommendation likelihood), technical skill development, research independence, and career trajectory impacts. It highlights both successes in networking and professional growth, and challenges such as publication barriers for scholars pursuing alignment careers.

Key Points

  • Summer 2023 was MATS's fourth cohort, supporting 60 scholars across 15 research mentors in AI safety and alignment.
  • Scholars reported high satisfaction with a mean recommendation likelihood of 8.9/10, indicating strong program value.
  • Scholars self-reported moderate-to-strong improvements in technical skills (7.2/10) and research independence (7.0/10) relative to a counterfactual summer.
  • Scholars made an average of 4.5 professional connections, helping them integrate into the AI safety research community.
  • Publication records remained a notable barrier for most scholars (75%) seeking to establish alignment research careers post-program.

Cited by 1 page

Page | Type | Quality
MATS ML Alignment Theory Scholars program | Organization | 60.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 70 KB

 MATS Summer 2023 Retrospective 

 by utilistrutil, Juan Gil, Ryan Kidd, Christian Smith, deus_ex_maki, LauraVaughan · 1st Dec 2023 · 32 min read

 Co-Authors: @Rocket, @Juan Gil, @Christian Smith, @McKennaFitzgerald, @LauraVaughan, @Ryan Kidd

 The ML Alignment & Theory Scholars program (MATS, formerly SERI MATS) is an education and research mentorship program for emerging AI safety researchers. This summer, we held the fourth iteration of the MATS program, in which 60 scholars received mentorship from 15 research mentors. In this post, we explain the elements of the program, lay out some of the thinking behind them, and evaluate our impact.

 Summary

 Key details about the Summer 2023 Program:

 Educational attainment of MATS scholars:
 30% of scholars are students.
 88% have at least a Bachelor's degree.
 10% are in a Master’s program.
 10% are in a PhD program.
 13% have a PhD.
 
 If not for MATS, scholars might have worked at a tech company (41%), upskilled independently (46%), or conducted research independently over the summer (50%). (Note: scholars could select more than one option, so percentages do not sum to 100%.)
 Key takeaways from our impact evaluation:

 MATS scholars are highly likely to recommend MATS to a friend or colleague. Average likelihood: 8.9/10.
 Mentors rated their enthusiasm for their scholars to continue with their research at 7/10 or greater for 94% of scholars. 
 MATS scholars rate their mentors highly. Average rating: 8.0/10. 61% of scholars report that at least half the value of MATS came from their mentor.
 
 After MATS, scholars reported facing fewer obstacles to a successful alignment career than they did at the start of the program, though most (75%) still cited their publication record as an obstacle at the program's conclusion.
 
 ⅓ of final projects involved evals/demos and ⅕ involved mechanistic interpretability, representing a large proportion of the cohort’s research interests. 
 Scholars self-reported improvements to their research ability on average:
 Slight increase in the breadth of their AI safety knowledge (+1.75 on a 10-point scale over the program).
 Moderate strengthening of technical skills compared to a counterfactual summer (7.2/10, where 10/10 is "significant improvement compared to counterfactual summer").
 Moderate improvement in ability to independently iterate on a research direction (7.0/10, where 10/10 is "significant improvement") and ability to develop a theory of change for their research (5.9/10, where 10/10 is "substantially developed").
 
 On average, scholars reported making 4.5 professional connections (std. dev. = 6.2) and meeting 5 potential research collaborators (std. dev. = 6.8).
 MATS scholars

... (truncated, 70 KB total)
Resource ID: a3355c6c32fab12c | Stable ID: NzRiZDM2Mj