Longterm Wiki

MATS Alumni - Machine Learning Alignment Theory Scholars Program

web · matsprogram.org/alumni

MATS is a prominent AI safety research training program; this alumni page is useful for understanding the program's impact and identifying key researchers in the field who have passed through it.

Metadata

Importance: 45/100 · homepage

Summary

The MATS (Machine Learning Alignment Theory Scholars) alumni page showcases researchers who have gone through the program and are now contributing to AI safety research globally. It highlights career trajectories and research outputs of alumni who have joined organizations like Anthropic, MIRI, AI Futures Project, and others. The page demonstrates MATS's role as a key pipeline for developing AI safety talent.

Key Points

  • MATS alumni have gone on to work at leading AI safety orgs including Anthropic, MIRI, and independent research groups like Timaeus and Poseidon Research.
  • Alumni report accelerated upskilling in AI alignment research, with one citing a >3x learning rate compared to self-directed study.
  • Notable outputs include peer-reviewed papers (e.g., 'Steering Llama 2 via Contrastive Activation Addition', Outstanding Paper at ACL 2024).
  • Alumni have also entered policy roles, including founding the Center for AI Policy, showing MATS's broad influence across technical and governance tracks.
  • The program is positioned as a significant talent pipeline, with alumni spanning mechanistic interpretability, infra-Bayesianism, and LLM alignment.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 11 KB
[Alumni](https://www.matsprogram.org/alumni)

# MATS alumni are driving AI alignment research worldwide

[![](https://cdn.prod.website-files.com/6909901117f6b8d83783ca73/6997fbee9574dbdffbd3f05d_6938af7a4f61bb62c54cc7a5_citations.jpeg)
Robert Krzyzanowski
Poseidon Research](https://www.matsprogram.org/alumni/robert-krzyzanowski)

Before MATS, I had a strong interest in alignment generally but few skillsets relevant to the frontier of research and little idea of how to get started. Directly thanks to MATS, I achieved: (1) a relatively complete understanding of the structure of the most important questions and associated communities in the AI safety space, (2) legible and significant research outputs that gave me the confidence to continue switching into a full-time career in the space, and (3) access to a broad base of present and future collaborators with a very wide range of perspectives. On this third point, the talent exhibited at MATS is fearsome and highly motivated to solve the problems. It would not be at all surprising to me if, when the dust settles and the grand project of alignment reaches eventual fruition, it becomes apparent that a double-digit percentage of the credit for the key problems and solutions belongs to MATS alumni.

I am an independent AI safety researcher currently focused on mechanistic interpretability and training process transparency.

[Read more](https://www.matsprogram.org/alumni/robert-krzyzanowski)

[![](https://cdn.prod.website-files.com/6909901117f6b8d83783ca73/6997fbd65c27e60d7e128f61_69361e1b0c489771205389bd_690fab1b80636dddc01bdd07_1664386776341.webp)
Thomas Larsen
AI Futures Project](https://www.matsprogram.org/alumni/thomas-larsen)

MATS helped me upskill in alignment at a >3x rate relative to the counterfactual, which was independently learning infra-Bayesianism because I liked math and didn't have an inside view on which parts of alignment were important. MATS caused me to develop a much deeper view of the alignment problem, and afterwards I felt able to focus on the most important parts of the problem and the biggest sources of confusion within myself.

Thomas took part in the Summer 2022 Cohort with John Wentworth and the Winter 2023 Cohort with Nate Soares. During this time, he wrote a detailed overview of AI Safety approaches. He continued his SERI MATS work at MIRI, before leaving to found the Center for AI Policy, an AI safety advocacy organization. He is currently a researcher at the AI Futures Project and a guest fund manager at the LTFF.

[Read more](https://www.matsprogram.org/alumni/thomas-larsen)

[![](https://cdn.prod.website-files.com/6909901117f6b8d83783ca73/6997fbce7823a9ac646adea5_69361e1a7d35ab5dd481e7a1_690fab19308b7163e21480b8_IMG_7152%25252B%252525281%25252529.webp)
Nina Panickssery
Anthropic](https://www.matsprogram.org/alumni/nina-panickssery)

Participating in MATS was a great way to rapidly upskill in AI safety research, lear

... (truncated, 11 KB total)
Resource ID: 647bf062f1963873 | Stable ID: NTEyMTZhND