MATS Program Team
matsprogram.org/team
MATS is a major AI safety talent development program; this team page is primarily useful for identifying key personnel and organizational contacts rather than technical content.
Metadata
Importance: 30/100 · homepage
Summary
This page lists the team members of the ML Alignment Theory Scholars (MATS) program, an organization that supports AI safety researchers through mentorship and training. It provides an overview of the staff and leadership behind one of the key talent development pipelines in the AI safety field.
Key Points
- MATS (ML Alignment Theory Scholars) is a prominent AI safety research training and mentorship program
- The team page identifies key staff responsible for organizing scholar cohorts and connecting researchers with mentors
- MATS plays a significant role in developing the next generation of AI safety researchers
- The program connects scholars with leading AI safety researchers at organizations like ARC, Anthropic, and DeepMind
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 35 KB
[Team](https://www.matsprogram.org/team)
# Meet the MATS team

MATS is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, interpretability, governance, and security. The main goal of MATS is to grow the AI safety & security research fields.
[View open roles](https://www.matsprogram.org/careers)
## Leadership
[Ryan Kidd, Co-Executive Director](https://www.matsprogram.org/team/Kidd-3065f)
Ryan is Co-Executive Director of MATS, a Co-Founder and Board Member of the [London Initiative for Safe AI (LISA)](https://www.safeai.org.uk/), a [Manifund Regrantor](https://manifund.org/RyanKidd), and advisor to [Halcyon Futures](https://halcyonfutures.org/), [Catalyze Impact](https://www.catalyze-impact.org/), [Pivotal Research](https://pivotal-research.org/), [AI Safety ANZ](https://www.aisafetyanz.com.au/), and [BASE](https://www.baseresearch.org/). Previously, he completed a PhD in Physics at the University of Queensland (UQ) and conducted independent research in AI alignment for the [Stanford Existential Risks Initiative](https://seri.stanford.edu/). Personal website: [ryankidd.ai](https://ryankidd.ai/)
[Read more](https://www.matsprogram.org/team/Kidd-3065f)
[Christian Smith, Co-Executive Director](https://www.matsprogram.org/team/smith-ab63f)
Christian is Co-Executive Director of MATS and Co-Founder of the [London Initiative for Safe AI (LISA)](https://www.safeai.org.uk/). Previously, he studied particle physics and pedagogy at Stanford University, worked in operations at multiple organizations, performed research at CERN, and organized educational programs like the Uncommon Sense Seminar.
[Read more](https://www.matsprogram.org/team/smith-ab63f)
## Research Team
[Laura Vaughan, Research Director](https://www.matsprogram.org/team/Vaughan)
Laura leads the Research team at MATS. Having joined at the end of 2022, she is responsible for ensuring high-impact scholar outcomes and the distribution of research resources. Her team's mission is to accelerate the program's research impact by facilitating every part of the research process, including ideation and exploration, research operations, and collaboration.
A 2017 Thiel Fellow, Laura studied physics at the University of Waterloo and co-founded a successful stem cell cryogenics startup before pivoting into research, consulting, and software engineering. She brings experience
... (truncated, 35 KB total)