
ML Alignment Theory Program under Evan Hubinger

blog

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

This post announces an early iteration of the MATS program, which has since grown significantly; useful for understanding the origins of structured alignment researcher training pipelines and SERI's early field-building efforts.

Metadata

Importance: 42/100 · Tags: blog post, news

Summary

SERI launched the ML Alignment Theory Scholars (MATS) program in partnership with Evan Hubinger to grow the pipeline of alignment researchers. Scholars receive funding, mentorship, and community support, beginning with distilling existing alignment research before advancing to novel research projects. The program represents a structured pathway for onboarding new talent into technical AI alignment work.

Key Points

  • MATS is a structured program to increase the number of researchers working on AI alignment theory, run by SERI in partnership with Evan Hubinger.
  • Trial phase has scholars distilling and expanding existing alignment research artifacts under direct mentorship from Hubinger.
  • Successful participants advance to a novel research stage, collaborating with experienced mentors on original alignment work.
  • Program provides funding, mentorship, and community infrastructure to lower barriers for promising new alignment theorists.
  • Represents an early instance of field-building efforts to scale up technical AI safety research capacity.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 11 KB
[ML Alignment Theory Program under Evan Hubinger](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#)

Contents: [Community Engagement](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#Community_Engagement) • [Program Description](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#Program_Description) • [Future Steps](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#Future_Steps) • [Acknowledgements](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#Acknowledgements)

[ML Alignment Theory Scholars Program Winter 2021](https://www.alignmentforum.org/s/tDBYJd4p6EorGLEFA)

Tags: [AI Alignment Fieldbuilding](https://www.alignmentforum.org/w/ai-alignment-fieldbuilding) · [Project Announcement](https://www.alignmentforum.org/w/project-announcement) · [AI](https://www.alignmentforum.org/w/ai) · Personal Blog


# [ML Alignment Theory Program under Evan Hubinger](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger)

by [ozhang](https://www.alignmentforum.org/users/ozhang?from=post_header), [evhub](https://www.alignmentforum.org/users/evhub?from=post_header), [Victor W](https://www.alignmentforum.org/users/victor-w?from=post_header)

5th Dec 2021

2 min read

34 points · [3 comments](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger#comments)

In the past six weeks, the Stanford Existential Risks Initiative (SERI) has been running a trial for the “ML Alignment Theory Scholars” (MATS) program. Our goal is to increase the number of people working on alignment theory; to do this, we’re running a scholars program that provides mentorship, funding, and community to promising new alignment theorists. The program is run in partnership with Evan Hubinger, who has provided all of the mentorship to the scholars during the trial.

As the final phase of the trial, each participant has taken a previous research artifact (usually an Alignment Forum post) and written a distillation and expansion of that post. The posts were picked by Evan, and each participant signed up for one they were interested in. **Within the next two weeks (12/7 - 12/17), we’ll be posting all of these to LessWrong and the Alignment Forum as part of a sequence, with a couple of posts going up each day.** (There will be around 10-15 posts total.)

# Community Engagement

Evan will be evaluating each post to determine whether participants make it to the next stage of the seminar program (where they have the opportunity to do novel research with a mentor), but we’d also be interested in hearing community feedback on each post. This could come through upvotes or, alternatively, via comments. We’ll run a conclusion post w

... (truncated, 11 KB total)
Resource ID: f6a989dbbd14008d | Stable ID: ZjMxYWExMj