How MATS addresses “mass movement building” concerns
Author: Ryan Kidd
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
Relevant to debates about optimal AI safety field-building strategy, talent pipelines, and whether growing the safety researcher base risks accelerating capabilities or diluting research quality.
Forum Post Details
Summary
This post defends MATS (ML Alignment & Theory Scholars) against criticisms that AI safety movement-building programs grow the field too rapidly, risk an oversupply of researchers, or inadvertently accelerate AI capabilities. The post argues that MATS recruitment targets individuals who are already safety-motivated, that its scholars would enter AI/ML regardless, and that a marginal safety researcher provides significant net benefit compared with working on capabilities.
Key Points
- Most MATS scholars would pursue AI/ML careers regardless, so the program redirects rather than creates new AI labor supply.
- Recruitment focuses on EA-adjacent, safety-motivated individuals rather than drawing in capability-focused researchers.
- The program is intentionally made less financially attractive than industry alternatives to filter for genuine safety commitment.
- MATS estimates that one safety researcher offsets 5-10 capabilities researchers in terms of net impact on AI risk.
- Alumni-founded organizations and expected ecosystem growth are cited as answers to concerns about job scarcity for graduates.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |
Cached Content Preview
How MATS addresses “mass movement building” concerns
by Ryan Kidd · 4th May 2023 · 3 min read
Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:
- Producing more aspiring alignment researchers than there are jobs or training pipelines;
- Driving the wheel of AI hype and progress by encouraging talent that ends up furthering capabilities;
- Unnecessarily diluting the field’s epistemics by introducing too many naive or overly deferential viewpoints.
At MATS, we think that these are real and important concerns and support efforts to mitigate them. Here’s how we currently address them.
Claim 1: There are not enough jobs/funding for all alumni to get hired/otherwise contribute to alignment
How we address this:
Some of our alumni’s projects are attracting funding and hiring further researchers. Three of our alumni have started alignment teams/organizations that absorb talent (Vivek’s MIRI team, Leap Labs, Apollo Research), and more are planned (e.g., a Paris alignment hub).
With the elevated interest in AI and alignment, we expect more organizations and funders to enter the ecosystem. We believe it is important to install competent, aligned safety researchers at new organizations early, and our program is positioned to help capture and upskill interested talent.
Sometimes, it is hard to distinguish truly promising researchers in two months, hence our four-month extension program. We likely provide more benefits through accelerating researchers than can be seen in the immediate hiring of alumni.
Alumni who return to academia or industry are still a success for the program if they do more alignment-relevant work or acquire skills for later hiring into alignment roles.
Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so, and this is bad as it furthers capabilities research and AI hype
How we address this:
Considering that the median MATS scholar is a Ph.D./Master’s student in ML, CS, maths, or physics and only 10% are undergrads, we believe most of our scholars would have ended up working in AI/ML regardless of their involvement with the program. In general, mentors select highly technically capable scholars who are already involved in AI/ML; others are outliers.
Our outreach and selection processes are designed to attract applicants who are motivated by reducing global catastrophic risk from AI. We principally advertise via word-of-mouth, AI safety Slack workspaces, AGI Safety Fundamentals and 80,000 Hours job boards, and LessWrong/EA Forum. As seen in the figure below, our scholars generally come from AI safety and EA communities.
[Figure: MATS Summer 2023 interest form responses to “How did you hear about us?”]
... (truncated, 16 KB total)