Longterm Wiki

MATS 8.0 Research Projects (Summer 2025)

blog

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

MATS is a well-known fellowship program that trains and supports emerging AI safety researchers; this post catalogs the outputs of its 8th cohort and is useful for tracking the field's emerging research directions and talent pipeline.

Forum Post Details

Karma
22
Comments
0
Forum
LessWrong
Forum Tags
MATS Program, AI

Metadata

Importance: 45/100 · blog post · reference

Summary

Announces the completion of MATS 8.0, a structured AI safety research program involving 98 scholars and 57 mentors working on alignment, interpretability, and security projects during Summer 2025. The cohort culminated in a symposium featuring spotlight talks and poster sessions. This post serves as a directory linking to detailed descriptions of all research projects produced.

Key Points

  • MATS 8.0 is the 8th iteration of the Machine Learning Alignment & Theory Scholars Program, run in Summer 2025.
  • 98 scholars collaborated with 57 mentors across AI alignment, transparency, and security research areas.
  • A Program Symposium on August 22 featured 10 spotlight talks and a full poster session from all participants.
  • The post links to a Substack article cataloguing all research projects from the cohort.
  • MATS represents a significant pipeline for producing new AI safety researchers and early-stage technical work.

Cited by 1 page

Page | Type | Quality
MATS ML Alignment Theory Scholars program | Organization | 60.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 1 KB

 MATS 8.0 Research Projects 

by Jonathan Michala, DanielFilan, Ryan Kidd · 9th Sep 2025 · AI Alignment Forum · 1 min read

 Ω 8

This is a linkpost for https://substack.com/home/post/p-171758976 The 8th iteration of the Machine Learning Alignment & Theory Scholars (MATS) Program has come to a close, and we want to share the research projects our scholars have been working on this summer. This cohort had 98 scholars who conducted research with 57 top mentors in the fields of AI alignment, transparency, and security.

 On Aug 22, we hosted a Program Symposium to showcase their projects to the community; we invited 10 scholar teams to give spotlight talks based on their mid-program Scholar Research Plans, and every scholar contributed to the poster session. You can check out all of the projects in this post!

Resource ID: fb1b13ab89d59cbd | Stable ID: N2JmNzdiOD