Mechanistic Interpretability Workshop at NeurIPS 2025
Web · mechinterpworkshop.com/
This is the official workshop page for the NeurIPS 2025 Mechanistic Interpretability Workshop, a key community event for researchers developing tools to understand neural network internals as part of the broader AI safety research agenda.
Metadata
Importance: 55/100 · homepage
Summary
The Mechanistic Interpretability Workshop at NeurIPS 2025 is a dedicated academic venue for researchers working on understanding the internal computations of neural networks. It brings together work on circuits, features, sparse autoencoders, and related techniques aimed at reverse-engineering how AI models process information. The workshop represents a key gathering point for the interpretability research community.
Key Points
- Official workshop at NeurIPS 2025 focused on mechanistic interpretability research
- Covers core MI topics including circuits, features, and sparse autoencoders
- Serves as a major venue for presenting and discussing advances in neural network interpretability
- Connects academic and industry researchers working on understanding AI model internals
- Reflects growing institutional recognition of mechanistic interpretability as a serious research field
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Interpretability | Research Area | 66.0 |
| Mechanistic Interpretability | Research Area | 59.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 26, 2026 · 5 KB

# Mechanistic Interpretability Workshop
NeurIPS 2025
Sunday, December 7, 2025
San Diego Convention Center · Room 30A-E

[Attended? Give feedback on the workshop!](https://docs.google.com/forms/d/e/1FAIpQLSe13gWSNKtrC3uGsrhgg5qRLHxSLCBmP3p7ZGoqxDKaV4_Cmg/viewform)
As neural networks grow in influence and capability, understanding the mechanisms behind their decisions remains a fundamental scientific challenge. This gap between performance and understanding limits our ability to predict model behavior, ensure reliability, and detect sophisticated adversarial or deceptive behavior. Many of the deepest scientific mysteries in machine learning may remain out of reach if we cannot look inside the black box.
[Mechanistic interpretability](https://arxiv.org/abs/2501.16496) addresses this challenge by developing principled methods to analyze and understand a model's internals (weights and activations), and to use this understanding to gain greater insight into its behavior and the computation underlying it.
The field has grown rapidly, with sizable communities in academia, industry and independent research, 140+ papers submitted to our ICML 2024 workshop, dedicated startups, and a rich ecosystem of tools and techniques. This workshop aims to bring together diverse perspectives from the community to discuss recent advances, build common understanding and chart future directions.
See our [Call for Papers](https://mechinterpworkshop.com/cfp/) for submission details and topics of interest.
## Keynote Speakers

### [Chris Olah](https://colah.github.io/about.html)
Interpretability Lead and Co-founder, Anthropic

### [Been Kim](https://beenkim.github.io/)
Senior Staff Research Scientist, Google DeepMind

### [Sarah Schwettmann](https://cogconfluence.com/)
Co-founder, Transluce

The first Mechanistic Interpretability Workshop (ICML 2024).
## Organizing Committee

### [Neel Nanda](https://www.neelnanda.io/about)
Senior Research Scientist, Google DeepMind

### [Andrew Lee](https://ajyl.github.io/)
Post-doc, Harvard
Resource ID: e78a965cde8d82bd | Stable ID: OGJhNDAzMj