AI Safety Training Programs

Comprehensive guide to AI safety training programs including MATS (78% alumni in alignment work, 100+ scholars annually), Anthropic Fellows ($2,100/week stipend, 40%+ hired full-time), LASR Labs (5 NeurIPS papers in 2024), and academic pathways. BlueDot Impact has trained 7,000+ people since 2022, with hundreds now working in AI safety. Provides concrete application criteria, timing recommendations, and structured self-study pathways with a 1-5 year timeline to research contribution.
Quick Assessment
| Dimension | Rating | Notes |
|---|---|---|
| Tractability | High | Known how to train researchers; programs have proven track records |
| Scalability | Medium | Bottlenecked by mentor availability and quality maintenance |
| Current Maturity | Medium-High | Ecosystem established since 2021; 298+ MATS scholars trained |
| Time Horizon | 1-5 years | Trained researchers take 1-3 years to contribute meaningfully |
| Key Proponents | MATS, BlueDot Impact, Anthropic, Coefficient Giving | |
| Estimated Impact | Medium-High | Produces 100-200 new safety researchers annually |
Overview
The AI safety field faces a critical talent bottleneck. While funding has increased substantially, with Coefficient Giving committing roughly $50 million to technical AI safety research in 2024, the supply of researchers capable of doing high-quality technical safety work remains constrained. Training programs represent the primary pipeline for addressing this gap, offering structured pathways from general ML expertise to safety-specific research skills.
The landscape has evolved rapidly since 2020. MATS (ML Alignment Theory Scholars) has become the premier research mentorship program, with 78% of surveyed alumni now working in AI alignment. Anthropic launched a Fellows Program specifically for mid-career transitions. BlueDot Impact has trained over 7,000 people since 2022, with hundreds now working at organizations like Anthropic, OpenAI, and the UK AI Safety Institute. Academic programs are emerging at York (SAINTS CDT), Berkeley (CHAI), and Cambridge (CHIA). Independent research programs like SPAR and LASR Labs provide part-time pathways. Together, these programs produce perhaps 100-200 new safety researchers annually, a number that may be insufficient given the pace of AI capabilities advancement.
The strategic importance of training extends beyond individual researcher production. Programs shape research culture, determine which problems receive attention, and create networks that influence the field's direction. How training programs select participants, what methodologies they emphasize, and which mentors they feature all have downstream effects on AI safety's trajectory.
Program Comparison
| Program | Duration | Format | Stipend | Selectivity | Key Outcomes |
|---|---|---|---|---|---|
| MATS | 12 weeks + 6mo extension | In-person (Berkeley, London) | Living stipend | ≈5-10% | 78% in alignment work; 75% publish |
| Anthropic Fellows | 6 months | In-person (SF) | $2,100/week | Selective | 40%+ hired full-time at Anthropic |
| LASR Labs | 13 weeks | In-person (London) | £11,000 | Moderate | All 5 Summer 2024 papers accepted to NeurIPS workshops |
| SPAR | 3 months | Remote, part-time | Varies | Moderate | Papers at ICML, NeurIPS; career fair |
| ARENA | 5 weeks | In-person (London) | Housing/travel | Moderate | Alumni at Apollo, METR, UK AISI |
| BlueDot Technical AI Safety | 8 weeks | Online cohorts | None | Low-moderate | 7,000+ trained; hundreds in field |
Major Training Programs
MATS (ML Alignment Theory Scholars)
MATS is the most established and influential AI safety research program, operating as an intensive mentorship program that connects promising researchers with leading safety researchers. Since its inception in late 2021, MATS has supported 298 scholars and 75 mentors.
| Attribute | Details |
|---|---|
| Duration | 12 weeks intensive + 6 months extension |
| Format | In-person (Berkeley, London) |
| Focus | Technical alignment research |
| Mentors | Researchers from Anthropic, DeepMind, Redwood, FAR.AI, ARC |
| Compensation | Living stipend provided |
| Selectivity | ≈5-10% acceptance rate |
| Alumni outcomes | 78% now working in AI alignment |
Research Areas:
- Interpretability and mechanistic understanding
- AI control and containment
- Scalable oversight
- Evaluations and red-teaming
- Robustness and security
Notable Alumni Contributions: MATS fellows have contributed to sparse autoencoders for interpretability, activation engineering research, developmental interpretability, and externalized reasoning oversight. Alumni have published at ICML and NeurIPS on safety-relevant topics. Nina Rimsky received an Outstanding Paper Award at ACL 2024 for "Steering Llama 2 via Contrastive Activation Addition." Alumni have founded organizations including Apollo Research, Timaeus, Leap Labs, and the Center for AI Policy.
Anthropic Fellows Program
Launched in 2024, the Anthropic Fellows Program targets mid-career technical professionals transitioning into AI safety research.
| Attribute | Details |
|---|---|
| Duration | 6 months full-time |
| Format | In-person (San Francisco) |
| Focus | Transition to safety research |
| Compensation | $2,100/week stipend + $15,000/month compute budget |
| Target | Mid-career technical professionals |
| First cohort | March 2025 |
| First cohort outcomes | Over 80% published papers; 40%+ joined Anthropic full-time |
The program addresses a specific gap: talented ML engineers and researchers who want to transition to safety work but lack the mentorship and runway to do so. By providing substantial compensation and direct collaboration with Anthropic researchers, it removes financial barriers to career change. First cohort fellows produced notable research including work on agentic misalignment, attribution graphs for mechanistic interpretability, and autonomous blockchain vulnerability exploitation.
SPAR (Supervised Program for Alignment Research)
SPAR offers a part-time, remote research fellowship enabling broader participation in safety research without requiring full-time commitment.
| Attribute | Details |
|---|---|
| Duration | 3 months |
| Format | Remote, part-time |
| Focus | AI safety and governance research |
| Target | Students and professionals |
| Output | Research projects culminating in Demo Day with career fair |
| Scale | 130+ projects offered in Spring 2026—largest AI safety fellowship round |
SPAR research has been accepted at ICML and NeurIPS, covered by TIME, and led to full-time job offers. Mentors come from Google DeepMind, RAND, Apollo Research, UK AISI, MIRI, and universities including Cambridge, Harvard, Oxford, and MIT. The program works well for:
- Graduate students exploring safety research
- Professionals testing interest before career change
- Researchers in adjacent fields wanting to contribute
LASR Labs
LASR Labs provides cohort-based technical AI safety research, preparing participants for roles at safety organizations.
| Attribute | Details |
|---|---|
| Duration | 13 weeks |
| Format | In-person (London) |
| Focus | Technical safety research |
| Stipend | £11,000 + office space, food, travel |
| 2024 Outcomes | All 5 Summer 2024 papers accepted to NeurIPS workshops |
| Career Outcomes | Alumni at UK AISI, Apollo Research, OpenAI dangerous capabilities team, Coefficient Giving |
| Satisfaction | 9.25/10 likelihood to recommend; NPS +75 |
Research topics include interpretability (sparse autoencoders, residual streams), AI control, and steganographic collusion in LLMs. Supervisors include researchers from Google DeepMind, Anthropic, and UK AISI.
Global AI Safety Fellowship
Impact Academy's Global AI Safety Fellowship is a fully funded program (up to 6 months) connecting exceptional STEM talent with leading safety organizations.
| Attribute | Details |
|---|---|
| Duration | Up to 6 months |
| Format | In-person collaboration |
| Partners | CHAI (Berkeley), Conjecture, FAR.AI, UK AISI |
| Funding | Fully funded |
Academic Pathways
PhD Programs
| Program | Institution | Focus | Status |
|---|---|---|---|
| SAINTS CDT | University of York (UK) | Safe Autonomy | Accepting applications |
| CHAI | UC Berkeley | Human-Compatible AI | Established |
| CHIA | Cambridge | Human-Inspired AI | Active |
| Steinhardt Lab | UC Berkeley | ML Safety | Active |
| Other ML programs | Various | General ML with safety focus | Many options |
University of York - SAINTS CDT: The UK's first Centre for Doctoral Training focused specifically on AI safety, funded by UKRI. It brings together computer science, philosophy, law, sociology, and economics to train the next generation of safe AI experts, and is based at the Institute for Safe Autonomy.
Key Academic Researchers: Prospective PhD students should consider advisors who work on safety-relevant topics:
- Stuart Russell (Berkeley/CHAI) - Human-compatible AI
- Jacob Steinhardt (Berkeley) - ML safety and robustness
- Vincent Conitzer (CMU) - AI alignment theory
- David Duvenaud (Toronto) - Interpretability
- Roger Grosse (Toronto) - Training dynamics
- Victor Veitch (Chicago) - Causal ML, safety
Academic vs. Industry Research
| Dimension | Academic Path | Industry Path |
|---|---|---|
| Timeline | 4-6 years | 0-2 years to entry |
| Research freedom | High | Varies |
| Resources | Limited | Often substantial |
| Publication | Expected | Sometimes restricted |
| Salary during training | PhD stipend (≈$10-50K) | Full salary or fellowship |
| Ultimate outcome | Research career | Research career |
| Best for | Deep expertise, theory | Immediate impact, applied |
Upskilling Resources
For those not yet ready for formal programs or preferring self-directed learning:
Structured Curricula
| Resource | Provider | Coverage | Time Investment |
|---|---|---|---|
| AI Safety Syllabus | 80,000 Hours | Comprehensive reading list | 40-100+ hours |
| Technical AI Safety Course | BlueDot Impact | Structured curriculum | 8 weeks |
| AI Safety Operations Bootcamp | BlueDot Impact | Operations roles in AI safety | Intensive |
| ML Safety Course | Dan Hendrycks | Technical foundations | Semester |
| ARENA | ARENA | Technical implementations (mech interp, transformers) | 5 weeks |
BlueDot Impact has become the primary entry point into the AI safety field, training over 7,000 people since 2022 and raising $35M to date, including $25M in 2025. ARENA alumni have gone on to become MATS scholars, LASR participants, and AI safety engineers at Apollo Research, METR, and UK AISI.
Self-Study Path
For self-directed learners, the resources above suggest a rough progression:
1. Build ML fundamentals through courses and hands-on projects.
2. Work through a structured safety curriculum (e.g., BlueDot's Technical AI Safety course or the ML Safety Course) alongside the 80,000 Hours syllabus.
3. Develop implementation skills via ARENA-style exercises in mechanistic interpretability and transformers.
4. Complete a small pilot research project that demonstrates research capability and clarifies your interests.
5. Apply to a mentored program such as SPAR, LASR Labs, or MATS.
Career Transition Considerations
When to Apply to Programs
| Your Situation | Recommended Path |
|---|---|
| Strong ML background, want safety focus | MATS or Anthropic Fellows |
| Exploring interest, employed | SPAR (part-time) |
| Student, want research experience | LASR Labs, SPAR |
| Early career, want PhD | Academic programs |
| Mid-career, want full transition | Anthropic Fellows |
| Strong background, want independence | Self-study + independent research |
Success Factors
Based on program outcomes, successful applicants typically have:
| Factor | Importance | How to Develop |
|---|---|---|
| ML technical skills | Critical | Courses, projects, publications |
| Research experience | High | Academic or industry research |
| Safety knowledge | Medium-High | Reading, courses, writing |
| Communication | Medium | Writing, presentations |
| Clear research interests | Medium | Reading, reflection, pilot projects |
Common Failure Modes
| Failure Mode | Description | Mitigation |
|---|---|---|
| Premature application | Applying without sufficient ML skills | Build fundamentals first |
| No research output | Nothing demonstrating research capability | Complete pilot project |
| Vague interests | Unable to articulate what you want to work on | Read extensively, form views |
| Poor fit | Mismatch between interests and program | Research programs carefully |
| Giving up early | Rejection discouragement | Multiple applications, iterate |
Talent Pipeline Analysis
Current Capacity
| Stage | Annual Output | Bottleneck |
|---|---|---|
| Interested individuals | Thousands | Conversion |
| Program applicants | 500-1000 | Selectivity |
| Program participants | 150-300 | Capacity |
| Research-productive alumni | 100-200 | Mentorship |
| Long-term field contributors | 50-100 | Retention |
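These stage counts imply steep attrition through the pipeline. As a rough illustration only (the numbers below are assumed midpoints of the ranges in the table, not measured program statistics), the implied stage-to-stage conversion rates can be computed as follows:

```python
# Illustrative talent-pipeline funnel. Stage sizes are assumed midpoints of
# the ranges quoted in the table above, not figures reported by any program.
PIPELINE = [
    ("Interested individuals", 2000),      # "thousands" -- assumed midpoint
    ("Program applicants", 750),           # 500-1000
    ("Program participants", 225),         # 150-300
    ("Research-productive alumni", 150),   # 100-200
    ("Long-term field contributors", 75),  # 50-100
]

def conversion_rates(stages):
    """Yield (from_stage, to_stage, rate) for each adjacent pair of stages."""
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        yield name_a, name_b, n_b / n_a

if __name__ == "__main__":
    for src, dst, rate in conversion_rates(PIPELINE):
        print(f"{src} -> {dst}: {rate:.0%}")
    overall = PIPELINE[-1][1] / PIPELINE[0][1]
    print(f"Overall conversion: {overall:.1%}")
```

Under these assumed midpoints, end-to-end conversion from initial interest to long-term contribution is roughly 4%, with the sharpest drop between applicants and participants, consistent with the selectivity and capacity bottlenecks noted in the table.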
Scaling Challenges
| Challenge | Description | Potential Solutions |
|---|---|---|
| Mentor bandwidth | Limited senior researchers available | Peer mentorship, async formats |
| Quality maintenance | Scaling may dilute intensity | Tiered programs |
| Funding | Programs need sustainable funding | Philanthropic, industry, government |
| Coordination | Many programs with unclear differentiation | Better information, specialization |
| Retention | Many trained researchers leave safety | Better career paths, culture |
Strategic Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Known how to train researchers |
| If AI risk high | High | Need many more researchers |
| If AI risk low | Medium | Still valuable for responsible development |
| Neglectedness | Medium | $50M+ annually from Coefficient Giving, but scaling gaps remain |
| Timeline to impact | 1-5 years | Trained researchers take time to contribute |
| Grade | B+ | Important but faces scaling limits |
Risks Addressed
| Risk | Mechanism | Effectiveness |
|---|---|---|
| Inadequate safety research | More researchers doing safety work | High |
| Racing dynamics | Safety talent at labs can advocate | Medium |
| Field capture | Diverse training reduces groupthink | Medium |
Complementary Interventions
- Field Building - Broader ecosystem development
- Corporate Influence - Placing trained researchers at labs
- AI Safety Institutes - Employers for trained researchers
Sources
Program Information
- MATS: matsprogram.org - Official program information; Alumni Impact Analysis (2024)
- Anthropic Fellows: alignment.anthropic.com - Program details; 2026 cohort applications
- SPAR: sparai.org - Supervised Program for Alignment Research
- LASR Labs: lasrlabs.org - London AI Safety Research Labs
- BlueDot Impact: bluedot.org - AI safety courses and career support
- ARENA: arena.education - Alignment Research Engineer Accelerator
- Global AI Safety Fellowship: globalaisafetyfellowship.com
Funding and Ecosystem
- Coefficient Giving: 2024 Progress and 2025 Plans - $50M committed to technical AI safety in 2024
- Coefficient Giving: Technical AI Safety RFP - $40M+ available
Career Guidance
- 80,000 Hours: "AI Safety Syllabus" and career guide
- Alignment Forum: Career advice threads
- EA Forum: "Rank Best Universities for AI Safety"
Academic Programs
- University of York SAINTS CDT: york.ac.uk/study/postgraduate-research/centres-doctoral-training/safe-ai-training
- Stanford Center for AI Safety: aisafety.stanford.edu
- CHAI (Berkeley): humancompatible.ai
AI Transition Model Context
AI safety training programs affect the AI Transition Model through several factors:
| Factor | Parameter | Impact |
|---|---|---|
| Misalignment Potential | Safety-Capability Gap | Produces 100-200 new safety researchers annually to address the research talent bottleneck |
| Misalignment Potential | Alignment Robustness | Mentored researchers produce higher-quality alignment work |
| Civilizational Competence | Institutional Quality | Trained researchers staff AI Safety Institutes and governance organizations |
Training programs are critical infrastructure for the field; their effectiveness is bottlenecked by limited mentor bandwidth and retention challenges.