AI Safety Field Building Analysis

Summary

Comprehensive analysis of AI safety field-building showing growth from 400 to 1,100 FTEs (2022-2025) at 21-30% annual growth rates, with training programs achieving 37% career conversion at costs of $5,000-40,000 per career change. Identifies critical bottleneck: talent pipeline over-optimized for researchers while neglecting operations, policy, and organizational roles.



Related
Organizations: Coefficient Giving
Concepts: Technical AI Safety
Approaches: AI Safety Training Programs, AI Safety Intervention Portfolio

AI Safety Field Building and Community

Growing the AI safety research community through funding, training, and outreach

Category: Meta-level intervention
Time Horizon: 3-10+ years
Primary Mechanism: Human capital development
Key Metric: Researchers produced per year
Entry Barrier: Low to Medium
Related Organizations: Redwood Research, Anthropic

Quick Assessment

Dimension | Assessment | Evidence
Field Size (2025) | 1,100 FTEs (600 technical, 500 non-technical) | AI Safety Field Growth Analysis 2025↗
Annual Growth Rate | 21-30% since 2020 | Technical: 21% FTE growth; Non-technical: 30%
Total Philanthropic Funding | $110-130M/year (2024) | Overview of AI Safety Funding↗
Training Program Conversion | 37% work full-time in AI safety | BlueDot 2022 Cohort Analysis↗
Cost per Career Change | $5,000-40,000 depending on program | ARENA lower-touch, MATS higher-touch
Key Bottleneck | Talent pipeline over-optimized for researchers | EA Forum analysis↗
Tractability | Medium-High | Programs show measurable outcomes

Overview

Field-building focuses on growing the AI safety ecosystem rather than doing direct research or policy work. The theory is that by increasing the number and quality of people working on AI safety, we multiply the impact of all other interventions.

This is a meta-level or capacity-building intervention: it doesn't directly solve the technical or governance problems, but it creates the infrastructure and talent pipeline that makes solving them possible.

The field has grown substantially: from approximately 400 full-time equivalents (FTEs) in 2022 to roughly 1,100 FTEs in 2025, with technical AI safety FTEs growing at about 21% annually (organizations at 24%) and the non-technical side at approximately 30% annually. However, this growth has created new challenges: the pipeline may be over-optimized for researchers while neglecting operations, policy, and other critical roles.

Theory of Change


Key mechanisms:

  1. Talent pipeline: Train and recruit people into AI safety
  2. Knowledge dissemination: Spread ideas and frameworks
  3. Community building: Create support structures and networks
  4. Funding infrastructure: Direct resources to promising work
  5. Public awareness: Build broader support and understanding

Major Approaches

1. Education and Training Programs

Goal: Teach AI safety concepts and skills to potential contributors.

Training Program Comparison

Program | Format | Duration | Scale | Cost/Participant | Placement Rate | Key Outcomes
MATS↗ | Research mentorship | 3-4 months | 30-50/cohort | ≈$20,000-40,000 | 75% publish results | Alumni at Anthropic, OpenAI, DeepMind; founded Apollo Research, Timaeus
ARENA↗ | In-person bootcamp | 4-5 weeks | 20-30/cohort | ≈$5,000-15,000 | 8 confirmed FT positions (5.0 cohort) | Alumni at Apollo Research, METR, UK AISI
BlueDot Impact↗ | Online cohort-based | 8 weeks | 1,000+/year | ≈$440/student | 37% work FT in AI safety | 6,000+ trained since 2022; 75% completion rate
SPAR↗ | Part-time remote | Varies | 50+/cohort | Low (volunteer mentors) | Research output focused | Connects aspiring researchers with professionals
AI Safety Camp | Project-based | 1-2 weeks | 20-40/camp | Varies | Project completion | Multiple camps globally

Key Programs in Detail:

MATS (ML Alignment & Theory Scholars)↗:

  • Since 2021, has supported 298 scholars and 75 mentors
  • Summer 2024: 1,220 applicants, 3-5% acceptance rate (comparable to MIT admissions)
  • Spring 2024 Extension: 75% of scholars published results; 57% accepted to conferences
  • Notable: Nina Panickssery's paper on steering Llama 2 won Outstanding Paper Award at ACL 2024
  • Alumni include researchers at Anthropic, OpenAI, and Google DeepMind
  • Received $23.6M in Coefficient Giving funding↗ for general support

ARENA (Alignment Research Engineer Accelerator)↗:

  • Runs 2-3 bootcamps per year, each 4-5 weeks, based at LISA in London
  • ARENA 5.0↗: 8 participants confirmed full-time AI safety positions post-program
  • Participants rate exercise enjoyment 8.7/10, LISA location value 9.6/10
  • Alumni quote: "ARENA was the most useful thing that could happen to someone with a mathematical background who wants to enter technical AI safety research"
  • Claims to be among the most cost-effective technical AI safety training programs

BlueDot Impact↗ (formerly AI Safety Fundamentals):

  • Trained 6,000+ professionals worldwide since 2022
  • 2022 cohort analysis↗: 123 alumni (37% of 342) now work full-time on AI safety
  • 20 alumni would not be working on AI safety were it not for the course (counterfactual impact)
  • 75% completion rate (vs. 20% for typical Coursera courses)
  • Raised $34M total funding, including $25M in 2025
  • Alumni at Anthropic, Google DeepMind, UK AI Security Institute

Theory of change: Train people in AI safety β†’ some pursue careers β†’ net increase in research capacity

Effectiveness considerations:

  • High leverage: One good researcher can contribute for decades
  • Measurable conversion: BlueDot shows 37% career conversion; ARENA shows 8+ direct placements per cohort
  • Counterfactual question: BlueDot estimates 20 counterfactual career changes from 2022 cohort
  • Quality vs. quantity: More selective programs (MATS, ARENA) show higher placement rates

Cost Per Career Change Estimates

Training programs vary significantly in their cost-effectiveness at converting participants into AI safety careers. Different program modelsβ€”from high-touch research mentorships to scalable online coursesβ€”represent different trade-offs between cost per participant and career conversion rate.

Expert/Source | Estimate | Reasoning
ARENA (successful cases) | $1,000-15,000 | ARENA represents the lower bound for intensive programs, achieving low direct program costs per successful career change through its efficient 4-5 week bootcamp format. The program's in-person structure at LISA combined with a focused technical curriculum allows for cost-effective training, with ARENA 5.0 placing 8 participants in full-time AI safety positions. The cost includes venue, materials, and instructor time but benefits from concentrated delivery and high placement rates among participants who complete the program.
MATS | $10,000-40,000 | MATS represents a higher-touch research mentorship model with significantly higher costs per career change, reflecting its 3-4 month duration and personalized 1-on-1 mentorship structure. The program's selectivity (3-5% acceptance rate) and focus on research output, with 75% of Spring 2024 scholars publishing results, justifies higher per-participant investment. Costs include mentor compensation, scholar stipends, and program infrastructure, with the model optimized for producing research-ready talent rather than maximizing conversion volume.
BlueDot Impact | $140-2,000 | BlueDot Impact achieves the lowest cost per career change through its scalable online cohort model, training 1,000+ participants annually at approximately $140 per student in direct costs. The 37% career conversion rate from the 2022 cohort (123 of 342 alumni working full-time in AI safety) yields an estimated $1,200-2,000 cost per successful career change when accounting for program overhead. The model sacrifices depth for scale but maintains 75% completion rates, far higher than typical MOOCs, through cohort-based structure and volunteer facilitators.
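
To make these estimates concrete, the sketch below works through the underlying arithmetic using the BlueDot figures quoted on this page (the ≈$440/student cost from the comparison table and the 2022 cohort outcomes). It is a rough illustration, not program-reported accounting; the table's estimates additionally fold in program overhead.

```python
# Illustrative cost-per-career-change arithmetic from figures quoted on this page.
# Rough planning numbers only, not program-reported accounting.

def cost_per_career_change(cost_per_participant: float, conversion_rate: float) -> float:
    """Dollars spent per participant who ends up working full-time in AI safety."""
    return cost_per_participant / conversion_rate

cost_per_student = 440       # approximate BlueDot per-student cost (comparison table above)
raw_conversion = 123 / 342   # ~36% of the 2022 cohort now work full-time in AI safety
counterfactual = 20 / 342    # ~6% report the course was decisive for their career change

print(round(cost_per_career_change(cost_per_student, raw_conversion)))  # ~1,200
print(round(cost_per_career_change(cost_per_student, counterfactual)))  # ~7,500
```

The same formula drives the spread across programs: higher-touch formats cost more per participant but convert a larger share, and switching from raw to counterfactual conversions changes the result several-fold.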

Who's doing this:

  • ARENA (Redwood Research / independent)
  • MATS (independent, Lightcone funding)
  • BlueDot Impact
  • Various university courses and programs

2. Public Communication and Awareness

Goal: Increase general understanding of AI risk and build support for safety efforts.

Approaches:

Popular Media:

  • Podcasts (Lex Fridman, Dwarkesh Patel, 80K Hours)
  • Books (Superintelligence, The Alignment Problem, The Precipice)
  • Documentaries and videos
  • News articles and op-eds
  • Social media presence

High-Level Engagement:

  • Statement on AI Risk (May 2023): Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei signed
    • "Mitigating the risk of extinction from AI should be a global priority"
    • Raised public and elite awareness
  • Expert testimony to governments
  • Academic conferences and workshops
  • Industry events and presentations

Accessible Explanations:

  • Robert Miles YouTube channel
  • AI Safety memes and infographics
  • Explainer articles
  • University lectures and courses

Theory of change: Awareness β†’ political will for governance + cultural shift toward safety + talent recruitment

Effectiveness:

  • Uncertain impact on x-risk: Unclear if awareness translates to action
  • Possible downsides:
    • AI hype and race dynamics
    • Association with less credible narratives
    • Backlash and polarization
  • Possible upsides:
    • Political support for regulation
    • Recruitment to field
    • Cultural shift in labs

Who's doing this:

  • Individual communicators (Miles, Yudkowsky, Christiano, etc.)
  • Organizations (CAIS, FLI)
  • Journalists covering AI
  • Academics doing public engagement

3. Funding and Grantmaking

Goal: Direct resources to high-impact work and people.

AI Safety Funding Landscape (2024)

Funding Source | Amount (2024) | % of Total | Key Recipients
Coefficient Giving | ≈$63.6M | 49% | CAIS ($8.5M), Redwood ($6.2M), MIRI ($4.1M)
Individual Donors (e.g., Jaan Tallinn) | ≈$20M | 15% | Various orgs and researchers
Government Funding | ≈$32.4M | 25% | AI Safety Institutes, university research
Corporate External Investment | ≈$8.2M | 6% | Frontier Model Forum AI Safety Fund
Academic Endowments | ≈$6.8M | 5% | University centers
Total Philanthropic | $110-130M | 100% | -

Source: Overview of AI Safety Funding Situation↗

Note: This excludes internal corporate safety research budgets, estimated at greater than $500M annually across major AI labs. Total ecosystem funding including corporate is approximately $600-650M/year.

Context: Philanthropic funding for climate risk mitigation was approximately $9-15 billion in 2023, roughly 20 times total AI safety funding including corporate budgets, and closer to 100 times philanthropic AI safety funding alone. With over $189 billion invested in AI projected for 2024, safety funding remains a small fraction of one percent of total AI investment.
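
As a quick consistency check, the percentage shares in the table can be recomputed from the approximate dollar amounts above; this is a minimal sketch using rounded figures.

```python
# Recompute funding shares from the approximate 2024 amounts above (rounded figures).
funding_millions = {
    "Coefficient Giving": 63.6,
    "Individual donors": 20.0,
    "Government funding": 32.4,
    "Corporate external investment": 8.2,
    "Academic endowments": 6.8,
}

total = sum(funding_millions.values())  # ~131, consistent with the $110-130M/year estimate
for source, amount in funding_millions.items():
    print(f"{source}: {amount / total:.0%}")

# Scale comparison: ~$650M/year total (including corporate safety budgets) against
# >$189B of projected 2024 AI investment is roughly 0.3%.
print(f"{650 / 189_000:.2%}")  # ~0.34%
```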

Major Funders:

Coefficient Giving↗:

  • Largest AI safety funder (≈$50-65M/year to technical AI safety)
  • 2025 Technical AI Safety RFP↗: Expected to spend ≈$40M over 5 months
  • Key 2024-25 grants: MATS ($23.6M), CAIS ($8.5M), Redwood Research ($6.2M)
  • Self-assessment: "Rate of spending was too slow" in 2024; committed to expanding support
  • Supporting work on AI safety since 2015

AI Safety Fund (Frontier Model Forum)↗:

  • $10M+ collaborative initiative established October 2023
  • Founding members: Anthropic, Google, Microsoft, OpenAI
  • Philanthropic partners: Patrick J. McGovern Foundation, Packard Foundation, Schmidt Sciences, Jaan Tallinn

Survival and Flourishing Fund (SFF):

  • ≈$30-50M/year
  • Broad AI safety focus
  • Supports unconventional projects
  • Smaller grants, more experimental

Effective Altruism Funds (Long-Term Future Fund):

  • ≈$10-20M/year to AI safety
  • Small to medium grants
  • Individual researchers and projects
  • Lower bar for experimental work

Grantmaking Strategies:

Hits-based giving:

  • Accept high failure rate for potential breakthroughs
  • Fund unconventional approaches
  • Support early-stage ideas

Ecosystem development:

  • Fund infrastructure (ARENA, MATS, etc.)
  • Support conferences and gatherings
  • Build community spaces

Diversification:

  • Support multiple approaches
  • Don't cluster too heavily
  • Hedge uncertainty

Theory of change: Capital β†’ enables people and orgs to work on AI safety β†’ research and policy progress

Bottlenecks:

  • More talent than funded roles: Plenty of aspiring researchers but not enough organizations to hire them↗
  • Grantmaker capacity: Coefficient Giving struggled to make qualified senior hires↗ for technical AI safety grantmaking
  • Competition with labs: AI Safety Institutes and external research struggle to compete on compensation with frontier labs

Who should consider this:

  • Program officers at foundations
  • Individual donors with wealth
  • Fund managers
  • Requires: wealth or institutional position + good judgment + network

4. Community Building and Support

Goal: Create infrastructure that supports AI safety work.

Activities:

Gatherings and Conferences:

  • EA Global (AI safety track)
  • AI Safety conferences
  • Workshops and retreats
  • Local meetups
  • Online forums (Alignment Forum, LessWrong, Discord servers)

Career Support:

  • 80,000 Hours career advising
  • Mentorship programs
  • Job boards and hiring pipelines
  • Introductions and networking

Research Infrastructure:

  • Alignment Forum (discussion platform)
  • ArXiv overlays and aggregation
  • Compute access programs
  • Shared datasets and benchmarks

Emotional and Social Support:

  • Community spaces
  • Mental health resources
  • Peer support for difficult work
  • Social events

Theory of change: Supportive community β†’ people stay in field longer β†’ more cumulative impact + better mental health

Challenges:

  • Insularity: Echo chambers and groupthink
  • Barrier to entry: Can feel cliquish to newcomers
  • Time investment: Social events vs. object-level work
  • Ideological narrowness: Lack of diversity in perspectives

Who's doing this:

  • CEA (Centre for Effective Altruism)
  • Local EA groups
  • Lightcone Infrastructure (LessWrong, Alignment Forum)
  • Individual organizers

5. Academic Field Building

Goal: Establish AI safety as legitimate academic field.

University Centers and Programs:

Institution | Center/Program | Focus | Status
UC Berkeley | CHAI↗ (Center for Human-Compatible AI) | Foundational alignment research | Active
Oxford | Future of Humanity Institute | Existential risk research | Closed 2024
MIT | AI Safety Initiative | Technical safety, governance | Growing
Stanford | HAI (Human-Centered AI) | Broad AI policy, some safety | Active
Carnegie Mellon | AI Safety Research | Technical safety | Active
Cambridge | LCFI, CSER | Existential risk, policy | Active

Key Developments (2024-2025):

  • FHI closure at Oxford marks significant shift in academic landscape
  • Growing number of PhD programs with explicit AI safety focus
  • NSF and other agencies beginning to fund safety research specifically
  • Coefficient Giving funding university-based safety research↗ including Ohio State

Academic Incentives:

  • Tenure-track positions in AI safety emerging
  • PhD programs with safety focus
  • Grants for safety research (NSF, etc.)
  • Prestigious publication venues (NeurIPS safety track, ICLR)
  • Academic conferences (AI Safety research conferences)

Challenges:

  • Slow timelines: Academic careers are 5-10 year investments
  • Misaligned incentives: Publish or perish vs. impact
  • Capabilities research: Universities also advance capabilities
  • Brain drain: Best people leave for industry/nonprofits (frontier labs pay 2-5x academic salaries)

Benefits:

  • Legitimacy: Academic credibility helps policy
  • Training: PhD pipeline
  • Long-term research: Can work on harder problems
  • Geographic distribution: Not just SF/Bay Area

Theory of change: Academic legitimacy β†’ more talent + more funding + political influence β†’ field growth


Field Growth Statistics

The AI safety field has grown substantially since 2020, with acceleration around 2023 coinciding with increased public attention following ChatGPT's release.

Field Size Over Time

Year | Technical AI Safety FTEs | Non-Technical AI Safety FTEs | Total FTEs | Organizations
2015 | ≈50 | ≈20 | ≈70 | ≈15
2020 | ≈150 | ≈50 | ≈200 | ≈30
2022 | ≈300 | ≈100 | ≈400 | ≈50
2024 | ≈500 | ≈400 | ≈900 | ≈65
2025 | ≈600-645 | ≈500 | ≈1,100 | ≈70

Source: AI Safety Field Growth Analysis 2025↗

Growth rates:

  • Technical AI safety organizations: 24% annual growth
  • Technical AI safety FTEs: 21% annual growth
  • Non-technical AI safety: approximately 30% annual growth (accelerating since 2023)
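
For reference, the sketch below shows how a compound annual growth rate can be computed from the table above. The result is sensitive to the window chosen, and the headline 21%/30% figures come from the source's own organization-level analysis rather than any single pair of years, so these numbers are only illustrative.

```python
# Compound annual growth rate (CAGR) between two (year, FTE) points from the table above.
# Table values are the source's approximations, so results are only illustrative.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two points in time."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"{cagr(50, 600, 2025 - 2015):.0%}")    # technical FTEs, 2015-2025: ~28%/yr
print(f"{cagr(200, 1100, 2025 - 2020):.0%}")  # total FTEs, 2020-2025: ~41%/yr
```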

Top research areas by FTEs:

  1. Miscellaneous technical safety (scalable oversight, adversarial robustness, jailbreaks)
  2. LLM safety
  3. Interpretability

Methodology note: These estimates may undercount people working on AI safety, since many work at organizations that don't explicitly brand themselves as AI safety organizations, particularly technical safety researchers in academia.


What Needs to Be True

For field-building to be high impact:

  1. Talent is bottleneck: More people actually means more progress (vs. "too many cooks")
  2. Sufficient time: Field-building is multi-year investment; need time before critical period
  3. Quality maintained: Growth doesn't dilute quality or focus
  4. Absorptive capacity: Ecosystem can integrate new people
  5. Right people: Recruiting those with high potential for contribution
  6. Complementarity: New people enable work that wouldn't happen otherwise

Key Bottlenecks and Challenges

The AI safety field faces several structural challenges that limit the effectiveness of field-building efforts:

Pipeline Over-Optimization for Researchers

According to analysis on the EA Forum↗, the AI safety talent pipeline is over-optimized for researchers:

  • The majority of AI safety talent pipelines are optimized for selecting and producing researchers
  • Research is not the most neglected talent type in AI safety
  • This leads to research-specific talent being over-represented in the community
  • Supporting programs strongly select for research skills, missing other crucial roles

Neglected roles: Operations, program management, communications, policy implementation, organizational leadership.

Scaling Gap

There's a massive gap between awareness-level training and the expertise required for selective research fellowships:

  • BlueDot plans to train 100,000 people in AI safety fundamentals over 4.5 years
  • But few programs bridge from introductory courses to elite research fellowships
  • Need scalable programs for the "missing middle"
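
A rough sense of the scale-up those figures imply (illustrative arithmetic only, using the 1,000+/year throughput from the program comparison table):

```python
# Implied scale-up: BlueDot's stated goal of 100,000 people over 4.5 years versus
# the roughly 1,000+/year it currently trains (illustrative arithmetic only).
target_per_year = 100_000 / 4.5   # ~22,000 people/year
current_per_year = 1_000          # "1,000+/year" from the program comparison table

print(round(target_per_year))                     # ~22,222
print(round(target_per_year / current_per_year))  # ~22x implied scale-up
```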

Organizational Infrastructure Deficit

  • Not enough talented founders are building AI safety organizations
  • Catalyze's pilot program↗ incubated 11 organizations, with participants reporting the program accelerated progress by an average of 11 months
  • Open positions often don't exist because organizations haven't been founded

Compensation Competition

AI Safety Institutes and external research struggle to compete with frontier AI companies:

  • Frontier companies offer substantially higher compensation packages
  • AISIs must appeal to researchers' desire for public service and impact
  • Some approaches: joint university appointments, research sabbaticals, rotating fellowships

Risks and Considerations

Dilution Risk

  • Too many people with insufficient expertise
  • "Alignment washing" - superficial engagement
  • Noise drowns out signal

Mitigation: Selective programs, emphasis on quality, mentorship

Information Hazards

  • Publicly discussing AI capabilities could accelerate them
  • Spreading awareness of potential attacks
  • Attracting bad actors

Mitigation: Careful communication, expert judgment on what to share

Race Dynamics

  • Public attention accelerates AI development
  • Creates FOMO (fear of missing out)
  • Geopolitical competition

Mitigation: Frame carefully, emphasize cooperation, private engagement

Community Problems

  • Groupthink and echo chambers
  • Lack of ideological diversity
  • Social dynamics override epistemic rigor
  • Cult-like dynamics

Mitigation: Encourage disagreement, diverse perspectives, epistemic humility

Estimated Impact by Worldview

Long Timelines (10+ years)

Impact: Very High

  • Time for field-building to compound
  • Training pays off over decades
  • Can build robust institutions
  • Best time to invest in human capital

Short Timelines (3-5 years)

Impact: Low-Medium

  • Insufficient time for new people to become experts
  • Better to leverage existing talent
  • Exception: rapid deployment of already-skilled people

Optimism About Field Growth

Impact: High

  • Every good researcher counts
  • Ecosystem effects are strong
  • More perspectives improve solutions

Pessimism About Field Growth

Impact: Low

  • Talent bottleneck is overstated
  • Coordination costs dominate
  • Focus on existing excellent people

Who Should Consider This

Strong fit if you:

  • Enjoy teaching, mentoring, organizing
  • Good at operations and logistics
  • Strong communication skills
  • Can evaluate talent and potential
  • Patient with long timelines
  • Value community and culture

Specific roles:

  • Program manager: Run training programs (ARENA, MATS, etc.)
  • Grantmaker: Evaluate and fund projects
  • Educator: Teach courses, create content
  • Community organizer: Events, spaces, support
  • Communicator: Explain AI safety to various audiences

Backgrounds:

  • Education / pedagogy
  • Program management
  • Operations
  • Communications
  • Community organizing
  • Content creation

Entry paths:

  • Staff role at training program
  • Local group organizer β†’ full-time
  • Teaching assistant β†’ program lead
  • Communications role
  • Grantmaking entry programs

Weaker fit if you:

  • Prefer direct object-level work
  • Impatient with meta-level interventions
  • Don't enjoy working with people
  • Want immediate measurable impact

Key Organizations

Training Programs

  • ARENA (Redwood / independent)
  • MATS (independent)
  • BlueDot Impact (running AGI Safety Fundamentals)
  • AI Safety Camp

Community Organizations

  • Centre for Effective Altruism (CEA)
    • EAG conferences
    • University group support
    • Community health
  • Lightcone Infrastructure
    • LessWrong, Alignment Forum
    • Conferences and events
    • Office spaces

Funding Organizations

  • Coefficient Giving (largest funder)
  • Survival and Flourishing Fund
  • EA Funds - Long-Term Future Fund
  • Founders Pledge

Academic Centers

  • CHAI (UC Berkeley)
  • Various university groups

Communication

  • Individual content creators
  • Center for AI Safety (CAIS) (public advocacy)
  • Journalists and media

Career Considerations

Pros

  • Leveraged impact: Enable many others
  • People-focused: Work with smart, motivated people
  • Varied work: Teaching, organizing, strategy
  • Lower barrier: Don't need research-level technical skills
  • Rewarding: See people grow and succeed

Cons

  • Hard to measure: Impact is indirect and delayed
  • Meta-level: One step removed from object-level problem
  • Uncertain: May not produce expected talent
  • Community dependent: Success depends on others
  • Burnout risk: Emotionally demanding

Compensation

  • Program staff: $10-100K
  • Directors: $100-150K
  • Grantmakers: $80-150K
  • Community organizers: $40-80K (often part-time)

Note: Field-building often pays less than technical research but more than pure volunteering

Skills Development

  • Program management
  • Teaching and mentoring
  • Evaluation and judgment
  • Operations
  • Communication

Complementary Interventions

Field-building enables and amplifies:

  • Technical research: Creates researcher pipeline
  • Governance: Trains policy experts
  • Corporate influence: Provides talent to labs
  • All interventions: Increases capacity across the board

Open Questions

Key Questions

  • Is AI safety talent-constrained or idea-constrained?
    • Talent-constrained: We have more ideas than people to execute them; good researchers are the bottleneck, so field-building is critical. → Invest heavily in training and recruitment. (Confidence: medium)
    • Idea-constrained: We don't know what to work on; more people without better ideas doesn't help, and conceptual breakthroughs are needed first. → Focus on research, not growth; be selective about field-building. (Confidence: medium)
  • Should we prioritize growth or quality in field-building?
    • Growth (quantity has a quality of its own): A bigger field attracts more talent, resources, and attention, and we can't predict who will contribute most, so take an inclusive approach. → Lower barriers, scale programs, recruit broadly. (Confidence: low)
    • Quality (excellence is rare and crucial): One excellent researcher may be worth a hundred mediocre ones, dilution risks are real, and selectivity maintains standards. → Run highly selective, mentorship-heavy programs focused on top talent. (Confidence: medium)

Getting Started

If you want to contribute to field-building:

  1. Understand the field first:

    • Learn AI safety yourself
    • Engage with community
    • Understand current state
  2. Identify your niche:

    • Teaching? → Develop curriculum, TA for programs
    • Organizing? → Start local group, help with events
    • Funding? → Learn grantmaking, advise donors
    • Communication? → Write, make videos, explain concepts
  3. Start small:

    • Volunteer for existing programs
    • Organize local reading group
    • Create content
    • Help with events
  4. Build track record:

    • Demonstrate impact
    • Get feedback
    • Iterate and improve
  5. Scale up:

    • Apply for staff roles
    • Launch new programs
    • Seek funding for initiatives

Resources:

  • CEA community-building resources
  • 80,000 Hours on field-building
  • Alignment Forum posts on field growth
  • MATS/ARENA/BlueDot as examples

Sources & Further Reading

Field Growth and Statistics

  • AI Safety Field Growth Analysis 2025↗ – Comprehensive dataset of technical and non-technical AI safety organizations and FTEs
  • AI Safety Field Growth Analysis 2025 (LessWrong)↗ – Cross-post with additional discussion

Funding

  • Overview of AI Safety Funding Situation↗ – Source for the 2024 funding landscape table above

Training Programs

  • MATS Program↗ – ML Alignment & Theory Scholars official site
  • MATS Spring 2024 Extension Retrospective↗ – Detailed outcomes data
  • ARENA 5.0 Impact Report↗ – Program outcomes and effectiveness
  • ARENA 4.0 Impact Report↗ – Earlier cohort data
  • BlueDot Impact: 2022 AI Alignment Course Impact↗ – Detailed analysis showing 37% career conversion

Talent Pipeline

  • EA Forum analysis of the AI safety talent pipeline↗ – Argues the pipeline is over-optimized for researchers (cited above)

Industry Assessment

  • FLI AI Safety Index 2024↗ – Assessment of AI company safety practices
  • AI Safety Index Winter 2025↗ – Updated industry assessment
  • CAIS 2024 Impact Report↗ – Center for AI Safety annual report

International Coordination

  • International AI Safety Report 2025↗ – Report by 96 AI experts on the global safety landscape
  • The Global Landscape of AI Safety Institutes↗ – Overview of government AI safety efforts

AI Transition Model Context

Field-building improves several parameters in the AI Transition Model:

Factor | Parameter | Impact
Misalignment Potential | Safety-Capability Gap | Grew field from 400 to 1,100 FTEs (2022-2025) at 21-30% annually
Misalignment Potential | Alignment Robustness | Training programs achieve 37% career conversion at $1K-40K per career change
Civilizational Competence | Institutional Quality | Builds capacity across labs, government, and advocacy organizations

Key bottleneck is talent pipeline over-optimization for researchers; the field needs more governance, policy, and operations professionals.

Related Pages

Top Related Pages

Labs

Center for AI Safety

Models

Capabilities-to-Safety Pipeline Model, AI Safety Researcher Gap Model

Concepts

Coefficient Giving, Safety-Capability Gap, AI Transition Model, Misalignment Potential, Institutional Quality, Civilizational Competence

Transition Model

Safety Research

Key Debates

Technical AI Safety Research

Organizations

MATS (ML Alignment Theory Scholars program), UK AI Safety Institute