Longterm Wiki

AI safety technical research | Career review | 80,000 Hours

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: 80,000 Hours

A practical career-guidance resource from 80,000 Hours aimed at individuals considering entering AI safety research; useful for understanding field entry points and skill requirements rather than technical research content itself.

Metadata

Importance: 62/100 | blog post | educational

Summary

80,000 Hours provides a comprehensive career guide for technical AI safety research, covering empirical and theoretical paths, entry requirements, key organizations, and strategies for building relevant skills. It frames AI safety research as one of the highest-impact career choices given existential risks from advanced AI systems.

Key Points

  • Distinguishes two main paths: empirical AI safety (ML-focused, hands-on experimentation) and theoretical AI safety (mathematics, formal methods, alignment theory).
  • Strong quantitative background and programming skills are essential; top ML PhD candidates or strong software engineers are best positioned to enter the field.
  • Field is highly neglected relative to its importance, creating strong demand for skilled researchers, though entry-level positions remain hard to find.
  • Covers practical entry routes including PhD considerations, direct hiring at key organizations (e.g., OpenAI, Anthropic, DeepMind, MIRI), and self-study resources.
  • Acknowledges downsides: shortage of mentors/managers, risk of inadvertently accelerating capabilities, and no guaranteed solutions to alignment problems.

Review

The source document offers an in-depth exploration of technical AI safety research as a high-impact career path. It emphasizes the pressing need to develop technical solutions that can prevent AI systems from engaging in potentially harmful behaviors, particularly as AI capabilities rapidly advance. The field is characterized by its interdisciplinary nature, requiring strong quantitative skills, programming expertise, and a deep understanding of machine learning and safety techniques. It highlights multiple approaches to AI safety, including scalable learning from human feedback, threat modeling, interpretability research, and cooperative AI development. While acknowledging the field's significant challenges and uncertainties, the document maintains an optimistic stance that technical research can meaningfully reduce existential risks. Key recommendations include building strong mathematical and programming foundations, gaining practical research experience, and remaining adaptable in a quickly evolving domain.

Cited by 4 pages

Cached Content Preview

HTTP 200 | Fetched Mar 7, 2026 | 60 KB
AI safety technical research | Career review | 80,000 Hours

On this page:

 Introduction
 1 Why AI safety technical research is high impact
   1.1 Want to learn more about risks from AI? Read the problem profile.
 2 What does this path involve?
   2.1 What does work in the empirical AI safety path involve?
   2.2 What does work in the theoretical AI safety path involve?
   2.3 Some exciting approaches to AI safety
 3 What are the downsides of this career path?
 4 How much do AI safety technical researchers earn?
 5 Examples of people pursuing this path
 6 How to predict your fit in advance
 7 How to enter
   7.1 Learning the basics
   7.2 Should you do a PhD?
   7.3 Getting a job in empirical AI safety research
   7.4 Getting a job in theoretical AI safety research
   7.5 Key organisations
 8 Want one-on-one advice on pursuing this path?
 9 Find a job in this path
 10 Learn more about AI safety technical research
   10.1 Top recommendations
   10.2 Further recommendations
 
 Progress in AI — while it could be hugely beneficial — comes with significant risks. Risks that we’ve argued could be existential.

 But these risks can be tackled.

 With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.

 This article explains how you can help.
 
 
 In a nutshell: Artificial intelligence will have transformative effects on society over the coming decades, and could bring huge benefits — but we also think there’s a substantial risk. One promising way to reduce the chances of an AI-related catastrophe is to find technical solutions that could allow us to prevent AI systems from carrying out dangerous behaviour.

 Pros

 Opportunity to make a significant contribution to a hugely important area of research
 Intellectually challenging and interesting work
 The area has a strong need for skilled researchers and engineers, and is highly neglected overall
 Cons

 Due to a shortage of managers, it’s difficult to get jobs and might take you some time to

... (truncated, 60 KB total)
Resource ID: 6c3ba43830cda3c5 | Stable ID: ZmEyZjhlNz