80,000 Hours: Updates to Our Research About AI Risk and Careers
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
A 2024 update from 80,000 Hours, a career-guidance organization focused on high-impact careers, revising their guidance on AI risk and how individuals can best contribute to AI safety efforts through career choices.
Metadata
Summary
80,000 Hours updates its research and recommendations regarding AI risk and career paths in AI safety, reflecting evolving views on the urgency and tractability of AI-related existential risks. The post outlines revised thinking on how individuals can best contribute to reducing AI risks through career choices, and adjusts priority areas based on current landscape assessments.
Key Points
- 80,000 Hours revises its assessment of AI risk timelines and the relative importance of different AI safety career paths.
- The update reflects increased concern about near-term AI risks and adjusts recommendations for how people can have the most impact.
- Career paths highlighted include technical AI safety research, AI governance and policy, and field-building roles.
- The post addresses uncertainty in AI development trajectories and how that affects career prioritization decisions.
- Recommendations are updated to account for the rapidly changing AI landscape and growing institutional attention to AI safety.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Safety Field Building and Community | Crux | 0.0 |
Cached Content Preview
Updates to our research about AI risk and careers | 80,000 Hours
On this page:
Introduction
1. We now rank AI governance and policy at the top of our list of impactful career paths
2. New interview about California's AI bill
3. Catastrophic misuse of AI
4. Working at a frontier AI company: opportunities and downsides
5. Emerging approaches in AI governance
Learn more
This week, we’re sharing new updates on:
Top career paths for reducing risks from AI
An AI bill in California that’s getting a lot of attention
The potential for catastrophic misuse of advanced AI
Whether to work at frontier AI companies if you want to reduce catastrophic risks
The variety of approaches in AI governance
Here’s what’s new:
1. We now rank AI governance and policy at the top of our list of impactful career paths
It’s swapped places with AI technical safety research, which is now second.
Here are our reasons for the change:
Many experts in the field have been increasingly excited about “technical AI governance” — people using technical expertise to inform and shape policies. For example, people can develop sophisticated compute governance policies and norms around evaluating increasingly advanced AI models for dangerous capabilities.
We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
It’s become clearer that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
We’re seeing a particularly large increase in the number of roles available in AI governance and policy, and we’re excited to encourage (even) more people to get involved now than before. Governments also appear more poised to take action than they did just a few years ago.
AI governance is still a less developed field than AI safety technical research.
We now see clear efforts from the industry to push back against risk-reducing AI policy, so it’s plausible that more advocacy for sensible approaches is needed.
Good AI governance will be needed to reduce a range of risks from AI — not just misalignment but also catastrophic misuse (discussed below), as well as emerging societal risks, like the potential suffering of digital minds or stable totalitarianism. It’s plausible (though highly uncertain) that these other risks could make up the majority of the potential bad outcomes in worlds with transformative AI.
As AI progress ac
... (truncated, 9 KB total)