Longterm Wiki

Stanford AI Alignment - What is AI Safety

web

This is a landing page from Stanford's student AI alignment group, best suited as an introductory gateway for newcomers exploring AI safety concepts and career paths rather than as a primary technical reference.

Metadata

Importance: 35/100 | Tags: homepage, educational

Summary

An introductory page from Stanford AI Alignment that defines AI alignment as a research field focused on ensuring advanced AI benefits humanity and avoids catastrophic failures. It curates starter resources for learning about AI safety and guides students toward high-impact careers in technical AI safety research and policy.

Key Points

  • Defines AI alignment broadly as research addressing how to ensure advanced AI benefits humanity and prevents catastrophic failures.
  • Curates accessible starter resources including articles, videos, and syllabi from Vox, 80,000 Hours, Future of Life Institute, and OpenAI.
  • Highlights career pathways including technical AI safety research and AI policy/strategy roles.
  • Links to AGI Safety Fundamentals Curriculum by Richard Ngo as a structured learning pathway.
  • Serves as an entry point for university students considering high-impact careers in AI safety.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Elicit (AI Research Tool) | Organization | 63.0 |

Cached Content Preview

HTTP 200 | Fetched Mar 20, 2026 | 2 KB

## **What is AI Alignment?**

As an emerging field, AI Alignment has many definitions. Broadly, AI Alignment is a research field aimed at tackling two questions: "How do we ensure the development of advanced artificial intelligence benefits humanity?" and "How do we avoid catastrophic failures while building advanced AI systems?"

### **Starter resources**

- Short Article: [The case for taking AI seriously as a threat to humanity](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) by Kelsey Piper, Vox

- Article: [Preventing an AI-related catastrophe](https://80000hours.org/problem-profiles/artificial-intelligence/) by Ben Hilton, 80,000 Hours

- Video: [Intro to AI Safety](https://www.youtube.com/watch?v=pYXy-A4siMw) by Rob Miles

- Report: [Benefits & Risks of Artificial Intelligence](https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/) by Ariel Conn, Future of Life Institute

- Syllabus: [AGI Safety Fundamentals Curriculum](https://www.eacambridge.org/agi-safety-fundamentals) by Richard Ngo, OpenAI

- More: [Lots of Links](https://www.aisafetysupport.org/resources/lots-of-links) from AI Safety Support


## **How can I work on this problem?**

A career in AI Alignment may be the most impactful way to spend your working hours. As a university group, much of our focus is on preparing students for such pursuits.

### **Starter resources**

- Career guide: [Guide to working in AI policy and strategy](https://80000hours.org/articles/ai-policy-guide/) from 80,000 Hours

- Career guide: [Guide to pursuing a career in technical AI safety research](https://80000hours.org/career-reviews/artificial-intelligence-risk-research/) from 80,000 Hours

- Career guide: [Your biggest opportunity to make a difference: our guide to what makes for a high-impact career](https://80000hours.org/key-ideas/) by 80,000 Hours

- More: [Lots of Links](https://www.aisafetysupport.org/resources/lots-of-links) from AI Safety Support


Resource ID: 339a3454d9ae31cc | Stable ID: N2NmMGQ2Nm