Alignment Forum - 2021 AI Alignment Literature Review
Blog Author
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Alignment Forum
Part of an annual tradition on the Alignment Forum, this 2021 edition is a go-to reference for understanding the organizational landscape of AI safety research, widely used by newcomers to the field and by donors deciding where to direct resources.
Metadata
Summary
A comprehensive annual review of the AI alignment research landscape, surveying major organizations (FHI, MIRI, Anthropic, DeepMind, OpenAI, ARC, and others), their research approaches, and their contributions to AI safety. The document also serves as a comparative guide for donors evaluating AI safety charities and as an entry point for readers new to AI as an existential risk.
Key Points
- Surveys 15+ major AI safety research organizations, including FHI, MIRI, CHAI, Anthropic, DeepMind, OpenAI, ARC, Redwood Research, and Ought.
- Provides comparative analysis of organizational approaches, output quality, and funding needs to guide effective-altruist charitable giving decisions.
- Includes an introductory section for readers new to AI as an existential risk.
- Covers both technical alignment research organizations and adjacent groups focused on governance, policy, and global catastrophic risk.
- Represents a snapshot of the 2021 AI safety ecosystem, useful for understanding how the field was structured and prioritized at that time.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Elicit (AI Research Tool) | Organization | 63.0 |
Cached Content Preview
[2021 AI Alignment Literature Review and Charity Comparison](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#)
87 min read
- [Introduction](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#Introduction)
- [How to read this document](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#How_to_read_this_document)
- [New to Artificial Intelligence as an existential risk?](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#New_to_Artificial_Intelligence_as_an_existential_risk_)
- [Conflict of Interest](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#Conflict_of_Interest)
- [Research Organisations](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#Research_Organisations)
- [FHI: The Future of Humanity Institute](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#FHI__The_Future_of_Humanity_Institute)
- [GovAI: The Center for the Governance of AI](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#GovAI__The_Center_for_the_Governance_of_AI)
- [CHAI: The Center for Human-Compatible AI](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#CHAI__The_Center_for_Human_Compatible_AI)
- [MIRI: The Machine Intelligence Research Institute](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#MIRI__The_Machine_Intelligence_Research_Institute)
- [GCRI: The Global Catastrophic Risks Institute](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#GCRI__The_Global_Catastrophic_Risks_Institute)
- [CSER: The Center for the Study of Existential Risk](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#CSER__The_Center_for_the_Study_of_Existential_Risk)
- [OpenAI](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#OpenAI)
- [Google Deepmind](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#Google_Deepmind)
- [Anthropic](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#Anthropic)
- [ARC: Alignment Research Center](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison#ARC__Alignment_Research_Center)
- [Redwood Research](https://www.alignmentforum.org
... (truncated, 98 KB total)