Longterm Wiki

Apollo Research is an AI safety research organization founded in 2023 that focuses on one of the most concerning potential failure modes of advanced AI systems: deceptive alignment and scheming behavior.

Facts

General
Website: https://www.apolloresearch.ai

Divisions


Related Wiki Pages

Top Related Pages

Approaches

AI Safety Cases
Evaluation Awareness
Scalable Eval Approaches
Third-Party Model Auditing

Analysis

AI Safety Intervention Effectiveness Matrix
AI Risk Interaction Network Model

Policy

Voluntary AI Safety Commitments

Risks

AI Capability Sandbagging

Concepts

Situational Awareness
Existential Risk from AI
Large Language Models
Persuasion and Social Manipulation

Other

AI Evaluations
Red Teaming
Jaan Tallinn
Kamal Ndousse

Organizations

Alignment Research Center
Anthropic

Key Debates

AI Accident Risk Cruxes
Technical AI Safety Research