Longterm Wiki

Center for AI Risk Management & Alignment (CARMA)

Safety Organization

AI safety organization incubated by the Future of Life Foundation (FLF) and led by Richard Mallah, who served as FLI's Principal AI Safety Strategist starting in 2014. Its focus areas include risk assessment, policy strategy, and technical safety. CARMA is based in Calabasas, CA, and fiscally sponsored by Social & Environmental Entrepreneurs, Inc. Anthony Aguirre has described it as "the one thing FLF has sort of fully launched." The team numbers 12+ people; advisors include Anthony Aguirre and Eric Drexler.

Related Wiki Pages

Top Related Pages

Approaches

AI Alignment

Analysis

Alignment Robustness Trajectory Model

Key Debates

Why Alignment Might Be Hard

AI Alignment Research Agendas

Other

Value Learning

RLHF

Risks

Epistemic Sycophancy

Historical

Anthropic-Pentagon Standoff (2026)