Longterm Wiki

A nonprofit organization investigating the offensive capabilities and controllability of frontier AI models through empirical research on autonomous hacking, shutdown resistance, and agentic misalignment.

Related Wiki Pages

Top Related Pages

Approaches

Capability Elicitation

Policy

Voluntary AI Safety Commitments

Other

Jaan Tallinn
Elon Musk
Benjamin Weinstein-Raun
Red Teaming

Organizations

Redwood Research
Center for Applied Rationality
OpenAI
AI Impacts
Lionheart Ventures
Machine Intelligence Research Institute

Key Debates

Why Alignment Might Be Hard
Technical AI Safety Research

Concepts

Agentic AI
Safety Orgs Overview

Risks

Power-Seeking AI
AI Proliferation
AI Disinformation