Longterm Wiki

An AI interpretability research lab developing tools to decode and control neural network internals for safer AI systems.

Key Metrics

Funding Rounds

$57M total raised across two rounds: Seed (Jun 2024): $7M; Series A (Apr 2025): $50M.

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

AI Alignment, AI Output Filtering

Policy

EU AI Act

Organizations

Google DeepMind, Apollo Research, Seldon Lab, Elicit (AI Research Tool), Apart Research

Other

Interpretability, Mechanistic Interpretability, RLHF, Neel Nanda, Zvi Mowshowitz, Max Tegmark

Risks

Scheming, Sycophancy

Concepts

Safety Orgs Overview

Key Debates

AI Alignment Research Agendas