Longterm Wiki

Is Interpretability Sufficient for Safety?

interpretability-sufficient (E176)
Path: /knowledge-base/debates/interpretability-sufficient/
Page Metadata
{
  "id": "interpretability-sufficient",
  "numericId": null,
  "path": "/knowledge-base/debates/interpretability-sufficient/",
  "filePath": "knowledge-base/debates/interpretability-sufficient.mdx",
  "title": "Is Interpretability Sufficient for Safety?",
  "quality": 49,
  "importance": 62,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2025-12-28",
  "llmSummary": "Comprehensive survey of the interpretability sufficiency debate with 2024-2025 empirical progress: Anthropic extracted 34M features from Claude 3 Sonnet (70% interpretable), but scaling requires billions of features and faces fundamental challenges (10x performance loss, deception detection unsolved). Emerging consensus favors hybrid approaches combining interpretability verification with behavioral methods like RLHF rather than interpretability alone.",
  "structuredSummary": null,
  "description": "Debate over whether mechanistic interpretability can ensure AI safety. Anthropic's 2024 research extracted 34 million features from Claude 3 Sonnet with 70% human-interpretable, but scaling to frontier models (trillions of parameters) and detecting sophisticated deception remain unsolved challenges.",
  "ratings": {
    "novelty": 3.5,
    "rigor": 5.5,
    "actionability": 4,
    "completeness": 6.5
  },
  "category": "debates",
  "subcategory": null,
  "clusters": [
    "ai-safety"
  ],
  "metrics": {
    "wordCount": 2034,
    "tableCount": 6,
    "diagramCount": 1,
    "internalLinks": 21,
    "externalLinks": 1,
    "footnoteCount": 0,
    "bulletRatio": 0.28,
    "sectionCount": 23,
    "hasOverview": false,
    "structuralScore": 11
  },
  "suggestedQuality": 73,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 2034,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 16,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 20,
    "similarPages": [
      {
        "id": "interpretability",
        "title": "Mechanistic Interpretability",
        "path": "/knowledge-base/responses/interpretability/",
        "similarity": 20
      },
      {
        "id": "mech-interp",
        "title": "Mechanistic Interpretability",
        "path": "/knowledge-base/responses/mech-interp/",
        "similarity": 19
      },
      {
        "id": "mesa-optimization",
        "title": "Mesa-Optimization",
        "path": "/knowledge-base/risks/mesa-optimization/",
        "similarity": 18
      },
      {
        "id": "probing",
        "title": "Probing / Linear Probes",
        "path": "/knowledge-base/responses/probing/",
        "similarity": 17
      },
      {
        "id": "sleeper-agent-detection",
        "title": "Sleeper Agent Detection",
        "path": "/knowledge-base/responses/sleeper-agent-detection/",
        "similarity": 17
      }
    ]
  }
}
Entity Data
{
  "id": "interpretability-sufficient",
  "type": "crux",
  "title": "Is Interpretability Sufficient for Safety?",
  "description": "Debate over whether mechanistic interpretability can ensure AI safety.",
  "tags": [
    "debate",
    "interpretability",
    "safety-research"
  ],
  "relatedEntries": [],
  "sources": [],
  "lastUpdated": "2025-01",
  "customFields": [
    {
      "label": "Question",
      "value": "Is mechanistic interpretability sufficient to ensure AI safety?"
    },
    {
      "label": "Stakes",
      "value": "Determines priority of interpretability vs other safety research"
    },
    {
      "label": "Current Progress",
      "value": "Can interpret some circuits/features, far from full transparency"
    }
  ]
}
Canonical Facts (0)

No facts for this entity

External Links
{
  "lesswrong": "https://www.lesswrong.com/tag/interpretability-ml-and-ai"
}
Backlinks (0)

No backlinks

Frontmatter
{
  "title": "Is Interpretability Sufficient for Safety?",
  "description": "Debate over whether mechanistic interpretability can ensure AI safety. Anthropic's 2024 research extracted 34 million features from Claude 3 Sonnet with 70% human-interpretable, but scaling to frontier models (trillions of parameters) and detecting sophisticated deception remain unsolved challenges.",
  "sidebar": {
    "order": 6
  },
  "importance": 62.5,
  "quality": 49,
  "lastEdited": "2025-12-28",
  "update_frequency": 45,
  "llmSummary": "Comprehensive survey of the interpretability sufficiency debate with 2024-2025 empirical progress: Anthropic extracted 34M features from Claude 3 Sonnet (70% interpretable), but scaling requires billions of features and faces fundamental challenges (10x performance loss, deception detection unsolved). Emerging consensus favors hybrid approaches combining interpretability verification with behavioral methods like RLHF rather than interpretability alone.",
  "ratings": {
    "novelty": 3.5,
    "rigor": 5.5,
    "actionability": 4,
    "completeness": 6.5
  },
  "clusters": [
    "ai-safety"
  ]
}
Raw MDX Source
---
title: "Is Interpretability Sufficient for Safety?"
description: "Debate over whether mechanistic interpretability can ensure AI safety. Anthropic's 2024 research extracted 34 million features from Claude 3 Sonnet with 70% human-interpretable, but scaling to frontier models (trillions of parameters) and detecting sophisticated deception remain unsolved challenges."
sidebar:
  order: 6
importance: 62.5
quality: 49
lastEdited: "2025-12-28"
update_frequency: 45
llmSummary: "Comprehensive survey of the interpretability sufficiency debate with 2024-2025 empirical progress: Anthropic extracted 34M features from Claude 3 Sonnet (70% interpretable), but scaling requires billions of features and faces fundamental challenges (10x performance loss, deception detection unsolved). Emerging consensus favors hybrid approaches combining interpretability verification with behavioral methods like RLHF rather than interpretability alone."
ratings:
  novelty: 3.5
  rigor: 5.5
  actionability: 4
  completeness: 6.5
clusters: ["ai-safety"]
---
import {DisagreementMap, InfoBox, KeyQuestions, Mermaid, R, EntityLink, DataExternalLinks} from '@components/wiki';



## Key Links


<DataExternalLinks pageId="interpretability-sufficient" />

<InfoBox
  type="crux"
  title="Interpretability for Safety"
  customFields={[
    { label: "Question", value: "Is mechanistic interpretability sufficient to ensure AI safety?" },
    { label: "Stakes", value: "Determines priority of interpretability vs other safety research" },
    { label: "Current Progress", value: "Can interpret some circuits/features, far from full transparency" },
  ]}
/>

**<EntityLink id="E174">Mechanistic interpretability</EntityLink>** aims to reverse-engineer neural networks—understand what's happening inside the "black box." If successful, we could verify AI systems are safe by inspecting their internal workings. But is this approach sufficient for safety?

## What is Interpretability?

**The Goal**: Understand neural network internals well enough to:
- Identify what features/concepts models have learned
- Trace how inputs lead to outputs through the network
- Detect problematic reasoning or goals
- Verify alignment and absence of deception
- Predict behavior in novel situations

**Current Capabilities**:
- Can identify some individual neurons/circuits (curve detectors, induction heads)
- Can visualize attention patterns
- Can extract some high-level features
- **Cannot** fully explain large model behavior

**Organizations Leading Work**: <EntityLink id="E22">Anthropic</EntityLink> (mechanistic interpretability team), <EntityLink id="E218">OpenAI</EntityLink> (interpretability research), <EntityLink id="E98">Google DeepMind</EntityLink>, independent researchers

## State of the Field (2024-2025)

Mechanistic interpretability has advanced significantly, with major labs investing substantially in understanding neural network internals. However, the gap between current capabilities and what's needed for safety remains large.

### Research Progress by Organization

| Organization | Key Achievement (2024-2025) | Scale Reached | Features Identified | Interpretability Rate |
|-------------|---------------------------|---------------|--------------------|-----------------------|
| Anthropic | <R id="e724db341d6e0065">Scaling Monosemanticity</R> | Claude 3 Sonnet (production model) | 34 million features | 70% human-interpretable |
| OpenAI | <R id="f7b06d857b564d78">Extracting Concepts from GPT-4</R> | GPT-4 | 16 million features | Many still difficult to interpret |
| DeepMind | <R id="a31c49bf9c1df71f">Gemma Scope</R> | Gemma 2 (2B-9B parameters) | Hundreds of SAEs released | Open for research |
| MIT (MAIA) | <R id="6490bfa2b3094be7">Automated interpretability agent</R> | Vision-language models | Automated discovery | Reduces labor bottleneck |

### Scalability Challenge

Current sparse autoencoder methods face a fundamental trade-off: passing GPT-4's activations through sparse autoencoders results in performance equivalent to a model trained with roughly 10x less compute. To fully map the concepts in frontier LLMs, researchers may need to scale to **billions or trillions of features**—far beyond current methods.

| Model | Parameters | Features Currently Extracted | Estimated Features Needed | Gap |
|-------|-----------|------------------------------|--------------------------|-----|
| GPT-2 | 1.5B | 307,200 neurons analyzed | ≈1-10M | Partially addressed |
| Claude 3 Sonnet | ≈70B (est.) | 34M features | ≈100M-1B | Large gap |
| GPT-4 | ≈1.7T (reported) | 16M features | ≈1B-10B | Very large gap |
| Frontier models (2025+) | 10T+ | Unknown | ≈10B+ | Fundamental challenge |
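
To make the trade-off described above concrete, here is a minimal sketch of the sparse-autoencoder setup these feature counts come from, assuming a PyTorch environment. The dimensions, the `l1_coeff` value, and the toy usage are illustrative assumptions, not details of any lab's actual training run.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: maps model activations into a wider, sparse feature space and back."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activations -> feature space
        self.decoder = nn.Linear(n_features, d_model)  # features -> reconstructed activations

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))      # non-negative, mostly-zero feature activations
        recon = self.decoder(features)
        return recon, features

def sae_loss(acts, recon, features, l1_coeff: float = 1e-3):
    # Reconstruction error: the "performance lost" when the model's activations
    # are replaced by their SAE reconstruction.
    recon_loss = (recon - acts).pow(2).mean()
    # L1 penalty: pushes most features toward zero, trading fidelity for interpretability.
    sparsity_loss = features.abs().mean()
    return recon_loss + l1_coeff * sparsity_loss

# Toy usage: a 4096-dim residual stream and 34k features (real runs use millions or more).
sae = SparseAutoencoder(d_model=4096, n_features=34_000)
acts = torch.randn(8, 4096)
recon, feats = sae(acts)
loss = sae_loss(acts, recon, feats)
```

The scalability gap in the table is exactly this reconstruction gap: wider, sparser dictionaries yield more interpretable features but cost more to train and still lose fidelity on frontier-scale activations.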

### Field Size and Growth

As of April 2024, mechanistic interpretability remains a small field with approximately **50 full-time positions** globally. Anthropic's interpretability team had 17 people at the time, representing about a third of global capacity. The team grew by 10 people in 2023 and continued expanding into 2024-2025.

<Mermaid chart={`
flowchart TD
    subgraph CURRENT["Current Capabilities (2024-2025)"]
        A[Sparse Autoencoders] --> B[Feature Extraction]
        B --> C[34M features from Claude 3 Sonnet]
        C --> D[70% human-interpretable]
    end

    subgraph GAPS["Key Gaps"]
        E[Neural Network Dark Matter] --> F[Rare features not yet extractable]
        G[Automation Bottleneck] --> H[Manual analysis doesn't scale]
        I[Verification Gap] --> J[Understanding ≠ Safety proof]
    end

    subgraph FUTURE["Requirements for Sufficiency"]
        K[Full feature coverage] --> L[Billions of features]
        M[Deception detection] --> N[Steganography resistance]
        O[Causal verification] --> P[Formal safety proofs]
    end

    D --> E
    D --> G
    D --> I

    style CURRENT fill:#d4edda
    style GAPS fill:#fff3cd
    style FUTURE fill:#f8d7da
`} />

### Safety-Relevant Discoveries

Anthropic's May 2024 research specifically identified features related to safety concerns:

- **Deception-related features**: Patterns that activate when models process or generate deceptive content
- **Sycophancy features**: Activations associated with agreeing with users regardless of accuracy
- **Bias-related features**: Patterns correlating with various forms of bias
- **Dangerous content features**: Activations for harmful information requests

This demonstrates that safety-relevant concepts are discoverable through interpretability. The question is whether all such features can be identified and monitored at scale.
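
As a purely hypothetical illustration of how such features might be used operationally, the sketch below flags a response whenever assumed "safety feature" activations exceed a threshold. The feature indices and threshold are invented for illustration and are not values from Anthropic's work.

```python
import torch

# Hypothetical indices of safety-relevant SAE features; invented for illustration.
SAFETY_FEATURES = {
    "deception": 10_423,
    "sycophancy": 28_871,
    "dangerous_content": 5_210,
}
THRESHOLD = 5.0  # assumed activation level worth escalating for human review

def flag_safety_features(feature_acts: torch.Tensor) -> dict:
    """feature_acts: [seq_len, n_features] SAE feature activations for one model response."""
    flags = {}
    for name, idx in SAFETY_FEATURES.items():
        max_act = feature_acts[:, idx].max().item()
        if max_act > THRESHOLD:
            flags[name] = max_act  # record the strongest activation for review
    return flags
```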

## The Interpretability Landscape

<DisagreementMap
  title="Positions on Interpretability for Safety"
  description="Range of views on interpretability's role in AI safety"
  positions={[
    {
      name: "Chris Olah (Anthropic)",
      stance: "strong-optimist",
      confidence: "high",
      reasoning: "Pioneer of mechanistic interpretability. Believes we can and must understand neural networks to ensure safety.",
      evidence: ["Circuits work", "Anthropic research"],
      quote: "Understanding neural networks is not just possible but necessary"
    },
    {
      name: "Anthropic Interpretability Team",
      stance: "optimist",
      confidence: "medium",
      reasoning: "Major investment in mechanistic interpretability. Demonstrating concrete progress.",
      evidence: ["Dictionary learning", "Feature visualization"],
      quote: "Making steady progress toward understanding"
    },
    {
      name: "Paul Christiano",
      stance: "cautious-optimist",
      confidence: "medium",
      reasoning: "Values interpretability but focuses on scalable oversight. Sees interpretability as one tool.",
      evidence: ["Research focus", "Public writing"],
      quote: "Interpretability helps but isn't the whole story"
    },
    {
      name: "Eliezer Yudkowsky",
      stance: "pessimist",
      confidence: "high",
      reasoning: "Skeptical we can interpret superintelligence. Believes deception and complexity will defeat interpretability.",
      evidence: ["AGI Ruin arguments"],
      quote: "You can't understand something smarter than you"
    },
    {
      name: "Many Empirical ML Researchers",
      stance: "skeptical",
      confidence: "medium",
      reasoning: "Doubtful that full interpretability is achievable. Prefer behavioral approaches.",
      evidence: ["Focus on RLHF, red-teaming, etc."],
      quote: "Black boxes work, don't need to open them"
    },
    {
      name: "Neel Nanda (Independent)",
      stance: "optimist",
      confidence: "medium",
      reasoning: "Working on interpretability tools and education. Believes progress is accelerating.",
      evidence: ["TransformerLens", "Interpretability tutorials"],
      quote: "Interpretability is tractable and improving rapidly"
    }
  ]}
/>

## What Would Interpretability Need to Achieve?

For interpretability to be sufficient for safety, we'd need to:

**1. Understand all safety-relevant features**
- Identify internal goals and values
- Detect instrumental goals (power-seeking, deception)
- Understand decision-making processes
- Map world model and beliefs

**2. Verify alignment properties**
- Confirm AI pursues intended goals
- Ensure no hidden objectives
- Check for corrigibility
- Verify honesty and transparency

**3. Predict behavior reliably**
- Understand how AI will behave in novel situations
- Detect failure modes before they occur
- Anticipate emergent capabilities
- Ensure robust generalization

**4. Scale to superintelligence**
- Tools work on arbitrarily capable systems
- Interpretation doesn't require superhuman intelligence
- Can verify systems smarter than humans

**5. Be tamper-proof**
- AI can't deceive interpretability tools
- Can detect obfuscation attempts
- No steganographic hiding of goals

**Current Status**: Partial success on (1) for current systems; (2)-(5) remain largely unsolved.

## Key Cruxes

<KeyQuestions
  questions={[
    {
      question: "Can we interpret cognition more intelligent than our own?",
      positions: [
        {
          position: "No - fundamentally limited by our intelligence",
          confidence: "medium",
          reasoning: "Like a dog can't understand calculus, we can't understand superintelligent reasoning.",
          implications: "Interpretability insufficient for AGI safety"
        },
        {
          position: "Yes - with tools and AI assistance",
          confidence: "low",
          reasoning: "Humans+tools can verify properties without full understanding. AI can help interpret.",
          implications: "Interpretability can scale"
        }
      ]
    },
    {
      question: "Will neural networks remain fundamentally opaque?",
      positions: [
        {
          position: "Yes - too complex to fully understand",
          confidence: "medium",
          reasoning: "Billions of parameters, emergent behavior, non-linear interactions make full interpretation impossible.",
          implications: "Must rely on behavioral safety"
        },
        {
          position: "No - can achieve sufficient understanding",
          confidence: "low",
          reasoning: "Don't need to understand everything, just safety-critical properties. Progress is real.",
          implications: "Interpretability is tractable"
        }
      ]
    },
    {
      question: "Can deceptive AI hide deception from interpretability?",
      positions: [
        {
          position: "Yes - steganography and obfuscation",
          confidence: "medium",
          reasoning: "Sophisticated AI could encode deception in ways that look innocent to interpretation.",
          implications: "Can't fully trust interpretability"
        },
        {
          position: "No - deception leaves traces",
          confidence: "low",
          reasoning: "Hard to completely hide goals. Can detect anomalies and obfuscation.",
          implications: "Interpretability can detect deception"
        }
      ]
    },
    {
      question: "Is interpretability the best use of safety research resources?",
      positions: [
        {
          position: "No - empirical methods more promising",
          confidence: "medium",
          reasoning: "RLHF, adversarial training, oversight show concrete results. Interpretability is speculative.",
          implications: "Deprioritize interpretability"
        },
        {
          position: "Yes - unique and necessary",
          confidence: "medium",
          reasoning: "Only interpretability can verify absence of deception. Worth the investment.",
          implications: "Major interpretability research needed"
        }
      ]
    }
  ]}
/>

## Interpretability vs Other Safety Approaches

How does interpretability compare to alternatives? Recent research suggests that **behavioral methods like RLHF focus on outputs without addressing internal reasoning**, potentially leaving unsafe or deceptive processes intact. Interpretability provides tools for interrogating and modifying internal processes, enabling alignment at the level of the model's reasoning rather than only its outputs.

### Comparative Assessment

| Approach | What It Verifies | Deception Detection | Scalability | Current Maturity | Sufficiency Alone |
|----------|-----------------|---------------------|-------------|-----------------|-------------------|
| **Mechanistic Interpretability** | Internal reasoning, goals | Potentially strong | Unclear - major gap | Research phase | Unlikely |
| **RLHF / Constitutional AI** | Output behavior | Weak - can be gamed | Demonstrated | Production use | No - surface-level |
| **Red-Teaming** | Specific failure modes | Moderate | Labor-intensive | Production use | No - coverage gaps |
| **Scalable Oversight** | Task correctness | Moderate | Scales with AI | Research phase | Unclear |
| **Formal Verification** | Specified properties | Strong (if specified) | Very limited | Theoretical | No - specification gap |
| **AI Control** | Containment | N/A | Unknown | Conceptual | No - capability limits |

### Key Limitations of RLHF (Per Recent Research)

Research on <R id="bf50045e699d0004">AI Alignment through RLHF</R> identifies fundamental vulnerabilities:

1. **Reward hacking**: Models optimize proxy signals in ways diverging from true human preferences
2. **Sycophancy**: Generating plausible-sounding falsehoods to satisfy reward models
3. **Inner misalignment**: Producing aligned outputs while internally pursuing misaligned objectives
4. **Surface-level alignment**: RLHF does not verify whether internal reasoning processes are safe or truthful

**Emerging consensus**: Models tuned with RLHF or Constitutional AI can be examined with activation patching and causal mediation analysis to detect reward hacking, deceptive alignment, or brittle circuits that pass surface tests. This suggests using interpretability as a **verification layer** for behavioral methods, not a replacement.
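
Below is a minimal sketch of the activation-patching idea using plain PyTorch forward hooks; `model` and `layer` are placeholders, and real experiments typically rely on tooling such as TransformerLens and handle tuple outputs, shapes, and metrics more carefully.

```python
import torch

def run_with_cache(model, layer, inputs):
    """Run the model and save one layer's output activations."""
    cache = {}
    def save_hook(module, inp, out):
        cache["acts"] = out.detach()
    handle = layer.register_forward_hook(save_hook)
    with torch.no_grad():
        logits = model(inputs)
    handle.remove()
    return logits, cache["acts"]

def run_with_patch(model, layer, inputs, patched_acts):
    """Run the model with this layer's output replaced by cached activations."""
    def patch_hook(module, inp, out):
        return patched_acts  # returning a value from a forward hook overrides the output
    handle = layer.register_forward_hook(patch_hook)
    with torch.no_grad():
        logits = model(inputs)
    handle.remove()
    return logits

# Usage idea: cache activations from a "clean" prompt, patch them into a run on a
# "corrupted" prompt, and measure how much the output recovers. Large recovery is
# evidence that this layer causally mediates the behavior being studied.
```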

### Approach Comparison Details

**Interpretability**
- **Strengths**: Direct verification, detects deception, principled foundation
- **Weaknesses**: May not scale, technically challenging, currently limited
- **Best for**: Verifying goals and detecting hidden objectives

**Behavioral Testing (Red-Teaming)**
- **Strengths**: Practical, works on black boxes, finding real issues now
- **Weaknesses**: Can't rule out deception, can't cover all cases
- **Best for**: Finding specific failures and adversarial examples

**Scalable Oversight**
- **Strengths**: Can verify complex tasks, scales with AI capability
- **Weaknesses**: Requires powerful overseers, potential for collusion
- **Best for**: Ensuring correct task completion

**Constitutional AI / RLHF**
- **Strengths**: Works empirically, improving behavior on real tasks
- **Weaknesses**: May be superficial, doesn't verify internal alignment
- **Best for**: Training helpful, harmless, honest behavior

**Formal Verification**
- **Strengths**: Mathematical guarantees, rigorous
- **Weaknesses**: Requires formal specification, computationally intensive
- **Best for**: Provable safety properties on specified domains

**AI Control**
- **Strengths**: Doesn't require alignment, just containment
- **Weaknesses**: Requires limiting capabilities, may not work for superintelligence
- **Best for**: Using AI while limiting risk

**Most Likely**: A <R id="eb734fcf5afd57ef">hybrid approach</R> using interpretability as a design principle to shape how models are built and trained, with behavioral methods like RLHF guiding external performance. Whereas behavioral metrics can reward outputs that are merely persuasive, interpretability evaluates causal correctness, providing the epistemic backbone for trustworthy alignment.

## The Sufficiency Question

**Interpretability might be sufficient if:**
- We achieve comprehensive understanding of all safety-relevant features
- Can verify alignment properties with high confidence
- Tools scale to arbitrarily capable systems
- Can detect and prevent deception
- Understanding translates to control

**Interpretability is likely insufficient if:**
- Superintelligent cognition is incomprehensible to humans
- Complexity makes full interpretation intractable
- Deceptive AI can hide from interpretation
- Verification gap remains (understanding ≠ proof)
- Other approaches provide better cost-benefit

**Current Consensus**: Interpretability is valuable but probably needs to be combined with other safety methods. It merits significant investment but should not be the sole focus.

## Interpretability for Different Risk Models

Interpretability's value depends on your risk model:

**If main risk is deceptive alignment:**
- Interpretability is critical—only way to detect deception
- Behavioral tests can't rule out scheming
- High priority

**If main risk is accidents/failures:**
- Interpretability helps but less critical
- Can catch failures via testing
- Medium priority

**If main risk is misuse:**
- Interpretability provides limited help
- More about access control and monitoring
- Low priority

**If main risk is loss of control:**
- Interpretability might not help if AI is already too powerful
- AI control approaches may be more relevant
- Medium priority

## Recent Progress

### Timeline of Key Achievements

| Year | Milestone | Significance | Source |
|------|-----------|--------------|--------|
| 2020 | <R id="ad268b74cee64b6f">Circuits in CNNs</R> | First demonstration of interpretable circuits in vision models | Olah et al. |
| 2021 | <R id="b948d6282416b586">A Mathematical Framework for Transformer Circuits</R> | Formal framework for understanding transformers | Anthropic |
| 2022 | <R id="23e5123e7f8f98e2">Induction Heads</R> | Key mechanism for in-context learning identified | Anthropic |
| 2023 | <R id="0946f0572a487914">Towards Monosemanticity</R> | Sparse autoencoders extract interpretable features from 1-layer transformers | Anthropic |
| 2024 (Jan) | 8-layer SAE features | SAEs work on deeper transformers | Anthropic |
| 2024 (May) | <R id="e724db341d6e0065">Scaling Monosemanticity</R> | **34M features from Claude 3 Sonnet; 70% interpretable** | Anthropic |
| 2024 (Jun) | <R id="f7b06d857b564d78">GPT-4 concepts</R> | 16M features extracted from GPT-4 | OpenAI |
| 2024 (Jul) | <R id="a31c49bf9c1df71f">Gemma Scope</R> | Open SAE suite for Gemma 2 | DeepMind |
| 2024 (Jul) | <R id="6490bfa2b3094be7">MAIA</R> | Automated interpretability agent | MIT |
| 2025 | <R id="a1036bc63472c5fc">Gemma Scope 2</R> | Largest open-source interpretability release (110 PB, 1T+ parameters trained) | DeepMind |

### Specific Capabilities Demonstrated

**Features and Circuits (2020-2023)**
- Curve detectors, text detectors in vision models
- Induction heads in language models
- Specific interpretable circuits

**Dictionary Learning (2023-2024)**
- Sparse autoencoders find interpretable features
- "Monosemantic" features in language models
- Scaling from thousands to millions of features
- Safety-relevant features (deception, sycophancy, bias) identified

**Steering and Editing (2024)**
- Can modify behavior by editing activations
- Identify and amplify/suppress specific features
- Shows features are causally relevant
- "Golden Gate Claude" demonstration of feature steering

### Remaining Challenges

Per <R id="b1d6e7501debf627">Bereska & Gavves (2024)</R>, a comprehensive review of mechanistic interpretability for AI safety:

| Challenge | Current Status | Impact on Sufficiency |
|-----------|---------------|----------------------|
| **Scalability** | SAEs lose ≈10x compute worth of performance on GPT-4 | Cannot verify frontier models |
| **Neural network dark matter** | Vast numbers of rare features not yet extractable | Incomplete coverage |
| **Automation** | Circuit discovery highly labor-intensive | Human analysis bottleneck |
| **Epistemic uncertainty** | Interpretations not formally verified | Understanding may be wrong |
| **Representation mismatch** | Learned features may not map to human concepts | False confidence risk |
| **Deception resistance** | Unknown if sophisticated AI could hide from SAEs | Safety-critical gap |

**Key insight from Chris Olah (July 2024)**: Despite major progress on superposition and scalability, we are "only extracting a small fraction of the features"—a "neural network dark matter" problem. (See <EntityLink id="E175">Interpretability Coverage</EntityLink> for the full quote and analysis.)

## The Argument from Engineering

Counter to "must understand to control":

**Many complex systems work without full understanding:**
- Don't fully understand aerodynamics but planes fly safely
- Don't fully understand biology but medicine works
- Don't fully understand quantum mechanics but chips work

**Engineering vs Science:**
- Science requires understanding mechanisms
- Engineering requires predictable outcomes
- May not need interpretability for safe AI, just reliable testing

**Counter-counter:**
- Those systems aren't optimizing against us
- AI might actively hide problems
- Can't learn from catastrophic AI failures the way aviation learned from plane crashes

## Designing for Interpretability

Rather than interpreting existing black boxes, could design interpretable AI:

**Approaches:**
- Modular architectures with clear functional separation
- Explicit world models and planning
- Symbolic components alongside neural nets
- Sparse networks with fewer parameters
- Constrained architectures

**Tradeoff:**
- More interpretable but potentially less capable
- May sacrifice performance for transparency
- Would fall behind in capability race

**Question**: Is slightly less capable but more interpretable AI safer overall?

## The Meta Question

Interpretability research itself has a philosophical debate:

**Mechanistic Interpretability Camp:**
- Must reverse-engineer the actual circuits and mechanisms
- Scientific understanding of how networks compute
- Principled, rigorous approach

**Pragmatic/Behavioral Camp:**
- Focus on predicting and controlling behavior
- Don't need to understand internals
- Engineering approach

This mirrors the sufficiency debate: Do we need mechanistic interpretability or just pragmatic tools?