Response Pages Style Guide
Response pages describe interventions, policies, and technical approaches that address AI risks. They explain how something works and assess its effectiveness.
Prerequisite: All response pages must follow the Common Writing Principles — epistemic honesty, language neutrality, and analytical tone. The objectivity rating dimension measures this.
Page Type Detection
Response pages are at: /knowledge-base/responses/**/*.mdx
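If tooling needs to detect this page type programmatically, a simple path check like the sketch below is enough. The function name and regex here are illustrative assumptions, not part of the site's actual build code.

```typescript
// Illustrative only: detect whether a file path points at a response page.
// The function name and regex are assumptions, not part of the site's tooling.
function isResponsePage(path: string): boolean {
  // Matches /knowledge-base/responses/**/*.mdx at any nesting depth
  return /\/knowledge-base\/responses\/.+\.mdx$/.test(path);
}

// isResponsePage("src/content/docs/knowledge-base/responses/some-category/some-response.mdx") // true
// isResponsePage("src/content/docs/knowledge-base/risks/some-risk.mdx")                       // false
```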
Required Frontmatter
---
title: "Response Name"
description: "One sentence explaining what this response does and its key mechanism."
quality: 60 # 0-100
importance: 70 # 0-100
lastEdited: "2026-01-28"
---
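For reference, the required fields can be expressed as a small schema. The sketch below mirrors the frontmatter example above; the type and function names are assumptions, and the real validation (if any) may differ.

```typescript
// Illustrative sketch of the required frontmatter fields shown above.
// Type and function names are assumptions; the actual schema may differ.
interface ResponseFrontmatter {
  title: string;
  description: string;   // one sentence: what the response does and its key mechanism
  quality: number;       // 0-100
  importance: number;    // 0-100
  lastEdited: string;    // ISO date, e.g. "2026-01-28"
}

function validateFrontmatter(fm: ResponseFrontmatter): string[] {
  const errors: string[] = [];
  if (!fm.title.trim()) errors.push("title must be non-empty");
  if (!fm.description.trim()) errors.push("description must be non-empty");
  if (fm.quality < 0 || fm.quality > 100) errors.push("quality must be 0-100");
  if (fm.importance < 0 || fm.importance > 100) errors.push("importance must be 0-100");
  if (!/^\d{4}-\d{2}-\d{2}$/.test(fm.lastEdited)) errors.push("lastEdited must be YYYY-MM-DD");
  return errors;
}
```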
Required Sections
1. Overview (2-3 paragraphs)
What is this response and why does it matter?
2. Quick Assessment Table
## Quick Assessment
| Dimension | Rating | Notes |
|-----------|--------|-------|
| Tractability | Medium | Requires significant research investment |
| Scalability | High | Applies to most foundation models |
| Current Maturity | Low | Early research stage |
| Time Horizon | 5-10 years | Needs fundamental advances |
| Key Proponents | Anthropic, DeepMind | Active research programs |
3. How It Works
Technical explanation with diagram:
## How It Works
<Mermaid chart={`
flowchart LR
A[Input] --> B[Safety Layer]
B --> C{Check}
C -->|Pass| D[Output]
C -->|Fail| E[Block/Modify]
`} />
[Detailed explanation of mechanism]
4. Risks Addressed
## Risks Addressed
| Risk | Relevance | How It Helps |
|------|-----------|--------------|
| [Deceptive Alignment](/...) | High | Detects hidden goals |
| [Reward Hacking](/...) | Medium | Identifies misspecified rewards |
5. Limitations
What this approach cannot do and where it falls short.
6. Current State
Who is working on this and what progress has been made.
7. Open Questions
Unsolved problems and research directions.
Claude Code Workflows
Creating a New Response Page
Task({
subagent_type: 'general-purpose',
prompt: `Create a response page for [RESPONSE_NAME].
FIRST: Read /internal/response-style-guide/.
THEN: Research using WebSearch:
- Academic papers on the technique
- Lab blog posts and announcements
- Current implementations and results
Create at: src/content/docs/knowledge-base/responses/[category]/[name].mdx
Include:
1. Overview (2-3 paragraphs)
2. Quick Assessment table
3. How It Works (with diagram)
4. Risks Addressed table
5. Limitations
6. Current State
7. Open Questions`
})
Improving a Response Page
Task({
subagent_type: 'general-purpose',
prompt: `Improve response page at [PATH].
Read /internal/response-style-guide/ first.
Add:
1. Quick Assessment table (if missing)
2. Mermaid diagram
3. Risks Addressed cross-links
4. Citations from recent papers
Use WebSearch for current research.`
})
Quality Criteria
Pages are scored on seven dimensions (0-10 scale). Scoring is deliberately strict: a 7 is exceptional, and most content should land in the 3-5 range.
| Dimension | 3-4 (Adequate) | 5-6 (Good) | 7+ (Exceptional) |
|---|---|---|---|
| Novelty | Accurate summary | Some original framing | Significant original insight |
| Rigor | Mixed sourcing | Mostly sourced | Fully sourced with quantification |
| Objectivity | Some insider language or false certainty | Mostly neutral, some uncertainty noted | Fully accessible, all estimates hedged |
| Actionability | Abstract implications | Some actionable takeaways | Concrete decision guidance |
| Completeness | Notable gaps | Covers main points | Thorough coverage |
Derived quality (0-100) combines the subscores with word-count and citation bonuses. See CLAUDE.md for the formula.
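The sketch below only illustrates the general shape of such a combination (averaged subscores rescaled to 0-100 plus capped bonuses), using just the five dimensions from the table above. Every weight, cap, and bonus size here is an assumption; the authoritative formula is the one in CLAUDE.md.

```typescript
// Illustrative only: the real formula is defined in CLAUDE.md.
// Weights, caps, and bonus sizes below are assumptions made for this sketch.
interface Subscores {
  novelty: number;        // 0-10
  rigor: number;          // 0-10
  objectivity: number;    // 0-10
  actionability: number;  // 0-10
  completeness: number;   // 0-10
}

function derivedQuality(s: Subscores, wordCount: number, citationCount: number): number {
  const values = [s.novelty, s.rigor, s.objectivity, s.actionability, s.completeness];
  const base = (values.reduce((a, b) => a + b, 0) / values.length) * 10; // 0-10 average scaled to 0-100
  const wordBonus = Math.min(5, wordCount / 1000);                        // assumed cap of 5 points
  const citationBonus = Math.min(5, citationCount * 0.5);                 // assumed cap of 5 points
  return Math.min(100, Math.round(base + wordBonus + citationBonus));
}
```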
Example
See the Mechanistic Interpretability response page for a reference example.