A hierarchical controlled vocabulary system for standardizing longtermist risk terminology, enabling combinatorial analysis across ~500-1000 meaningful risk scenarios (Domain × AI-Stage × Actor × Intent × Scale). It makes concepts searchable and comparable through composable tags, and supports gap analysis by making all 2,304 theoretical cells explicit (though most are pruned as implausible).
Controlled Vocabulary for Longtermist Analysis
Executive Summary
| Challenge | Solution | Benefit |
|---|---|---|
| Inconsistent terminology across longtermist writing | Controlled vocabulary with composable terms | Searchable, comparable concepts |
| Thousands of concepts to organize | Hierarchical specificity with modifiers | Flexible precision without explosion |
| AI is both a risk domain AND a context modifier | Separate vocabularies for each | Clearer analysis |
| Visual clutter in diagrams | Colored clickable tags | Scannable, interactive |
Hierarchical Specificity: Use the most general term that captures what matters for the context. Add modifiers only when the distinction is load-bearing for the argument.
Terminology System
Risk Domains
The primary categories of existential and catastrophic risk:
| Domain | Abbrev | Description |
|---|---|---|
| Bio | BIO | Biological/pandemic risks |
| Cyber | CYBER | Digital infrastructure attacks |
| Nuclear | NUC | Nuclear weapons/war |
| Epistemic | EPIS | Information integrity, collective reasoning |
| Authoritarian | AUTH | Lock-in of oppressive governance |
| Misaligned-AI | MAI | AI systems pursuing goals humans don't endorse |
| Power-Concentrating-AI | PCAI | AI enabling extreme human power concentration |
| Unknown | UNK | Surprise/unforeseen threats |
Note that MAI and PCAI are risks from AI. This is distinct from AI as a context modifier (see AI Stages below), which describes how AI transforms the landscape for all risks.
Cross-Cutting Modifiers
Modifiers can be appended to any domain when the distinction matters:
| Modifier | Meaning |
|---|---|
| Natural | Not human-engineered |
| Engineered | Human-created or modified |
| Emergent | Arising from system dynamics |
Composition Examples
Bio # General biological risk
Bio-Engineered # When engineering is relevant
Bio-Engineered-Deliberate # True bioweapon scenario
Bio-State-Offensive # State bioweapons capability
Cyber-Individual # Lone actor cyber threat
NUC-State-Accidental # Nuclear accident by state actor
EPIS-Corp-Deliberate # Corporate disinformation campaign
AI Stages (Context Modifier)
AI stages describe when risks and interventions occur relative to transformative AI. These are abstract by default, with optional concrete annotations.
| Stage | Meaning | Character |
|---|---|---|
| Near | Close to present | High confidence about world state |
| Mid | Between Near and TAI | More uncertainty, longer-term positioning |
| TAI | Transformation point | Critical period, possibly short |
| Post-TAI | After transformation | Different action landscape |
These stages are defined relative to TAI, not calendar years. If TAI is 2030, "Mid" means something different than if TAI is 2045.
Concrete Annotations (Optional)
When making specific claims, annotate with your assumptions:
Near (~2028)
Mid (TAI-50% capability)
Mid (10^28 FLOP threshold)
TAI (assuming 2035 timeline)
The abstract terms are the controlled vocabulary. Concrete annotations clarify individual assumptions without forcing agreement on timelines.
Combining AI Stage with Risk Domains
Every non-AI risk domain gets transformed by AI stage:
Bio-Near # Bio risks in current context
Bio-Mid # Bio risks as AI advances
Bio-TAI # Bio risks during transformation
Cyber-Mid-State # State cyber threats in mid-period
NUC-TAI-Accidental # Nuclear accident risk during TAI transition
State Variables
Concepts for describing the current risk landscape:
| Concept | Definition | Usage |
|---|---|---|
| Offensive-Potential | Total capability to cause harm | [Domain]-Offensive-Potential |
| Defensive-Potential | Total capability to prevent harm | [Domain]-Defensive-Potential |
| Exposure | Net vulnerability (Offensive - Defensive) | [Domain]-Exposure |
| Resilience | Ability to recover from harm | [Domain]-Resilience |
| Fragility | Susceptibility to cascading failure | [Domain]-Fragility |
Example usage:
Bio-Near-Offensive-Potential # Current bioweapon capability
Bio-Near-Defensive-Potential # Current biodefense capability
Bio-Near-Exposure # Net bio vulnerability today
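The relationship between these state variables can be sketched directly from the definitions above. The scores and the `DomainState` shape below are illustrative assumptions, not real estimates:

```typescript
// Sketch of the state-variable relationship defined above:
// Exposure = Offensive-Potential - Defensive-Potential.
// Scores are hypothetical 1-10 values for illustration only.
type DomainState = {
  offensivePotential: number; // total capability to cause harm
  defensivePotential: number; // total capability to prevent harm
};

function exposure(state: DomainState): number {
  return state.offensivePotential - state.defensivePotential;
}

// Hypothetical Bio-Near values:
const bioNear: DomainState = { offensivePotential: 6, defensivePotential: 4 };
console.log(exposure(bioNear)); // Bio-Near-Exposure = 2
```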
Intervention Categories
By Mechanism
| Mechanism | Description |
|---|---|
| Technical | Engineering solutions |
| Governance | Rules, laws, institutions |
| Coordination | Multi-actor agreements |
| Cultural | Norms, values, education |
| Economic | Incentive structures |
| Epistemic | Information/knowledge improvements |
By Target
| Target | Description |
|---|---|
| Prevention | Stop bad events from occurring |
| Detection | Identify threats early |
| Mitigation | Reduce severity if occurs |
| Recovery | Restore after harm |
| Adaptation | Long-term adjustment |
Composition
Technical-Bio-Prevention # Technical solutions to prevent biopandemics
Governance-MAI-Detection # Governance for detecting AI misalignment
Coordination-NUC-Mitigation # International coordination to limit nuclear damage
Epistemic Markers
For qualifying claims and tracking disagreements:
| Marker | Use |
|---|---|
| Assumption | Premise taken as given |
| Crux | Disagreement that would change conclusions if resolved |
| Uncertainty | Known unknowns |
| Speculation | Reasoned but weakly-grounded |
| Consensus | Broadly agreed upon |
| Contested | Actively disputed |
Visual Presentation
In diagrams and interfaces, vocabulary terms appear as colored clickable tags:
Tag Display
┌─────────────────────────────────────────────────────┐
│ AI-enabled pandemic creation by non-state actors │
│ │
│ [Bio] [Engineered] [Mid] [Individual] [Deliberate] │
│ red orange blue purple yellow │
└─────────────────────────────────────────────────────┘
Each tag:
- Has a consistent color by category (domains = red, AI stages = blue, etc.)
- Is clickable to filter/search for related concepts
- Expands on hover to show definition
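Consistent per-category coloring is a simple lookup. The category names and colors below are the illustrative ones from the diagram above, not a fixed palette:

```typescript
// Sketch of consistent tag coloring by vocabulary category.
// Category names and colors are assumptions taken from the example diagram.
type TagCategory = "domain" | "modifier" | "aiStage" | "actor" | "intent";

const CATEGORY_COLORS: Record<TagCategory, string> = {
  domain: "red",
  modifier: "orange",
  aiStage: "blue",
  actor: "purple",
  intent: "yellow",
};

function tagColor(category: TagCategory): string {
  return CATEGORY_COLORS[category];
}
```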
Alternative Layouts
For different contexts:
Inline with separators:
Bio · Engineered · Mid · Individual
Primary + context:
┌─────────────────────────────────┐
│ Bio Risk │ ← Primary (large)
│ in Mid period, Individual actor │ ← Context (smaller)
└─────────────────────────────────┘
Use Cases
1. Tagging Concrete Risks
When discussing specific risks, apply vocabulary tags for consistency:
| Risk Description | Tags |
|---|---|
| "GPT-7 used to design novel pathogen" | Bio, Engineered, Mid, Deliberate |
| "Accidental nuclear launch from AI misinterpretation" | NUC, Mid, AI-Agent, Accidental |
| "State-sponsored disinformation undermining elections" | EPIS, Near, State, Deliberate |
| "Recursive self-improvement leads to misaligned superintelligence" | MAI, TAI, Existential |
This enables:
- Search: Find all `Bio-Mid` risks
- Comparison: Compare `State` vs `Individual` actor risks
- Gap analysis: Which combinations have no coverage?
2. Combinatorial Risk Maps
Generate importance scores across all meaningful combinations:
Simple 2D Slice: Domain × AI Stage
| | Near | Mid | TAI | Post-TAI |
|---|---|---|---|---|
| Bio | 4 | 7 | 8 | 5 |
| Cyber | 5 | 6 | 6 | 4 |
| Nuclear | 5 | 6 | 7 | 5 |
| MAI | 2 | 5 | 10 | 8 |
| EPIS | 6 | 7 | 8 | 6 |
| AUTH | 4 | 5 | 7 | 9 |
Scores are illustrative. Actual scores would be developed through structured elicitation.
Multi-Dimensional Analysis
Full dimensionality:
Domain (8) × AI-Stage (4) × Actor (6) × Intent (3) × Scale (4)
= 2,304 cells
With pruning of implausible combinations (e.g., NUC-Individual-Existential), perhaps 500-1000 meaningful cells.
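Enumerating the grid and pruning makes the cell counts concrete. Several dimension values below (notably the actor and scale lists) are hypothetical placeholders chosen to match the stated counts, and the plausibility rule shown is just the NUC-Individual-Existential example:

```typescript
// Sketch of enumerating the full 8 x 4 x 6 x 3 x 4 grid and pruning.
// Actor and scale values are assumed placeholders, not a fixed vocabulary.
const domains = ["Bio", "Cyber", "NUC", "EPIS", "AUTH", "MAI", "PCAI", "UNK"];
const stages = ["Near", "Mid", "TAI", "Post-TAI"];
const actors = ["State", "Corp", "Group", "Individual", "AI-Agent", "Emergent"];
const intents = ["Deliberate", "Accidental", "Emergent"];
const scales = ["Local", "Regional", "Global", "Existential"];

type Cell = { domain: string; stage: string; actor: string; intent: string; scale: string };

function allCells(): Cell[] {
  const cells: Cell[] = [];
  for (const domain of domains)
    for (const stage of stages)
      for (const actor of actors)
        for (const intent of intents)
          for (const scale of scales)
            cells.push({ domain, stage, actor, intent, scale });
  return cells;
}

// Example pruning rule: a lone individual cannot plausibly pose existential nuclear risk.
function plausible(c: Cell): boolean {
  return !(c.domain === "NUC" && c.actor === "Individual" && c.scale === "Existential");
}

const cells = allCells();
console.log(cells.length); // 2304
console.log(cells.filter(plausible).length); // fewer, after pruning
```

A real pruning pass would encode many such rules, which is why the meaningful-cell estimate drops to the hundreds.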
What This Enables
- Gap analysis: High importance + low attention = opportunity
- Priority disputes made explicit: "You rate MAI-TAI as 10, I rate it 6"
- Neglectedness identification: Which cells have no interventions?
- Interaction effects: How does moving along one dimension shift others?
Open Questions
This vocabulary is evolving. The following questions remain open.
Granularity Decisions
- Should `Cyber` split into `Cyber-Infrastructure`, `Cyber-Financial`, `Cyber-Military`?
- Is `Power-Concentrating-AI` distinct enough from `Authoritarian`?
- Do we need `AI-Accident` separate from `MAI`?
Scoring Approach
Starting simple (single importance score), expanding over time:
- Phase 1: Single importance score (1-10)
- Phase 2: Probability × Severity decomposition
- Phase 3: Add Neglectedness, Tractability
- Phase 4: Full ITN framework per cell
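The phases can be read as progressively richer per-cell scores. The combination formulas below are illustrative assumptions (a naive product), not something the vocabulary prescribes:

```typescript
// Sketch of the scoring phases as progressively richer cell scores.
// The combination formulas are assumptions for illustration only.
type Phase2Score = { probability: number; severity: number }; // probability in [0,1], severity 1-10
type Phase4Score = Phase2Score & { neglectedness: number; tractability: number }; // each 1-10

function phase2Importance(s: Phase2Score): number {
  return s.probability * s.severity; // expected severity
}

function phase4Priority(s: Phase4Score): number {
  // Naive ITN-style product; structured elicitation would weight these differently.
  return phase2Importance(s) * s.neglectedness * s.tractability;
}
```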
Timeline Uncertainty
The AI stages are relative to TAI. How do we handle:
- People with very different TAI timelines?
- Possibility that TAI never arrives?
- Multiple transformation points rather than single TAI?
Positive Outcomes
Current vocabulary is risk-focused. Should we add:
- `Flourishing-Potential`
- `Coordination-Success`
- `Alignment-Success`
Implementation Notes
Data Model
type RiskConcept = {
domain: Domain; // Bio, Cyber, NUC, etc.
aiStage?: AIStage; // Near, Mid, TAI, Post-TAI
modifiers?: Modifier[]; // Engineered, State, Deliberate, etc.
stageAnnotation?: string; // Optional: "~2028", "TAI-50%"
importanceScore?: number; // 1-10
epistemicStatus?: EpistemicMarker; // Assumption, Crux, Contested, etc.
}
Serialization Format
For plain text contexts (filenames, URLs, search):
[Domain](-[Modifier])*(-[AIStage])?
Examples:
bio-engineered-mid
nuc-state-accidental-tai
mai-existential-post-tai
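The grammar above maps directly onto the data model. A minimal sketch (using plain `string` fields rather than the enum types from the data model, to keep it self-contained):

```typescript
// Sketch of serializing a concept into the
// [Domain](-[Modifier])*(-[AIStage])? plain-text form.
type Serializable = {
  domain: string;
  modifiers?: string[];
  aiStage?: string;
};

function serialize(c: Serializable): string {
  return [c.domain, ...(c.modifiers ?? []), ...(c.aiStage ? [c.aiStage] : [])]
    .map((part) => part.toLowerCase())
    .join("-");
}

console.log(serialize({ domain: "Bio", modifiers: ["Engineered"], aiStage: "Mid" }));
// "bio-engineered-mid"
```

Lowercasing keeps the output safe for filenames and URLs; `Post-TAI` serializes to `post-tai`, matching the examples above.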
Sources & References
AI Development Frameworks
- DeepMind Levels of AGI Paper - Capability-based AGI levels
- AGI Definitions Comparison - OpenAI, Anthropic, DeepMind approaches
Existential Risk Literature
- How Do AI Timelines Affect Existential Risk? - LessWrong analysis
- Existential Risk from AI - Wikipedia
Related Vocabulary Projects
- EA Forum: Crucial Consideration - Bostrom's framework
- CFAR Double Crux - Crux terminology origin