Longterm Wiki

Controlled Vocabulary for Longtermist Analysis

Executive Summary

| Challenge | Solution | Benefit |
|---|---|---|
| Inconsistent terminology across longtermist writing | Controlled vocabulary with composable terms | Searchable, comparable concepts |
| Thousands of concepts to organize | Hierarchical specificity with modifiers | Flexible precision without explosion |
| AI is both a risk domain AND a context modifier | Separate vocabularies for each | Clearer analysis |
| Visual clutter in diagrams | Colored clickable tags | Scannable, interactive |
Core Principle

Hierarchical Specificity: Use the most general term that captures what matters for the context. Add modifiers only when the distinction is load-bearing for the argument.


Terminology System

Risk Domains

The primary categories of existential and catastrophic risk:

| Domain | Abbrev | Description |
|---|---|---|
| Bio | BIO | Biological/pandemic risks |
| Cyber | CYBER | Digital infrastructure attacks |
| Nuclear | NUC | Nuclear weapons/war |
| Epistemic | EPIS | Information integrity, collective reasoning |
| Authoritarian | AUTH | Lock-in of oppressive governance |
| Misaligned-AI | MAI | AI systems pursuing goals humans don't endorse |
| Power-Concentrating-AI | PCAI | AI enabling extreme human power concentration |
| Unknown | UNK | Surprise/unforeseen threats |
AI as Risk Domain vs Context

Note that MAI and PCAI are risks from AI. This is distinct from AI as a context modifier (see AI Stages below), which describes how AI transforms the landscape for all risks.


Cross-Cutting Modifiers

Modifiers can be appended to any domain when the distinction matters:

| Modifier | Meaning |
|---|---|
| Natural | Not human-engineered |
| Engineered | Human-created or modified |
| Emergent | Arising from system dynamics |

Composition Examples

Bio                           # General biological risk
Bio-Engineered                # When engineering is relevant
Bio-Engineered-Deliberate     # True bioweapon scenario
Bio-State-Offensive           # State bioweapons capability

Cyber-Individual              # Lone actor cyber threat
NUC-State-Accidental          # Nuclear accident by state actor
EPIS-Corp-Deliberate          # Corporate disinformation campaign

AI Stages (Context Modifier)

AI stages describe when risks and interventions occur relative to transformative AI. These are abstract by default, with optional concrete annotations.

| Stage | Meaning | Character |
|---|---|---|
| Near | Close to present | High confidence about world state |
| Mid | Between Near and TAI | More uncertainty, longer-term positioning |
| TAI | Transformation point | Critical period, possibly short |
| Post-TAI | After transformation | Different action landscape |
Relative, Not Absolute

These stages are defined relative to TAI, not calendar years. If TAI arrives in 2030, "Mid" means something different from what it means if TAI arrives in 2045.

Concrete Annotations (Optional)

When making specific claims, annotate with your assumptions:

Near (~2028)
Mid (TAI-50% capability)
Mid (10^28 FLOP threshold)
TAI (assuming 2035 timeline)

The abstract terms are the controlled vocabulary. Concrete annotations clarify individual assumptions without forcing agreement on timelines.

Combining AI Stage with Risk Domains

Every non-AI risk domain gets transformed by AI stage:

Bio-Near                  # Bio risks in current context
Bio-Mid                   # Bio risks as AI advances
Bio-TAI                   # Bio risks during transformation
Cyber-Mid-State           # State cyber threats in mid-period
NUC-TAI-Accidental        # Nuclear accident risk during TAI transition

State Variables

Concepts for describing the current risk landscape:

| Concept | Definition | Usage |
|---|---|---|
| Offensive-Potential | Total capability to cause harm | [Domain]-Offensive-Potential |
| Defensive-Potential | Total capability to prevent harm | [Domain]-Defensive-Potential |
| Exposure | Net vulnerability (Offensive - Defensive) | [Domain]-Exposure |
| Resilience | Ability to recover from harm | [Domain]-Resilience |
| Fragility | Susceptibility to cascading failure | [Domain]-Fragility |

Example usage:

Bio-Near-Offensive-Potential     # Current bioweapon capability
Bio-Near-Defensive-Potential     # Current biodefense capability
Bio-Near-Exposure                # Net bio vulnerability today
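The Exposure relationship can be sketched in code. This is a minimal illustration, assuming all state variables share a common 0-10 scale (the document does not fix a scale), with made-up scores for Bio-Near:

```typescript
// State variables for one domain at one AI stage.
// Assumption: scores live on a shared 0-10 scale.
type DomainState = {
  offensivePotential: number; // [Domain]-Offensive-Potential
  defensivePotential: number; // [Domain]-Defensive-Potential
};

// Exposure = Offensive-Potential minus Defensive-Potential
function exposure(s: DomainState): number {
  return s.offensivePotential - s.defensivePotential;
}

// Illustrative (invented) scores for Bio-Near
const bioNear: DomainState = { offensivePotential: 7, defensivePotential: 4 };
console.log(exposure(bioNear)); // 3
```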

Intervention Categories

By Mechanism

| Mechanism | Description |
|---|---|
| Technical | Engineering solutions |
| Governance | Rules, laws, institutions |
| Coordination | Multi-actor agreements |
| Cultural | Norms, values, education |
| Economic | Incentive structures |
| Epistemic | Information/knowledge improvements |

By Target

| Target | Description |
|---|---|
| Prevention | Stop bad events from occurring |
| Detection | Identify threats early |
| Mitigation | Reduce severity if harm occurs |
| Recovery | Restore after harm |
| Adaptation | Long-term adjustment |

Composition

Technical-Bio-Prevention         # Technical solutions to prevent biopandemics
Governance-MAI-Detection         # Governance for detecting AI misalignment
Coordination-NUC-Mitigation      # International coordination to limit nuclear damage

Epistemic Markers

For qualifying claims and tracking disagreements:

| Marker | Use |
|---|---|
| Assumption | Premise taken as given |
| Crux | Disagreement that would change conclusions if resolved |
| Uncertainty | Known unknowns |
| Speculation | Reasoned but weakly-grounded |
| Consensus | Broadly agreed upon |
| Contested | Actively disputed |

Visual Presentation

In diagrams and interfaces, vocabulary terms appear as colored clickable tags:

Tag Display

┌─────────────────────────────────────────────────────┐
│ AI-enabled pandemic creation by non-state actors   │
│                                                     │
│ [Bio] [Engineered] [Mid] [Individual] [Deliberate] │
│  red    orange    blue    purple       yellow      │
└─────────────────────────────────────────────────────┘

Each tag:

  • Has a consistent color by category (domains = red, AI stages = blue, etc.)
  • Is clickable to filter/search for related concepts
  • Expands on hover to show definition

Alternative Layouts

For different contexts:

Inline with separators:

Bio · Engineered · Mid · Individual

Primary + context:

┌─────────────────────────────────┐
│ Bio Risk                        │  ← Primary (large)
│ in Mid period, Individual actor │  ← Context (smaller)
└─────────────────────────────────┘

Use Cases

1. Tagging Concrete Risks

When discussing specific risks, apply vocabulary tags for consistency:

| Risk Description | Tags |
|---|---|
| "GPT-7 used to design novel pathogen" | Bio, Engineered, Mid, Deliberate |
| "Accidental nuclear launch from AI misinterpretation" | NUC, Mid, AI-Agent, Accidental |
| "State-sponsored disinformation undermining elections" | EPIS, Near, State, Deliberate |
| "Recursive self-improvement leads to misaligned superintelligence" | MAI, TAI, Existential |

This enables:

  • Search: Find all Bio-Mid risks
  • Comparison: Compare State vs Individual actor risks
  • Gap analysis: Which combinations have no coverage?
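Tag-based search is straightforward once risks carry vocabulary tags. A sketch using the example rows above (the `TaggedRisk` shape is a hypothetical illustration, not a spec):

```typescript
type TaggedRisk = { description: string; tags: string[] };

const risks: TaggedRisk[] = [
  { description: "GPT-7 used to design novel pathogen",
    tags: ["Bio", "Engineered", "Mid", "Deliberate"] },
  { description: "Accidental nuclear launch from AI misinterpretation",
    tags: ["NUC", "Mid", "AI-Agent", "Accidental"] },
  { description: "State-sponsored disinformation undermining elections",
    tags: ["EPIS", "Near", "State", "Deliberate"] },
];

// Search: all risks tagged both Bio and Mid
const bioMid = risks.filter(r => ["Bio", "Mid"].every(t => r.tags.includes(t)));

// Comparison: slice by actor tag
const stateRisks = risks.filter(r => r.tags.includes("State"));
console.log(bioMid.length, stateRisks.length); // 1 1
```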

2. Combinatorial Risk Maps

Generate importance scores across all meaningful combinations:

Simple 2D Slice: Domain × AI Stage

| Domain | Near | Mid | TAI | Post-TAI |
|---|---|---|---|---|
| Bio | 4 | 7 | 8 | 5 |
| Cyber | 5 | 6 | 6 | 4 |
| Nuclear | 5 | 6 | 7 | 5 |
| MAI | 2 | 5 | 10 | 8 |
| EPIS | 6 | 7 | 8 | 6 |
| AUTH | 4 | 5 | 7 | 9 |

Scores are illustrative. Actual scores would be developed through structured elicitation.

Multi-Dimensional Analysis

Full dimensionality:

Domain (8) × AI-Stage (4) × Actor (6) × Intent (3) × Scale (4)
= 2,304 cells

With pruning of implausible combinations (e.g., NUC-Individual-Existential), perhaps 500-1000 meaningful cells.
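The cell count and pruning can be made concrete. Note the actor, intent, and scale lists below are assumptions for illustration: the document only names a few members of each dimension (State, Corp, Individual, AI-Agent; Deliberate, Accidental; Existential), so the remaining entries are placeholders:

```typescript
const domains = ["Bio", "Cyber", "NUC", "EPIS", "AUTH", "MAI", "PCAI", "UNK"];
const aiStages = ["Near", "Mid", "TAI", "Post-TAI"];
// Hypothetical completions of the partially specified dimensions:
const actors = ["State", "Corp", "Individual", "AI-Agent", "Coalition", "Unknown"];
const intents = ["Deliberate", "Accidental", "Emergent"];
const scales = ["Local", "Regional", "Global", "Existential"];

type Cell = [string, string, string, string, string];
const cells: Cell[] = [];
for (const d of domains)
  for (const s of aiStages)
    for (const a of actors)
      for (const i of intents)
        for (const sc of scales)
          cells.push([d, s, a, i, sc]);

console.log(cells.length); // 2304 = 8 × 4 × 6 × 3 × 4

// Example pruning rule from the text: NUC-Individual-Existential is implausible
const plausible = cells.filter(([d, , a, , sc]) =>
  !(d === "NUC" && a === "Individual" && sc === "Existential"));
```

Real pruning would apply many such rules; this single rule already removes 12 cells (4 stages × 3 intents).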

What This Enables

  1. Gap analysis: High importance + low attention = opportunity
  2. Priority disputes made explicit: "You rate MAI-TAI as 10, I rate it 6"
  3. Neglectedness identification: Which cells have no interventions?
  4. Interaction effects: How does moving along one dimension shift others?
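Gap analysis (point 1 above) reduces to a filter over scored cells. A sketch with invented scores and a hypothetical `Cell` record shape:

```typescript
type ScoredCell = { id: string; importance: number; interventions: string[] };

// Illustrative data only; real scores would come from structured elicitation
const scored: ScoredCell[] = [
  { id: "MAI-TAI", importance: 10, interventions: ["Governance-MAI-Detection"] },
  { id: "Bio-Mid", importance: 7, interventions: [] },
  { id: "Cyber-Near", importance: 5, interventions: ["Technical-Cyber-Prevention"] },
];

// High importance + no interventions = candidate opportunity
const gaps = scored
  .filter(c => c.importance >= 7 && c.interventions.length === 0)
  .map(c => c.id);
console.log(gaps); // ["Bio-Mid"]
```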

Open Questions

Work in Progress

This vocabulary is evolving. The following questions remain open.

Granularity Decisions

  • Should Cyber split into Cyber-Infrastructure, Cyber-Financial, Cyber-Military?
  • Is Power-Concentrating-AI distinct enough from Authoritarian?
  • Do we need AI-Accident separate from MAI?

Scoring Approach

Starting simple (single importance score), expanding over time:

  • Phase 1: Single importance score (1-10)
  • Phase 2: Probability × Severity decomposition
  • Phase 3: Add Neglectedness, Tractability
  • Phase 4: Full ITN framework per cell

Timeline Uncertainty

The AI stages are relative to TAI. How do we handle:

  • People with very different TAI timelines?
  • Possibility that TAI never arrives?
  • Multiple transformation points rather than single TAI?

Positive Outcomes

Current vocabulary is risk-focused. Should we add:

  • Flourishing-Potential
  • Coordination-Success
  • Alignment-Success

Implementation Notes

Data Model

// Supporting types, derived from the vocabulary tables above
type Domain = "Bio" | "Cyber" | "NUC" | "EPIS" | "AUTH" | "MAI" | "PCAI" | "UNK";
type AIStage = "Near" | "Mid" | "TAI" | "Post-TAI";
type Modifier = string;              // e.g. "Engineered", "State", "Deliberate"
type EpistemicMarker =
  "Assumption" | "Crux" | "Uncertainty" | "Speculation" | "Consensus" | "Contested";

type RiskConcept = {
  domain: Domain;                    // Bio, Cyber, NUC, etc.
  aiStage?: AIStage;                 // Near, Mid, TAI, Post-TAI
  modifiers?: Modifier[];            // Engineered, State, Deliberate, etc.
  stageAnnotation?: string;          // Optional: "~2028", "TAI-50%"
  importanceScore?: number;          // 1-10
  epistemicStatus?: EpistemicMarker; // Assumption, Crux, Contested, etc.
};

Serialization Format

For plain text contexts (filenames, URLs, search):

[Domain](-[Modifier])*(-[AIStage])?

Examples:
  bio-engineered-mid
  nuc-state-accidental-tai
  mai-existential-post-tai
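A serializer for this format is a one-liner: join the parts with "-" and lowercase. This is a sketch of the pattern above, not a reference implementation:

```typescript
// [Domain](-[Modifier])*(-[AIStage])? — lowercased for filenames/URLs/search
function serialize(domain: string, modifiers: string[] = [], aiStage?: string): string {
  return [domain, ...modifiers, ...(aiStage ? [aiStage] : [])]
    .join("-")
    .toLowerCase();
}

console.log(serialize("Bio", ["Engineered"], "Mid"));          // "bio-engineered-mid"
console.log(serialize("NUC", ["State", "Accidental"], "TAI")); // "nuc-state-accidental-tai"
```

One caveat of the flat form: "Post-TAI" serializes to "post-tai", so a parser cannot split purely on "-"; it would need to match known multi-part tokens first.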
