AI Transition Model
Root Factors

| Parameter | Domain | Category | Changeability | Uncertainty | X-risk Impact | Trajectory |
|---|---|---|---|---|---|---|
| Technical AI Safety | AI | Misalignment Potential | 45 | 60 | 85 | 70 |
| AI Governance | AI | Misalignment Potential | 55 | 50 | 60 | 75 |
| Lab Safety Practices | AI | Misalignment Potential | 65 | 40 | 50 | 45 |
| Governance | AI | Civilizational Competence | 35 | 45 | 55 | 70 |
| Epistemics | AI | Civilizational Competence | 25 | 55 | 40 | 65 |
| Adaptability | AI | Civilizational Competence | 30 | 50 | 50 | 60 |
| Compute | AI | AI Capabilities | — | — | — | — |
| Algorithms | AI | AI Capabilities | 20 | 55 | 75 | 85 |
| Adoption | AI | AI Capabilities | 40 | 40 | 45 | 70 |
| Economic Stability | Society | Transition Turbulence | 40 | 50 | 35 | 55 |
| Racing Intensity | Society | Transition Turbulence | 50 | 45 | 65 | 50 |
| Recursive AI Capabilities | AI | AI Uses | 35 | 70 | 85 | 90 |
| Industries | AI | AI Uses | 30 | 35 | 30 | 75 |
| Governments | AI | AI Uses | 40 | 50 | 55 | 70 |
| Coordination | AI | AI Uses | 45 | 55 | 40 | 65 |
| Biological Threat Exposure | Society | Misuse Potential | 45 | 60 | 80 | 40 |
| Cyber Threat Exposure | Society | Misuse Potential | 35 | 50 | 55 | 45 |
| Robot Threat Exposure | Society | Misuse Potential | 40 | 65 | 60 | 50 |
| Surprise Threat Exposure | Society | Misuse Potential | 20 | 85 | 70 | 55 |
| Countries | AI | AI Ownership | 25 | 50 | 45 | 65 |
| Companies | AI | AI Ownership | 35 | 45 | 50 | 70 |
| Shareholders | AI | AI Ownership | 30 | 40 | 25 | 60 |

Ultimate Scenarios

| Scenario | Category | Changeability | Uncertainty | X-risk Impact | Trajectory |
|---|---|---|---|---|---|
| Rapid | AI Takeover | 40 | 70 | 95 | 85 |
| Gradual | AI Takeover | 55 | 60 | 80 | 90 |
| State Actor | Human-Caused Catastrophe | 45 | 55 | 75 | 70 |
| Rogue Actor | Human-Caused Catastrophe | 35 | 65 | 70 | 45 |
| Economic Power | Long-term Lock-in | — | — | — | — |
| Political Power | Long-term Lock-in | — | — | — | — |
| Epistemics | Long-term Lock-in | — | — | — | — |
| Values | Long-term Lock-in | — | — | — | — |
| Suffering Lock-in | Long-term Lock-in | — | — | — | — |
The AI Transition Model is a causal framework for understanding how various factors influence the trajectory of AI development and its ultimate outcomes for humanity. The tables above list the model's Root Factors and Ultimate Scenarios; in the graph view, Root Factors flow through Ultimate Scenarios to determine Ultimate Outcomes. A minimal code sketch of this layered structure follows the list below.
This model helps identify:
- Leverage points: Which factors have the most influence on outcomes
- Intervention targets: Where effort can most effectively shift trajectories
- Key uncertainties: Which causal relationships are most uncertain
- Scenario dependencies: How different pathways interact
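To make the layered structure concrete, here is a minimal sketch in Python. The class names, the example nodes, and the `downstream` traversal are illustrative assumptions, not the model's actual implementation; the real factor list and edges live in the interactive views.

```python
# Minimal sketch of the Root Factor -> Ultimate Scenario -> Ultimate Outcome
# structure. Names and edges are illustrative, not the model's real data.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    layer: str  # "root_factor", "ultimate_scenario", or "ultimate_outcome"

@dataclass
class CausalModel:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (cause, effect)

    def add(self, name: str, layer: str) -> None:
        self.nodes[name] = Node(name, layer)

    def link(self, cause: str, effect: str) -> None:
        self.edges.append((cause, effect))

    def downstream(self, name: str) -> set[str]:
        """Return everything reachable from `name` (a crude leverage-point probe)."""
        frontier, seen = [name], set()
        while frontier:
            current = frontier.pop()
            for cause, effect in self.edges:
                if cause == current and effect not in seen:
                    seen.add(effect)
                    frontier.append(effect)
        return seen

model = CausalModel()
model.add("Racing Intensity", "root_factor")
model.add("Rapid AI Takeover", "ultimate_scenario")
model.link("Racing Intensity", "Rapid AI Takeover")
print(model.downstream("Racing Intensity"))  # {'Rapid AI Takeover'}
```

A factor with a large `downstream` set is a candidate leverage point; edge weights and uncertainty estimates would refine that ranking.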
How Parameters, Risks, and Interventions Connect
Both risks and interventions connect to root factors:
- Risks (like deceptive alignment, racing dynamics) tend to increase harmful factors or decrease protective ones
- Interventions (like interpretability research, compute governance) work to counteract those risks; a sketch of this signed-edge view follows
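One hedged way to encode that idea in Python: attach risks and interventions to root factors with signed effects. The specific pairings and the +1/-1 signs below are illustrative placeholders, not estimates from the model.

```python
# Illustrative signed effects of risks and interventions on root factors.
# Pairings and +1/-1 signs are placeholders, not model estimates.
EFFECTS = {
    ("deceptive alignment (risk)", "Technical AI Safety"): -1,
    ("racing dynamics (risk)", "Lab Safety Practices"): -1,
    ("interpretability research (intervention)", "Technical AI Safety"): +1,
    ("compute governance (intervention)", "Racing Intensity"): -1,  # lowers a harmful factor
}

def net_pressure(factor: str) -> int:
    """Sum the signed pressures acting on one root factor."""
    return sum(sign for (_, f), sign in EFFECTS.items() if f == factor)

print(net_pressure("Technical AI Safety"))  # 0: the risk and the intervention offset
```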
Interactive Views:
- Parameter Table - Sortable tables with ratings (changeability, uncertainty, x-risk impact, trajectory); a small sorting example follows this list
- Graph View - Visual causal diagram showing relationships between factors, scenarios, and outcomes
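Since the table is just data, the "sortable" behavior is easy to reproduce offline. The snippet below ranks a few rows from the table above by x-risk impact; the mapping of numeric columns to ratings follows the Parameter Table description and is an assumption here.

```python
# Rank parameters by x-risk impact to surface candidate leverage points.
# Ratings copied from the Root Factors table; the column-to-rating mapping
# (changeability, uncertainty, x-risk impact, trajectory) is assumed.
rows = [
    ("Technical AI Safety",       {"changeability": 45, "uncertainty": 60, "xrisk": 85, "trajectory": 70}),
    ("Recursive AI Capabilities", {"changeability": 35, "uncertainty": 70, "xrisk": 85, "trajectory": 90}),
    ("Surprise Threat Exposure",  {"changeability": 20, "uncertainty": 85, "xrisk": 70, "trajectory": 55}),
]
for name, r in sorted(rows, key=lambda row: -row[1]["xrisk"]):
    print(f"{name:26s} xrisk={r['xrisk']} changeability={r['changeability']}")
```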
Why This Framing Matters
Traditional Risk Framing
- "Trust erosion is a risk we must prevent"
- "Concentration of powerRiskAI-Driven Concentration of PowerDocuments how AI development is concentrating in ~20 organizations due to $100M+ compute costs, with 5 firms controlling 80%+ of cloud infrastructure and projections reaching $1-10B per model by 20...Quality: 65/100 threatens democracy"
- Focus: Avoiding negative outcomes
Parameter Framing
- "Trust is a parameter that AI affects in both directions"
- "Power distribution is a variable we can influence through policy"
- Focus: Understanding dynamics and identifying intervention points
The parameter framing enables:
- Better modeling: Can estimate current levels, trends, and intervention effects (sketched in code after this list)
- Clearer priorities: Which parameters matter most for good outcomes?
- Strategic allocation: Where should resources go to maintain critical parameters?
- Progress tracking: Are our interventions actually improving parameter levels?
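As a sketch of what "better modeling" could look like in practice, the function below projects a single parameter forward from a current level, a per-period drift, and an additive intervention effect. The linear dynamics, the 0-100 scale, and all the numbers are simplifying assumptions.

```python
# Project one parameter's level forward under a drift and an intervention.
# Linear, additive dynamics on a 0-100 scale: a deliberate simplification.
def project(level: float, drift: float, intervention: float, periods: int) -> list[float]:
    trajectory = []
    for _ in range(periods):
        level = min(100.0, max(0.0, level + drift + intervention))
        trajectory.append(round(level, 1))
    return trajectory

# Hypothetical numbers: trust eroding 2 points/period, an intervention restoring 1.5.
print(project(level=52.0, drift=-2.0, intervention=1.5, periods=5))
# [51.5, 51.0, 50.5, 50.0, 49.5]: the intervention slows, but does not reverse, the decline
```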
Relationship to Other Sections
| Section | Relationship to Parameters |
|---|---|
| Risks | Many risks describe decreases in parameters (e.g., "trust erosion" = trust declining) |
| Interventions | Interventions aim to increase or stabilize parameters |
| Metrics | Metrics are concrete measurements of parameter levels |
| Models | Analytical models often estimate parameter dynamics and trajectories |
How to Use This Section
For Researchers
- Understand which underlying variables matter for AI outcomes
- Identify gaps between current and optimal parameter levels
- Design studies to measure parameter changes
For Policymakers
- Prioritize interventions based on which parameters are most degraded
- Monitor parameter trends to assess policy effectiveness
- Coordinate across domains (a single parameter may affect multiple risks)
For Forecasters
- Use parameters as input variables for scenario modeling
- Estimate how different interventions would shift parameter levels
- Identify tipping points where parameter degradation becomes irreversible (see the sketch below)
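A minimal tipping-point sketch, under the strong assumption that degradation accelerates once a parameter falls below some threshold; both the threshold and the drift values are hypothetical.

```python
# Toy tipping-point dynamics: drift worsens once the level crosses a threshold.
# Threshold and drift values are hypothetical, chosen only to show the shape.
def simulate(level: float, threshold: float = 30.0, periods: int = 10) -> list[float]:
    out = []
    for _ in range(periods):
        drift = -1.0 if level > threshold else -4.0  # faster decay past the threshold
        level = max(0.0, level + drift)
        out.append(level)
    return out

print(simulate(34.0))
# [33.0, 32.0, 31.0, 30.0, 26.0, 22.0, 18.0, 14.0, 10.0, 6.0]
```

In a projection like this, the output to watch is the period where the slope changes: interventions applied before that point are far cheaper than ones applied after it.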