Worldview-Intervention Mapping

Model

Summary

This framework maps beliefs about AI timelines (short/medium/long), alignment difficulty (hard/medium/tractable), and coordination feasibility (feasible/difficult/impossible) to intervention priorities, showing 2-10x differences in optimal resource allocation across worldview clusters. The model identifies that 20-50% of field resources may be wasted through worldview-work mismatches, with specific portfolio recommendations for each worldview cluster.

Model Type: Strategic Framework
Focus: Worldview-Action Coherence
Key Output: Intervention priorities given different worldviews
Related Models: AI Risk Portfolio Analysis, AI Development Racing Dynamics

Overview

This model maps how beliefs about AI risk create distinct worldview clusters with dramatically different intervention priorities. Different worldviews imply 2-10x differences in optimal resource allocation across pause advocacy, technical research, and governance work.

The model identifies that misalignment between personal beliefs and work focus may waste 20-50% of field resources. AI safety researchers hold fundamentally different assumptions about timelines, technical difficulty, and coordination feasibility, but these differences often don't translate into coherent intervention choices.

The framework identifies four major worldview clusters, ranging from the "doomer" cluster (short timelines + hard alignment), which prioritizes pause advocacy, to the "technical optimist" cluster (medium timelines + tractable alignment), which emphasizes research investment.

Risk/Impact Assessment

| Dimension | Assessment | Evidence | Timeline |
|---|---|---|---|
| Severity | High | 2-10x resource allocation differences across worldviews | Immediate |
| Likelihood | Very High | Systematic worldview-work mismatches observed | Ongoing |
| Scope | Field-wide | Affects individual researchers, orgs, and funders | All levels |
| Trend | Worsening | Field growth without explicit worldview coordination | 2024-2027 |

Strategic Question Framework

Given your beliefs about AI risk, which interventions should you prioritize?

The core problem: People work on interventions that don't match their stated beliefs about AI development. This model makes explicit which interventions are most valuable under specific worldview assumptions.

How to Use This Framework

| Step | Action | How |
|---|---|---|
| 1 | Identify worldview | Assess beliefs on timeline/difficulty/coordination |
| 2 | Check priorities | Map beliefs to intervention recommendations |
| 3 | Audit alignment | Compare current work to worldview implications |
| 4 | Adjust strategy | Either change work focus or update worldview |

Core Worldview Dimensions

Three belief dimensions drive most disagreement about intervention priorities:

Dimension 1: Timeline Beliefs

| Timeline | Key Beliefs | Strategic Constraints | Supporting Evidence |
|---|---|---|---|
| Short (2025-2030) | AGI within 5 years; scaling continues; few obstacles | Little time for institutional change; must work with existing structures | Amodei prediction of powerful AI by 2026-2027 |
| Medium (2030-2040) | Transformative AI in 10-15 years; surmountable obstacles | Time for institution-building; research can mature | Metaculus consensus of ≈2032 for AGI |
| Long (2040+) | Major obstacles remain; slow takeoff; decades available | Full institutional development possible; fundamental research valuable | MIRI position on alignment difficulty |

Dimension 2: Alignment Difficulty

| Difficulty | Core Assumptions | Research Implications | Current Status |
|---|---|---|---|
| Hard | Alignment fundamentally unsolved; deception likely; current techniques inadequate | Technical solutions insufficient; need to slow/stop development | Scheming research shows deception possible |
| Medium | Alignment difficult but tractable; techniques improve with scale | Technical research highly valuable; sustained investment needed | Constitutional AI shows promise |
| Tractable | Alignment largely solved; RLHF + interpretability sufficient | Focus on deployment governance; limited technical urgency | OpenAI safety approach assumes tractability |

Dimension 3: Coordination Feasibility

| Feasibility | Institutional View | Policy Implications | Historical Precedent |
|---|---|---|---|
| Feasible | Treaties possible; labs coordinate; racing avoidable | Invest heavily in coordination mechanisms | Nuclear Test Ban Treaty, Montreal Protocol |
| Difficult | Partial coordination; major actors defect; limited cooperation | Focus on willing actors; partial governance | Climate agreements with partial compliance |
| Impossible | Pure competition; no stable equilibria; universal racing | Technical safety only; governance futile | Failed disarmament during arms races |

Four Major Worldview Clusters

Cluster 1: "Doomer" Worldview

Beliefs: Short timelines + Hard alignment + Coordination difficult

| Intervention Category | Priority | Expected ROI | Key Advocates |
|---|---|---|---|
| Pause/slowdown advocacy | Very High | 10x+ if successful | Eliezer Yudkowsky |
| Compute governance | Very High | 5-8x via bottlenecks | RAND reports |
| Technical safety research | High | 2-4x (low prob, high value) | MIRI approach |
| International coordination | Medium | 8x if achieved (low prob) | FHI governance work |
| Field-building | Low | 1-2x (insufficient time) | Long-term capacity building |
| Public engagement | Medium | 3-5x via political support | Pause AI movement |

Coherence Check: If you believe this worldview but work on field-building or long-term institution design, your work may be misaligned with your beliefs.

Cluster 2: "Technical Optimist" Worldview

Beliefs: Medium timelines + Medium difficulty + Coordination possible

| Intervention Category | Priority | Expected ROI | Leading Organizations |
|---|---|---|---|
| Technical safety research | Very High | 8-12x via direct solutions | Anthropic, Redwood |
| Interpretability | Very High | 6-10x via understanding | Chris Olah's work |
| Lab safety standards | High | 4-6x via industry norms | Partnership on AI |
| Compute governance | Medium | 3-5x supplementary value | CSET research |
| Pause advocacy | Low | 1x or negative (unnecessary) | Premature intervention |
| Field-building | High | 5-8x via capacity | CHAI, MATS |

Coherence Check: If you believe this worldview but work on pause advocacy or aggressive regulation, your efforts may be counterproductive.

Cluster 3: "Governance-Focused" Worldview

Beliefs: Medium-long timelines + Medium difficulty + Coordination feasible

| Intervention Category | Priority | Expected ROI | Key Institutions |
|---|---|---|---|
| International coordination | Very High | 10-15x via global governance | UK AISI, US AISI |
| Domestic regulation | Very High | 6-10x via norm-setting | EU AI Act |
| Institution-building | Very High | 8-12x via capacity | AI Safety Institute development |
| Technical standards | High | 4-6x enabling governance | NIST AI RMF |
| Technical research | Medium | 3-5x (others lead) | Research coordination role |
| Pause advocacy | Low | 1-2x (premature) | Governance development first |

Coherence Check: If you believe this worldview but focus purely on technical research, you may be underutilizing comparative advantage.

Cluster 4: "Accelerationist/Optimist" Worldview

Beliefs: Any timeline + Tractable alignment + Any coordination level

| Intervention Category | Priority | Expected ROI | Rationale |
|---|---|---|---|
| Capability development | Very High | 15-25x via benefits | AI solves problems faster than it creates them |
| Deployment governance | Medium | 2-4x addressing specific harms | Targeted harm prevention |
| Technical safety | Low | 1-2x (already adequate) | RLHF sufficient for current systems |
| Pause/slowdown | Very Low | Negative ROI | Delays beneficial AI |
| Aggressive regulation | Very Low | Large negative ROI | Stifles innovation unnecessarily |

Coherence Check: If you hold this worldview but work on safety research or pause advocacy, your work contradicts your beliefs about AI risk levels.
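These coherence checks can be made mechanical. The sketch below is illustrative only: the priority labels are copied (in abridged form) from the four cluster tables above, the category keys are simplified names, and the verdict thresholds are an assumption rather than part of the model.

```python
# Priority labels taken (abridged) from the four cluster tables above.
CLUSTER_PRIORITIES = {
    "doomer": {
        "pause_advocacy": "very_high", "compute_governance": "very_high",
        "technical_safety": "high", "international_coordination": "medium",
        "field_building": "low", "public_engagement": "medium",
    },
    "technical_optimist": {
        "technical_safety": "very_high", "interpretability": "very_high",
        "lab_safety_standards": "high", "compute_governance": "medium",
        "pause_advocacy": "low", "field_building": "high",
    },
    "governance_focused": {
        "international_coordination": "very_high", "domestic_regulation": "very_high",
        "institution_building": "very_high", "technical_standards": "high",
        "technical_safety": "medium", "pause_advocacy": "low",
    },
    "accelerationist": {
        "capability_development": "very_high", "deployment_governance": "medium",
        "technical_safety": "low", "pause_advocacy": "very_low",
        "aggressive_regulation": "very_low",
    },
}


def coherence_check(cluster: str, current_work: str) -> str:
    """Give a rough verdict on whether current work matches the cluster's priorities."""
    priority = CLUSTER_PRIORITIES[cluster].get(current_work, "unlisted")
    if priority in ("very_high", "high"):
        return f"{current_work}: aligned ({priority} priority under {cluster})"
    return f"{current_work}: possible mismatch ({priority} priority under {cluster})"


print(coherence_check("doomer", "field_building"))
# -> field_building: possible mismatch (low priority under doomer)
```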

Intervention Effectiveness Matrix

The following analysis shows how intervention effectiveness varies dramatically across worldviews:

| Intervention | Short+Hard (Doomer) | Short+Tractable (Sprint) | Long+Hard (Patient) | Long+Tractable (Optimist) |
|---|---|---|---|---|
| Pause/slowdown | Very High (10x) | Low (1x) | Medium (4x) | Very Low (-2x) |
| Compute governance | Very High (8x) | Medium (3x) | High (6x) | Low (1x) |
| Alignment research | High (3x) | Low (2x) | Very High (12x) | Low (1x) |
| Interpretability | High (4x) | Medium (5x) | Very High (10x) | Medium (3x) |
| International treaties | Medium (2x) | Low (1x) | Very High (15x) | Medium (4x) |
| Domestic regulation | Medium (3x) | Medium (4x) | High (8x) | Medium (3x) |
| Lab safety standards | High (6x) | High (7x) | High (8x) | Medium (4x) |
| Field-building | Low (1x) | Low (2x) | Very High (12x) | Medium (5x) |
| Public engagement | Medium (4x) | Low (2x) | High (7x) | Low (1x) |

Critical Insight

Working on interventions rated "Very High" under the wrong worldview can be 5-10x less effective than an optimal allocation. This represents one of the largest efficiency losses in the AI safety field.
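
The matrix also supports a simple expected-value reading. The following sketch is a rough illustration rather than part of the original analysis: it uses an abridged subset of the multipliers above and a purely hypothetical credence distribution over the four worldview columns, and scores each intervention by its credence-weighted multiplier.

```python
# Multipliers from the effectiveness matrix above (abridged; x = rough ROI multiple).
# Columns: doomer (short+hard), sprint (short+tractable),
#          patient (long+hard), optimist (long+tractable).
MATRIX = {
    "pause_slowdown":         {"doomer": 10, "sprint": 1, "patient": 4,  "optimist": -2},
    "compute_governance":     {"doomer": 8,  "sprint": 3, "patient": 6,  "optimist": 1},
    "alignment_research":     {"doomer": 3,  "sprint": 2, "patient": 12, "optimist": 1},
    "interpretability":       {"doomer": 4,  "sprint": 5, "patient": 10, "optimist": 3},
    "international_treaties": {"doomer": 2,  "sprint": 1, "patient": 15, "optimist": 4},
    "field_building":         {"doomer": 1,  "sprint": 2, "patient": 12, "optimist": 5},
}

# Hypothetical credences over the four worldview columns (must sum to 1).
credences = {"doomer": 0.2, "sprint": 0.2, "patient": 0.4, "optimist": 0.2}


def expected_multiplier(intervention: str) -> float:
    """Credence-weighted average of the intervention's per-worldview multiplier."""
    return sum(credences[w] * m for w, m in MATRIX[intervention].items())


for name in MATRIX:
    print(f"{name:22s} {expected_multiplier(name):5.1f}x")
```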

Portfolio Strategies for Uncertainty

Timeline Uncertainty Management

| Uncertainty Level | Recommended Allocation | Hedge Strategy |
|---|---|---|
| 50/50 short vs long | 60% urgent interventions, 40% patient capital | Compute governance + field-building |
| 70% short, 30% long | 80% urgent, 20% patient with option value | Standards + some institution-building |
| 30% short, 70% long | 40% urgent, 60% patient development | Institution-building + some standards |

Alignment Difficulty Hedging

| Belief Distribution | Technical Research | Governance/Coordination | Rationale |
|---|---|---|---|
| 50% hard, 50% tractable | 40% allocation | 60% allocation | Governance has value regardless |
| 80% hard, 20% tractable | 20% allocation | 80% allocation | Focus on buying time |
| 20% hard, 80% tractable | 70% allocation | 30% allocation | Technical solutions likely |

Coordination Feasibility Strategies

| Scenario | Unilateral Capacity | Multilateral Investment | Leading Actor Focus |
|---|---|---|---|
| High coordination feasibility | 20% | 60% | 20% |
| Medium coordination feasibility | 40% | 40% | 20% |
| Low coordination feasibility | 60% | 10% | 30% |
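
One way to operationalize these hedging tables is to blend per-scenario allocations by credence. The sketch below is illustrative: the two-bucket portfolios are assumed inputs, not the document's recommended numbers, though with the inputs shown a 50/50 credence happens to reproduce the 60/40 split from the timeline table above.

```python
def blend_portfolios(credences: dict[str, float],
                     portfolios: dict[str, dict[str, float]]) -> dict[str, float]:
    """Credence-weighted average of per-scenario allocations (each sums to 1.0)."""
    blended: dict[str, float] = {}
    for scenario, weight in credences.items():
        for bucket, share in portfolios[scenario].items():
            blended[bucket] = blended.get(bucket, 0.0) + weight * share
    return blended


# Hypothetical per-scenario allocations across two broad buckets.
portfolios = {
    "short_timelines": {"urgent_interventions": 0.9, "patient_capital": 0.1},
    "long_timelines":  {"urgent_interventions": 0.3, "patient_capital": 0.7},
}

blended = blend_portfolios({"short_timelines": 0.5, "long_timelines": 0.5}, portfolios)
print({k: round(v, 3) for k, v in blended.items()})
# -> {'urgent_interventions': 0.6, 'patient_capital': 0.4}
```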

Current State & Trajectory

Field-Wide Worldview Distribution

| Worldview Cluster | Estimated Prevalence | Resource Allocation | Alignment Score |
|---|---|---|---|
| Doomer | 15-20% of researchers | ≈30% of resources | Moderate misalignment |
| Technical Optimist | 40-50% of researchers | ≈45% of resources | Good alignment |
| Governance-Focused | 25-30% of researchers | ≈20% of resources | Poor alignment |
| Accelerationist | 5-10% of researchers | ≈5% of resources | Unknown |

Observed Misalignment Patterns

Based on AI Alignment Forum surveys and 80,000 Hours career advising:

| Common Mismatch | Frequency | Estimated Efficiency Loss |
|---|---|---|
| "Short timelines" researcher doing field-building | 25% of junior researchers | 3-5x effectiveness loss |
| "Alignment solved" researcher doing safety work | 15% of technical researchers | 2-3x effectiveness loss |
| "Coordination impossible" researcher doing policy | 10% of policy researchers | 4-6x effectiveness loss |

2024-2027 Trajectory Predictions

| Trend | Likelihood | Impact on Field Efficiency |
|---|---|---|
| Increased worldview polarization | High | -20% to -30% efficiency |
| Better worldview-work matching | Medium | +15% to +25% efficiency |
| Explicit worldview institutions | Low | +30% to +50% efficiency |

Key Uncertainties & Cruxes

Key Questions

  • What's the actual distribution of worldviews among AI safety researchers?
  • How much does worldview-work mismatch reduce field effectiveness quantitatively?
  • Can people reliably identify and articulate their own worldview assumptions?
  • Would explicit worldview discussion increase coordination or create harmful polarization?
  • How quickly should people update worldviews based on new evidence?
  • Do comparative advantages sometimes override worldview-based prioritization?

Resolution Timelines

| Uncertainty | Evidence That Would Resolve | Timeline |
|---|---|---|
| Actual worldview distribution | Comprehensive field survey | 6-12 months |
| Quantified efficiency losses | Retrospective impact analysis | 1-2 years |
| Worldview updating patterns | Longitudinal researcher tracking | 2-5 years |
| Institutional coordination effects | Natural experiments with explicit worldview orgs | 3-5 years |

Implementation Guidance

For Individual Researchers

| Career Stage | Primary Action | Secondary Actions |
|---|---|---|
| Graduate students | Identify worldview before specializing | Talk to advisors with different worldviews |
| Postdocs | Audit current work against worldview | Consider switching labs if misaligned |
| Senior researchers | Make worldview explicit in work | Mentor others on worldview coherence |
| Research leaders | Hire for worldview diversity | Create space for worldview discussion |

For Organizations

| Organization Type | Strategic Priority | Implementation Steps |
|---|---|---|
| Research organizations | Clarify institutional worldview | Survey staff, align strategy, communicate assumptions |
| Grantmaking organizations | Develop worldview-coherent portfolios | Map grantee worldviews, identify gaps, fund strategically |
| Policy organizations | Coordinate across worldview differences | Create cross-worldview working groups |
| Field-building organizations | Facilitate worldview discussion | Host workshops, create assessment tools |

For Funders

| Funding Approach | When Appropriate | Risk Management |
|---|---|---|
| Single worldview concentration | High confidence in specific worldview | Diversify across intervention types within worldview |
| Worldview hedging | High uncertainty about key parameters | Fund complementary approaches, avoid contradictory grants |
| Worldview arbitrage | Identified underinvested worldview-intervention combinations | Focus on neglected high-value combinations |

Failure Mode Analysis

Individual Failure Modes

| Failure Mode | Prevalence | Mitigation Strategy |
|---|---|---|
| Social conformity bias | High | Create protected spaces for worldview diversity |
| Career incentive misalignment | Medium | Reward worldview-coherent work choices |
| Worldview rigidity | Medium | Encourage regular worldview updating |
| False precision in beliefs | High | Emphasize uncertainty and portfolio approaches |

Institutional Failure Modes

| Failure Mode | Symptoms | Solution |
|---|---|---|
| Worldview monoculture | All staff share same assumptions | Actively hire for belief diversity |
| Incoherent strategy | Contradictory intervention portfolio | Make worldview assumptions explicit |
| Update resistance | Strategy unchanged despite new evidence | Create structured belief updating processes |

Sources & Resources

Research Literature

| Category | Key Sources | Quality | Focus |
|---|---|---|---|
| Worldview surveys | AI Alignment Forum survey | Medium | Community beliefs |
| Intervention effectiveness | 80,000 Hours research | High | Career prioritization |
| Strategic frameworks | Coefficient Giving worldview reports | High | Cause prioritization |

Tools & Assessments

| Resource | Purpose | Access |
|---|---|---|
| Worldview self-assessment | Individual belief identification | AI Safety Fundamentals |
| Intervention prioritization calculator | Portfolio optimization | EA Forum tools |
| Career decision frameworks | Work-belief alignment | 80,000 Hours coaching |

Organizations by Worldview

| Organization | Primary Worldview | Core Interventions |
|---|---|---|
| MIRI | Doomer (short+hard) | Agent foundations, pause advocacy |
| Anthropic | Technical optimist | Constitutional AI, interpretability |
| CSET | Governance-focused | Policy research, international coordination |
| Redwood Research | Technical optimist | Alignment research, interpretability |

Related Models & Pages

Complementary Models

  • AI Risk Portfolio Analysis - Risk category prioritization across scenarios
  • Racing Dynamics - How competition affects coordination feasibility
  • International Coordination Game - Factors affecting cooperation

Related Worldviews

  • Doomer Worldview - Short timelines, hard alignment assumptions
  • Governance-Focused Worldview - Coordination optimism, institution-building focus
  • Long Timelines Worldview - Patient capital, fundamental research emphasis

Related Pages

Models

  • AI Safety Intervention Effectiveness Matrix
  • AI Risk Activation Timeline Model
  • AI Safety Research Value Model

Concepts

  • Machine Intelligence Research Institute
  • UK AI Safety Institute
  • US AI Safety Institute
  • Redwood Research
  • Center for Human-Compatible AI
  • Eliezer Yudkowsky