
Misuse Potential

Entry: Misuse Potential

Model Role: Root Factor (AI System)
Key Parameters: Biological Threat Exposure, Cyber Threat Exposure, Racing Intensity
Primary Outcome: Existential Catastrophe
Related:
  • ai-transition-model-scenarios: Existential Catastrophe, Human-Caused Catastrophe
  • ai-transition-model-parameters: Biological Threat Exposure, Cyber Threat Exposure, Racing Intensity, AI Control Concentration

Misuse Potential measures the likelihood and severity of AI being weaponized or exploited by malicious actors. Unlike Misalignment Potential, which captures unintended AI behavior, Misuse Potential captures intentional harmful use by humans.

Primary outcome affected: Existential Catastrophe ↑↑↑

High misuse potential means there are more vectors for catastrophe even if AI systems themselves are well-aligned. These threats can overwhelm defenses, trigger cascading failures, or exploit concentrated power.


Component Parameters

The key parameters feeding this factor are Biological Threat Exposure, Cyber Threat Exposure, and Racing Intensity, with AI Control Concentration as a closely related parameter (see "Why Concentration Matters" below).
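
The model itself is qualitative, but a toy quantification can make the factor structure concrete. The sketch below shows one hypothetical way the component parameters could be combined into a rough misuse-potential score; the field names, weights, and functional form are assumptions for illustration, not part of the model.

```python
from dataclasses import dataclass

@dataclass
class MisuseInputs:
    # Illustrative scores in [0, 1]; the scale and weights are assumptions.
    bio_threat_exposure: float
    cyber_threat_exposure: float
    racing_intensity: float
    control_concentration: float

def misuse_potential(x: MisuseInputs) -> float:
    """Toy aggregation: threat exposures form a base score that racing
    intensity and control concentration then amplify."""
    base = 0.5 * x.bio_threat_exposure + 0.5 * x.cyber_threat_exposure
    amplifier = (1 + x.racing_intensity) * (1 + x.control_concentration)
    return min(1.0, base * amplifier / 4)  # rescale back into [0, 1]

print(misuse_potential(MisuseInputs(0.6, 0.7, 0.8, 0.5)))  # ~0.44
```

The multiplicative amplifier terms mirror the idea, developed below, that racing pressure and concentrated control amplify the underlying threat exposures rather than merely adding to them.
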
Internal Dynamics

These threats compound each other:

  • Concentration amplifies impact: When control is concentrated, single failures affect everyone
  • Bio and cyber interact: Same AI capabilities that enable one often enable the other
  • Threat success breeds threat: Successful attacks demonstrate viability and attract more actors

This creates threat escalation dynamics—each incident makes the next more likely.
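
To make the escalation claim concrete, here is a minimal, purely illustrative simulation in which each incident raises the probability of the next attempt; the step count, base probability, and boost size are assumed numbers, not empirical estimates.

```python
import random

def simulate_escalation(steps: int = 20, p_attempt: float = 0.05,
                        boost: float = 0.03, seed: int = 0) -> list[float]:
    """Toy escalation dynamic: each incident raises the per-step probability
    of another attempt. All numbers are illustrative assumptions."""
    random.seed(seed)
    trajectory = []
    for _ in range(steps):
        trajectory.append(p_attempt)
        if random.random() < p_attempt:              # an incident occurs
            p_attempt = min(1.0, p_attempt + boost)  # success attracts more actors
    return trajectory

print(simulate_escalation())
```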


How This Affects Scenarios

Higher Misuse Potential pushes outcomes toward the scenarios listed in the entry above: Existential Catastrophe and Human-Caused Catastrophe.

Relationship to Misalignment Potential

Misuse Potential and Misalignment Potential are complementary risk factors:

| Factor | Misalignment Potential | Misuse Potential |
|---|---|---|
| Source | AI behavior | Human actors |
| Direction | AI pursues unintended goals | Humans weaponize AI |
| Focus | Technical alignment | Access controls & defense |
| Improvability | Research & investment | Harder to reduce |

Both matter: Either can cause existential catastrophe independently. High misalignment potential means AI might cause harm on its own; high misuse potential means humans will use AI to cause harm.
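
One way to see why both channels matter: if, purely for illustration, the two risks were treated as independent with the toy probabilities below, the combined risk exceeds either alone, and reducing one channel leaves the other intact.

```python
# Toy calculation, assuming (only for this sketch) that the two risk
# channels are independent and have the illustrative probabilities below.
p_misalignment = 0.10
p_misuse = 0.10
p_either = 1 - (1 - p_misalignment) * (1 - p_misuse)
print(round(p_either, 2))  # 0.19: higher than either channel on its own
```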


Why Concentration Matters

AI Control Concentration is unique because its effect depends on who holds that control:

  • If control concentrates in safety-conscious actors: May reduce risk
  • If control concentrates in reckless or malicious actors: Dramatically increases risk
  • In either case: Concentration reduces resilience against bad actors gaining control

This makes concentration a key uncertainty in existential catastrophe assessment.
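
A minimal sketch of this conditional effect, with actor categories and multipliers that are illustrative assumptions rather than model outputs:

```python
def concentration_risk_multiplier(concentration: float, actor: str) -> float:
    """Toy conditional effect of AI Control Concentration (in [0, 1]).
    Actor categories and multipliers are illustrative assumptions."""
    if actor == "safety_conscious":
        return 1 - 0.3 * concentration   # may reduce risk
    if actor in ("reckless", "malicious"):
        return 1 + 2.0 * concentration   # dramatically increases risk
    return 1.0                           # unknown actor: treated as neutral

print(concentration_risk_multiplier(0.8, "malicious"))  # 2.6
```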

Related Pages

Transition Model

Racing Intensity, AI Control Concentration, Misalignment Potential, AI Uses, AI Ownership, Civilizational Competence

Concepts

AI Misuse

Risks

AI-Enabled Biological Risks, AI-Enabled Cyberattacks

Key Debates

AI Misuse Risk Cruxes