Misuse Potential
Misuse Potential measures the likelihood and severity of AI being weaponized or exploited by malicious actors. Unlike Misalignment Potential (which measures unintended AI behavior), Misuse Potential measures intentional harmful use by humans.
Primary outcome affected: Existential Catastrophe
High misuse potential creates more vectors for catastrophe even when AI systems themselves are well-aligned. These threats can overwhelm defenses, trigger cascading failures, or exploit concentrated power.
Component Parameters
Internal Dynamics
These threats compound each other:
- Concentration amplifies impact: When control is concentrated, single failures affect everyone
- Bio and cyber interact: Same AI capabilities that enable one often enable the other
- Success breeds imitation: Successful attacks demonstrate viability and attract more actors
This creates threat escalation dynamics: each incident makes the next more likely.
How This Affects Scenarios
Relationship to Misalignment Potential
Misuse Potential and Misalignment Potential are complementary risk factors:
| Factor | Misalignment Potential | Misuse Potential |
|---|---|---|
| Source | AI behavior | Human actors |
| Direction | AI pursues unintended goals | Humans weaponize AI |
| Focus | Technical alignment | Access controls & defense |
| Improvability | Reducible through research & investment | Harder to reduce |
Both matter: Either can cause existential catastrophe independently. High misalignment potential means AI might cause harm on its own; high misuse potential means humans will use AI to cause harm.
Why Concentration Matters
AI Control Concentration is unique because its effect depends on who holds control:
- If control concentrates in safety-conscious actors: May reduce risk
- If control concentrates in reckless or malicious actors: Dramatically increases risk
- In either case: Reduces resilience to bad actors gaining control
This makes concentration a key uncertainty in existential catastrophe assessment.