Misuse Potential
misuse-potential (E207)
Path: /ai-transition-model/factors-misuse-potential-overview/
Page Metadata
No page data found for "misuse-potential"
Entity Data
{
  "id": "misuse-potential",
  "type": "ai-transition-model-factor",
  "title": "Misuse Potential",
  "description": "The aggregate risk from deliberate harmful use of AI—including biological weapons, cyber attacks, autonomous weapons, and surveillance misuse.",
  "tags": [
    "ai-transition-model",
    "factor",
    "misuse",
    "weapons"
  ],
  "relatedEntries": [
    {
      "id": "existential-catastrophe",
      "type": "ai-transition-model-scenario",
      "relationship": "drives"
    },
    {
      "id": "human-catastrophe",
      "type": "ai-transition-model-scenario",
      "relationship": "enables"
    },
    {
      "id": "biological-threat-exposure",
      "type": "ai-transition-model-parameter",
      "relationship": "composed-of"
    },
    {
      "id": "cyber-threat-exposure",
      "type": "ai-transition-model-parameter",
      "relationship": "composed-of"
    },
    {
      "id": "racing-intensity",
      "type": "ai-transition-model-parameter",
      "relationship": "composed-of"
    },
    {
      "id": "ai-control-concentration",
      "type": "ai-transition-model-parameter",
      "relationship": "composed-of"
    }
  ],
  "sources": [],
  "lastUpdated": "2026-01",
  "customFields": [
    {
      "label": "Model Role",
      "value": "Root Factor (AI System)"
    },
    {
      "label": "Key Parameters",
      "value": "Biological Threat Exposure, Cyber Threat Exposure, Racing Intensity"
    },
    {
      "label": "Primary Outcome",
      "value": "Existential Catastrophe"
    }
  ]
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (3)
| id | title | type | relationship |
|---|---|---|---|
| ai-capabilities | AI Capabilities | ai-transition-model-factor | amplifies |
| existential-catastrophe | Existential Catastrophe | ai-transition-model-scenario | driver |
| human-catastrophe | Human-Caused Catastrophe | ai-transition-model-scenario | driven-by |
Frontmatter
{
  "numericId": "E666",
  "title": "Misuse Potential",
  "description": "Root factor measuring the risk of AI being weaponized or exploited by malicious actors. Primary driver of Human-Caused Catastrophe scenarios.",
  "sidebar": {
    "label": "Overview",
    "order": 0
  },
  "lastEdited": "2026-01-03",
  "subcategory": "factors-misuse-potential"
}
Raw MDX Source
---
numericId: E666
title: Misuse Potential
description: Root factor measuring the risk of AI being weaponized or exploited by malicious actors. Primary driver of Human-Caused Catastrophe scenarios.
sidebar:
label: Overview
order: 0
lastEdited: "2026-01-03"
subcategory: factors-misuse-potential
---
import {DataInfoBox, FactorSubItemsList, FactorRelationshipDiagram, ImpactList, PageCauseEffectGraph, EntityLink} from '@components/wiki';
<DataInfoBox entityId="E207" />
Misuse Potential measures the likelihood and severity of AI being weaponized or exploited by malicious actors. Where <EntityLink id="E665">Misalignment Potential</EntityLink> covers unintended AI behavior, Misuse Potential covers deliberate harmful use by humans.
**Primary outcome affected:** <EntityLink id="E130">Existential Catastrophe</EntityLink> ↑↑↑
High misuse potential means more vectors for catastrophe even if AI systems are well-aligned. These threats can overwhelm defenses, trigger cascading failures, or exploit concentrated power.
---
## Component Parameters
<FactorRelationshipDiagram nodeId="misuse-potential" direction="outgoing" showSubItems={true} />
<FactorSubItemsList factorId="misuse-potential" />
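As a rough illustration of how the component parameters might combine into a single factor score, here is a minimal TypeScript sketch. The parameter names mirror the entity data above, but the weights, the bio/cyber averaging, and the amplifier form are assumptions for illustration only, not the model's actual scoring rule.

```typescript
// Hypothetical aggregation of component parameters into a misuse-potential
// score in [0, 1]. Weights and functional form are illustrative assumptions.
type ParameterScores = {
  biologicalThreatExposure: number; // 0..1
  cyberThreatExposure: number;      // 0..1
  racingIntensity: number;          // 0..1
  aiControlConcentration: number;   // 0..1
};

function misusePotential(p: ParameterScores): number {
  // Direct threat surface: average of bio and cyber exposure.
  const threatSurface =
    (p.biologicalThreatExposure + p.cyberThreatExposure) / 2;
  // Racing intensity and control concentration act as amplifiers.
  const amplifier =
    1 + 0.5 * p.racingIntensity + 0.5 * p.aiControlConcentration;
  return Math.min(1, threatSurface * amplifier);
}

const score = misusePotential({
  biologicalThreatExposure: 0.4,
  cyberThreatExposure: 0.5,
  racingIntensity: 0.6,
  aiControlConcentration: 0.3,
});
console.log(score.toFixed(2)); // 0.65 under these illustrative weights
```

The amplifier structure reflects the text's claim that racing and concentration do not create threats directly but magnify the ones bio and cyber exposure supply.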
---
## Internal Dynamics
These threats compound each other:
- **Concentration amplifies impact**: When control is concentrated, single failures affect everyone
- **Bio and cyber interact**: Same AI capabilities that enable one often enable the other
- **Success breeds imitation**: Successful attacks demonstrate viability and attract more actors
This creates **threat escalation dynamics**—each incident makes the next more likely.
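The escalation dynamic above can be sketched as a simple compounding probability: each demonstrated success multiplies the per-period attack probability. The baseline rate and the per-success multiplier are illustrative assumptions, not estimates.

```typescript
// Illustrative threat-escalation model: each successful attack raises the
// probability of the next. The escalation multiplier is an assumption.
function escalatedProbability(
  baseline: number,  // per-period attack probability before any incidents
  incidents: number, // number of prior successful attacks
  escalation = 1.5   // assumed multiplier per demonstrated success
): number {
  return Math.min(1, baseline * Math.pow(escalation, incidents));
}

// After three demonstrated successes, a 5% baseline risk more than triples.
console.log(escalatedProbability(0.05, 0)); // 0.05
console.log(escalatedProbability(0.05, 3)); // 0.16875
```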
---
## How This Affects Scenarios
<ImpactList nodeId="misuse-potential" direction="from" />
---
## Relationship to Misalignment Potential
Misuse Potential and <EntityLink id="E665">Misalignment Potential</EntityLink> are complementary risk factors:
| Factor | Misalignment Potential | Misuse Potential |
|--------|----------------------|------------------|
| Source | AI behavior | Human actors |
| Direction | AI pursues unintended goals | Humans weaponize AI |
| Focus | Technical alignment | Access controls & defense |
| Improvability | Research & investment | Harder to reduce |
**Both matter**: Either can cause existential catastrophe independently. High misalignment potential means AI might cause harm on its own; high misuse potential means humans will use AI to cause harm.
---
## Why Concentration Matters
<EntityLink id="E7">AI Control Concentration</EntityLink> is unique because its effect **depends on who holds the control**:
- If control concentrates among **safety-conscious actors**: may reduce risk
- If control concentrates among **reckless or malicious actors**: dramatically increases risk
- In either case: concentration **reduces resilience**, because a bad actor who gains control gains it over everything at once
This makes concentration a key uncertainty in existential catastrophe assessment.
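A minimal sketch of this sign flip, assuming hypothetical per-disposition multipliers (the numbers carry no empirical weight):

```typescript
// Hypothetical sketch: the same concentration level maps to a different
// risk change depending on the controlling actor's disposition.
type Disposition = "safety-conscious" | "reckless" | "malicious";

function concentrationRiskDelta(
  concentration: number, // 0..1
  who: Disposition
): number {
  // Baseline: concentration always erodes resilience somewhat (assumed).
  const resilienceLoss = 0.2 * concentration;
  // Assumed disposition effects; only the signs matter for the argument.
  const dispositionEffect: Record<Disposition, number> = {
    "safety-conscious": -0.3,
    "reckless": 0.4,
    "malicious": 0.8,
  };
  return resilienceLoss + dispositionEffect[who] * concentration;
}

console.log(concentrationRiskDelta(0.5, "safety-conscious")); // negative: risk falls
console.log(concentrationRiskDelta(0.5, "malicious"));        // positive: risk rises
```

Because the sign of the effect flips with the controller's disposition, concentration cannot be scored as uniformly good or bad, which is exactly why it is flagged as a key uncertainty.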