Misuse Potential
factors-misuse-potential-overview (E666)
Path: /ai-transition-model/factors-misuse-potential-overview/
Page Metadata
{
"id": "factors-misuse-potential-overview",
"numericId": "E666",
"path": "/ai-transition-model/factors-misuse-potential-overview/",
"filePath": "ai-transition-model/factors-misuse-potential-overview.mdx",
"title": "Misuse Potential",
"quality": null,
"importance": null,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-03",
"llmSummary": null,
"structuredSummary": null,
"description": "Root factor measuring the risk of AI being weaponized or exploited by malicious actors. Primary driver of Human-Caused Catastrophe scenarios.",
"ratings": null,
"category": "ai-transition-model",
"subcategory": "factors-misuse-potential",
"clusters": [
"ai-safety"
],
"metrics": {
"wordCount": 283,
"tableCount": 1,
"diagramCount": 0,
"internalLinks": 4,
"externalLinks": 0,
"footnoteCount": 0,
"bulletRatio": 0.17,
"sectionCount": 5,
"hasOverview": false,
"structuralScore": 5
},
"suggestedQuality": 33,
"updateFrequency": null,
"evergreen": true,
"wordCount": 283,
"unconvertedLinks": [],
"unconvertedLinkCount": 0,
"convertedLinkCount": 0,
"backlinkCount": 0,
"redundancy": {
"maxSimilarity": 15,
"similarPages": [
{
"id": "factors-misalignment-potential-overview",
"title": "Misalignment Potential",
"path": "/ai-transition-model/factors-misalignment-potential-overview/",
"similarity": 15
}
]
}
}
Entity Data
No entity found for "factors-misuse-potential-overview"
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (0)
No backlinks
Frontmatter
{
"numericId": "E666",
"title": "Misuse Potential",
"description": "Root factor measuring the risk of AI being weaponized or exploited by malicious actors. Primary driver of Human-Caused Catastrophe scenarios.",
"sidebar": {
"label": "Overview",
"order": 0
},
"lastEdited": "2026-01-03",
"subcategory": "factors-misuse-potential"
}
Raw MDX Source
---
numericId: E666
title: Misuse Potential
description: Root factor measuring the risk of AI being weaponized or exploited by malicious actors. Primary driver of Human-Caused Catastrophe scenarios.
sidebar:
  label: Overview
  order: 0
lastEdited: "2026-01-03"
subcategory: factors-misuse-potential
---
import {DataInfoBox, FactorSubItemsList, FactorRelationshipDiagram, ImpactList, PageCauseEffectGraph, EntityLink} from '@components/wiki';
<DataInfoBox entityId="E207" />
Misuse Potential measures the likelihood and severity of AI being weaponized or exploited by malicious actors. Unlike <EntityLink id="E665">Misalignment Potential</EntityLink> (which measures unintended AI behavior), Misuse Potential measures intentional harmful use by humans.
**Primary outcome affected:** <EntityLink id="E130">Existential Catastrophe</EntityLink> ↑↑↑
High misuse potential means more vectors for catastrophe exist even if AI systems are well-aligned: malicious actors can overwhelm defenses, trigger cascading failures, or exploit concentrated power.
---
## Component Parameters
<FactorRelationshipDiagram nodeId="misuse-potential" direction="outgoing" showSubItems={true} />
<FactorSubItemsList factorId="misuse-potential" />
---
## Internal Dynamics
These threats compound each other:
- **Concentration amplifies impact**: When control is concentrated, single failures affect everyone
- **Bio and cyber interact**: Same AI capabilities that enable one often enable the other
- **Success breeds escalation**: Successful attacks demonstrate viability and attract more actors
This creates **threat escalation dynamics**—each incident makes the next more likely.
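The escalation dynamic above can be sketched as a simple feedback loop. This is a purely illustrative model, not one from the page: the initial probability, growth factor, and cap below are arbitrary assumptions chosen only to show how "each incident makes the next more likely" compounds over time.

```python
# Hypothetical sketch of threat-escalation dynamics: each period, a
# successful incident multiplies the next period's attack probability.
# All parameters (p0, boost, cap) are illustrative assumptions.

def escalation(p0=0.05, boost=1.5, cap=0.9, periods=10):
    """Return per-period attack probabilities under a simple
    success-breeds-success feedback: the probability is multiplied
    by `boost` each period, capped at `cap`."""
    p, series = p0, []
    for _ in range(periods):
        series.append(p)
        p = min(p * boost, cap)
    return series

probs = escalation()
# The series rises monotonically from p0 toward the cap,
# illustrating how early incidents compound into high ongoing risk.
```

Even with a modest per-incident boost, the toy model saturates near its cap within a handful of periods, which is the qualitative point of the escalation claim.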
---
## How This Affects Scenarios
<ImpactList nodeId="misuse-potential" direction="from" />
---
## Relationship to Misalignment Potential
Misuse Potential and <EntityLink id="E665">Misalignment Potential</EntityLink> are complementary risk factors:
| Dimension | Misalignment Potential | Misuse Potential |
|-----------|------------------------|------------------|
| Source | AI behavior | Human actors |
| Direction | AI pursues unintended goals | Humans weaponize AI |
| Focus | Technical alignment | Access controls & defense |
| Improvability | Tractable via research & investment | Harder to reduce |
**Both matter**: Either can cause existential catastrophe independently. High misalignment potential means AI might cause harm on its own; high misuse potential means humans will use AI to cause harm.
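The "either can cause existential catastrophe independently" claim has a simple probabilistic reading. Treating the two factors as independent risks (an assumption for illustration, with made-up numbers), the combined probability is one minus the chance that neither occurs:

```python
# Minimal sketch of "both matter": if misuse and misalignment are
# modeled as independent failure modes, combined risk is
# 1 - (1 - p_misuse) * (1 - p_misalign).
# The input probabilities below are illustrative assumptions only.

def combined_risk(p_misuse: float, p_misalign: float) -> float:
    """Probability that at least one independent failure mode occurs."""
    return 1 - (1 - p_misuse) * (1 - p_misalign)

print(round(combined_risk(0.10, 0.10), 2))  # 0.19 under the toy inputs
```

The point of the sketch: reducing only one factor leaves a floor on total risk set by the other, so neither can be neglected.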
---
## Why Concentration Matters
<EntityLink id="E7">AI Control Concentration</EntityLink> is unique because its effect **depends on who controls**:
- If control concentrates in **safety-conscious actors**: May reduce risk
- If control concentrates in **reckless or malicious actors**: Dramatically increases risk
- Either way, concentration **reduces resilience** against bad actors gaining control
This makes concentration a key uncertainty in existential catastrophe assessment.