Deepfakes

deepfakes (E96)
Path: /knowledge-base/risks/deepfakes/
Page Metadata
{
  "id": "deepfakes",
  "numericId": null,
  "path": "/knowledge-base/risks/deepfakes/",
  "filePath": "knowledge-base/risks/deepfakes.mdx",
  "title": "Deepfakes",
  "quality": 50,
  "importance": 52,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": "pathway",
  "lastUpdated": "2025-12-24",
  "llmSummary": "Comprehensive overview of deepfake risks documenting $60M+ in fraud losses, 90%+ non-consensual imagery prevalence, and declining detection effectiveness (65% best accuracy). Reviews technical capabilities, harm categories, and countermeasures including C2PA content authentication, but focuses primarily on describing the problem rather than prioritizing interventions.",
  "structuredSummary": null,
  "description": "AI-generated synthetic media creating fraud, harassment, and erosion of trust in authentic evidence through sophisticated impersonation capabilities",
  "ratings": {
    "novelty": 3,
    "rigor": 5.5,
    "actionability": 4.5,
    "completeness": 6
  },
  "category": "risks",
  "subcategory": "misuse",
  "clusters": [
    "ai-safety",
    "cyber"
  ],
  "metrics": {
    "wordCount": 1482,
    "tableCount": 14,
    "diagramCount": 0,
    "internalLinks": 44,
    "externalLinks": 0,
    "footnoteCount": 0,
    "bulletRatio": 0.28,
    "sectionCount": 32,
    "hasOverview": true,
    "structuralScore": 10
  },
  "suggestedQuality": 67,
  "updateFrequency": 21,
  "evergreen": true,
  "wordCount": 1482,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 35,
  "backlinkCount": 15,
  "redundancy": {
    "maxSimilarity": 18,
    "similarPages": [
      {
        "id": "fraud",
        "title": "AI-Powered Fraud",
        "path": "/knowledge-base/risks/fraud/",
        "similarity": 18
      },
      {
        "id": "epistemic-collapse",
        "title": "Epistemic Collapse",
        "path": "/knowledge-base/risks/epistemic-collapse/",
        "similarity": 14
      },
      {
        "id": "epistemic-security",
        "title": "AI-Era Epistemic Security",
        "path": "/knowledge-base/responses/epistemic-security/",
        "similarity": 13
      },
      {
        "id": "disinformation",
        "title": "Disinformation",
        "path": "/knowledge-base/risks/disinformation/",
        "similarity": 13
      },
      {
        "id": "persuasion",
        "title": "Persuasion and Social Manipulation",
        "path": "/knowledge-base/capabilities/persuasion/",
        "similarity": 12
      }
    ]
  }
}
Entity Data
{
  "id": "deepfakes",
  "type": "risk",
  "title": "Deepfakes",
  "description": "Deepfakes are AI-generated synthetic media—typically video or audio—that realistically depict people saying or doing things they never did. The technology has rapidly advanced from obviously fake to nearly indistinguishable from reality, creating both direct harms (fraud, harassment, defamation) and systemic harms (erosion of trust in authentic ...",
  "tags": [
    "synthetic-media",
    "identity",
    "authentication",
    "digital-trust",
    "ai-misuse"
  ],
  "relatedEntries": [
    {
      "id": "disinformation",
      "type": "risk"
    },
    {
      "id": "trust-decline",
      "type": "risk"
    }
  ],
  "sources": [
    {
      "title": "Deepfakes and the New Disinformation War",
      "url": "https://www.foreignaffairs.com/"
    },
    {
      "title": "C2PA Content Authenticity Standards",
      "url": "https://c2pa.org/"
    },
    {
      "title": "Fighting Deepfakes With Content Credentials and C2PA",
      "url": "https://www.cmswire.com/digital-experience/fighting-deepfakes-with-content-credentials-and-c2pa/"
    },
    {
      "title": "Content Credentials and 2024 Elections (IEEE Spectrum)",
      "url": "https://spectrum.ieee.org/deepfakes-election"
    },
    {
      "title": "Deepfakes and Disinformation: Elections Impact (TechUK)",
      "url": "https://www.techuk.org/resource/deepfakes-and-disinformation-what-impact-could-this-have-on-elections-in-2024.html",
      "date": "2024"
    },
    {
      "title": "Deepfake Media Forensics: Status and Challenges (PMC)",
      "url": "https://pmc.ncbi.nlm.nih.gov/articles/PMC11943306/",
      "date": "2024"
    },
    {
      "title": "Synthetic Media and Deepfakes (CNTI)",
      "url": "https://innovating.news/article/synthetic-media-deepfakes/"
    },
    {
      "title": "Deepfake Detection Legal Framework Proposal",
      "url": "https://www.sciencedirect.com/science/article/pii/S2212473X25000355",
      "date": "2025"
    }
  ],
  "lastUpdated": "2025-12",
  "customFields": [
    {
      "label": "Status",
      "value": "Widespread"
    },
    {
      "label": "Key Risk",
      "value": "Authenticity crisis"
    }
  ],
  "severity": "medium-high",
  "likelihood": {
    "level": "very-high",
    "status": "occurring"
  },
  "timeframe": {
    "median": 2025
  },
  "maturity": "Mature"
}
Canonical Facts (0)

No facts for this entity

External Links
{
  "wikipedia": "https://en.wikipedia.org/wiki/Deepfake",
  "wikidata": "https://www.wikidata.org/wiki/Q49473179"
}
Backlinks (15)
| id | title | type | relationship |
|----|-------|------|--------------|
| societal-trust | Societal Trust | ai-transition-model-parameter | decreases |
| misuse-risks | AI Misuse Risk Cruxes | crux | |
| epistemic-risks | AI Epistemic Cruxes | crux | |
| deepfakes-authentication-crisis | Deepfakes Authentication Crisis Model | model | related |
| authentication-collapse-timeline | Authentication Collapse Timeline Model | model | related |
| cyber-psychosis-cascade | Cyber Psychosis Cascade Model | model | related |
| fraud-sophistication-curve | Fraud Sophistication Curve Model | model | related |
| marc-andreessen | Marc Andreessen (AI Investor) | researcher | |
| epistemic-security | AI-Era Epistemic Security | approach | |
| content-authentication | AI Content Authentication | approach | |
| disinformation | AI Disinformation | risk | |
| epistemic-collapse | Epistemic Collapse | risk | |
| fraud | AI-Powered Fraud | risk | |
| trust-decline | AI-Driven Trust Decline | risk | |
| ai-enabled-untraceable-misuse | AI-Enabled Untraceable Misuse | risk | |
Frontmatter
{
  "title": "Deepfakes",
  "description": "AI-generated synthetic media creating fraud, harassment, and erosion of trust in authentic evidence through sophisticated impersonation capabilities",
  "sidebar": {
    "order": 5
  },
  "maturity": "Mature",
  "quality": 50,
  "llmSummary": "Comprehensive overview of deepfake risks documenting $60M+ in fraud losses, 90%+ non-consensual imagery prevalence, and declining detection effectiveness (65% best accuracy). Reviews technical capabilities, harm categories, and countermeasures including C2PA content authentication, but focuses primarily on describing the problem rather than prioritizing interventions.",
  "lastEdited": "2025-12-24",
  "importance": 52,
  "update_frequency": 21,
  "causalLevel": "pathway",
  "todos": [
    "Complete 'How It Works' section"
  ],
  "ratings": {
    "novelty": 3,
    "rigor": 5.5,
    "actionability": 4.5,
    "completeness": 6
  },
  "clusters": [
    "ai-safety",
    "cyber"
  ],
  "subcategory": "misuse",
  "entityType": "risk"
}
Raw MDX Source
---
title: Deepfakes
description: AI-generated synthetic media creating fraud, harassment, and erosion of trust in authentic evidence through sophisticated impersonation capabilities
sidebar:
  order: 5
maturity: Mature
quality: 50
llmSummary: Comprehensive overview of deepfake risks documenting $60M+ in fraud losses, 90%+ non-consensual imagery prevalence, and declining detection effectiveness (65% best accuracy). Reviews technical capabilities, harm categories, and countermeasures including C2PA content authentication, but focuses primarily on describing the problem rather than prioritizing interventions.
lastEdited: "2025-12-24"
importance: 52
update_frequency: 21
causalLevel: pathway
todos:
  - Complete 'How It Works' section
ratings:
  novelty: 3
  rigor: 5.5
  actionability: 4.5
  completeness: 6
clusters:
  - ai-safety
  - cyber
subcategory: misuse
entityType: risk
---
import {DataInfoBox, R, EntityLink, DataExternalLinks} from '@components/wiki';

<DataExternalLinks pageId="deepfakes" />

<DataInfoBox entityId="E96" />

## Overview

Deepfakes are AI-generated synthetic media—typically video or audio—that realistically depict people saying or doing things they never did. The technology has evolved from obviously artificial content in 2017 to nearly indistinguishable synthetic media by 2024, creating both direct harms through fraud and harassment and systemic harms by eroding trust in authentic evidence.

High-profile fraud cases demonstrate the financial risks: a \$25.6 million theft at <R id="47f3128e5e7568af">Arup Hong Kong</R> involved an entire video conference of deepfaked executives, while a 2020 case used voice cloning of a company director to steal \$35 million. Beyond individual crimes, deepfakes create a "liar's dividend" where authentic evidence becomes deniable, threatening democratic discourse and justice systems.

| Risk Category | Current Impact | 5-Year Projection | Evidence |
|---------------|----------------|-------------------|----------|
| Financial Fraud | \$60M+ documented losses | Billions annually | <R id="7cfac8f3f8a27b01">FBI IC3</R> |
| Non-consensual Imagery | 90%+ of deepfake videos | Automated harassment | <R id="76caf48d6525d816">Sensity AI Report</R> |
| Political Manipulation | Low but growing | Election interference | <R id="35e3244199e922ad">Reuters Institute</R> |
| Evidence Denial | Emerging | Widespread doubt | Academic studies |

## Risk Assessment

| Factor | Severity | Likelihood | Timeline | Trend |
|--------|----------|------------|----------|--------|
| Financial Fraud | High | Very High | Current | Increasing |
| Harassment Campaigns | High | High | Current | Stable |
| Political <EntityLink id="E102">Disinformation</EntityLink> | Medium-High | Medium | 2-3 years | Increasing |
| Evidence Erosion | Very High | High | 3-5 years | Accelerating |

## Technical Capabilities & Development

### Current Generation Quality

| Capability | 2017 | 2024 | Evidence |
|------------|------|------|---------|
| Face Swapping | Obvious artifacts | Near-perfect quality | <R id="ac49b80df960f905">FaceSwap benchmarks</R> |
| Voice Cloning | Minutes of training data | 3-10 seconds needed | <R id="5a71dcde353b55d6">ElevenLabs</R>, <R id="0727e48c90269b22">Microsoft VALL-E</R> |
| Real-time Generation | Impossible | Live video calls | <R id="889388dfe364a550">DeepFaceLive</R> |
| Detection Resistance | Easily caught | Specialized tools required | <R id="0137bd3f0cb36015">DFDC Challenge results</R> |

### Key Technical Advances

**Real-time Generation**: Modern deepfake tools can generate synthetic faces during live video calls, enabling new forms of impersonation fraud. <R id="889388dfe364a550">DeepFaceLive</R> and similar tools require only consumer-grade GPUs.

**Few-shot Voice Cloning**: Services like <R id="5a71dcde353b55d6">ElevenLabs</R> can clone voices from seconds of audio. <R id="0727e48c90269b22">Microsoft's VALL-E</R> demonstrates even more sophisticated capabilities.

**<EntityLink id="E583">Adversarial Training</EntityLink>**: Modern generators specifically train to evade detection systems, creating an arms race where detection lags behind generation quality.
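
A minimal sketch of this dynamic, using toy PyTorch models as stand-ins for real face generators and detectors: the generator's loss directly rewards fooling the current detector, so any deployed detector becomes a training signal for evasion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a face generator and a deepfake detector.
# Real systems use large conv nets; the arms-race logic is the same.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
detector = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)

real_images = torch.rand(256, 784) * 2 - 1  # placeholder for real face crops

for step in range(1000):
    # Detector update: learn to separate real from generated samples.
    z = torch.randn(64, 64)
    fake = generator(z).detach()
    real = real_images[torch.randint(0, 256, (64,))]
    d_loss = (F.binary_cross_entropy_with_logits(detector(real), torch.ones(64, 1))
              + F.binary_cross_entropy_with_logits(detector(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: explicitly optimize to fool the *current* detector.
    z = torch.randn(64, 64)
    g_loss = F.binary_cross_entropy_with_logits(detector(generator(z)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Because the generator's objective targets the detector's decision boundary directly, improvements on the detection side are absorbed rather than decisive.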

## Categories of Harm & Impact

### Financial Fraud

| Case | Amount | Method | Year | Source |
|------|--------|--------|------|--------|
| Arup Hong Kong | \$25.6M | Video conference deepfakes | 2024 | <R id="47f3128e5e7568af">CNN</R> |
| Hong Kong Company | \$35M | Voice cloning | 2020 | <R id="69a69fcb0c471689">Forbes</R> |
| WPP (Attempted) | Unknown | Multi-platform approach | 2024 | <R id="285ad0533234d867">BBC</R> |
| Elderly Crypto Scam | \$690K | <EntityLink id="E116">Elon Musk</EntityLink> impersonation | 2024 | <R id="aa0b2348f388763a">NBC</R> |

**Emerging Patterns**:
- Multi-platform attacks combining voice, video, and messaging
- Targeting of elderly populations with celebrity impersonations
- Corporate fraud using executive impersonation
- Real-time video call deception

### Non-consensual Intimate Imagery

<R id="76caf48d6525d816">Sensity AI research</R> found that 90-95% of deepfake videos online are non-consensual intimate imagery, primarily targeting women. This creates:

- Psychological trauma and reputational harm
- Economic impacts through career damage
- Chilling effects on public participation
- Disproportionate gender-based violence

### Political Manipulation & The Liar's Dividend

Beyond creating false content, deepfakes enable the "liar's dividend"—authentic evidence becomes deniable. Political examples include:

- <R id="e12caaa5097b4d9b">Politicians claiming real recordings are deepfakes</R>
- Pre-emptive deepfake denials before scandals break
- Erosion of shared epistemic foundations

This links to broader epistemic risks and <EntityLink id="E360">trust cascade</EntityLink> patterns.

## Detection & Countermeasures

### Detection Technology Performance

| Approach | Best Accuracy | Limitations | Status |
|----------|---------------|-------------|--------|
| Technical Detection | 65% (DFDC winner) | Defeated by adversarial training | Losing arms race |
| Platform Moderation | Variable | Scale challenges | Reactive only |
| Content Authentication | 99%+ (when used) | Adoption challenges | Promising |
| Human Detection | &lt;50% for quality fakes | Training helps marginally | Inadequate |
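
For intuition on the "Technical Detection" row, here is a minimal frame-level classifier of the general kind entered in DFDC, sketched with PyTorch and torchvision. The backbone choice, preprocessing, and scoring are illustrative assumptions, not the winning entry's pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative frame-level detector: a standard CNN backbone with a binary
# real/fake head. The head must be fine-tuned on labeled real/fake frames
# before the scores mean anything; DFDC pipelines added face cropping,
# temporal aggregation, and heavy augmentation on top of this.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # logit: P(frame is fake)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frame(path: str) -> float:
    """Return the model's probability that a single frame is synthetic."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    backbone.eval()
    with torch.no_grad():
        return torch.sigmoid(backbone(frame)).item()

# A video-level score is typically an average over sampled frames;
# adversarially trained generators push these scores toward 0.5 (chance).
```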

### Content Provenance Standards

**C2PA (Coalition for Content Provenance and Authenticity)**:
- Industry coalition including <R id="200ae71ab6f9f1c8">Adobe</R>, <R id="9af5caf9dd9dc4bd">Meta</R>, <R id="abb5ddea57c82ce1">Microsoft</R>, <R id="d0e196a0c25d35dd">Google</R>
- Cryptographically signs content at creation (see the sketch below)
- <R id="0faf31f9ad72da33">Content Credentials</R> implementation growing
- Challenge: requires universal adoption to be effective
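
The trust model behind this can be sketched in a few lines of Python with the `cryptography` library: hash the media bytes at capture, sign the hash plus provenance assertions with a device key, and verify both downstream. This is a simplified stand-in, not the actual C2PA manifest format (real Content Credentials use COSE signatures, JUMBF containers, and X.509 certificate chains).

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # in practice: provisioned into the camera

def sign_capture(media_bytes: bytes, creator: str) -> dict:
    """Produce a simplified provenance record for freshly captured media."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "assertions": ["captured_by_hardware", "no_edits"],
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_capture(media_bytes: bytes, record: dict) -> bool:
    """Any edit to the pixels or the manifest invalidates the signature."""
    if hashlib.sha256(media_bytes).hexdigest() != record["manifest"]["content_sha256"]:
        return False
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    try:
        device_key.public_key().verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor bytes..."
record = sign_capture(photo, creator="camera-serial-1234")
assert verify_capture(photo, record)
assert not verify_capture(photo + b"tampered", record)
```

Note what verification does and does not prove: a valid signature attests to origin, but unsigned content proves nothing either way, which is why the scheme only helps once adoption is near-universal.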

**Implementation Status**:
| Platform/Tool | C2PA Support | Deployment |
|---------------|--------------|------------|
| Adobe Creative Suite | Full | 2023+ |
| Meta Platforms | Partial | 2024 pilot |
| Google Platforms | Development | 2025 planned |
| Camera Manufacturers | Limited | Gradual rollout |

## Case Study Deep Dives

### Arup Hong Kong (\$25.6M, February 2024)

**Attack Vector**:
- Deepfaked video conference with CFO and multiple executives
- Used publicly available YouTube footage for training
- Real-time generation during Microsoft Teams call
- Social engineering to create urgency

**Detection Failure Points**:
- Multiple familiar faces reduced suspicion
- Corporate context normalized unusual requests
- No authentication protocols for high-value transfers
- Post-hoc verification came too late

**Implications**: Demonstrates the sophistication of coordinated deepfake attacks and the inadequacy of unaided human detection.

### WPP Defense Success (May 2024)

**Attack Elements**:
- Fake WhatsApp account impersonation
- Voice-cloned Microsoft Teams call
- Edited YouTube footage for visual reference
- Request for confidential client information

**Defense Success**:
- Employee training created suspicion
- Out-of-band verification attempted
- Unusual communication pattern recognized
- Escalation to security team

**Lessons**: Human awareness and verification protocols can defeat sophisticated attacks when properly implemented.
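
The WPP pattern generalizes into a simple control: never act on identity asserted in the inbound channel itself; confirm high-risk requests over a channel that was known before the request arrived. A minimal sketch of such a policy gate, with hypothetical stub helpers standing in for a real directory and callback process:

```python
from dataclasses import dataclass

HIGH_RISK_THRESHOLD_USD = 10_000

@dataclass
class Request:
    claimed_sender: str   # identity asserted in the call/message itself
    channel: str          # e.g. "teams-call", "whatsapp"
    amount_usd: float
    description: str

def lookup_registered_callback(person: str) -> str:
    """Directory contact on file BEFORE the request arrived (hypothetical stub)."""
    return {"cfo@example.com": "+1-555-0100"}.get(person, "")

def confirm_out_of_band(person: str, request: Request) -> bool:
    """Call back on the registered number and re-state the request (stub)."""
    number = lookup_registered_callback(person)
    if not number:
        return False  # no independent channel -> cannot verify -> deny
    print(f"Calling {number} to confirm: {request.description}")
    return input("Confirmed by callee? [y/N] ").strip().lower() == "y"

def authorize(request: Request) -> bool:
    # Key property: a deepfaked voice or face on the inbound channel never
    # satisfies verification, because verification happens on a channel the
    # attacker does not control.
    if request.amount_usd < HIGH_RISK_THRESHOLD_USD:
        return True
    return confirm_out_of_band(request.claimed_sender, request)
```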

## Current State & Future Trajectory

### Capability Development Timeline

| Milestone | Status | Timeline |
|-----------|--------|----------|
| Consumer-grade real-time deepfakes | Achieved | 2024 |
| Sub-second voice cloning | Achieved | 2023 |
| Perfect detection evasion | Near-achieved | 2025 |
| Live conversation deepfakes | Development | 2025-2026 |
| Full-body synthesis | Limited | 2026-2027 |

### Market & Economic Factors

- Deepfake generation tools increasingly commoditized
- Detection services lag behind generation capabilities
- <EntityLink id="E74">Content authentication</EntityLink> market emerging
- Insurance industry beginning to price deepfake fraud risk

### Regulatory Response

| Jurisdiction | Legislation | Focus | Status |
|-------------|-------------|-------|--------|
| United States | Multiple state laws | Non-consensual imagery | Enacted |
| European Union | AI Act provisions | Transparency requirements | 2025 implementation |
| United Kingdom | Online Safety Act | Platform liability | Phased rollout |
| China | Deepfake regulations | Content labeling | Enforced |

## Key Uncertainties & Debates

### Detection Arms Race

**Core Uncertainty**: Can detection technology ever reliably keep pace with generation advances?

**Arguments for Detection**:
- Fundamental mathematical signatures in AI-generated content
- Provenance systems can sidestep the arms race by authenticating content at creation
- Increasing computational resources for detection

**Arguments Against**:
- Adversarial training specifically defeats detectors
- Perfect generation may be mathematically achievable
- Economic incentives favor generation over detection

### Content Authentication Adoption

**Critical Questions**:
- Will C2PA achieve sufficient market penetration?
- Can authentication survive sophisticated circumvention attempts?
- How to handle legacy content without provenance?

**Adoption Challenges**:
| Factor | Challenge | Potential Solutions |
|--------|-----------|-------------------|
| User Experience | Complex workflows | Transparent integration |
| Privacy Concerns | Metadata tracking | Privacy-preserving proofs |
| Legacy Content | No retroactive protection | Gradual transition |
| Circumvention | Technical workarounds | Legal enforcement |

### Societal Impact Thresholds

**Key Questions**:
- At what point does evidence denial become socially catastrophic?
- How much fraud loss is economically sustainable?
- Can democratic discourse survive widespread authenticity doubt?

Research suggests <EntityLink id="E119">epistemic collapse</EntityLink> may occur when public confidence in authentic evidence drops below ~30%, though this threshold remains uncertain.

## Intervention Landscape

### Technical Solutions

| Approach | Effectiveness | Implementation | Cost |
|----------|---------------|----------------|------|
| Content Authentication | High (if adopted) | Medium complexity | Medium |
| Advanced Detection | Medium (arms race) | High complexity | High |
| Watermarking | Medium (circumventable) | Low complexity | Low |
| Blockchain Provenance | High (if universal) | High complexity | High |
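
The "circumventable" rating for watermarking reflects how fragile common embedding schemes are. A toy least-significant-bit watermark in NumPy demonstrates the failure mode; deployed systems use more robust spread-spectrum or learned watermarks, but face the same adversarial pressure.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy grayscale image

def embed(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    # Write the watermark into each pixel's least significant bit.
    return (img & 0xFE) | bits

def extract(img: np.ndarray) -> np.ndarray:
    return img & 1

watermark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
marked = embed(image, watermark)
assert np.array_equal(extract(marked), watermark)  # survives a lossless copy

# Trivial circumvention: re-encoding, resizing, or slight noise destroys
# the embedded bits without visibly changing the image.
noise = rng.integers(-2, 3, size=image.shape)
noisy = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
agreement = (extract(noisy) == watermark).mean()
print("bit agreement after noise:", agreement)  # well below 1.0 -- mark destroyed
```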

### Policy & Governance

**Regulatory Approaches**:
- Platform liability for deepfake content
- Mandatory content labeling requirements
- Criminal penalties for malicious creation/distribution
- Industry standards for authentication

**<EntityLink id="E171">International Coordination</EntityLink>**:
- Cross-border fraud prosecution challenges
- Conflicting privacy vs. transparency requirements
- Technology transfer restrictions

Links to broader governance approaches and <EntityLink id="E392">misuse risk</EntityLink> management.

## Sources & Resources

### Academic Research
| Source | Focus | Key Finding |
|--------|-------|-------------|
| <R id="0137bd3f0cb36015">DFDC Challenge Paper</R> | Detection benchmarks | Best accuracy: 65% |
| <R id="76caf48d6525d816">Sensity AI Reports</R> | Usage statistics | 90%+ non-consensual content |
| <R id="35e3244199e922ad">Reuters Institute Studies</R> | Political impact | Liar's dividend effects |

### Industry Resources
| Organization | Focus | Resource |
|--------------|-------|----------|
| <R id="ff89bed1f7960ab2">C2PA</R> | Content authentication | Technical standards |
| <R id="cfe1ffb8be363af2">Adobe Research</R> | Detection & provenance | Project Content Authenticity |
| <R id="058ff9d6c86939fd">Microsoft Research</R> | Voice synthesis | VALL-E publications |

### Policy & Legal
| Source | Jurisdiction | Focus |
|--------|-------------|--------|
| <R id="5a59669b48e227c8">FBI IC3 Reports</R> | United States | Fraud statistics |
| <R id="1ad6dc89cded8b0c"><EntityLink id="E127">EU AI Act</EntityLink></R> | European Union | Regulatory framework |
| <R id="fb9d27d075721c3b">UK Online Safety</R> | United Kingdom | Platform regulation |

### Detection Tools & Services
| Tool | Type | Capability |
|------|------|------------|
| <R id="d003c0f1cb55479e">Microsoft Video Authenticator</R> | Detection | Real-time analysis |
| <R id="0a901d7448c20a29">Sensity Detection Suite</R> | Commercial | Enterprise detection |
| <R id="6243ff974ba5cab3">Intel FakeCatcher</R> | Research | Blood flow analysis |

## Related Topics