Longterm Wiki

Deloitte's 2024 analysis

web

Industry analyst report from Deloitte relevant to AI governance and deployment policy discussions; useful for understanding economic and societal implications of generative AI misuse, particularly around synthetic media and content authentication standards.

Metadata

Importance: 42/100 · organizational report · analysis

Summary

Deloitte's 2024 analysis frames deepfakes as a cybersecurity-scale threat to online trust, projecting the deepfake detection market will grow 42% annually from $5.5B in 2023 to $15.7B by 2026. The report draws parallels to cybersecurity spending trajectories and highlights that costs of maintaining content authenticity will likely be distributed across consumers, creators, and advertisers. Consumer surveys reveal widespread skepticism and demand for standardized AI content labeling.

Key Points

  • Deepfake detection market projected to grow 42% annually, reaching $15.7B by 2026, mirroring historical cybersecurity spending growth patterns.
  • 68% of gen-AI-aware consumers are concerned synthetic content could be used to deceive or scam them; 59% struggle to distinguish AI from human media.
  • 84% of respondents familiar with generative AI support mandatory clear labeling of AI-generated content.
  • Costs of combating deepfakes expected to be passed to consumers, advertisers, and creators, creating economic incentives similar to cybersecurity markets.
  • Industry response likely to include content authentication standards, consortium efforts, and increasing investment in detection technologies.
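
The headline projection is simple compound growth: $5.5B in 2023 growing 42% annually for three years. A quick sanity check (not from the report; the variable names are illustrative) confirms the figures are consistent:

```python
# Sanity-check the report's projection: $5.5B (2023) at 42% CAGR through 2026.
base_2023 = 5.5        # market size in USD billions, per the report
cagr = 0.42            # reported annual growth rate
years = 3              # 2023 -> 2026

projected_2026 = base_2023 * (1 + cagr) ** years
print(f"${projected_2026:.1f}B")  # -> $15.7B, matching the reported figure
```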

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| AI Capability Threshold Model | Analysis | 72.0 |
| Trust Erosion Dynamics Model | Analysis | 59.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 32 KB
# Deepfake disruption: A cybersecurity-scale challenge and its far-reaching consequences

## As the effort to detect and combat fake content escalates, the costs of maintaining a credible internet may fall on consumers, creators, and advertisers alike

Article · 9-min read · 19 November 2024 · Deloitte Center for Technology Media & Telecommunications

Authors: Michael Steinhart (United States), Bree Matheson (United States), Ankit Dhameja (India), Gillian Crossan (United States), Ariane Bucaille (France)

Deepfakes—photos, videos, and audio clips that seem real but are generated by artificial intelligence tools—are making it harder for audiences to trust content that they see online. As AI-generated content grows in volume and sophistication, online images, videos, and audio can be used by bad actors to spread disinformation and perpetrate fraud. Social media networks have been flooded with such content, leading to widespread skepticism and concern.[1](https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/gen-ai-trust-standards.html#endnote-1)

In Deloitte’s 2024 Connected Consumer Study, half of respondents said they’re more skeptical of the accuracy and reliability of online information than they were a year ago. Among respondents familiar with or using generative AI, 68% reported concern that synthetic content could be used to deceive or scam them, and 59% reported they have a hard time telling the difference between media created by humans and generated by AI. Eighty-four percent of respondents familiar with gen AI agreed that content developed with gen AI should always be clearly labeled.[2](https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/gen-ai-trust-standards.html#endnote-2)

Labeling is one of the ways through which media outlets and social media platforms can flag synthetic content for users, but as deepfake technologies incorporate more advanced models that can generate synthetic content and manipulate existing media, more com

... (truncated, 32 KB total)
Resource ID: 270a29b59196c942 | Stable ID: NDVjYjFjOT