Longterm Wiki

YouTube's approach to synthetic media


This is an official YouTube/Google platform policy announcement relevant to AI governance practitioners studying how major content platforms operationalize synthetic media disclosure and deepfake mitigation at scale.

Metadata

Importance: 42/100 · blog post · primary source

Summary

YouTube outlines its policies and tools for managing AI-generated synthetic media on the platform, including disclosure requirements for realistic AI content, content labeling systems, and privacy protections for individuals whose likeness may be replicated without consent. The post addresses how YouTube balances enabling creative AI use cases with mitigating harms from deepfakes and AI-generated disinformation.

Key Points

  • YouTube requires creators to disclose when content uses realistic AI-generated or altered footage, especially for sensitive topics like elections or public health.
  • A content label will be displayed on videos containing synthetic media that could be mistaken for real people or events.
  • YouTube is developing tools allowing individuals to request removal of AI-generated content that simulates their face or voice without consent.
  • The policy attempts to balance creative use of generative AI with protections against harmful synthetic media and influence operations.
  • Enforcement builds on existing misinformation and harassment policies, extended to cover AI-generated content scenarios.

Cited by 1 page

Page | Type | Quality
--- | --- | ---
AI Disinformation | Risk | 54.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 0 KB
**404.** That’s an error.

The requested URL `/news-and-events/our-approach-to-responsible-ai-innovation/` was not found on this server.
That’s all we know.
Resource ID: bbc766b504d2634c | Stable ID: MjU4YzEwMz