Code of Practice on marking and labelling of AI-generated content
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: European Union
An official EU policy initiative relevant to AI governance researchers tracking regulatory approaches to synthetic media transparency and AI disclosure requirements, complementing the EU AI Act's binding provisions.
Metadata
Summary
This European Commission initiative establishes a voluntary code of practice requiring platforms and AI providers to mark and label AI-generated content, including deepfakes and synthetic media. It aims to improve transparency and help users identify AI-generated text, images, audio, and video online. The code is part of the EU's broader digital strategy and supports compliance with the AI Act and Digital Services Act.
Key Points
- Establishes voluntary commitments for marking and labelling AI-generated content across text, images, audio, and video modalities.
- Supports EU regulatory frameworks including the AI Act, which mandates transparency for certain AI-generated content.
- Targets platforms, AI developers, and content distributors to adopt consistent disclosure and watermarking practices.
- Addresses disinformation risks by making synthetic content detectable and distinguishable from human-created content.
- Part of the EU's broader digital strategy to ensure trustworthy and accountable AI deployment in public-facing contexts.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Content Authentication | Approach | 58.0 |
Cached Content Preview
An official website of the European Union
# Code of Practice on marking and labelling of AI-generated content
This Code of Practice aims to support compliance with the AI Act transparency obligations related to marking and labelling of AI-generated content.
## Marking and labelling of AI-generated content
The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of generative AI systems) aim to ensure transparency of AI-generated or manipulated content, such as deep fakes. The article addresses risks of deception and manipulation, fostering the integrity of the information ecosystem. These transparency obligations will complement other rules like those for high-risk AI systems or [general-purpose AI models](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai).
To assist with compliance with these transparency obligations, the AI Office has kick-started the process of drawing up a code of practice on transparency of AI-generated content. The Code will be drafted by independent experts appointed by the AI Office in an inclusive process, and eligible stakeholders will be invited to contribute to the drafting. If approved by the Commission, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations under Article 50(2) and (4) of the AI Act. These obligations pertain to the marking and detection of AI-generated content and the labelling of deep fakes and certain AI-generated publications.
## Scope of the working groups
The drafting of the Code is organised around two working groups, following the structure of the transparency obligations for AI-generated content in Article 50.
### Working group 1: Providers
Focuses on the provider-side obligations, which require providers of generative AI systems to ensure that:
- Outputs of AI systems (audio, image, video, text) are marked in a machine-readable format and detectable as artificially generated or manipulated.
- The employed technical solutions are effective, interoperable, robust, and reliable as far as t
... (truncated, 9 KB total)
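As a rough illustration of what "marking in a machine-readable format" can mean in practice, the sketch below builds a small JSON provenance manifest that binds an "AI-generated" claim to a specific piece of content via a cryptographic hash. This is a hypothetical, simplified format invented for this example; it is not the C2PA standard, nor any format prescribed by the Code of Practice or the AI Act, and the generator name is a placeholder.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_ai_content_manifest(content: bytes, generator: str) -> str:
    """Attach a machine-readable 'AI-generated' label to a piece of content.

    Hypothetical, simplified manifest for illustration only -- NOT a
    standardised format. Real-world schemes (e.g. C2PA Content Credentials)
    additionally sign the manifest so the claim is tamper-evident.
    """
    manifest = {
        # Core transparency claim required by Article 50-style obligations.
        "ai_generated": True,
        # Name of the generative AI system that produced the content.
        "generator": generator,
        # Hash binds the label to this exact content: if the bytes change,
        # the manifest no longer matches and the label is detectably stale.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, sort_keys=True)


# Example: label the raw bytes of a synthetic image.
label = build_ai_content_manifest(b"<image bytes>", generator="example-model-v1")
print(json.loads(label)["ai_generated"])  # True
```

A verifier only needs the content bytes and the manifest: recompute the hash, compare it to `content_sha256`, and read the `ai_generated` flag; interoperability then reduces to agreeing on the manifest schema.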