Partnership on AI: Synthetic Media
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Partnership on AI
Published by the Partnership on AI, a multi-stakeholder nonprofit, this framework is relevant to AI safety governance discussions around synthetic media misuse, deception, and the societal risks of generative AI at scale.
Metadata
Summary
This Partnership on AI paper establishes responsible practices and guidelines for the creation, distribution, and disclosure of synthetic media (including deepfakes and AI-generated content). It aims to reduce harms from manipulated or fabricated media by providing a framework for industry stakeholders, covering transparency, provenance, and accountability.
Key Points
- Calls for clear disclosure and labeling when synthetic media is created or distributed, to prevent deception and manipulation.
- Provides guidance for multiple stakeholders, including media creators, platforms, and technology developers, on their respective responsibilities.
- Addresses harms including non-consensual intimate imagery, political disinformation, and reputational damage from AI-generated content.
- Emphasizes provenance and content authentication as technical mechanisms to help audiences identify synthetic media.
- Promotes multi-stakeholder collaboration across industry, civil society, and policymakers to develop norms and standards.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Induced Cyber Psychosis | Risk | 37.0 |
Cached Content Preview
# Page not Found