The Partnership on AI's synthesis report
Credibility Rating
3/5
Good (3). Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Partnership on AI
A collaborative industry framework from Partnership on AI addressing synthetic media governance; relevant to AI deployment norms, content authenticity, and multi-stakeholder approaches to mitigating AI-enabled disinformation.
Metadata
Importance: 55/100 · organizational report · reference
Summary
The Partnership on AI's Synthetic Media Framework provides guidelines and best practices for responsible creation, distribution, and governance of AI-generated synthetic media. It aims to address risks of disinformation and manipulation while balancing legitimate creative and informational uses of synthetic content.
Key Points
- Establishes a multi-stakeholder framework for responsible synthetic media practices across creators, platforms, and distributors
- Addresses risks of deepfakes and AI-generated content being used for disinformation, fraud, and influence operations
- Proposes disclosure and provenance standards to help audiences identify synthetic or AI-manipulated media
- Balances harm mitigation with preserving legitimate creative, journalistic, and educational uses of synthetic media
- Reflects cross-industry coordination among tech companies, civil society, and researchers on AI governance
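The disclosure-and-provenance idea above can be made concrete with a minimal sketch. This is a hypothetical record format for illustration only, not the framework's actual specification: it binds a content hash to a disclosure label so a downstream platform can check that an asset has not changed since it was labeled as synthetic.

```python
import hashlib
import json

def make_disclosure_record(media_bytes: bytes, generator: str, is_synthetic: bool) -> dict:
    """Build a minimal provenance record for a media asset.

    Binds a SHA-256 content hash to a disclosure label so downstream
    consumers can verify the asset matches its stated provenance.
    (Illustrative format; field names are assumptions, not a standard.)
    """
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "synthetic": is_synthetic,
    }

def verify_disclosure_record(media_bytes: bytes, record: dict) -> bool:
    """Check that the media bytes still match the hash in the record."""
    return hashlib.sha256(media_bytes).hexdigest() == record["content_sha256"]

media = b"example synthetic image bytes"
record = make_disclosure_record(media, generator="image-model-x", is_synthetic=True)
print(json.dumps(record, indent=2))
print(verify_disclosure_record(media, record))          # unmodified: True
print(verify_disclosure_record(media + b"!", record))   # tampered: False
```

Real provenance standards (e.g., C2PA content credentials) add cryptographic signatures and edit histories on top of this basic hash-binding pattern.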
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Disinformation | Risk | 54.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 3 KB
# Page not Found
Try one of the following links to get out of the rabbit hole.
## Latest Updates
- Recommendations: [**Demand and Incentives for External AI Assurance**](https://partnershiponai.org/resource/demand-and-incentives-for-external-ai-assurance/) (John Howell, Mar 17, 2026)
- Recommendations: [**Building Justified Trust in AI Assurers**](https://partnershiponai.org/resource/building-justified-trust-in-ai-assurers/) (John Howell, Mar 17, 2026)
- [**Shaping AI Transparency Processes with NIST**](https://partnershiponai.org/shaping-ai-transparency-processes-with-nist/) (Albert Tanjaya, Thalia Khan, Mar 17, 2026)
- [**When A Chatbot Becomes Your Therapist**](https://partnershiponai.org/when-a-chatbot-becomes-your-therapist/) (Claire Leibowicz, Emily Saltz, Mar 12, 2026)
- [**5 Questions with EqualAI’s President & CEO Miriam Vogel**](https://partnershiponai.org/5-questions-with-equalais-president-ceo-miriam-vogel/) (Penelope Sosa, Mar 11, 2026)
- [**Building Momentum: From Delhi to Geneva**](https://partnershiponai.org/building-momentum-from-delhi-to-geneva/) (Rebecca Finlay, Marjorie Buchser, Stephanie Ifayemi, Feb 27, 2026)
- [**Prioritizing AI Assurance and Civil Society Engagement Following India’s AI Impact Summit**](https://partnershiponai.org/prioritizing-ai-assurance-and-civil-society-engagement-following-indias-ai-impact-summit/) (Talita Dias, Feb 19, 2026)
- Analysis: **Closing the AI Assurance Divide: Policy Strategies for Developing Economies**
Resource ID:
b7fa870a08c3d1ad | Stable ID: YmIwMzAzMz