GAO Report: Deepfakes - Detection and Authentication Technologies
government · gao.gov · gao.gov/products/gao-24-107292
Official U.S. GAO report providing a policy-oriented technical overview of deepfake countermeasures; useful for understanding the current state of media authentication and detection from a regulatory and government accountability perspective.
Metadata
Importance: 52/100 · organizational report · analysis
Summary
This U.S. Government Accountability Office report analyzes deepfake technology and the two primary countermeasure strategies: detection (using ML to identify inconsistencies) and authentication (digital watermarks and cryptographic metadata). The report highlights critical limitations of current detection methods and warns that even successful identification may not prevent disinformation spread as adversarial evasion techniques continue to advance.
Key Points
- Deepfakes pose significant harms in elections, disinformation campaigns, and non-consensual pornography, motivating urgent countermeasure development.
- Detection technologies use ML to spot generation artifacts and inconsistencies, but have limited real-world effectiveness against sophisticated evasion.
- Authentication approaches like digital watermarks and cryptographic metadata aim to verify media provenance rather than detect manipulation post hoc.
- An adversarial dynamic exists in which detection improvements are continually countered by more sophisticated generation and evasion techniques.
- Even accurate deepfake identification may be insufficient to halt disinformation spread, highlighting the need for broader societal and policy responses.
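The detection approach the report describes — training a model on known real and known fake media, then scoring new media for generation artifacts — can be illustrated with a deliberately tiny sketch. The "color-abnormality score" feature, the training data, and the midpoint-threshold rule below are all fabricated for illustration; real detectors learn features with deep networks rather than using a single hand-picked statistic.

```python
# Toy sketch of the GAO-described detection pipeline: learn a decision rule
# from labeled real/fake examples, then classify unseen media.
# All scores below are fabricated; the single-feature threshold stands in
# for a learned ML model.

def train_threshold(real_scores, fake_scores):
    """Pick a decision threshold midway between the two class means."""
    mean_real = sum(real_scores) / len(real_scores)
    mean_fake = sum(fake_scores) / len(fake_scores)
    return (mean_real + mean_fake) / 2.0

def classify(score, threshold):
    """Scores above the threshold are flagged as likely deepfakes."""
    return "fake" if score > threshold else "real"

# Hypothetical color-abnormality scores for labeled training media.
real_scores = [0.10, 0.15, 0.12, 0.08]
fake_scores = [0.70, 0.65, 0.80, 0.75]

threshold = train_threshold(real_scores, fake_scores)
print(classify(0.05, threshold))  # low abnormality  -> "real"
print(classify(0.90, threshold))  # high abnormality -> "fake"
```

The adversarial dynamic the report warns about shows up directly in this framing: as generators improve, the fake-score distribution drifts toward the real one, the classes overlap, and no threshold separates them cleanly — which is why detection alone has limited real-world effectiveness.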
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Content Authentication | Approach | 58.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 9 KB
**GAO-24-107292**
Published: Mar 11, 2024. Publicly Released: Mar 11, 2024.
## Fast Facts
Deepfakes are videos, audio, or images that seem real but have been manipulated with AI. They've been used to try to influence elections and to create non-consensual pornography.
Researchers are developing new ways to detect deepfakes, such as AI models that spot color abnormalities. And authentication methods, like digital watermarks, can help prove that a video or image is genuine or reveal that it has been altered.
But simply detecting deepfakes may not be enough to prevent harm. Disinformation can still spread even after deepfakes are identified. And, deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge.
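One way a digital watermark can reveal alteration is the fragile-watermark idea: embed a known bit pattern in the media at capture, and treat a damaged pattern as evidence of tampering. The least-significant-bit scheme, the 8-bit mark, and the pixel values below are a minimal illustrative sketch, not any scheme the report specifies; production watermarks are far more robust and perceptually invisible.

```python
# Toy fragile watermark: hide a known bit pattern in the least-significant
# bits (LSBs) of pixel values. Edits that change pixels disturb the pattern,
# so a failed extraction signals alteration. Scheme and values are
# illustrative only.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit mark

def embed(pixels, mark=WATERMARK):
    """Overwrite the LSB of the first len(mark) pixels with the mark bits."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def is_intact(pixels, mark=WATERMARK):
    """Extract the LSBs and check whether the embedded mark survived."""
    return [p & 1 for p in pixels[:len(mark)]] == mark

image = [200, 13, 57, 98, 120, 255, 0, 64, 33]  # fabricated pixel values
marked = embed(image)
print(is_intact(marked))                   # True: mark present
tampered = [p + 1 for p in marked]         # simulate an edit
print(is_intact(tampered))                 # False: mark destroyed
```

The trade-off the report's takeaways gesture at is visible here: a fragile mark proves alteration but breaks under benign processing (resizing, recompression), while robust marks survive processing but are harder to make tamper-evident.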

## Highlights
## Why This Matters
Malicious use of deepfakes could erode trust in elections, spread disinformation, undermine national security, and empower harassers.
## Key Takeaways
- Current deepfake detection technologies have limited effectiveness in real-world scenarios.
- Watermarking and other authentication technologies may slow the spread of disinformation but present challenges.
- Identifying a deepfake is not by itself sufficient to prevent abuse; disinformation can continue to spread even after the media is identified as fake.
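The cryptographic-metadata side of authentication can be sketched in a few lines: a capture device attaches a cryptographic tag over the media bytes, and any later edit invalidates the tag. The HMAC construction and the hard-coded device key below are simplifying assumptions for illustration; real provenance schemes (C2PA-style manifests, for instance) use public-key signatures and richer signed metadata.

```python
import hashlib
import hmac

# Toy sketch of cryptographic-metadata authentication: sign a hash of the
# media bytes at capture; verify before trusting the media later. The
# shared-key HMAC and hard-coded key are deliberate simplifications.

DEVICE_KEY = b"secret-key-held-by-the-capture-device"  # hypothetical

def sign_media(media_bytes: bytes) -> str:
    """Attach-at-capture: HMAC-SHA256 over the media content."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Verify-at-view: recompute the tag and compare in constant time."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw image bytes..."  # stand-in for real media
tag = sign_media(original)
print(verify_media(original, tag))             # True: provenance intact
print(verify_media(original + b"edit", tag))   # False: content was altered
```

Note the inversion relative to detection: authentication verifies what is genuine at creation time rather than hunting for fakery after the fact — but it only helps for media whose producers adopted the scheme, which is one of the adoption challenges the takeaways mention.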
## The Technology
**What is it?**
Deepfakes are videos, audio, or images that have been manipulated using artificial intelligence (AI), often to create, replace, or alter faces or synthesize speech. They can seem authentic to the human eye and ear. They have been maliciously used, for example, to try to influence elections and to create non-consensual pornography. To combat such abuses, technologies can be used to detect deepfakes or enable authentication of genuine media.
**Detection technologies** aim to identify fake media without needing to compare it to the original, unaltered media. These technologies typically use a form of AI known as machine learning. The models are trained on data from known real and fake media. Methods include looking for (1) facial or vocal inconsistenc
... (truncated, 9 KB total)
Resource ID: bf32ae99c8920f85 | Stable ID: YzI3ZTk2Ym