NewsGuard's December 2024 AI Misinformation Monitor
newsguardtech.com/ai-monitor/december-2024-ai-misinformat...
This monthly monitor from NewsGuard is relevant to AI safety discussions around deployment risks and misinformation, offering real-world documentation of harms from AI systems that complements more theoretical alignment and governance work.
Metadata
Importance: 38/100 · organizational report · news
Summary
NewsGuard's monthly AI Misinformation Monitor tracks instances where AI chatbots and tools spread false or misleading information, documenting specific cases from December 2024. The report serves as an ongoing audit of AI systems' reliability and their propensity to generate or amplify misinformation at scale.
Key Points
- Documents specific cases where AI chatbots generated or propagated misinformation during December 2024
- Serves as a recurring accountability mechanism for tracking AI misinformation trends over time
- Highlights risks of AI systems being used as vectors for spreading false narratives to the public
- Provides empirical evidence relevant to AI deployment safety and content reliability debates
- Underscores the need for robust evaluation and red-teaming of AI outputs before public deployment
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI-Era Epistemic Infrastructure | Approach | 59.0 |
| AI Risk Public Education | Approach | 51.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 2 KB
December 2024 — AI Misinformation Monitor of Leading AI Chatbots - NewsGuard

An audit of the 10 leading generative AI tools and their propensity to repeat false narratives on topics in the news. Published Jan. 13, 2025.

The December 2024 edition of the monthly report found that the 10 leading chatbots collectively repeated false claims 40.33 percent of the time, offered a non-response 21.67 percent of the time, and a debunk 38 percent of the time. The 62 percent "fail" rate (the percentage of responses containing false claims or offering a non-response) is a sharp decline in performance from NewsGuard's previous audit, which recorded a fail rate of 44.33 percent.

NewsGuard launched the monthly AI News Misinformation Monitor in July 2024, setting a new standard for measuring the accuracy and trustworthiness of the AI industry by tracking how each leading generative AI model responds to prompts related to significant falsehoods in the news. The monitor focuses on the 10 leading large-language-model chatbots: OpenAI's ChatGPT-4, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google's Gemini, and Perplexity's answer engine. It will expand as needed as other generative AI tools are launched.

Researchers, platforms, advertisers, government agencies, and other institutions interested in accessing the detailed individual monthly reports, or who want details about NewsGuard's services for generative AI companies, can contact NewsGuard. Information on NewsGuard's transparently sourced datasets for AI platforms is also available on its site.
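The aggregate figures quoted in the preview can be sanity-checked with a few lines of arithmetic: NewsGuard's "fail" rate is defined as the share of responses that either repeated a false claim or gave a non-response. A minimal sketch, using only the percentages stated in the report:

```python
# Percentages quoted in NewsGuard's December 2024 monitor.
false_claims = 40.33   # responses repeating a false claim (%)
non_response = 21.67   # non-responses (%)
debunk = 38.00         # responses debunking the false claim (%)

# "Fail" = false claim OR non-response, per the report's definition.
fail_rate = false_claims + non_response
print(f"fail rate: {fail_rate:.2f}%")  # prints "fail rate: 62.00%"

# The three categories should account for all responses.
assert abs(false_claims + non_response + debunk - 100.0) < 0.01
```

This confirms the 62 percent fail rate follows directly from the two component percentages, and that the three categories partition the response set.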
Resource ID: 0a62bd00fc79c681 | Stable ID: ZjRlMTIwMz