Longterm Wiki

A Closer Look at the Existing Risks of Generative AI: Mapping the Who, What, and How of Real-World Incidents

paper

Authors

Megan Li·Wendy Bickersteth·Ningjing Tang·Lorrie Cranor·Jason Hong·Hong Shen·Hoda Heidari

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Useful empirical reference for AI safety researchers and policymakers seeking evidence-based understanding of how generative AI harms manifest in practice, complementing more theoretical risk frameworks.

Paper Details

Citations
3
0 influential
Year
2025

Metadata

Importance: 62/100 · arXiv preprint · analysis

Summary

This paper systematically analyzes real-world incidents involving generative AI to map the actors, harm types, and mechanisms of harm across documented cases. It provides an empirical taxonomy of how generative AI risks manifest in practice, grounding safety concerns in observed incidents rather than theoretical scenarios. The work aims to inform risk assessment frameworks and governance approaches by cataloging actual harms.

Key Points

  • Develops a structured taxonomy mapping 'who' causes harm, 'what' harms occur, and 'how' generative AI enables those harms in real-world incidents
  • Analyzes documented real-world incidents rather than hypothetical scenarios, providing empirical grounding for AI risk discussions
  • Identifies patterns across incident types including misinformation, fraud, harassment, and automated abuse facilitated by generative AI
  • Findings can inform evidence-based AI governance, regulation, and safety evaluation frameworks
  • Bridges the gap between theoretical AI risk taxonomies and observed deployment harms
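The paper's coding scheme pairs each incident with a failure mode ("how"), a harm type ("what"), and an affected stakeholder ("who"), then reports prevalences and co-occurrences across the 499 incidents. A minimal sketch of that kind of tally, assuming hypothetical field names and category labels (these are illustrative, not the authors' actual codebook):

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical encoding of one coded incident; labels are assumptions,
# not the paper's exact taxonomy categories.
@dataclass(frozen=True)
class Incident:
    failure_mode: str   # "how":  e.g. "use-related", "design/development"
    harm_type: str      # "what": e.g. "misinformation", "fraud", "harassment"
    stakeholder: str    # "who":  e.g. "end user", "third party", "society"

def prevalence_and_cooccurrence(incidents):
    """Count each harm type, and each (failure mode, harm type) pair,
    mirroring the paper's prevalence and co-occurrence reporting."""
    harms = Counter(i.harm_type for i in incidents)
    pairs = Counter((i.failure_mode, i.harm_type) for i in incidents)
    return harms, pairs

# Toy sample, not real incident data.
sample = [
    Incident("use-related", "misinformation", "society"),
    Incident("use-related", "fraud", "third party"),
    Incident("design/development", "misinformation", "society"),
]
harms, pairs = prevalence_and_cooccurrence(sample)
print(harms["misinformation"])                            # 2
print(pairs[("use-related", "misinformation")])           # 1
```

The pair counts make the paper's headline pattern expressible directly: incidents whose failure mode is use-related but whose harmed stakeholder is not the end user.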

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 80 KB
 A Closer Look at the Existing Risks of Generative AI: 
 Mapping the Who, What, and How of Real-World Incidents

 
 
 Megan Li (Carnegie Mellon University, Pittsburgh, USA) <meganli@andrew.cmu.edu>
 Wendy Bickersteth (Carnegie Mellon University, Pittsburgh, USA) <wbickers@andrew.cmu.edu>
 Ningjing Tang (Carnegie Mellon University, Pittsburgh, USA) <ningjingt@andrew.cmu.edu>
 Jason Hong (Carnegie Mellon University, Pittsburgh, USA) <jasonh@cs.cmu.edu>
 Lorrie Cranor (Carnegie Mellon University, Pittsburgh, USA) <lorrie@cmu.edu>
 Hong Shen (Carnegie Mellon University, Pittsburgh, USA) <hongs@andrew.cmu.edu>
 Hoda Heidari (Carnegie Mellon University, Pittsburgh, USA) <hheidari@cmu.edu>

 (27 May 2025)
 
 Abstract.

 Due to its general-purpose nature, Generative AI is applied in an ever-growing set of domains and tasks, leading to an expanding set of risks of harm impacting people, communities, society, and the environment. These risks may arise from failures during the design and development of the technology, as well as during its release, deployment, or downstream usage and appropriation of its outputs. In this paper, building on prior taxonomies of AI risks, harms, and failures, we construct a taxonomy specifically for Generative AI failures and map them to the harms they precipitate. Through a systematic analysis of 499 publicly reported incidents, we describe what harms are reported, how they arose, and who they impact. We report the prevalence of each type of harm, underlying failure mode, and harmed stakeholder, as well as their common co-occurrences. We find that most reported incidents are caused by use-related issues but bring harm to parties beyond the end user(s) of the Generative AI system at fault, and that the landscape of Generative AI harms is distinct from that of traditional AI. Our work offers actionable insights to policymakers, developers, and Generative AI users. In particular, we call for the prioritization of non-technical risk and harm mitigation strategies, including public disclosures, education, and careful regulatory stances.

 
 Generative AI, Risks and Harms, Socio-technical Failures, AI Incidents
 
 CCS Concepts: General and reference → Evaluation; Computing methodologies → Artificial intelligence; Social and professional topics → Computing / technology policy

 ∗ Co-last authors contributed equally to this work.
 
 
 1. Introduction

 
 Generative AI – defined as AI that produces novel output in the form of text, images, audio, or video (Weidinger e

... (truncated, 80 KB total)