
arXiv - AI Security Threat Taxonomy

paper

Author

Hernan Huwyler

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Introduces a structured threat taxonomy for AI systems with 9 domains and 53 sub-threats, designed to facilitate communication between technical and compliance teams on AI risk assessment and business impact.

Paper Details

Citations
0
0 influential
Year
2025
Methodology
preprint (arXiv, not peer-reviewed)
Categories
AI-EDU Arxiv

Metadata

arXiv preprint · primary source

Summary

This paper introduces the AI System Threat Vector Taxonomy, a structured framework designed to bridge the communication gap between technical security teams and legal/compliance professionals in AI risk assessment. The taxonomy categorizes AI-specific risks into nine domains (Misuse, Poisoning, Privacy, Adversarial, Biases, Unreliable Outputs, Drift, Supply Chain, and IP Threat) with 53 operationally defined sub-threats, mapping each directly to business loss categories (Confidentiality, Integrity, Availability, Legal, Reputation). The framework enables quantitative risk assessment and financial impact modeling while maintaining alignment with ISO/IEC 42001 controls and NIST AI RMF, validated against 133 documented AI incidents from 2025.
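The taxonomy's core idea of mapping nine threat domains onto five business loss categories lends itself to a simple lookup structure. The sketch below is a hypothetical Python encoding: the nine domain names and five loss categories come from the paper, but the per-domain loss assignments shown here are illustrative placeholders, not the paper's actual mapping.

```python
# Five business loss categories (from the paper's framework).
LOSS_CATEGORIES = {"C": "Confidentiality", "I": "Integrity", "A": "Availability",
                   "L": "Legal", "R": "Reputation"}

# Nine threat domains (from the paper); the loss-category sets attached to
# each domain below are illustrative guesses, not the paper's mapping.
THREAT_DOMAINS = {
    "Misuse":             {"L", "R"},
    "Poisoning":          {"I", "A"},
    "Privacy":            {"C", "L", "R"},
    "Adversarial":        {"I", "A"},
    "Biases":             {"L", "R"},
    "Unreliable Outputs": {"I", "R"},
    "Drift":              {"I", "A"},
    "Supply Chain":       {"C", "I", "A"},
    "IP Threat":          {"C", "L"},
}

def loss_exposure(domain: str) -> list[str]:
    """Return the business loss categories impacted by a threat domain."""
    return sorted(LOSS_CATEGORIES[c] for c in THREAT_DOMAINS[domain])

print(loss_exposure("Privacy"))
```

A structure like this is what makes the taxonomy machine-usable: a compliance tool can walk from a classified incident to the loss categories it threatens without a human translating between the two vocabularies.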

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 30 KB
Standardized Threat Taxonomy for AI Security, Governance, and Regulatory Compliance A Unified Taxonomy of Threat Vectors in Generative and Agentic AI and Machine Learning Systems 

 
 
 Prof. Hernan Huwyler, MBA CPA
 Correspondence: hhuwyler@faculty.ie.edu; Tel: +34 915 68 96 00.
 
 
 
 
 Abstract

 The accelerating deployment of artificial intelligence systems across regulated sectors has exposed critical fragmentation in risk assessment methodologies. A significant "language barrier" currently separates technical security teams, who focus on algorithmic vulnerabilities (e.g., MITRE ATLAS), from legal and compliance professionals, who address regulatory mandates (e.g., EU AI Act, NIST AI RMF). This disciplinary disconnect prevents the accurate translation of technical vulnerabilities into financial liability, leaving practitioners unable to answer fundamental economic questions regarding contingency reserves, control return-on-investment, and insurance exposure.

 To bridge this gap, this research presents the AI System Threat Vector Taxonomy, a structured ontology designed explicitly for Quantitative Risk Assessment (QRA). The framework categorizes AI-specific risks into nine critical domains: Misuse, Poisoning, Privacy, Adversarial, Biases, Unreliable Outputs, Drift, Supply Chain, and IP Threat, integrating 53 operationally defined sub-threats. Uniquely, each domain maps technical vectors directly to business loss categories (Confidentiality, Integrity, Availability, Legal, Reputation), enabling the translation of abstract threats into measurable financial impact.

 The taxonomy is empirically validated through an analysis of 133 documented AI incidents from 2025 (achieving 100% classification coverage) and reconciled against the main AI risk frameworks. Furthermore, it is explicitly aligned with ISO/IEC 42001 controls and NIST AI RMF functions to facilitate auditability. By providing the standardized inputs necessary for probabilistic modeling, specifically convolved Monte Carlo simulations, this framework allows organizations to transition from subjective, qualitative "heat maps" to rigorous financial exposure analysis. This establishes a unified language for AI risk communication, enabling evidence-based governance that satisfies both regulatory mandates and operational reality.
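The convolved Monte Carlo modeling the abstract refers to can be illustrated with a standard frequency/severity simulation: draw an annual event count per trial, draw a loss amount per event, and sum. The sketch below is a minimal illustration; the Poisson/lognormal choices and all parameter values are hypothetical placeholders, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # simulated years

# Hypothetical inputs for one threat vector (e.g. "Poisoning"):
# events per year ~ Poisson(2), loss per event ~ lognormal.
freq = rng.poisson(lam=2.0, size=N)
annual_loss = np.array([
    rng.lognormal(mean=11.0, sigma=1.2, size=k).sum()  # convolve severities
    for k in freq
])

# Summary statistics an insurer or CFO could act on.
print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
print(f"95th-percentile annual loss: ${np.percentile(annual_loss, 95):,.0f}")
```

The output of such a simulation (an annual loss distribution per threat vector, summable across the taxonomy's domains) is what replaces a qualitative heat-map cell with a contingency-reserve or insurance-limit figure.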

 
 
 Keywords: AI Risk Management; Quantitative Risk Assessment (QRA); AI threat taxonomy; AI governance; NIST AI RMF; ISO/IEC 42001; EU AI Act; Algorithmic accountability; Financial risk modeling.

 
 
 JEL Codes: G32; C63; M48; D81; G22; M15; K24; O33

 
 
 
 1 Introduction

 
 Organizations deploying AI systems have responded to growing complexity by creating new governance roles: Chief AI Officers (CAIOs), AI risk managers, model auditors, and spe

... (truncated, 30 KB total)
Resource ID: b800723d60cc4559 | Stable ID: ZGQyZjcwYm