Longterm Wiki

Cooperative AI research

paper

Authors

Sunil Arora·John Hastings

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Research paper addressing security challenges in agentic AI systems, focusing on cyber risks from autonomous decision-making across critical sectors like healthcare and finance, contributing to understanding of AI safety and security frameworks.

Paper Details

Citations
1
0 influential
Year
2025
Methodology
preprint (not peer-reviewed)
Categories
AI & Society

Metadata

arXiv preprint · primary source

Abstract

Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by their autonomous decision-making and adaptive behavior. Agentic AI systems are increasingly deployed across industries, organizations, and critical sectors such as cybersecurity, finance, and healthcare. However, their autonomy introduces unique security challenges, including unauthorized actions, adversarial manipulation, and dynamic environmental interactions. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems using the Design Science Research (DSR) methodology. The paper introduces MAAIS, an agentic security framework, and the agentic AI CIAA (Confidentiality, Integrity, Availability, and Accountability) concept. MAAIS integrates multiple defense layers to maintain CIAA across the AI lifecycle. The framework is validated by mapping it to the established MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) tactics. The study contributes a structured, standardized, framework-based approach to the secure deployment and governance of agentic AI in enterprise environments. The framework is intended for enterprise CISOs and for security, AI platform, and engineering teams, and it offers a detailed step-by-step approach to securing agentic AI workloads.

Summary

This research addresses security challenges specific to autonomous agentic AI systems by developing MAAIS, a lifecycle-aware security framework designed using Design Science Research methodology. The framework introduces the agentic AI CIAA concept (Confidentiality, Integrity, Availability, and Accountability) and integrates multiple defense layers to protect AI systems across their entire lifecycle. The approach is validated against MITRE ATLAS threat tactics and provides enterprise organizations with structured guidance for securing agentic AI deployments in critical sectors like cybersecurity, finance, and healthcare.
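The validation step described above (checking each defense layer against MITRE ATLAS threat tactics) can be pictured as a simple coverage check over the framework's layers. The sketch below is illustrative only: the paper publishes no code, and the layer names, lifecycle stages, and tactic lists here are hypothetical placeholders, not the actual MAAIS taxonomy.

```python
from dataclasses import dataclass

# Illustrative sketch: the layer names, stages, and tactic lists below
# are hypothetical placeholders, not the paper's MAAIS taxonomy.

@dataclass
class DefenseLayer:
    name: str
    lifecycle_stage: str        # e.g. "design", "deployment", "operation"
    ciaa_properties: tuple      # CIAA properties the layer upholds
    atlas_tactics: tuple        # MITRE ATLAS tactics the layer counters

def coverage_gaps(layers, required_tactics):
    """Return the ATLAS tactics not countered by any defense layer."""
    covered = {t for layer in layers for t in layer.atlas_tactics}
    return sorted(set(required_tactics) - covered)

layers = [
    DefenseLayer("input sanitization", "operation",
                 ("integrity",), ("Initial Access", "Execution")),
    DefenseLayer("audit logging", "operation",
                 ("accountability",), ("Exfiltration",)),
]

print(coverage_gaps(layers, ["Initial Access", "Execution",
                             "Exfiltration", "Impact"]))  # ['Impact']
```

A mapping-based validation of this shape surfaces tactics with no countering layer, which is the kind of gap analysis the paper's ATLAS mapping is meant to support.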

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Agentic AI | Capability | 68.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 33 KB

arXiv:2512.18043v1 [cs.CR] 19 Dec 2025

# Securing Agentic AI Systems - A Multilayer Security Framework


[Sunil Arora](https://orcid.org/0009-0007-3066-3461 "ORCID 0009-0007-3066-3461"), [John Hastings](https://orcid.org/0000-0003-0871-3622 "ORCID 0000-0003-0871-3622")



## I Introduction


Artificial intelligence (AI) enables machines to perceive, reason, learn, and decide [russell2020aima]. Agentic AI is the latest development in the evolution of intelligent systems. Agentic AI systems can make decisions, plan actions, select tools to achieve an outcome, and adjust to changing

... (truncated, 33 KB total)