AI Governance Library

Agentic AI - Threats and Mitigations

Agentic AI, powered by LLMs, brings new risks that outpace traditional app security models. This guide is a much-needed attempt to slow things down and make sense of what we’re dealing with.

What’s Covered?

This first release from the OWASP Agentic Security Initiative focuses on threats and mitigations unique to Agentic AI—systems where LLMs are embedded in workflows that can act autonomously or semi-autonomously. The guide expands on the OWASP Top 10 for LLMs and GenAI and aims to equip builders and defenders of such systems with a shared vocabulary and threat model framework.

The report opens with core definitions and a lightweight reference architecture for Agentic AI, breaking down typical components like planners, memory modules, and tool interfaces. It then presents a structured threat taxonomy split into five categories: Agent Design, Agent Memory, Planning & Autonomy, Tool Use, and Deployment & Operations. This taxonomy maps out how agentic systems might be manipulated—via prompt injections, faulty decision loops, or insecure tool access.

The central contribution is the Agentic Threat Taxonomy Navigator (Section 5), which lets readers track threats across system components and development phases. In addition, the document offers example threat models (Section 6) for common scenarios—like agents helping with coding, research, or multi-agent collaboration. These case studies are paired with playbooks and technical mitigation strategies including sandboxing, activity logging, and circuit breakers.
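The mitigations the report names can be made concrete with a small sketch. The class below is a hypothetical illustration of a circuit breaker combined with activity logging for agent tool calls; the names, thresholds, and structure are my own, not from the OWASP guide:

```python
# Illustrative circuit breaker for agent tool calls (a minimal sketch;
# class name, threshold, and log format are hypothetical, not from the report).

class ToolCircuitBreaker:
    """Blocks a tool after repeated failures, forcing human review."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures: dict[str, int] = {}
        self.log: list[str] = []  # activity log kept for later audit

    def call(self, tool_name: str, fn, *args, **kwargs):
        # Trip the breaker once a tool has failed too often.
        if self.failures.get(tool_name, 0) >= self.max_failures:
            self.log.append(f"BLOCKED {tool_name}")
            raise RuntimeError(f"{tool_name} tripped; human review required")
        try:
            result = fn(*args, **kwargs)
            self.log.append(f"OK {tool_name}")
            return result
        except Exception:
            self.failures[tool_name] = self.failures.get(tool_name, 0) + 1
            self.log.append(f"FAIL {tool_name}")
            raise
```

The same wrapper point is also where sandboxing would apply: the agent never invokes a tool directly, so every call can be filtered, limited, and recorded in one place.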

Each threat entry includes:

  • A description of the risk
  • Example attack patterns
  • System components affected
  • Potential safeguards
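The four-field structure above maps naturally onto a simple record. This sketch paraphrases the review's list into a dataclass; the field names are mine, not the report's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class ThreatEntry:
    """One entry in the taxonomy, mirroring the four fields listed above.

    Field names paraphrase the review's bullet list; the report itself
    may label these differently.
    """
    risk_description: str
    attack_patterns: list[str] = field(default_factory=list)
    affected_components: list[str] = field(default_factory=list)
    safeguards: list[str] = field(default_factory=list)
```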

💡 Why It Matters

Agentic AI blends LLMs with autonomy, which breaks many traditional app security assumptions. This guide gives teams building or defending these systems a starting point for structured threat modeling—something badly needed as risks shift from misuse to system-driven action. It doesn’t just extend OWASP Top 10—it reframes it for autonomy.

What’s Missing?

The taxonomy is strong, but the guide stops short of explaining how to prioritize threats or score them by severity and likelihood. There's also limited attention to non-technical controls, like incident response governance or human-in-the-loop mechanisms. The reference architecture feels too simplified for teams working on complex multi-agent environments. And while the document offers useful prompts and mitigation suggestions, it doesn't include real-world case studies with confirmed exploit data, which could ground the taxonomy in operational experience.

There's also no direct alignment with broader AI governance frameworks like the NIST AI RMF, ISO/IEC 42001, or EU AI Act requirements, meaning security teams in regulated environments will need to do that mapping work themselves. Lastly, the mitigation playbooks are fairly brief: more hints than implementation-ready recipes.

Best For:

AI security engineers, red teamers, and developers working with tool-using or autonomous LLM agents. Especially useful for teams designing open-ended assistants or integrating LLMs with APIs, search tools, or memory modules. Also valuable for orgs preparing for product-level threat modeling sessions.

Source Details:

Title: Agentic AI – Threats and Mitigations (v1.0, Feb 2025)

Author: OWASP Agentic Security Initiative

Publisher: Open Worldwide Application Security Project (OWASP)

License: Creative Commons CC BY-SA 4.0

The report was created by the OWASP Agentic Security Initiative, a community-led effort extending the OWASP Top 10 for LLMs and building new standards for agent-based systems. Contributors include seasoned AI red teamers, LLM security experts, and platform engineers. The OWASP foundation supports this work as part of its broader push to define risk categories for AI-native systems. While the document is introductory in tone, it reflects deep threat modeling experience across security and ML research.

About the author
Jakub Szarmach
