
NIST: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Adversarial ML is no longer just a research topic—it’s a real-world problem. This NIST report gives public and private sector orgs a shared vocabulary to talk about attacks and defenses. Perfect for aligning security, AI, and policy teams around common ground.

What’s Covered?

This 2023 NIST publication maps out how to describe, understand, and classify adversarial machine learning (AML). It builds a shared language around types of threats, affected ML components, attacker goals, capabilities, and mitigation strategies. The core of the report is a structured taxonomy that breaks down AML risks and countermeasures across three attack phases: input, training, and output.
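
To make those dimensions concrete, here is a minimal sketch of how the taxonomy's main axes (attack phase, attacker goal, attacker knowledge) could be encoded as a threat-modeling record. This is an illustration, not a structure from the report; all class names, values, and the example entry are made up:

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    TRAINING = "training"   # e.g., data or model poisoning
    INPUT = "input"         # e.g., evasion at inference time
    OUTPUT = "output"       # e.g., model extraction, membership inference

class Goal(Enum):
    TARGETED = "targeted"       # force one specific wrong prediction
    UNTARGETED = "untargeted"   # force any wrong prediction

class Knowledge(Enum):
    FULL = "white-box"      # parameters and gradients available
    PARTIAL = "gray-box"    # architecture or training data known
    ZERO = "black-box"      # query access only

@dataclass
class AMLThreat:
    name: str
    phase: Phase
    goal: Goal
    knowledge: Knowledge
    mitigations: list[str] = field(default_factory=list)

# One record in the shared vocabulary: a classic evasion attack.
evasion = AMLThreat(
    name="adversarial example (FGSM)",
    phase=Phase.INPUT,
    goal=Goal.UNTARGETED,
    knowledge=Knowledge.FULL,
    mitigations=["adversarial training", "input preprocessing"],
)
print(evasion.phase.value, evasion.knowledge.value)
```

Encoding the shared vocabulary this way is one option for letting security, AI, and compliance teams tag findings consistently across audits.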

📄 Abstract Summary:

The document presents a standardized vocabulary and structure to describe AML threats and defenses, helping developers, testers, regulators, and policymakers consistently identify risks and potential mitigations. It doesn’t offer policy recommendations but instead acts as a foundational tool for discussion and strategy across roles.

📘 Content Overview:

Attack surface of ML systems: Identifies entry points for adversaries—such as data pipelines, model APIs, and system-level integrations.

Taxonomy of attack types: Including evasion (adversarial inputs crafted at inference time; a toy example appears just after this list), poisoning (data or model manipulation during training), and inference attacks (e.g., model extraction or membership inference).

Attacker goals and knowledge: Discusses targeted vs. untargeted attacks, and how full, partial, or zero knowledge of a model impacts threat modeling.

Mitigation strategies: Categorized as proactive (hardening before deployment, such as adversarial training; see the sketch after this list) or reactive (monitoring, retraining, or detection during deployment).

Examples and mapping: Real-world analogies and references to academic studies that illustrate how these attack types play out in practice.
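
As promised above, here is a toy white-box evasion example: the Fast Gradient Sign Method (FGSM) against a hand-rolled logistic-regression "victim". This is a minimal illustration of the attack class the report describes, not code from the report; the model, weights, and epsilon budget are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hand-rolled logistic-regression "victim": p(y=1|x) = sigmoid(w.x + b).
# Weights are random stand-ins; a real attack would target a trained model.
w = rng.normal(size=4)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps=0.25):
    """Fast Gradient Sign Method: nudge x by eps in the direction that
    increases the loss. Assumes full (white-box) gradient access."""
    p = predict(x)
    grad_x = (p - y) * w   # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = 1.0 if predict(x) >= 0.5 else 0.0   # take the model's own label as "truth"
x_adv = fgsm(x, y)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward the other class
```

Even this four-parameter model shows the core mechanic: with gradient access, a small, structured perturbation moves the score across the decision boundary.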
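
On the mitigation side, here is a minimal sketch of the proactive category: adversarial training, where each gradient step is taken on FGSM-perturbed inputs rather than clean ones. Again, this is an illustrative toy (synthetic data, logistic regression, made-up hyperparameters), not a recipe from the report:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny synthetic binary task: labels come from a known linear rule.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(4), 0.0
lr, eps = 0.1, 0.1

for _ in range(200):
    # Craft FGSM versions of the batch against the current model...
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w        # per-example input gradient of the loss
    X_adv = X + eps * np.sign(grad_x)    # worst-case perturbation, budget eps

    # ...then take the usual logistic-regression gradient step on them,
    # so the decision boundary is learned with the attacker in the loop.
    err = sigmoid(X_adv @ w + b) - y
    w -= lr * (X_adv.T @ err) / len(y)
    b -= lr * err.mean()

acc = ((sigmoid(X @ w + b) >= 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Reactive measures would instead live around the deployed model: drift monitors, query-rate limits, and detection of anomalous inputs.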

Why does it matter?

ML systems are being embedded into critical infrastructures—from financial systems to healthcare. But they weren’t designed with security in mind. This report offers the groundwork to integrate AML risks into broader cybersecurity and AI assurance efforts, which is a big deal if you’re building AI governance frameworks or working on compliance strategies.

What’s Missing?

The taxonomy is excellent, but it's static. It doesn't reflect how fast adversarial tactics evolve, especially with foundation models. There's also no guidance on regulatory implications or alignment with emerging standards like the EU AI Act or ISO/IEC 42001. The report avoids discussing incident response practices or roles and responsibilities, which makes operationalization trickier. More practical case studies, or links to frameworks like the NIST RMF or CSF 2.0, would've helped bridge the gap between theory and implementation. It also lacks a lens on civilian vs. national security use cases, which often have different threat models.

Best For:

Security teams folding AI risks into their existing threat models, AI developers needing a clearer way to explain vulnerabilities, and policymakers shaping AI assurance or procurement frameworks. Also useful for compliance teams drafting threat modeling procedures or risk assessment protocols for regulated AI.

Source Details:

Burns, A., Lynch, C., Popejoy, A., Wood, B., Roellig, T., & Tabassi, E. (2023). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2). National Institute of Standards and Technology.

Alina O. Proaps Burns (NIST): AI security specialist leading AML efforts across federal agencies.

Chelsea Lynch (NIST): Computer scientist working on trustworthiness and risk analysis for AI.

Andrew Popejoy & Thomas Roellig (NIWC Atlantic): Focused on secure AI integration in defense systems.

Brian Wood (CISA): Contributed to the taxonomy through a national security and threat intelligence lens.

Elham Tabassi (NIST): Chief of Staff of NIST's Information Technology Laboratory (ITL) and a key figure behind the NIST AI Risk Management Framework.

About the author
Jakub Szarmach

AI Governance Library
