
Draft: Critical AI Security Guidelines v1.1 (2024)

Critical AI Security Guidelines v1.1 is a practitioner-driven, point-in-time guide developed by SANS-affiliated experts and contributors from leading cybersecurity and AI organizations. It presents hands-on security recommendations for teams developing generative AI and LLM-based systems.

What’s Covered?

This draft document takes a threat-informed approach to securing AI systems and focuses on the technical and operational safeguards needed to reduce AI-specific risks. Unlike high-level frameworks, it offers implementation-level advice for AI engineers, DevSecOps teams, and enterprise security architects.

The guide covers six main control domains: Access Controls, Data Protection, Deployment Strategies, Inference Security, Monitoring, and Governance/Risk/Compliance (GRC). Each section highlights vulnerabilities and mitigation techniques relevant to generative AI and LLMs, such as prompt injection, backdoored model packages, function misuse in agentic systems, inference monitoring, and model poisoning.

Key features include recommendations on securing augmentation data in RAG systems, using internal model registries, managing AI bills of materials (AIBOMs), and setting up AI GRC boards. The guide also urges caution with public models and APIs, and advises against embedding access rules directly into models; instead, it encourages external vector-based access controls, TEE-based protection, and red-teaming at both the model and application layers.

The guidance is responsive to evolving developments, referencing DeepSeek, Stargate AI, and the EU AI Act, and it is framed as a “living document” shaped by emerging tools, adversarial techniques, and regulatory changes.
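To make the external-access-control idea concrete, here is a minimal sketch (my illustration, not code from the guide; names such as Document and retrieve_for_user are hypothetical) that filters RAG retrieval results against per-document ACL metadata before anything reaches the prompt, so entitlement decisions never depend on the model itself:

```python
# Minimal sketch of an *external* access-control layer for RAG retrieval.
# Access rules live in document metadata and are enforced outside the model,
# so the LLM never sees chunks the caller is not entitled to.
from dataclasses import dataclass, field
import math

@dataclass
class Document:
    text: str
    embedding: list[float]
    allowed_roles: set[str] = field(default_factory=set)  # ACL stored as metadata

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_for_user(query_emb: list[float], docs: list[Document],
                      user_roles: set[str], k: int = 3) -> list[Document]:
    # Filter on the ACL *before* ranking, so restricted chunks can never
    # leak into the prompt, even if the model is prompt-injected.
    visible = [d for d in docs if d.allowed_roles & user_roles]
    return sorted(visible, key=lambda d: cosine(query_emb, d.embedding),
                  reverse=True)[:k]

if __name__ == "__main__":
    corpus = [
        Document("Public onboarding FAQ", [0.9, 0.1], {"employee", "contractor"}),
        Document("M&A due-diligence memo", [0.8, 0.2], {"executive"}),
    ]
    # A contractor's query close to both documents still retrieves only the FAQ.
    hits = retrieve_for_user([0.85, 0.15], corpus, {"contractor"})
    print([d.text for d in hits])  # ['Public onboarding FAQ']
```

Filtering before ranking means a prompt-injected model cannot widen its own access; the worst an attacker can do is reorder what the user was already allowed to see.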

The document includes:

• Introduction and Overview

• Section-by-section controls for:
  – Access Controls
  – Data Protection
  – Deployment Strategies
  – Inference Security
  – Monitoring
  – GRC (Governance, Risk, and Compliance)

• Guidance on sandboxing agent-generated code (see the sketch after this list), model red-teaming, API abuse detection, and log tracking

• Notes on evolving international regulatory context

• Appendix: model registries and security ops integration
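As a rough illustration of the sandboxing guidance (again my sketch under stated assumptions, not the guide's code), agent-generated code can be run in a separate interpreter process with Python's isolated mode, an empty environment, and a hard timeout. A production deployment would layer OS-level isolation (containers, seccomp/AppArmor, gVisor) on top:

```python
# Minimal sketch of sandboxing agent-generated code before execution.
# Illustrative only: real isolation needs OS-level controls as well.
import os
import subprocess
import sys
import tempfile

def run_agent_code(code: str, timeout_s: float = 5.0) -> subprocess.CompletedProcess:
    # Write the generated code to a throwaway file so it runs as its own script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # -I: isolated mode (ignores env vars and user site-packages);
        # env={} strips inherited secrets; timeout bounds runaway loops.
        return subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True, text=True, timeout=timeout_s, env={},
        )
    finally:
        os.unlink(path)

if __name__ == "__main__":
    result = run_agent_code("print(sum(range(10)))")
    print(result.stdout.strip())  # 45
```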

Why It Matters?

AI security guidance often stops at principles or policy abstractions—this report drills down to where the risks live: inference endpoints, vector stores, rogue model packages, and misused agents. It fills a practical gap for security and AI engineering teams tasked with building or integrating LLMs into real-world systems that must remain trustworthy under attack.

What’s Missing?

The guidelines are highly technical but less structured in governance-maturity terms. While they reference GRC, there is limited attention to organizational accountability, auditability, or risk trade-offs, and the focus is on point solutions rather than phased security strategies. There is also little comparative benchmarking against existing standards such as the NIST AI RMF or ISO/IEC 23894, which could help position the controls for wider compliance alignment.

Best For:

Security engineers, AI product leads, and compliance officers in fast-moving environments who need to apply practical protections to AI systems. Especially useful for teams working on LLM-based features, agentic architectures, and high-stakes deployments in regulated or adversarial contexts.

Source Details:

Draft: Critical AI Security Guidelines v1.1 (2024). Developed by a cross-sector group of cybersecurity and AI professionals, including contributors from SAP, Fortinet, OWASP, Palo Alto Networks, HiddenLayer, Verizon, and SANS Institute. Notable contributors include Sounil Yu (creator of the Cyber Defense Matrix), Jorge Orchilles (Verizon), and Rob Lee (SANS), all recognized for their leadership in enterprise security, offensive security, and incident response. The draft consolidates frontline insights from AI red-teaming, security tool development, and GRC integration into a tactical guide for operational resilience.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources
