AI Governance Library

FRAMEWORK TO ADVANCE AI GOVERNANCE AND RISK MANAGEMENT IN NATIONAL SECURITY

This national security AI framework defines when and how U.S. government agencies can use AI in national security systems. It prohibits dangerous applications, sets minimum risk management standards for high-impact AI use, and mandates oversight, training, and transparency.

What’s Covered?

The Framework lays out four pillars to guide federal agency use of AI within National Security Systems (NSS). It fulfills mandates under Section 4.2 of the AI National Security Memorandum (AI NSM) and reflects the U.S. commitment to lawful, rights-respecting AI use in defense, intelligence, and related domains.

1. Pillar I – AI Use Restrictions

This pillar identifies prohibited, high-impact, and sensitive AI use cases.

Prohibited AI includes:

  • Profiling or surveillance based solely on constitutionally protected activity
  • Emotional state inference without consent or justification
  • Sole reliance on biometrics to infer protected characteristics
  • Collateral damage estimation for kinetic actions without oversight
  • Final immigration or asylum decisions
  • Nuclear weapon decisions without human oversight

High-impact AI includes:

  • Biometric tracking for military/law enforcement
  • National security threat classifications
  • Assessing weaponization risks of sensitive scientific information
  • AI outputs used as sole basis for finished intelligence
  • AI that autonomously writes malicious code

Personnel-Impacting AI includes:

  • AI used in federal hiring, promotion, discipline, or health assessments

For these uses, agencies must notify affected individuals and allow them to appeal.

2. Pillar II – Minimum Risk Management Practices

Applies to high-impact and personnel-impacting AI.

Agencies must:

  • Conduct AI-specific risk and impact assessments
  • Validate data quality and fitness
  • Pilot and monitor AI before full deployment
  • Document model failure modes and bias risks
  • Train operators and mitigate overreliance
  • Maintain human oversight and accountability
  • Enable internal escalation and whistleblower protections

For AI impacting federal employees, agencies must also:

  • Gather workforce input
  • Disclose AI involvement in adverse decisions
  • Provide remedy and appeal mechanisms

Waivers from these safeguards require written justification, are limited to one year, and must be logged, reported, and reviewed annually.

3. Pillar III – Cataloguing and Monitoring AI Use

Agencies must:

  • Maintain annual inventories of high-impact AI use cases
  • Establish enterprise-wide data governance policies
  • Evaluate AI training data for bias and robustness
  • Define how to manage models trained on erroneous or sensitive data
  • Adopt internal standards for auditing and testing AI

Each agency must designate a Chief AI Officer, whose tasks include advising leadership, overseeing risk and governance structures, tracking AI use, and promoting responsible AI adoption.

Each agency must also have an AI Governance Board, chaired by the Chief AI Officer, composed of senior officials responsible for IT, cybersecurity, legal compliance, civil rights, and budgeting.

4. Pillar IV – Training and Accountability

Agencies must:

  • Train all relevant personnel—developers, supervisors, legal teams—on AI risks and responsible use
  • Update accountability frameworks to clarify roles in risk evaluation, documentation, and incident response
  • Enhance whistleblower protections specific to AI
  • Hold individuals accountable for misuse or poor governance of AI systems

💡 Why Does It Matter?

This is one of the most detailed public frameworks for regulating military and intelligence AI use. It sets hard lines on where AI can’t go—like emotion recognition or immigration decisions—and builds strong scaffolding around AI governance, risk management, and public accountability. It signals how democratic states aim to constrain AI in high-stakes contexts while preserving operational capacity.

What’s Missing?

The framework assumes agencies have the internal capacity and resourcing to implement these controls—yet doesn’t outline how to close capacity gaps. While it emphasizes ethics and law, it doesn’t address gray-zone tactics (like AI-enabled misinformation) or dual-use innovation.

Transparency is encouraged, but classified annexes can limit public insight. The framework lacks mechanisms for cross-agency consistency, and international coordination is referenced (e.g., Political Declaration on Responsible Military AI Use) but not detailed.

Finally, there’s limited mention of how to align this with existing AI policies outside NSS—such as those under civilian oversight or involving public-private collaboration.

Best For:

  • Defense and intelligence leaders seeking operational AI guardrails
  • Privacy and civil liberties experts reviewing AI use in government
  • AI compliance officers in government contracting
  • National security policy advisors designing internal oversight
  • Scholars researching military AI accountability

Source Details:

Title: Framework to Advance AI Governance and Risk Management in National Security

Issuing Bodies: U.S. National Security Council with required action from covered Department Heads (DoD, DHS, ODNI, DOJ, DOE, etc.)

About the author
Jakub Szarmach
