AI Governance Library

Navigating AI Compliance, Part 1: Tracing Failure Patterns in History

This report explores historical compliance breakdowns in high-risk industries—like finance, health, and aviation—to extract patterns that can inform AI governance today.

What’s Covered?

The Institute for Security and Technology (IST) sets the stage for a structured approach to AI compliance by learning from the past. The report offers a framework for understanding where organizations typically fail and how that might translate to the AI era.

Historical Case Studies (Pages 5–8):

The report draws on 11 high-profile cases, including:

  • Theranos (procedural and performance failure)
  • Lehman Brothers (institutional failure)
  • Boeing 737 MAX (all three: institutional, procedural, and performance)
  • FTX and CrowdStrike (governance and software automation risks)

These failures are analyzed to expose three types of compliance breakdowns:

  • Institutional: Organizational culture failed to prioritize or support compliance (e.g., lack of board oversight or missing internal controls)
  • Procedural: Gaps between policy and execution (e.g., vague protocols or untrained staff)
  • Performance: Human or system errors (e.g., a misfiring AI model or analyst oversight)

Key Lessons Identified (Pages 8–10):

  • Transparency is non-negotiable for system reliability and trust.
  • Third-party verification mitigates risk blindness.
  • Data governance must evolve beyond traditional record-keeping.
  • Accountability requires enforceable standards across sectors.

Sources of AI Governance (Pages 11–14):

The report introduces a five-source framework that defines the current AI compliance landscape:

  1. Laws & Regulations – Limited at federal level in the U.S., but active at state and international levels
  2. Guidance – White House Executive Orders, agency memos (e.g. OMB M-24-10), and EU AI guidelines
  3. Norms – Nonbinding but socially enforced principles like fairness, explainability, and consent
  4. Standards – Technical protocols (e.g., ISO, NIST AI RMF) that offer structure without being legal mandates
  5. Organizational Policies – Internal commitments, both public and private, forming a patchwork of self-regulation

Defining Compliance Failure in AI (Page 15):

Rather than relying on a single legal trigger, the authors propose a broader definition. A failure occurs when an AI system:

  • Violates existing law
  • Conflicts with public regulatory guidance
  • Breaches norms or standards
  • Undermines a company’s own published (or internal) responsible AI policies

Current State of Play (Pages 16–18):

The report offers a reality check on common compliance hotspots:

  • Data Privacy and User Consent – Ongoing litigation around AI scraping and GDPR violations
  • Algorithmic Bias – Lawsuits and regulatory probes tied to discrimination in hiring, lending, or healthcare
  • Interpretability – Black-box models make post-incident accountability nearly impossible

Outlook (Page 19):

Three major hurdles are flagged:

  • Ambiguity in safety definitions will hinder adoption in regulated sectors
  • Opaque model architecture makes auditing difficult
  • AI agents create complex questions around who is responsible for harm

💡 Why it matters

The report’s framing is grounded in the idea that compliance in AI shouldn’t start after a public failure. It offers a practical, multi-industry lens to assess risk—not just in terms of what’s legal, but what’s preventable. The authors emphasize that responsible governance must be internalized, not just documented.

What’s Missing?

The report is U.S.-centric in its legal focus and could benefit from deeper coverage of global regulatory shifts like the EU AI Act or China’s generative AI rules. While it categorizes failure well, it doesn’t go deep into how to operationalize compliance for modern ML systems. Questions such as “How does a startup implement third-party review affordably?” or “What makes a red team effective for foundation models?” remain unanswered.

The report also assumes that compliance culture is adaptable, but it may underestimate resistance in profit-driven environments with tight timelines. There’s little discussion of tradeoffs between performance, cost, and risk mitigation—something critical in real-world product decisions.

Best For:

  • Compliance leads and general counsel at AI companies

  • Policymakers developing baseline AI risk frameworks
  • Product managers looking to build internal governance tools
  • Researchers studying organizational behavior and AI risk
  • Investors performing risk due diligence on AI ventures

Source Details:

Title: Navigating AI Compliance, Part 1: Tracing Failure Patterns in History

Authors: Mariami Tkeshelashvili, Tiffany Saade

Institution: Institute for Security and Technology (IST), with support from the Patrick J. McGovern Foundation

Date: December 2024

About the author
Jakub Szarmach
