AI Governance Library

AI Governance Controls Mega-map (Feb 2025)

This is one of the best open-source resources for operationalizing multi-standard AI governance. Kavanagh’s framework deserves a place on every AI compliance lead’s desk.

This is not just another mapping exercise — it’s a full-blown architecture for building a real, usable AI governance framework that aligns security, privacy, and responsible AI across six heavyweight sources. The Mega-map sets a high bar for what control harmonization should look like when an organization faces complex compliance expectations across security (ISO 27001, SOC 2), privacy (ISO 27701), and AI-specific governance (ISO 42001, NIST AI RMF, EU AI Act).

🧩 What’s inside

At the core of the Mega-map is a rationalized set of:

  • 12 Domains
  • 44 Master Controls

Each domain serves as a thematic building block — from Governance and Leadership (GL), through Safe and Responsible AI (RS), to Transparency and Communication (CO). Every master control aggregates similar requirements across the six frameworks, reducing the duplication and fog that usually plague compliance efforts.
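The domain-and-control hierarchy described above lends itself to a simple data model, e.g. as the backbone of a GRC tool. A minimal sketch in Python — the domain codes GL, RS, and CO come from the Mega-map, but the control IDs and titles below are illustrative placeholders, not Kavanagh's actual wording:

```python
from dataclasses import dataclass, field

@dataclass
class MasterControl:
    control_id: str   # e.g. "GL-1"
    title: str

@dataclass
class Domain:
    code: str         # two-letter domain code, e.g. "GL"
    name: str
    controls: list[MasterControl] = field(default_factory=list)

# Three of the twelve domains, with placeholder controls for illustration.
domains = [
    Domain("GL", "Governance and Leadership",
           [MasterControl("GL-1", "AI governance policy and accountability")]),
    Domain("RS", "Safe and Responsible AI",
           [MasterControl("RS-1", "Risk assessment for AI systems")]),
    Domain("CO", "Transparency and Communication",
           [MasterControl("CO-1", "User-facing transparency notices")]),
]

# Index every master control by its ID for quick lookup across domains.
control_index = {c.control_id: c for d in domains for c in d.controls}
print(control_index["RS-1"].title)
```

Keying everything off a stable control ID is what lets audits, assessments, and documentation all point back at the same 44 master controls.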

Kavanagh doesn’t just list controls — he explains the methodology:

  • Manual parsing of 1,000+ pages
  • Physical sorting of discrete control statements (yes, with paper)
  • Domain clustering based on both thematic overlap and organizational ownership

The approach blends the logic of a policy analyst with the instincts of an operator. Think ISO meets LEGO — you get building blocks, but they’re snapped together in a way that mirrors real-world workflows.

🔍  Who is this for?

This framework is designed for:

  • Mid-to-large tech firms building AI systems in the EU and US
  • Organizations deploying at least one high-risk AI system under the EU AI Act
  • Teams acting as both data controllers and processors

It’s not for:

  • General-purpose foundation model builders (OpenAI, Mistral, etc.)
  • Defence and law enforcement applications
  • Early-stage startups without dedicated compliance resources

That said, most of the 44 controls apply universally — even if some specific clauses from the EU AI Act don’t.

🧠  Highlights

Master Control Set as a Living Backbone

Instead of dealing with the AI Act one day and ISO audits the next, this approach uses the master controls as a single source of compliance truth. Everything maps back to them — audits, assessments, documentation.

Practical Grouping by Real Ownership

Controls aren’t grouped just by topic but by who in the organization owns them. That makes the framework implementable rather than merely academic.

Pattern Matching for Cross-Framework Alignment

For example, one master control on Incident Detection and Response (IM-1) aligns with:

  • ISO 42001 (AI incident management)
  • EU AI Act (serious incident reporting)
  • ISO 27001 (security incident response)
  • NIST AI RMF (incident response and detection functions)

That saves countless hours for teams building out GRC dashboards and internal policy documents.
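A crosswalk like the IM-1 example above can be kept machine-readable, so dashboards and policy documents all query one structure instead of four separate spreadsheets. A minimal sketch, where the framework names follow the list above but the clause descriptions are paraphrases rather than exact citations:

```python
# Map each master control to the framework requirements it satisfies.
# Clause descriptions are paraphrased from the review, not official text.
crosswalk: dict[str, dict[str, str]] = {
    "IM-1": {
        "ISO 42001": "AI incident management",
        "EU AI Act": "Serious incident reporting",
        "ISO 27001": "Security incident response",
        "NIST AI RMF": "Incident response and detection functions",
    },
}

def frameworks_covered(control_id: str) -> list[str]:
    """List the frameworks a given master control maps to."""
    return sorted(crosswalk.get(control_id, {}))

def controls_for(framework: str) -> list[str]:
    """Reverse lookup: which master controls address a given framework."""
    return [cid for cid, reqs in crosswalk.items() if framework in reqs]

print(frameworks_covered("IM-1"))
print(controls_for("EU AI Act"))
```

The reverse lookup is the practical payoff: when an auditor asks how a specific framework requirement is met, the answer is one query away.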

💡  Why it matters

This is the most practically useful resource on multi-framework AI compliance available right now.

It doesn’t just help you pass audits — it helps prevent checkbox compliance and avoid the trap of “shadow frameworks” that exist only on paper. The clarity and crosswalk structure also make it easier to map future updates (like NIS2, U.S. state-level AI laws, or sectoral standards) into your existing setup.

Author: James Kavanagh

Hosted by: The Company Ethos – Doing AI Governance

🔗 Source: Ethos AI Governance Mega-map on Substack

About the author
Jakub Szarmach
