AI Governance Library

Choosing the Right Controls for AI Risks

A visual guide and explanatory article by James Kavanagh, published via The Company Ethos (April 2025), that maps major AI risks—like bias, hallucinations, and adversarial attacks—to practical prevention, detection, and response controls across design-time and run-time phases.

🔹 What’s Covered

James Kavanagh’s piece translates the complex domain of AI risk management into a practical decision-support resource. The focus is on control selection—what types of safeguards to apply at what stages of the AI lifecycle, depending on the specific risk.

The article and accompanying chart break down eight critical AI risk types:

  1. Model Drift & Data Distribution Shift
  2. Hallucinations in Generative Models
  3. Bias and Fairness Issues
  4. Adversarial Inputs & Robustness Vulnerabilities
  5. Loss of Personal or Confidential Information
  6. Harmful Content (e.g. Toxicity, Misinformation)
  7. Feedback Loops & Behaviour Amplification
  8. Overreliance on Automation (Erosion of Human Oversight)

Each risk is linked to:

  • Control Purposes: Prevention, Detection, Response
  • Lifecycle Stage: Design-Time vs Run-Time

Kavanagh explains that control type and timing must match the nature of the risk. For instance, adversarial attacks are best addressed with prevention controls at design time, while bias demands both design-time fairness work and run-time auditing.
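
As a rough illustration (not taken from the article), the risk-to-control mapping described above could be captured in a simple register structure. All class and control names here are hypothetical sketches for teams building their own control libraries:

```python
from dataclasses import dataclass, field
from enum import Enum

class Purpose(Enum):
    PREVENTION = "prevention"
    DETECTION = "detection"
    RESPONSE = "response"

class Stage(Enum):
    DESIGN_TIME = "design-time"
    RUN_TIME = "run-time"

@dataclass
class Control:
    name: str
    purpose: Purpose
    stage: Stage

@dataclass
class RiskEntry:
    risk: str
    controls: list[Control] = field(default_factory=list)

# Hypothetical example: bias needs both design-time fairness work
# and run-time auditing, so its entry spans both lifecycle stages.
bias = RiskEntry("Bias and Fairness Issues", [
    Control("Fairness review of training data", Purpose.PREVENTION, Stage.DESIGN_TIME),
    Control("Ongoing disparity-metrics audit", Purpose.DETECTION, Stage.RUN_TIME),
])

def coverage(entry: RiskEntry) -> set[tuple[Purpose, Stage]]:
    """Return the (purpose, stage) combinations covered for a risk."""
    return {(c.purpose, c.stage) for c in entry.controls}
```

A structure like this makes gaps visible: if a risk's coverage set contains no run-time detection pair, that is a prompt to select an additional control.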

The article also walks through a typical AI risk treatment process:

  • Evaluate existing safeguards
  • Select additional controls to reduce impact/likelihood/feedback
  • Reassess residual risk
  • Implement and monitor effectiveness over time
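
The treatment cycle above can be sketched as a simple loop. This is an illustrative reading of the process, not code from the article; the `assess` scoring function and the numeric risk-appetite threshold are assumptions:

```python
def treat_risk(risk, existing_controls, candidate_controls, risk_appetite, assess):
    """Sketch of a risk treatment cycle: evaluate existing safeguards,
    add candidate controls until residual risk meets appetite, then
    return the control set for implementation and monitoring.
    `assess(risk, controls)` is a hypothetical scoring function."""
    controls = list(existing_controls)       # 1. evaluate existing safeguards
    residual = assess(risk, controls)
    for candidate in candidate_controls:     # 2. select additional controls
        if residual <= risk_appetite:
            break
        controls.append(candidate)
        residual = assess(risk, controls)    # 3. reassess residual risk
    return controls, residual                # 4. implement and monitor over time
```

For example, with a toy scorer `assess = lambda risk, controls: max(0, 10 - 3 * len(controls))` and an appetite of 2, the loop keeps adding candidate controls until the residual score drops within appetite.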

There’s a strong emphasis on layered defenses, combining human oversight and technical safeguards. The piece closes with reflections on the necessity of human judgment alongside automation in governance contexts.

🔹 Why It Matters

This resource operationalises AI governance by linking specific risks to tailored control strategies. It’s a valuable bridge between abstract ethical or regulatory goals (like fairness, transparency, safety) and day-to-day decisions made by product, legal, and risk teams.

🔹 What’s Missing

While comprehensive in scope, the framework is not mapped directly to specific legal obligations (e.g. GDPR, the EU AI Act) or technical standards (e.g. ISO/IEC 42001). Nor does it offer implementation examples or maturity guidance, both of which would help practitioners benchmark their programs.

🔹 Best For

This is a must-read for AI governance professionals building or refining risk treatment frameworks. It’s especially useful for compliance leads, risk officers, technical governance architects, and policymakers developing control libraries or risk registers.

🔹 Source Details

Title: Choosing the Right Controls for AI Risks

Author: James Kavanagh

Published: April 9, 2025

Platform: The Company Ethos – Doing AI Governance

Link: https://www.ethos-ai.org (article hosted on Substack)

Author Credentials: James Kavanagh is a practitioner in AI governance with experience leading ISO 42001 certification projects and building AI risk frameworks grounded in regulatory and operational best practices.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources
