🔹 What’s Covered
James Kavanagh’s piece translates the complex domain of AI risk management into a practical decision-support resource. The focus is on control selection: which safeguards to apply at which stages of the AI lifecycle, depending on the specific risk.
The article and accompanying chart break down eight critical AI risk types:
- Model Drift & Data Distribution Shift
- Hallucinations in Generative Models
- Bias and Fairness Issues
- Adversarial Inputs & Robustness Vulnerabilities
- Loss of Personal or Confidential Information
- Harmful Content (e.g. Toxicity, Misinformation)
- Feedback Loops & Behaviour Amplification
- Overreliance on Automation (Erosion of Human Oversight)
Each risk is linked to:
- Control Purposes: Prevention, Detection, Response
- Lifecycle Stage: Design-Time vs Run-Time
Kavanagh explains how the control type and timing must match the nature of the risk. For instance, prevention at design-time is the ideal defence against adversarial attacks, while bias demands both design-time fairness work and run-time auditing.
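The risk-to-control mapping described above can be pictured as a small control library. The sketch below is purely illustrative (the risk names, control names, and structure are my own shorthand, not Kavanagh’s published taxonomy), but it shows how the article’s two axes, control purpose and lifecycle stage, might be encoded so that teams can query candidate controls for a given risk:

```python
from dataclasses import dataclass
from enum import Enum

class Purpose(Enum):
    PREVENTION = "prevention"
    DETECTION = "detection"
    RESPONSE = "response"

class Stage(Enum):
    DESIGN_TIME = "design-time"
    RUN_TIME = "run-time"

@dataclass(frozen=True)
class Control:
    name: str
    purpose: Purpose
    stage: Stage

# Illustrative entries for two of the article's eight risk types.
CONTROL_LIBRARY = {
    "adversarial_inputs": [
        Control("adversarial training", Purpose.PREVENTION, Stage.DESIGN_TIME),
        Control("input anomaly detection", Purpose.DETECTION, Stage.RUN_TIME),
    ],
    "bias_and_fairness": [
        Control("fairness-aware data curation", Purpose.PREVENTION, Stage.DESIGN_TIME),
        Control("run-time fairness audit", Purpose.DETECTION, Stage.RUN_TIME),
    ],
}

def controls_for(risk: str, stage: Stage) -> list[Control]:
    """Return the library's controls for a risk at a given lifecycle stage."""
    return [c for c in CONTROL_LIBRARY.get(risk, []) if c.stage == stage]
```

A query such as `controls_for("bias_and_fairness", Stage.RUN_TIME)` then surfaces the run-time audit control, mirroring the article’s point that bias needs safeguards on both sides of deployment.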
The article also walks through a typical AI risk treatment process:
- Evaluate existing safeguards
- Select additional controls to reduce impact/likelihood/feedback
- Reassess residual risk
- Implement and monitor effectiveness over time
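To make the “reassess residual risk” step above concrete, here is a deliberately naive scoring sketch. The multiplicative model and the effectiveness values are my own assumptions for illustration, not a formula from the article; real programmes would use their organisation’s risk methodology:

```python
def residual_risk(likelihood: float, impact: float,
                  effectiveness: list[float]) -> float:
    """Toy residual-risk score: inherent risk (likelihood x impact),
    reduced multiplicatively by each control's effectiveness in [0, 1]."""
    score = likelihood * impact
    for eff in effectiveness:
        score *= (1.0 - eff)
    return score

# Example: a high-likelihood, high-impact risk treated with two controls.
inherent = residual_risk(0.8, 0.9, [])            # no controls: 0.72
treated = residual_risk(0.8, 0.9, [0.5, 0.4])     # after treatment: 0.216
```

Comparing `inherent` and `treated` is the “reassess” step in miniature: if the treated score still exceeds the organisation’s risk appetite, the loop returns to control selection.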
There’s a strong emphasis on layered defences, combining human oversight with technical safeguards. The piece closes with reflections on the necessity of human judgment alongside automation in governance contexts.
🔹 Why It Matters
This resource operationalises AI governance by linking specific risks to tailored control strategies. It’s a valuable bridge between abstract ethical or regulatory goals (like fairness, transparency, safety) and day-to-day decisions made by product, legal, and risk teams.
🔹 What’s Missing
While comprehensive in scope, the framework is not mapped directly to specific legal obligations (e.g. GDPR, EU AI Act) or technical standards (e.g. ISO/IEC 42001). Nor does it provide implementation examples or maturity guidance, both of which would help practitioners benchmarking their programmes.
🔹 Best For
This is a must-read for AI governance professionals building or refining risk treatment frameworks. It’s especially useful for compliance leads, risk officers, technical governance architects, and policymakers developing control libraries or risk registers.
🔹 Source Details
Title: Choosing the Right Controls for AI Risks
Author: James Kavanagh
Published: April 9, 2025
Platform: The Company Ethos – Doing AI Governance
Link: https://www.ethos-ai.org (article hosted on Substack)
Author Credentials: James Kavanagh is a practitioner in AI governance with experience leading ISO 42001 certification projects and building AI risk frameworks grounded in regulatory and operational best practices.