Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures

Published by the Institute for Security and Technology in March 2025, this report outlines 39 practical strategies—22 technical and 17 policy-based—for AI developers and users to prevent institutional, procedural, and performance failures across the AI system lifecycle.

What’s Covered?

The report offers a structured toolkit for translating AI compliance from theory into action. Building on Part 1, which explored historic compliance failures in adjacent industries, Part 2 focuses on actionable risk mitigation strategies tailored to AI. It emphasizes that not all harms amount to failures (some carry learning value) and that organizations should apply these practices in proportion to their risk, use case, and available resources.

The nine top-level recommendations include proportional compliance for high-impact systems, privacy-preserving technologies, strong cybersecurity practices, safety benchmarks, transparency tools such as model and data cards, explainability, and bias mitigation. Importantly, the report makes a business case for proactive compliance, pointing to return on investment through reduced regulatory exposure, stronger user trust, investor appeal, and improved talent retention. Global frameworks such as the EU AI Act, the NIST AI RMF, ISO/IEC 42001, the GDPR, and the OECD AI Principles provide regulatory and technical grounding throughout.
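To make the transparency recommendation concrete, here is a minimal sketch of a machine-readable model card. The field names and the example system are illustrative assumptions, not a schema taken from the report:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal machine-readable model card (illustrative fields only)."""
    model_name: str
    version: str
    intended_use: str                # in-scope use cases
    out_of_scope_use: str            # uses the builder explicitly disclaims
    training_data_summary: str       # provenance of the training data
    evaluation_benchmarks: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

# Hypothetical example system; none of these values come from the report.
card = ModelCard(
    model_name="loan-risk-classifier",
    version="1.2.0",
    intended_use="Pre-screening consumer loan applications for human review",
    out_of_scope_use="Fully automated credit denial without human oversight",
    training_data_summary="Anonymized 2018-2023 application records (see data card)",
    evaluation_benchmarks=["holdout AUC", "demographic parity difference"],
    known_limitations=["Not validated for small-business lending"],
)

# Publish the card alongside the model artifact so users and auditors can inspect it.
print(json.dumps(asdict(card), indent=2))
```

Keeping the card as structured data rather than free-form prose makes it easy to version alongside the model and to check automatically that required fields are filled in.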

Contents of the document:

• Executive Summary

• Recap of Navigating AI Compliance, Part 1: Tracing Failure Patterns in History

• Introduction

• Methodology

• The AI Lifecycle Stages

• Return on Investment for Implementing Strong Compliance Practices

• Risk Mitigation Strategies for Safeguarding Against Future Failures

 – Data Collection and Preprocessing (for builders)

 – Model Training and Evaluation (for builders)

 – Model Application (for builders and users)

 – User Interaction (for builders and users)

 – Ongoing Monitoring and Maintenance (for builders and users)

• Conclusion

Why It Matters?

This resource provides a grounded, practical approach for applying values like transparency, accountability, and fairness across the AI lifecycle. By offering strategies mapped to each phase, it helps bridge the gap between governance principles and day-to-day decisions in building or using AI—especially in sensitive or high-impact domains.
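As a rough illustration of that phase-by-phase mapping, a team could track which mitigation strategies it has adopted at each lifecycle stage. The stage names below follow the report's table of contents; the strategies listed under each are hypothetical examples, not the report's full set of 39:

```python
# Stage names follow the report's table of contents; the strategies listed
# under each stage are hypothetical examples, not the report's full set of 39.
mitigation_plan = {
    "Data Collection and Preprocessing": [
        "privacy-preserving techniques (e.g., anonymization)",
        "data card documenting provenance and consent",
    ],
    "Model Training and Evaluation": [
        "safety benchmarks run before release",
        "bias measurement across user groups",
    ],
    "Model Application": [
        "model card published with the deployed system",
    ],
    "User Interaction": [
        "explainability surfaces for affected users",
    ],
    "Ongoing Monitoring and Maintenance": [
        "drift detection and periodic re-evaluation",
    ],
}

# Simple completeness check: every lifecycle stage has at least one strategy.
for stage, strategies in mitigation_plan.items():
    assert strategies, f"no mitigation strategy assigned to: {stage}"
    print(f"{stage}: {len(strategies)} strategy(ies)")
```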

What’s Missing?

The document doesn't prioritize strategies by feasibility or cost, which could challenge smaller or under-resourced teams. It also leans heavily on Western regulatory models, with limited attention to implementation outside the EU/US context or to engagement with civil society actors from the Global South.

Best For:

Highly relevant for AI compliance leads, policy teams, regulators, standard setters, and product leaders aiming to align development and deployment with global AI governance expectations and upcoming audit regimes.

Source Details:

Tkeshelashvili, Mariami, and Tiffany Saade. Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures. Institute for Security and Technology, March 2025.

Mariami Tkeshelashvili – AI Governance and Risk Researcher, Institute for Security and Technology

Tiffany Saade – Policy Advisor and Researcher, Institute for Security and Technology

https://securityandtechnology.org

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources

