What’s Covered?
This is a hands-on compliance guide tailored for internal auditors navigating the EU AI Act. The first part explains the Act's foundations: its purpose, scope, and the EU's legal framing of AI systems. AI is defined broadly to future-proof the regulation, encompassing systems that operate with some level of autonomy and infer how to generate outputs that can influence their environment.
Key features of the guide include:
- **Risk-Based Classification**: The Act classifies AI into four categories:
  - Unacceptable risk (e.g. manipulative systems, banned outright)
  - High risk (e.g. biometric ID, credit scoring – subject to conformity assessments and monitoring)
  - Limited risk (e.g. chatbots – transparency duties)
  - Minimal risk (e.g. spam filters – no formal requirements)

  It also introduces General Purpose AI (GPAI), including a systemic-risk threshold based on training compute (above 10²⁵ FLOPs) or Commission designation; see the sketch after this list.
- **Role-Specific Responsibilities**: The guide details obligations depending on whether a company is a provider, deployer, distributor, importer, or authorised representative. The provider has the most substantial responsibilities (e.g. CE marking, risk assessments), while deployers must ensure appropriate use and oversight.
- **Timeline and Roadmap**: A structured compliance roadmap outlines when obligations begin and how internal audit functions can track progress, including operational preparation, documentation, and auditability of AI systems.
- **Deep-Dive and Survey Insights**: Later sections offer a granular breakdown of obligations by role and risk level, paired with insights from an internal audit community survey. It highlights how many audit teams currently assess AI use and what practices are emerging across industries.
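To make the tiering concrete for audit tooling, here is a minimal Python sketch of the four risk categories and the GPAI systemic-risk presumption. Everything here is illustrative rather than taken from the guide: `RiskTier`, `SYSTEMIC_RISK_FLOPS`, and `gpai_presumed_systemic` are hypothetical names, and the Act's actual legal tests involve far more than a single numeric comparison.

```python
from enum import Enum

# Hypothetical helper, not from the ECIIA guide: the four risk tiers as
# summarized above, mapped to the headline obligation for each tier.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and post-market monitoring"
    LIMITED = "transparency duties"
    MINIMAL = "no formal requirements"

# Systemic-risk presumption for GPAI models: cumulative training compute
# above 10^25 FLOPs, or designation by the Commission (which a numeric
# check cannot capture, hence the explicit flag).
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_presumed_systemic(training_flops: float,
                           commission_designated: bool = False) -> bool:
    """Return True if a GPAI model meets the systemic-risk presumption."""
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOPS

if __name__ == "__main__":
    print(RiskTier.HIGH.value)            # conformity assessment and ...
    print(gpai_presumed_systemic(2e25))   # True: above the 10^25 threshold
    print(gpai_presumed_systemic(5e24))   # False, absent designation
```

In practice an auditor would treat the compute figure as one input among several, since the Commission can designate a GPAI model as posing systemic risk regardless of its training compute.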
💡 Why it matters?
The AI Act is the first binding horizontal AI regulation worldwide, setting the tone for global compliance regimes. Internal auditors will play a central role in translating legal requirements into operational processes, especially in large companies with cross-border deployments. This guide helps ensure they are not merely reacting to the Act but proactively embedding AI governance into enterprise risk management.
What’s Missing?
The guide doesn’t go deep into technical auditing techniques or sector-specific use cases. For example, high-risk AI in healthcare versus finance may demand different audit methods, but this nuance is only touched on lightly. It also doesn’t fully explore challenges such as assessing General Purpose AI embedded in third-party tools or managing role shifts (e.g. deployers becoming providers through customization). A stronger focus on assurance frameworks or audit testing examples could boost its utility for more advanced practitioners.
Best For:
Internal auditors, compliance officers, risk managers, and legal counsel working at organizations that develop, import, or use AI systems in the EU. Also valuable for consultants building AI governance programs for regulated entities.
Source Details:
Title: THE AI ACT: ROAD TO COMPLIANCE – A Practical Guide for Internal Auditors
Publisher: European Confederation of Institutes of Internal Auditing (ECIIA)
Context: Offers structured guidance aligned with the AI Act’s phased implementation and compliance regime. The practical orientation makes it one of the more accessible tools for non-technical audit teams trying to meet emerging legal obligations.
Expertise: Draws on internal audit practice and regulatory alignment, with survey-based insights into industry readiness.