AI Auditing Checklist for AI Auditing

This checklist is a practical tool for conducting end-to-end socio-technical audits of AI systems. Commissioned by the EDPB, it provides a hands-on methodology for identifying risk, bias, and compliance gaps across the AI lifecycle—from data handling to deployment.
What’s Covered?

The checklist introduces the End-to-End Socio-Technical Algorithmic Audit (E2EST/AA) methodology, focusing on AI systems operating in real-world decision-making contexts. It starts by defining the scope of audits—centred not just on technical components but also on their social and organizational embedding. The audit process is broken down into five steps: the model card, system map, identification of bias moments and sources, bias testing, and adversarial audits.

Model cards act as a structured record of system metadata, training data, decision logic, human oversight, and redress mechanisms. The system map explores how the model interacts with broader technical and organizational systems. The heart of the methodology lies in uncovering bias: not just as statistical deviation, but as a social harm. Bias is mapped across the lifecycle—from the world feeding into data, to data shaping predictions, and predictions impacting the world.
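For illustration only, a record of this kind can be kept as simple structured data. The sketch below is an assumption about how such a card might look in code; the field names and example values are hypothetical, not the checklist's own schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model-card record. Fields mirror the categories the
    checklist describes; names and structure are assumptions."""
    system_name: str
    intended_use: str
    training_data: str            # provenance and collection period
    decision_logic: str           # model family and key features
    human_oversight: str          # who reviews or can override outputs
    redress_mechanism: str        # how affected people contest decisions
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example of a filled-in card:
card = ModelCard(
    system_name="benefit-eligibility-screener",
    intended_use="Triage applications for manual review",
    training_data="2018-2022 case files from a national registry",
    decision_logic="Gradient-boosted trees over socio-economic features",
    human_oversight="Caseworker confirms every negative decision",
    redress_mechanism="Appeal form routed to an independent review board",
    known_limitations=["Sparse data for applicants under 21"],
)
```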

Bias testing includes fairness metrics (risk difference, demographic parity, equal opportunity) and validation techniques across pre-processing, in-processing, and post-processing. The document recommends adversarial audits (e.g. sockpuppeting, crowdsourcing, or impact data scraping) as an additional, often essential, step—especially when working with black-box or unsupervised systems.
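As a concrete sketch (not code from the checklist itself), the metrics named above reduce to simple rate comparisons for a binary classifier with a binary protected attribute. All names and data below are illustrative:

```python
import numpy as np

def selection_rate(y_pred, group, g):
    """Share of positive predictions within group g."""
    return y_pred[group == g].mean()

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between groups 1 and 0.
    The same quantity is often reported as 'risk difference' when the
    positive label is an adverse outcome."""
    return selection_rate(y_pred, group, 1) - selection_rate(y_pred, group, 0)

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between groups,
    i.e. P(y_pred=1 | y_true=1, group)."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return tpr[1] - tpr[0]

# Toy data: predictions for eight individuals across two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))         # 0.25 gap in selection rates
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.33 gap in TPR
```

Values near zero indicate parity on that metric; the checklist pairs such measurements with validation across pre-processing, in-processing, and post-processing rather than treating any single number as a verdict.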

It also offers guidance on structuring audit reports: internal process reports with mitigation suggestions, public accountability reports, and periodic follow-ups. The goal is to shift AI auditing from checkbox compliance to a participatory, iterative process of system accountability.

Why It Matters

This document goes beyond theory—it gives auditors a grounded, practical way to assess real AI systems. By framing audits as dynamic, socio-technical processes, it helps regulators, developers, and procurement teams build traceable, bias-aware, and rights-respecting AI. It’s a rare tool that blends technical and human-centred inspection.

What’s Missing?

While deeply comprehensive, the checklist assumes considerable institutional capacity and auditing experience. There are few simplified templates or examples for smaller teams or less-resourced organizations. Some legal references lean heavily on the GDPR and the EU AI Act, limiting usability in non-EU settings. The optional adversarial audit section is powerful but only lightly sketched; more step-by-step support or risk modelling would make it easier to implement.

Best For:

This checklist is ideal for supervisory authorities, procurement officials, AI ethics teams, and auditors conducting compliance reviews in high-risk AI contexts. It’s especially relevant for EU regulators and data protection officers overseeing complex ML systems.

Source Details:

AI Auditing Checklist for AI Auditing by Dr. Gemma Galdon-Clavell (2023), published as part of the EDPB’s Support Pool of Experts (SPE) programme. Dr. Galdon-Clavell is a leading figure in algorithmic auditing and the founder of Eticas Foundation, which specializes in AI impact assessment, auditing, and governance. The document reflects her hands-on experience with public sector and corporate audits across Europe. Though commissioned by the EDPB, the report represents the author’s views and does not constitute an official position of the EDPB.

About the author
Jakub Szarmach
