AI Governance Library

AI Fairness in Practice: The Alan Turing Institute

AI Fairness in Practice is a workbook published by The Alan Turing Institute as part of its AI Ethics and Governance in Practice programme. It offers a practical, public sector-focused guide to identifying, mitigating, and managing bias across the AI development lifecycle.

What’s Covered?

This workbook breaks fairness down into six interlinked types: data, application, model design and development, metric-based, implementation, and ecosystem fairness. It introduces key fairness-related concepts, including legal obligations such as the Public Sector Equality Duty and UK data protection law. It defines fairness as a multivalent, contextual concept and provides hands-on strategies to assess and mitigate bias at every stage of an AI project.

The second half of the workbook moves from theory to practice, guiding users through templates for bias self-assessment and risk management plans. It ends with activities and case studies based on public sector use cases. These activities help teams practise recognising bias in data, defining fairness metrics, and redressing implementation issues.
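The workbook itself stays at the level of templates rather than code, but the kind of metric-based fairness check its activities ask teams to define can be illustrated with a minimal sketch. The function name, data, and use of demographic parity here are illustrative assumptions, not something the workbook prescribes:

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Largest gap in positive-outcome rates between groups.

    outcomes: list of 0/1 model decisions
    groups: list of group labels (same length), e.g. a protected attribute
    Returns (gap, per-group rates).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: group "a" is approved 3/4 of the time, group "b" 1/4.
gap, rates = demographic_parity_gap(
    [1, 0, 1, 1, 0, 0, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
# gap is 0.5; rates are {"a": 0.75, "b": 0.25}
```

Which metric to use, and what gap counts as acceptable, is exactly the contextual judgement the workbook asks teams to document in a fairness position statement rather than assume.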

Content Overview:

– Part 1: Introduction to Fairness

• Legal context (e.g., Equality Act 2010)

• Discriminatory non-harm principle

• Types of fairness (data, application, model, metric-based, implementation, ecosystem)

• Technical areas: preprocessing, model training/testing, metric selection

– Part 2: Putting Fairness into Practice

• Bias self-assessment templates

• Fairness position statements

• Activities for teams: bias reports, fairness metric definition, system bias redress

– Appendix A: Fairness techniques across the AI/ML lifecycle

– Appendix B: Taxonomy of biases (confirmation, population, de-agentification, etc.)

– Bibliography and suggested readings

Why It Matters?

This workbook helps public sector teams turn fairness from a principle into a process. Rather than treating fairness as a fixed goal, it frames fairness as something to be worked through at every decision point. It supports transparency, reduces harm to protected groups, and reinforces institutional accountability during AI development and deployment.

What’s Missing?

While the workbook is rich in practical templates and legal grounding, it doesn’t provide concrete examples from actual UK public sector deployments. Case studies are fictionalised or anonymised. It also lacks detailed guidance for cross-sector applications, non-UK legal frameworks, or how to weigh conflicting fairness metrics in high-stakes decisions.

Best For:

This workbook will be most useful for public sector project teams, civil servants working on AI implementation, policy advisors, AI ethics officers, or legal and compliance staff in government bodies. It’s also a great teaching tool for workshop facilitators or ethics champions.

Source Details:

Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A., Bennett, S.J., Burr, C., Aitken, M., Katell, M., Fischer, C., Wong, J., & Kherroubi Garcia, I. (2023). AI Fairness in Practice. The Alan Turing Institute.

David Leslie is Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and author of key reports on AI and human rights. Cami Rincón is a policy researcher focused on AI fairness. Morgan Briggs and Antonella Perini work on public sector AI capacity-building. Smera Jayadeva and Ann Borda specialise in equality and justice. Other contributors bring interdisciplinary expertise in law, data science, human-computer interaction, and governance. The group brings together academic research and policy implementation within UK government contexts.

About the author
Jakub Szarmach
