What’s Covered?
This workbook is part of The Alan Turing Institute’s eight-part AI Ethics and Governance in Practice series. Focused on AI safety, it’s built for civil servants working with or around AI systems, especially those tasked with improving oversight, transparency, and trust. The workbook comes in two versions: one for facilitators (with guidance on leading the workshops) and one for participants (for engagement before and during the group exercises).
The content is split into two main sections:
1. Key Concepts – introduces the core objectives of AI safety with practical definitions, risk assessments, and scenario-based learning. These include:
- Performance: Does the system do what it claims to do, consistently and accurately?
- Reliability: How well does it perform over time or under varying conditions?
- Security: Is it protected against attacks or misuse?
- Robustness: How does it handle edge cases, unexpected data, or failures? (A toy probe of this idea follows the list.)
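
To make the robustness objective concrete, here is a toy probe that measures how often a model’s predictions flip under small input perturbations. This is entirely our own sketch, not the workbook’s; the scikit-learn model and synthetic data are stand-in assumptions:

```python
# Toy robustness probe: how often does a model's prediction flip under
# small input perturbations? Entirely illustrative; the workbook treats
# robustness through scenarios, not code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; a real team would use held-out, production-like inputs.
X, y = make_classification(n_samples=500, n_features=8, random_state=1)
model = LogisticRegression().fit(X, y)

rng = np.random.default_rng(1)
base = model.predict(X)
flipped = np.zeros(len(X), dtype=bool)
for _ in range(20):  # repeat with fresh noise to probe stability
    noisy = X + rng.normal(scale=0.05, size=X.shape)
    flipped |= model.predict(noisy) != base

print(f"Inputs with unstable predictions: {flipped.mean():.1%}")
```

A high flip rate under realistic perturbations would be a signal to harden the model or constrain how it is deployed.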
Each objective is tied to practical risks, with supporting concepts such as ROC curves and performance metrics introduced along the way. The section encourages teams to think reflexively about safety not just as a goal but as an ongoing responsibility.
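For readers unfamiliar with the metric, here is a minimal ROC/AUC sketch. The workbook itself stays conceptual on this point; the library choice, synthetic data, and threshold discussion below are our assumptions:

```python
# Minimal ROC/AUC sketch with synthetic data; library choice, data, and
# threshold discussion are our assumptions, not the workbook's.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of the positive class

# The ROC curve traces true-positive rate against false-positive rate
# across every possible decision threshold.
fpr, tpr, thresholds = roc_curve(y_test, scores)
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")

# A deployment team would then pick an operating threshold by weighing
# the cost of false positives against false negatives for their service.
```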
2. Activities – hands-on, group-based exercises that bring safety concepts to life through collaborative analysis, vocabulary-building, and structured risk reflection. Scenarios are inspired by real public sector challenges but generalized enough to allow flexible application.
The workbook also features a structured Safety Self-Assessment and Risk Management Template to help teams map out safety considerations throughout project phases—from design to deployment.
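The template’s exact fields aren’t reproduced in this review. As a loose illustration of how a team might encode such a phase-by-phase register in code, here is a hypothetical Python structure; all field names and phases are our assumptions:

```python
# Hypothetical encoding of a phase-by-phase safety risk register. All
# field names and phases are illustrative; the workbook's actual Safety
# Self-Assessment and Risk Management Template may differ.
from dataclasses import dataclass

PHASES = ("design", "development", "testing", "deployment", "monitoring")

@dataclass
class RiskEntry:
    objective: str    # e.g. "performance", "reliability", "security", "robustness"
    phase: str        # one of PHASES
    description: str  # the concrete risk being tracked
    mitigation: str   # planned control or safeguard
    owner: str        # who is accountable for follow-up

register = [
    RiskEntry(
        objective="robustness",
        phase="testing",
        description="Model degrades on out-of-distribution inputs",
        mitigation="Add an edge-case test suite; monitor drift after release",
        owner="Data science lead",
    ),
]

# Filtering by phase keeps safety considerations visible at each stage.
deployment_risks = [r for r in register if r.phase == "deployment"]
print(f"Open deployment-phase risks: {len(deployment_risks)}")
```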
💡 Why It Matters
AI safety is often treated as a backend technical concern, but this workbook flips the script: safety belongs front and center, embedded in the planning, design, testing, and monitoring phases. Public sector projects are especially high-stakes, often affecting healthcare, transportation, or social services. This resource gives government teams a concrete way to manage that risk responsibly while building trust with the public.
What’s Missing?
The workbook is technically strong but doesn’t engage much with organizational challenges—like culture, procurement pressures, or policy inertia—that often stand in the way of good safety practices. It could benefit from more cross-disciplinary case studies involving legal or operational teams, especially in sectors like law enforcement or education. Also, while it hints at tools like ROC curves, there’s minimal math or code—practitioners with data science backgrounds might want a deeper technical supplement.
Best For:
Great for AI leads, product managers, and civil servants working on or reviewing algorithmic projects in public agencies. It’s especially useful for those running or participating in ethics workshops. While designed for the public sector, consultants and researchers looking to support AI governance efforts will find it valuable too.
Source Details:
AI Safety in Practice (2024) was produced by a multidisciplinary team at The Alan Turing Institute, led by David Leslie with co-authors including Cami Rincón, Christopher Burr, and Claudia Fischer. The workbook reflects research from the Institute’s Public Policy Programme and is part of a broader EPSRC-funded initiative under the UKRI Strategic Priorities Fund. The content was peer-reviewed and field-tested with input from health and government bodies including NICE and the Scottish Digital Office. The authors bring experience from philosophy, data science, public ethics, and technical safety, anchoring the resource in both academic theory and institutional practice.