What’s Covered?
“Key Terms for AI Governance” provides a dense yet digestible glossary that defines foundational concepts in AI ethics, law, and technical development. Originally published in June 2023 and updated in July 2024, the glossary now includes new entries and refinements based on community feedback and continued work by AI governance experts.
The document covers terms alphabetically, ranging from technical AI concepts like “neural networks,” “transformer models,” and “diffusion models,” to legal and policy terms like “contestability,” “AI assurance,” and “impact assessment.” Importantly, it addresses both high-level governance principles and deeper technical mechanisms, making it a rare bridge between policy language and technical vocabulary.
Among the highlights:
- Definitions of core AI capabilities: machine learning, generative AI, large language models, reinforcement learning from human feedback
- Governance-focused terms: accountability, explainability, transparency, trustworthy AI, bias, fairness, contestability
- System-level processes: AI audit, conformity assessment, impact assessment, oversight
- Data-centric entries: data quality, data provenance, data poisoning, synthetic data
- Security-related risks: adversarial attack, hallucinations, deepfakes, red teaming
There’s also clear attention to the intersection of policy and implementation, with definitions for practices like human-in-the-loop, post-processing, and prompt engineering—terms that are now essential vocabulary for both developers and regulators.
The glossary doesn’t shy away from explaining complex tradeoffs, like those involving accuracy, interpretability, and robustness, while still remaining accessible to those without a technical background. It presents these concepts not as static definitions but as the foundation for shared conversations around oversight, risk management, and ethical design.
💡 Why It Matters?
Clear language is the foundation of clear rules. With AI governance moving fast across jurisdictions, everyone—from developers to regulators—needs a shared vocabulary to avoid miscommunication. This glossary doesn’t just define terms; it sets a baseline for building policy, compliance programs, and technical safeguards that actually make sense across sectors.
What’s Missing?
This is a definitions-only resource, so it avoids opinionated context and case studies. There are no references to jurisdiction-specific legal interpretations (such as the GDPR, the EU AI Act, or the NIST AI RMF), nor any insight into how definitions might vary across regulatory frameworks. It also lacks visual structure, such as clustering related definitions around trust or safety, which could help users see relationships between terms. Readers looking for guidance on applying these terms in specific compliance scenarios may find the document too high-level on its own.
Best For:
Policy teams, AI ethics researchers, in-house legal and compliance professionals, and tech teams working on risk mitigation. It’s especially useful for multi-disciplinary meetings where legal, technical, and policy teams need to get on the same page fast.
Source Details:
This glossary is published by the International Association of Privacy Professionals (IAPP), a leading global organization for privacy and data protection. It draws from the IAPP’s broader AI Governance Center and is informed by technical and legal experts in AI regulation. The July 2024 version builds on the initial June 2023 release, updating definitions and adding new entries based on feedback and evolving use cases. While the IAPP is privacy-focused, this glossary reflects a deliberate expansion into AI governance, bridging law, ethics, and technical design in a single resource.