
AI Governance and Ethics – Generative AI: Expanded Guide
This expanded guide builds on the 2024 ASEAN AI Governance and Ethics framework, zooming in on the specific risks and policy needs surrounding generative AI (Gen AI).
This guide breaks down how internal auditors should prepare for the EU AI Act, which came into force in August 2024.
AI Tools in Society – Impacts on Cognitive Offloading and the Future of Critical Thinking
This report by CDT’s AI Governance Lab maps out the wide array of methods used to evaluate AI systems—from impact assessments and audits to red-teaming and formal assurance.
This CSET report unpacks how China is hedging its bets on general artificial intelligence (GAI) by pursuing a mix of technical strategies—unlike the West’s heavy focus on large language models (LLMs).
Getting international data transfers right is one of the toughest parts of GDPR compliance. This practical guide from CNIL lays out how to run a Transfer Impact Assessment (TIA) without guesswork.
Regulatory sandboxes aren’t just buzzwords—they’re fast becoming one of the EU’s go-to tools for managing fast-moving AI and cybersecurity risks. This white paper brings together legal, technical, and policy perspectives to offer a grounded roadmap for building and using sandboxes the right way.
This “living repository” shows how companies are starting to take AI literacy seriously—with real, varied, and often creative approaches to staff training.
Frontier AI is powerful—but how powerful is too powerful? This Berkeley-led paper proposes a framework for defining and managing intolerable risks, pushing governments and industry to stop waiting for disaster and start drawing lines. It’s a toolkit for acting before things go wrong.
This 2025 paper by Iren, Noldus, and Brouwer offers a much-needed guide to how the EU’s AI Act and the Commission’s new guidelines apply to the emotion recognition field—one of the most contentious areas of affective computing.
This white paper from the UAE’s AI Office captures a rare, high-level dialogue on responsible AI, convened at the World Governments Summit.
This landmark report brings together 96 global experts to create the first shared scientific baseline on general-purpose AI risks and safety. It doesn’t recommend policies—it equips governments, researchers, and regulators with what’s known (and what’s not) so far.
This U.S. Copyright Office report maps out the toughest economic questions about AI and copyright, without pretending to have the answers.
Accountability starts with visibility—especially when AI is doing the work.
The Artificial Intelligence Playbook for the UK Government (Feb 2025), created by the Government Digital Service, is the UK’s most comprehensive public guidance for safely deploying AI across government bodies.
The OWASP Top 10 LLM AI Cybersecurity & Governance Checklist (v1.1, April 2024) is a practical guide for organizations planning to use large language models.
Innovation’s racing ahead. Responsibility’s limping behind. And leadership? It’s stuck in the middle.
This isn’t just another AI hype deck. It’s a grounded framework to help real businesses figure out where to start, what matters, and what to watch out for.
Agentic AI, powered by LLMs, brings new risks that outpace traditional app security models. This guide is a much-needed attempt to slow things down and make sense of what we’re dealing with.
A detailed 5-step framework for evaluating technical safeguards against misuse of advanced AI systems. It calls for clear safeguard requirements, a documented plan, evidence gathering, ongoing assessment, and explicit justification of sufficiency.
This report, published by the Paris Peace Forum’s Strategic Foresight Hub, proposes cyber policy as a blueprint for global AI risk governance. It focuses on adversarial use of AI in cyberspace, offering a scalable model for global coordination and institutional response.
Public authorities can use generative AI responsibly if they follow GDPR. That’s the message behind this clear and practical guide from Sweden’s data protection authority.
This practical guide by Rhymetec walks SaaS and tech companies through ISO 42001—the first international standard focused on AI management systems.
This OECD report proposes a unified framework for AI incident reporting—offering policymakers a timely and globally adaptable tool to track, assess, and learn from harms linked to AI.
Curated Library of AI Governance Resources