AI Governance Library Newsletter #9:
✒️ Foreword: The US isn't just falling behind in AI, but it certainly is vacating the seat at the governance table—and
An interdisciplinary legal-philosophical textbook covering AI’s ethical foundations, regulatory implications, and governance challenges through technical, philosophical, and legal lenses.
A voluntary guide tailored to New Zealand businesses, showing how existing governance, legal, and operational practices can anchor AI risk management—without requiring a ground-up rebuild.
Organizations with a Chief AI Officer (CAIO) see 10% higher ROI on AI—and up to 36% more when using centralized operating models. Yet only 1 in 4 organizations has a CAIO today.
A detailed, scored assessment of how seven frontier AI developers approach safety—tracking real practices, not just promises, across 33 indicators in six domains.
Security-by-default for AI: practical, lifecycle-wide guidance to help providers build and operate AI systems that resist misuse, protect data, and remain reliable—even under attack.
The AGILE Index ranks countries on their AI governance readiness—not by intent, but by measurable capability, institutional maturity, and actual regulatory power.
A foundational governance model for AI systems that act autonomously, delegate tasks, or interface with external tools—built to handle autonomy, unpredictability, and systemic risk.
A protocol to connect AI models with external tools and data sources through a shared interface, solving the M×N integration problem for developers.
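The M×N problem the blurb refers to is a simple counting argument: wiring every model directly to every tool requires one integration per pair, while a shared protocol lets each side implement a single adapter. A minimal sketch (the counts are hypothetical, purely for illustration):

```python
# Illustrative only: why a shared protocol shrinks integration work.
# The counts below are made-up examples, not figures from the protocol's docs.
models = 5   # AI models/clients needing tool access
tools = 8    # external tools and data sources

# Direct wiring: every model needs a bespoke connector to every tool.
direct_integrations = models * tools        # M x N = 40

# Shared protocol: each model and each tool implements the interface once.
protocol_integrations = models + tools      # M + N = 13

print(direct_integrations, protocol_integrations)  # 40 13
```

The gap widens quickly: doubling both sides quadruples the direct-wiring cost but only doubles the protocol cost.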
A modular toolkit for lawmakers, researchers, and advocates to shape effective, rights-respecting AI policy—built on global principles, practical levers, and tested language from 40+ jurisdictions.
A policy-first toolkit for identifying and mitigating overlooked or under-prioritized AI risks—especially those affecting marginalized groups, low-visibility use cases, and long-term governance gaps.
Culture is governance’s invisible backbone. This framework helps leaders identify, assess, and embed cultural levers—from values to incentives—that shape responsible AI behavior across organizations.
A lifecycle-driven framework offering 70+ mapped AI-specific risks, actionable safeguards, and crosswalks to ISO 42001, NIST AI RMF, OWASP Top 10 for LLMs, and more—aimed at embedding security into every phase of AI development and deployment.
A practical, product-integrated framework for managing AI risks across the ML lifecycle—rooted in Databricks’ tooling and aligned with enterprise data governance priorities.
A structured guide to identifying and mitigating privacy risks in LLMs—covering data leakage, user inference, training data exposure, and strategies for auditability and control.
A detailed legislative framework proposing oversight, obligations, and incentives for AI systems—covering everything from foundation models and open-source exemptions to risk-based licensing and whistleblower protection.
Scaling governance means more than more rules—it means better tools. This issue explores AI-assisted GRC, ethics-as-ROI, and governance for agents. Five standout reviews show where oversight is finally catching up with the systems we build.
A strategic proposal to professionalize AI risk governance.
The AI Assessment Framework is a practical tool for assessing the ethical and risk considerations of AI systems used by the NSW Government.
Customers acknowledge the need to secure AI systems but simply do not know how.
Organizations that measure the value of AI ethics could be a step ahead. Our holistic AI ethics framework considers three types of ROI.
Quickly gauge your organization’s current maturity across AI discovery, risk management, and compliance.
Without a shared understanding of how bias enters and operates in AI systems, law enforcement agencies risk embedding discrimination into everyday operations.
Frontline users will need a high degree of discretion over how they use AI assistants. But this must be matched with rigorous oversight and clear internal boundaries.
Curated Library of AI Governance Resources