MCP 2025 Edition: The Illustrated Guidebook
A protocol to connect AI models with external tools and data sources through a shared interface, solving the M×N integration problem for developers.
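The M×N problem mentioned above can be sketched in a few lines: with M model clients and N tools, bespoke integrations grow as M×N, while a single shared interface reduces this to M+N (each side implements the interface once). A minimal, hypothetical Python sketch follows; the class and function names are illustrative and are not MCP's actual API:

```python
from typing import Protocol

class Tool(Protocol):
    """Shared interface: every tool exposes one `call` method."""
    def call(self, args: dict) -> str: ...

class WeatherTool:
    # One tool-side implementation of the shared interface.
    def call(self, args: dict) -> str:
        return f"weather in {args['city']}: sunny"

class SearchTool:
    # A second tool; no model needs a bespoke adapter for it.
    def call(self, args: dict) -> str:
        return f"results for {args['query']}"

def model_invoke(tool: Tool, args: dict) -> str:
    # Any model-side client can drive any tool through the one interface,
    # so adding a new model or tool adds one implementation, not M or N.
    return tool.call(args)

print(model_invoke(WeatherTool(), {"city": "Oslo"}))
```

The point of the sketch is the shape of the contract, not the payloads: MCP plays the role of `Tool` here, standardizing how clients and servers exchange tool calls and data.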
A modular toolkit for lawmakers, researchers, and advocates to shape effective, rights-respecting AI policy—built on global principles, practical levers, and tested language from 40+ jurisdictions.
A policy-first toolkit for identifying and mitigating overlooked or under-prioritized AI risks—especially those affecting marginalized groups, low-visibility use cases, and long-term governance gaps.
Culture is governance’s invisible backbone. This framework helps leaders identify, assess, and embed cultural levers—from values to incentives—that shape responsible AI behavior across organizations.
A lifecycle-driven framework offering 70+ mapped AI-specific risks, actionable safeguards, and crosswalks to ISO 42001, NIST AI RMF, OWASP Top 10 for LLMs, and more, aimed at embedding security into every phase of AI development and deployment.
A practical, product-integrated framework for managing AI risks across the ML lifecycle—rooted in Databricks’ tooling and aligned with enterprise data governance priorities.
A structured guide to identifying and mitigating privacy risks in LLMs—covering data leakage, user inference, training data exposure, and strategies for auditability and control.
A detailed legislative framework proposing oversight, obligations, and incentives for AI systems—covering everything from foundation models and open-source exemptions to risk-based licensing and whistleblower protection.
Scaling governance means more than more rules—it means better tools. This issue explores AI-assisted GRC, ethics-as-ROI, and governance for agents. Five standout reviews show where oversight is finally catching up with the systems we build.
A strategic proposal to professionalize AI risk governance.
The AI Assessment Framework is a practical tool for assessing the ethical and risk considerations of AI systems used by the NSW Government.
Customers acknowledge the need to secure AI systems but simply do not know how.
Organizations that measure the value of AI ethics could be a step ahead. Our holistic AI ethics framework considers three types of ROI.
Quickly gauge your organization’s current maturity across AI discovery, risk management, and compliance.
Without a shared understanding of how bias enters and operates in AI systems, law enforcement agencies risk embedding discrimination into everyday operations.
Frontline users will need a high degree of discretion over how they use AI assistants. But this must be matched with rigorous oversight and clear internal boundaries.
AIME distils key principles from existing AI regulations, standards and frameworks to provide an accessible starting point for organisations to assess and improve their AI management systems.
The Agentic Oversight Framework ensures agents are contained and embedded into a secure environment that meets institutional requirements for data handling, oversight, and auditability.
Governance is not compliance. It is about enabling organizational culture, practice, and accountability to ensure that the values and rules enshrined in the AI Act are meaningfully realized.
Risk tiers clarify the harms AI might present and identify the measures being taken to prevent them.
AI Governance by Design (AIGD) integrates ethical, legal, and societal considerations directly into AI system development from inception.
Ethical AI isn’t a cost—it’s a sophisticated financial risk management and revenue generation strategy with measurable, substantial economic returns.
We are releasing our Impact Assessment Template externally to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
Curated Library of AI Governance Resources