
Curated Library of AI Governance Resources

Business applications of Artificial Intelligence: A framework to categorise AI use cases
This isn’t just another AI hype deck. It’s a grounded framework to help real businesses figure out where to start, what matters, and what to watch out for.
Agentic AI, powered by LLMs, brings new risks that outpace traditional app security models. This guide is a much-needed attempt to slow things down and make sense of what we’re dealing with.
A detailed 5-step framework for evaluating technical safeguards against misuse of advanced AI systems. It calls for clear safeguard requirements, a documented plan, evidence gathering, ongoing assessment, and explicit justification of sufficiency.
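To make the five steps concrete, here is a minimal sketch of how a team might track them as a structured record. The field names are our own shorthand, not the report's; it defines the steps in prose, not as a schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only; field names are assumptions, not the report's terms.
@dataclass
class SafeguardAssessment:
    requirements: list[str]                             # Step 1: clear safeguard requirements
    plan: str                                           # Step 2: documented safeguards plan
    evidence: list[str] = field(default_factory=list)   # Step 3: gathered evidence
    reassessment_schedule: str = ""                     # Step 4: ongoing assessment
    sufficiency_rationale: str = ""                     # Step 5: explicit justification

    def is_complete(self) -> bool:
        """True once every step has at least some content recorded."""
        return all([self.requirements, self.plan, self.evidence,
                    self.reassessment_schedule, self.sufficiency_rationale])
```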
This report, published by the Paris Peace Forum’s Strategic Foresight Hub, proposes cyber policy as a blueprint for global AI risk governance. It focuses on adversarial use of AI in cyberspace, offering a scalable model for global coordination and institutional response.
Public authorities can use generative AI responsibly if they follow GDPR. That’s the message behind this clear and practical guide from Sweden’s data protection authority.
This practical guide by Rhymetec walks SaaS and tech companies through ISO 42001—the first international standard focused on AI management systems.
This OECD report proposes a unified framework for AI incident reporting—offering policymakers a timely and globally adaptable tool to track, assess, and learn from harms linked to AI.
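As a rough illustration of what a unified incident report could capture, here is a minimal sketch; the fields are our assumptions, not the OECD's actual reporting template.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical fields for illustration; not the OECD's schema.
@dataclass
class AIIncidentReport:
    incident_id: str
    occurred_on: date
    system_description: str   # which AI system was involved
    harm_type: str            # e.g. physical, financial, rights-related
    severity: str             # e.g. "near miss" vs. "realised harm"
    reporter_role: str        # deployer, user, regulator, ...

report = AIIncidentReport(
    incident_id="2025-0042",
    occurred_on=date(2025, 3, 14),
    system_description="credit-scoring model v2",
    harm_type="financial",
    severity="realised harm",
    reporter_role="deployer",
)
```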
This technical report by the Cooperative AI Foundation offers a comprehensive early attempt to map the risks that emerge when multiple advanced AI agents interact, adapt, and evolve together.
Lean meets Data & Generative AI
This resource takes a close look at one of the most cited — and least consistently defined — goals in responsible AI: explainability.
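To show how loosely the term gets used in practice, here is one narrow, concrete operationalisation (global feature importances via scikit-learn). This is just one of many competing notions; the resource's point is that no single reading like this is settled.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# One narrow reading of "explainability": global feature importances.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
for i, importance in enumerate(model.feature_importances_):
    print(f"feature_{i}: {importance:.3f}")
```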
This resource breaks new ground by tackling a blind spot in model lifecycle management: the phenomenon of “AI aging.” The authors propose that temporal degradation is distinct from known issues like concept drift.
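A minimal sketch of the kind of check the idea suggests: evaluate a frozen model on successive time windows and flag a sustained downward trend. The thresholds and data here are hypothetical, not the authors'.

```python
import numpy as np

def accuracy_trend(monthly_accuracy: list[float]) -> float:
    """Least-squares slope of accuracy over months; a persistently
    negative slope hints at temporal degradation rather than a
    one-off concept-drift event."""
    months = np.arange(len(monthly_accuracy))
    slope, _ = np.polyfit(months, monthly_accuracy, deg=1)
    return slope

# Hypothetical monitoring data for a frozen model.
accuracies = [0.91, 0.90, 0.89, 0.87, 0.86, 0.84]
print(f"Accuracy slope per month: {accuracy_trend(accuracies):+.3f}")
```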
This is the book you’d hand to someone serious about understanding AI risk but unsure where to start. With clarity and precision, it lays out how AI could cause major harm—through misalignment, misuse, or sheer scale—and what we can do about it.
Aimed at helping technical and policy audiences evaluate privacy guarantees in practice, NIST SP 800-226 offers tools to reason about parameters, algorithms, and trust assumptions in differentially private systems.
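For readers who want to see what "reasoning about parameters" means concretely, here is a standard Laplace-mechanism sketch (a textbook construction, not code from the NIST publication); epsilon is the privacy-loss parameter the document helps you interpret.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.
    Smaller epsilon means a stronger privacy guarantee and a noisier output."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1; release it at epsilon = 0.5.
print(round(laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)))
```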
This report lays out a practical framework for evaluating US open-source AI policy through both ideological and geopolitical lenses. It avoids hype and polarization.
This research offers a crisp, nuanced breakdown of what Article 14 AI Act really demands from human oversight—moving beyond vague calls for “humans in the loop.” It highlights the challenges of effectiveness, the shared roles of providers and deployers, and why human oversight is no silver bullet.
These model clauses aim to operationalize the EU AI Act for public sector AI procurement. They provide contracting authorities with a pre-structured set of legal and technical expectations covering the lifecycle of high-risk AI systems.
A practical framework by Credo AI that helps enterprises filter and evaluate foundation models using context-specific trust scores. It introduces “Model Trust Scores” to guide business-informed decisions about AI adoption across capabilities, safety, cost, and latency.
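Credo AI's exact scoring method isn't reproduced here; as a sketch of the general idea, a context-specific score might weight normalised per-dimension metrics differently per use case.

```python
# Hypothetical aggregation; Credo AI's actual Model Trust Score
# methodology may differ substantially.
def trust_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalised (0-1) per-dimension metrics."""
    return sum(metrics[k] * w for k, w in weights.items()) / sum(weights.values())

candidate = {"capability": 0.82, "safety": 0.74, "cost": 0.60, "latency": 0.90}
# A latency-sensitive support chatbot might weight the dimensions like this:
weights = {"capability": 0.3, "safety": 0.3, "cost": 0.1, "latency": 0.3}
print(f"Trust score: {trust_score(candidate, weights):.2f}")
```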
This policy brief outlines a structured approach for collaboration between the EU AI Office and the UK AI Safety Institute (AISI), proposing a practical framework based on four types of institutional engagement: collaboration, coordination, communication, and separation.
India’s new AI Competency Framework equips public sector leaders with the behavioural, functional, and domain-specific skills to responsibly integrate AI in governance. Anchored in the IndiaAI Mission, it marks a major step toward building ethical, capable leadership for AI-driven transformation.
The Top 10 Operational Impacts of the EU AI Act article series by IAPP breaks down the legal and practical implications of the world’s first comprehensive AI regulation. It translates complex provisions into actionable advice for providers, deployers, and regulators navigating the new rules.
This 2025 report from the European Commission’s Joint Research Centre shows that human oversight isn’t a silver bullet against discrimination in AI-aided decisions. In hiring and lending experiments, human biases often reinforced rather than corrected discriminatory outcomes, revealing serious gaps in current oversight assumptions.
Key Terms for AI Governance
UNIDIR’s “Governance of Artificial Intelligence in the Military Domain” policy brief identifies six priority areas for responsible AI use in defence, rooted in multi-stakeholder input. It supports global cooperation efforts ahead of REAIM 2024.
“AI Safety in Practice” equips teams with the concepts, tools, and workshop activities needed to build safer AI systems. It breaks down technical safety into four practical objectives and shows how to embed them across the AI lifecycle.