
MIT AI Risk Analysis Framework (AI-RAF)

A tool built for clarity, not complexity. The AI Risk Analysis Framework from MIT offers a structured, policy-relevant approach to thinking about AI risks. It’s designed for teams who need to assess potential harms without getting buried in technical noise.

👉 Try it here: https://airisk.mit.edu

If you’ve ever struggled to explain what AI risk really means to non-technical stakeholders, this is for you.

The MIT AI Risk Analysis Framework (AI-RAF) breaks the broad concept of AI risk into manageable parts. It helps users map potential harms along four dimensions: impact, likelihood, systemic interactions, and uncertainty. The tool is simple to explore, with examples and filters for different sectors and use cases.

The framework encourages users to look beyond surface-level risk and consider the broader systems AI interacts with. It doesn’t give you scores or outputs — instead, it gives you a structure for thinking.
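
To make that concrete, here is a minimal sketch in Python of what a risk entry structured along those four dimensions might look like. The class and field names are hypothetical illustrations, not part of AI-RAF itself; note that every field holds a qualitative judgment rather than a score, matching the framework's emphasis on structured thinking over numeric outputs.

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """One AI risk, mapped along the four dimensions the framework asks about.

    All fields are qualitative: the point is to structure discussion,
    not to produce a score. (Illustrative only; not an official AI-RAF schema.)
    """
    harm: str                   # what could go wrong, in plain language
    impact: str                 # who is affected, and how severely
    likelihood: str             # how plausible, and under what conditions
    systemic_interactions: str  # which wider systems the AI touches
    uncertainty: str            # what we do not know, and why

    def discussion_prompt(self) -> str:
        # Render the entry as talking points for a governance review.
        labels = [
            ("Harm", self.harm),
            ("Impact", self.impact),
            ("Likelihood", self.likelihood),
            ("Systemic interactions", self.systemic_interactions),
            ("Uncertainty", self.uncertainty),
        ]
        return "\n".join(f"{label}: {value}" for label, value in labels)


# Hypothetical example entry for a customer-facing chatbot.
entry = RiskEntry(
    harm="Chatbot gives misleading medical advice",
    impact="Patients may delay care; severity varies by condition",
    likelihood="Plausible for ambiguous queries; unmeasured in deployment",
    systemic_interactions="Feeds into triage workflows and search referrals",
    uncertainty="No post-deployment monitoring data yet",
)
print(entry.discussion_prompt())
```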

💡 Why it matters

AI risk discussions often get stuck between vague fears and narrow technical metrics. This tool creates space for smarter conversations.

The AI-RAF brings three key strengths to the table:

  • It reframes risk as a policy challenge, not just a technical glitch.
  • It surfaces epistemic uncertainty — something missing from most compliance tools.
  • It builds habits of contextual thinking, which is crucial for organizations deploying AI at scale.

This is especially relevant now, as the EU AI Act and other frameworks ask for risk assessments without specifying how to structure them.

🏷️ Who should read this?

If you’re responsible for AI oversight — internally or externally — this framework will help you build your case, map concerns, or pressure-test assumptions.

This is a solid reference for:

  • Policy units drafting AI legislation or internal standards.
  • Governance and ethics teams inside tech companies.
  • Auditors and assessors working on AI impact reviews.
  • Advocacy groups engaging with regulators or industry actors.

It’s also a helpful teaching tool in governance workshops and academic settings.

🔗 How it fits into the bigger picture

Risk is easy to name, harder to define. This framework gives structure without locking you into a checklist.

Compared with other tools (like NIST's AI Risk Management Framework or the OECD's risk typologies), this one stays flexible. It's a scaffold, not a solution. But in governance work, especially in its early stages, structure is half the battle.

The AI-RAF fits well into conversations about accountability, AI audits, and AI impact assessments. It doesn’t answer every question — it helps you ask better ones.

👥 Key contributors

This project brings together researchers and policy thinkers across MIT, the University of Queensland, and key collaborators in the AI governance space. The core team and contributors reflect a strong mix of policy insight, risk analysis, and technical understanding.

Project Team

  • Neil Thompson – Director, MIT FutureTech & Project Supervisor
  • Alexander Saeri – Project Director, MIT FutureTech & The University of Queensland
  • Peter Slattery – Engagement Lead, MIT FutureTech
  • Michael Noetel – Research Methods Specialist, The University of Queensland
  • Jess Graham – Research Officer, The University of Queensland

Collaborators & Co-authors

  • Emily Grundy – Co-author, AI Risk Repository, MIT FutureTech
  • James Dao – Co-author, AI Risk Repository, Harmony Intelligence
  • Jamie Bernardi – Project Lead, AI Incident Tracker, Institute for AI Policy and Strategy (IAPS)
  • Risto Uuk – Co-author, AI Risk Repository, Future of Life Institute & KU Leuven
  • Simon Mylius – Project Lead, AI Incident Tracker
  • Soroush Pour – Co-author, AI Risk Repository, Harmony Intelligence
  • Stephen Casper – Co-author, AI Risk Repository, MIT CSAIL

  • Daniel Huttenlocher – Dean, MIT Schwarzman College of Computing

About the author
Jakub Szarmach
