AI Governance Library

Welcome to the AIGL

This is issue #1 of the AIGL newsletter. In the first 24 hours, 382 of you subscribed. In this issue:

  • A personal note on why this project exists
  • A review of MIT’s AI Risk Framework, one of the cleanest tools we’ve seen so far
  • Two sharp links from inside China’s evolving AI landscape
I didn’t expect this.

A week ago, I texted a friend: "I have this project and I'm about to launch it. If 20 people follow, I'd say it's a huge success."

It's now been just over 24 hours.

382 of you signed up for this newsletter.

950+ people liked the launch post on LinkedIn.

Hundreds more commented, shared, or messaged me directly.

That’s not just support. That’s a signal.

There’s a real hunger out there for something cleaner, more serious, and actually useful in AI governance. And I feel the weight of that. Not as pressure, but as purpose.

AIGL isn’t just another link dump. It’s not a brand. It’s not a business. It’s a commitment to elevating the standard for discussing, evaluating, and applying AI governance resources.

Because let’s be honest:

We’re drowning in PDFs that look important but say nothing. We’re distracted by frameworks that exist just to sell courses. And we’re still citing guidelines from 2019 as if nothing has changed.

So here’s the deal: AIGL stands on three uncompromising values:

  1. Usefulness comes first. If a resource can’t be used in a real decision, policy draft, or audit, it doesn’t make the cut. Theory without traction is just noise.
  2. No hidden sales funnels. This isn’t a lead magnet. No upsells. No “free guides” that pitch a $3,000 workshop. If it’s in the library, it’s there on merit.
  3. No outdated material. AI governance is evolving at full speed. If a paper doesn’t reflect that, if it’s obsolete or overtaken by new standards, it’s gone.

This newsletter will honor those same principles. Every issue. Every word.

And to all 382 of you reading this right now:

🙏
Thank you.

Let’s raise the bar together—and hold the line.

Spotlight Review: MIT AI Risk Analysis Framework (AI-RAF)

💡
Each issue features one deep dive into a key AI governance resource. We break it down using the AIGL format so you can quickly see what’s covered, why it matters, and how (or if) you should use it. The reviewed resource is usually already part of the AIGL.

Quick Summary

A policy-ready tool from MIT that helps you assess AI risk without the complexity. AI-RAF offers a flexible, practical structure for mapping harms across sectors, impact levels, and uncertainty. It’s built for clarity—and for conversations that matter.

What’s Covered

The AI Risk Analysis Framework (AI-RAF), created by MIT FutureTech and collaborators, is designed for one thing: bringing structure to AI risk assessments without turning them into bureaucracy or buzzword bingo.

Instead of assigning scores or automating judgment, it provides an interactive scaffold for exploring four risk dimensions:

  • Impact
  • Likelihood
  • Systemic interactions
  • Epistemic uncertainty

What makes it stand out is how accessible it is.

You can filter by sector. Browse real-world examples. Move between granular harms and system-level thinking. No math degree or compliance training required.

Its real value? It reframes AI risk as a governance and policy challenge—not just a technical or statistical one. You can use it to map harms, pressure-test assumptions, or facilitate alignment across teams.

Whether you’re in a government office, a startup ethics board, or a university workshop—it speaks your language.

Key contributors come from MIT, University of Queensland, Harmony Intelligence, and the Future of Life Institute, with broad policy and technical insight woven in.

👉 Explore it here: airisk.mit.edu

Why It Matters

AI risk work often collapses under its own jargon. The AI-RAF brings breathing room. It creates a shared mental model, especially for teams that need to make decisions fast—without pretending every uncertainty can be quantified.

It’s especially timely now, as the EU AI Act and similar frameworks demand structured risk analysis—but leave “how” up to you.

What’s Missing

  • It doesn’t offer a scoring system or a predefined output. That’s intentional, but it may leave operational teams looking for tighter metrics.
  • There’s no built-in way to integrate with audits or traceability workflows.
  • Sectoral filters are helpful but still quite broad—customization may be needed for niche contexts.

Best For

  • Policy teams drafting AI regulations
  • Internal AI governance units at tech companies
  • Risk and compliance professionals in early-stage risk mapping
  • Educators designing practical AI ethics sessions
  • Advocacy orgs seeking to frame risks in policy conversations

Source Details

MIT AI Risk Analysis Framework (AI-RAF)

Developed by MIT FutureTech & collaborators

Authors include: Peter Slattery, Neil Thompson, Alexander Saeri, Emily Grundy, Risto Uuk, and others

URL: https://airisk.mit.edu

After Hours

💡
Curious links, quiet breakthroughs, or bold moves I noticed this week. After Hours is where I share what caught my eye—outside the main spotlight but still worth your attention.

🔗 China’s 2025 Tech & Digital Economy Forecast

chozan.co/china-economic-mega-report-2025-tech-digital

An ambitious deep dive from ChoZan into China’s evolving digital priorities.

Highlights include:

  • A 2025 vision for “Digital China”
  • AI talent pipelines and government investment
  • Industrial internet, quantum tech, and digital sovereignty themes

More economic than governance-focused, but the strategic direction is loud and clear.

🔗 AI Safety in China #19

aisafetychina.substack.com/p/ai-safety-in-china-19

This latest Substack issue from AI Safety in China zooms in on:

  • Local efforts in model safety evaluation
  • Regional AI governance pilots
  • How domestic actors are interpreting “alignment” and “risk”

Less polished than a policy brief, but that’s the point. If you want signal through the noise, this is it.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources
