Third Draft of the General-Purpose AI Code of Practice

This third draft of the General-Purpose AI Code of Practice sets out voluntary commitments to help providers of general-purpose AI models, especially those whose models pose systemic risk, meet their legal obligations under the EU AI Act.

What’s Covered?

This draft aims to operationalize Chapter V of the AI Act, which introduces new rules for providers of general-purpose AI models (GPAIs), including those deemed to pose systemic risk. The Code does this through:

Four Working Groups with distinct scopes:

  • WG1 – Transparency & Copyright: The transparency measures apply to all GPAI providers except those releasing fully open-source models under Art. 53(2); the copyright measures apply to all.
  • WG2 – Risk Assessment for Systemic Risk: Required of providers whose models are classified as posing systemic risk under Art. 51.
  • WG3 – Technical Risk Mitigation: Focuses on robustness, red-teaming, and safety measures.
  • WG4 – Governance Risk Mitigation: Covers internal controls, oversight boards, and reporting duties.

Core structure of the Code includes:

  • A set of 2 baseline commitments that apply to all GPAI providers.
  • 16 additional commitments that apply only to providers of models designated GPAISRs (general-purpose AI models with systemic risk).
  • Detailed “Measures” linked to each commitment in annexed documentation (see the sketch after this list).
  • A preamble and drafting principles focused on values, risk proportionality, and alignment with EU law.
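
To make that structure concrete, here is a minimal sketch of how a provider might represent the Code’s commitment hierarchy internally. This is purely illustrative: the dataclass layout, field names, and commitment titles are assumptions, not the Code’s own schema; only the counts (2 baseline, 16 GPAISR-only) and the two-part numbering come from the draft.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A concrete implementing step attached to a commitment (illustrative)."""
    identifier: str
    description: str

@dataclass
class Commitment:
    """One commitment of the Code; field names here are placeholders."""
    identifier: str
    title: str
    applies_to_all_providers: bool  # False => applies only to GPAISR providers
    measures: list[Measure] = field(default_factory=list)

# Two baseline commitments that bind every GPAI provider ...
baseline = [
    Commitment("I.1", "Transparency", applies_to_all_providers=True),
    Commitment("I.2", "Copyright", applies_to_all_providers=True),
]

# ... plus 16 commitments that bind only providers of GPAISRs.
gpaisr_only = [
    Commitment(f"II.{n}", f"Safety & Security commitment {n}",
               applies_to_all_providers=False)
    for n in range(1, 17)
]

code_of_practice = baseline + gpaisr_only
assert len(code_of_practice) == 18
```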

The Code does not introduce KPIs but strengthens reporting duties. The drafters also introduce a “Model Documentation Form” and an Appendix for Safety & Security. Importantly, it doesn’t override the AI Act; rather, it aims to offer a structured, defensible way to show you’re on track.

The document expects that only a small number of models will fall into the systemic-risk category, and it explicitly reserves room for the AI Office to clarify when and how downstream actors who modify models (e.g., fine-tuners) become subject to obligations.

💡 Why It Matters

The AI Act mandates real obligations, but enforcement needs clear, actionable pathways. This Code is the scaffolding: it translates legal duties into something closer to checklists and benchmarks, bringing together best practices from regulators, model developers, and civil society into a single, unified framework.

For providers, it offers predictability: align with the Code and you’re better positioned to demonstrate compliance. For the AI Office and EU institutions, it creates a shared language that helps evaluate how systemic risk is being mitigated in real-world model development.

And for the wider AI ecosystem, it starts to shift the focus away from vague “responsibility” talk toward actual risk documentation, response frameworks, and infrastructure sharing. It’s not just about transparency; it’s about enforceability.

What’s Missing?

While solid, the draft still leaves some open questions:

  • KPIs deferred: While reporting obligations are strengthened, the lack of clear performance indicators may make external accountability harder, especially for civil society observers.
  • Ambiguity on systemic-risk classification: Beyond the Act’s compute-based presumption (sketched after this list), it’s unclear what process determines whether a model falls into the systemic-risk category. The AI Office is expected to issue clarifications, but providers currently face uncertainty.
  • Heavy on process, light on examples: There’s a lot of text about values and proportionality, but fewer concrete examples to guide smaller providers or researchers on how to interpret specific obligations.
  • Interoperability with non-EU codes: The draft hints at international coordination (per Art. 56), but doesn’t yet outline how the Code will interoperate with UK AISI protocols or US evaluation tools, which could be critical given transatlantic model development.
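
For context, the Act itself supplies one quantitative trigger for classification: under Art. 51(2), a model is presumed to have high-impact capabilities, and hence systemic risk, when its cumulative training compute exceeds 10^25 floating-point operations; the AI Office can also designate models on other grounds. A minimal sketch of that presumption, with illustrative function and constant names:

```python
# Art. 51(2) AI Act: a GPAI model is presumed to have high-impact
# capabilities when cumulative training compute exceeds 1e25 FLOP.
# Designation on qualitative grounds (Art. 51(1)(b)) is also possible
# and cannot be captured by a threshold check like this one.

ART_51_2_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOP

def presumed_systemic_risk(training_flop: float) -> bool:
    """Return True if a model crosses the Art. 51(2) compute presumption."""
    return training_flop > ART_51_2_FLOP_THRESHOLD

# Example: ~3e25 FLOP triggers the presumption; 5e24 FLOP does not.
assert presumed_systemic_risk(3e25)
assert not presumed_systemic_risk(5e24)
```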

Best For:

  • GPAI providers seeking a clear baseline for demonstrating compliance with Chapter V of the AI Act
  • Regulators working on downstream model governance or future harmonized standards
  • Civil society organisations monitoring model transparency, risk mitigation, and systemic harms
  • Policy teams and AI governance researchers seeking a replicable structure for codes of practice in other jurisdictions or sectors

Source Details:

Third Draft of the General-Purpose AI Code of Practice, published by the independent Chairs and Vice-Chairs of the four Working Groups under mandate from the European AI Office, March 2025.

Key authors include:

  • Yoshua Bengio – Global leader in AI safety and deep learning
  • Marietje Schaake – Former EU lawmaker, digital rights advocate
  • Markus Anderljung – Known for research on systemic risk and for bridging research and policy
  • Marta Ziosi – Specialist in international AI governance and standards
  • Alexander Peukert & Nuria Oliver – Experts in transparency, copyright, and data law

This draft represents one of the most ambitious attempts to date to bridge hard regulation, soft governance, and technical reality in foundation model development. While it’s not the final word, it’s a strong anchor for what compliance can look like in practice, particularly for frontier models.

About the author
Jakub Szarmach
