AI Governance Library

Harmonised Standards for the European AI Act

The AI Act sets out legal obligations for high-risk systems—but how do you show you’ve met them? This JRC report explains what European harmonised standards will need to deliver. It’s a must-read if you’re building for compliance in 2026 or advising on AI Act conformity.
What’s Covered?

This policy brief from the European Commission’s Joint Research Centre explores the future role of technical standards in supporting compliance with the AI Act, especially for high-risk systems. It breaks down the essential characteristics harmonised standards must embody to be legally effective and practically useful. Standards will provide a presumption of conformity—so what they require carries real legal weight.

The brief opens with a recap of the AI Act's timeline: the Regulation entered into force in August 2024, with obligations for high-risk systems applying from August 2026 (depending on the system's classification). It then outlines ten thematic areas where standards are being developed: risk management, data quality, logging, transparency, human oversight, accuracy, robustness, cybersecurity, quality management, and conformity assessment.

Each area is mapped against specific AI Act articles, and the document details what’s expected from future standards. The authors stress the importance of aligning these standards with the Regulation’s intent, not just its letter. That means designing for protection of fundamental rights—not just organisational efficiency.

There’s a strong push for standards to be:

  • Prescriptive and testable (not just guidance)
  • Oriented to AI systems as products, not just internal processes
  • Able to capture full lifecycle risks (including post-market)
  • Sector-agnostic when possible, but adaptable in application
  • Aligned with state-of-the-art techniques, including generative AI components

The report repeatedly draws a contrast between existing ISO/IEC standards (focused on organisational practices) and the product- and outcome-oriented approach demanded by the AI Act. For example, it calls out the need for risk management standards that assess harm to individuals—not just corporate risk tolerance.

Toward the end, the report gives a status update on CEN-CENELEC’s Joint Technical Committee 21, responsible for drafting 37 AI-relevant standards, some building on ISO/IEC work, others home-grown. It acknowledges delays and the difficult consensus-building process across stakeholder groups—but underscores the urgency: high-risk AI developers need these standards before August 2026.

💡 Why it matters?

These standards won’t just offer best practices—they’ll shape how high-risk AI systems are designed, tested, and documented across the EU. For developers, they offer a roadmap to legal compliance. For regulators, they’re the backbone of enforcement. If the standards are weak, vague, or late, the entire AI Act enforcement regime could falter.

What’s Missing?

The report outlines the what and why of AI Act-aligned standards but says little about how consensus will be reached in time. It acknowledges delays and resource constraints but doesn’t offer concrete steps to resolve them. It also omits sectoral examples—healthcare, biometrics, or education—where some granularity could show how horizontal standards might adapt. Another gap is clarity on how international standards (e.g. ISO/IEC 42001) will be reconciled with EU-specific rights-based framing. While the brief gestures at these tensions, it leaves much of the interoperability challenge unaddressed. Lastly, it doesn’t say how draft standards will be publicly shared or how feedback from SMEs and civil society will be incorporated at scale.

Best For:

Compliance teams building high-risk AI systems that will launch in or after 2026. Legal counsel advising on AI Act implementation. Standards watchers tracking the gap between ISO work and EU regulatory expectations. Also useful for public policy advisors preparing sector-specific guidelines.

Source Details:

Title: Harmonised Standards for the European AI Act

Authors: Josep Soler Garrido, Sarah de Nigris, Elias Bassani, Ignacio Sánchez, Tatjana Evas, Antoine-Alexandre André, Thierry Boulangé

Institution: Joint Research Centre, European Commission

Date: Late 2024

Credential highlights: The authors are policy and technical researchers at the JRC and the European Commission. Tatjana Evas is known for her legal expertise on the AI Act's development, while Josep Soler Garrido and Ignacio Sánchez have previously co-authored foundational work on AI risk and cybersecurity. The brief serves as the EU's official framing of how technical standards will operationalise AI Act compliance.

About the author
Jakub Szarmach
