What’s Covered
The Bird & Bird guide is structured as a legal practitioner’s reference to the AI Act. It begins with the risk-based architecture, detailing the four-tier risk classification (unacceptable, high, limited, minimal). It then moves into key definitions, especially the definition of an “AI system” (aligned with the OECD definition) and the distinction between AI models and AI systems.
Material and Territorial Scope: The AI Act applies even to non-EU entities if their AI systems’ outputs are used in the EU. This includes providers, deployers, importers, distributors, and even component suppliers of high-risk systems.
Prohibited Practices (Article 5) are described in practical terms—such as facial recognition from untargeted scraping, social scoring, and inferring emotions at work or school. Each prohibition includes exceptions and real-world illustrations.
High-Risk AI Systems are split into Annex I (linked to existing product legislation) and Annex III (standalone AI use cases like recruitment or credit scoring). Exemptions apply if the system doesn’t pose a significant risk to health, safety, or fundamental rights, but providers must document such assessments.
The guide offers detailed obligations for providers and deployers: from maintaining logs and documentation to human oversight, accuracy, and transparency. It also maps the conformity assessment procedures, which vary depending on harmonised standards and whether a notified body is involved.
General-purpose AI models—especially those with systemic risk—get their own section. The guide walks through obligations for transparency, risk mitigation, copyright compliance, and EU representation.
Other sections address transparency (disclosures for chatbots, deepfakes, biometric systems), regulatory sandboxes, enforcement, and the roles of EU and national authorities. The final chapter covers what’s next—upcoming delegated acts, standardisation efforts, and timelines up to 2030.
💡 Why It Matters
This guide is useful not just for understanding obligations under the AI Act, but for seeing how the pieces connect—AI model providers, downstream users, and regulators each have interlocking roles. The guide explains how exemptions work, where CE marking fits in, and why documentation matters from day one. It’s especially strong on systemic risk in general-purpose models—a topic still evolving. If you’re building, selling, or buying AI in the EU, this is a go-to source.
What’s Missing
Despite its depth, the guide doesn’t extensively address open questions in enforcement: How will market surveillance authorities define “significant risk”? How will regulators handle edge cases between AI systems and traditional software? There’s also less coverage of SMEs and startups beyond basic sandbox references. And while systemic-risk GPAI obligations are listed, the guide doesn’t model what compliance “looks like” in day-to-day product development—something technical teams would benefit from.
Best For
This is ideal for in-house counsel, compliance teams, and AI product leads based in the EU or serving EU users. It’s particularly useful for multinationals trying to localise AI governance to European standards, and for law firms advising tech clients or drafting CE conformity files.
Source Details
- Title: European Union Artificial Intelligence Act: A Guide
- Author: Bird & Bird LLP
- Date: April 2025
- Notable Contributors: Tier 1-ranked Legal 500 AI practice team