AI Governance Library

AI Act & Guidelines on Prohibited Artificial Intelligence (AI) Practices: An Analysis for the Emotion Recognition Field

This 2025 paper by Iren, Noldus, and Brouwer offers a much-needed guide to how the EU’s AI Act and the Commission’s new guidelines apply to the emotion recognition field—one of the most contentious areas of affective computing.

What’s Covered?

This report summarizes and analyzes the Commission’s Guidelines on Prohibited Artificial Intelligence Practices, focusing on how they apply to emotion recognition systems under Article 5(1)(f) of the AI Act. These systems are now explicitly banned in both workplaces and educational institutions, unless used for narrowly defined medical or safety purposes. The authors—leaders in affective computing—combine detailed clause-by-clause breakdowns with commentary from ongoing engagement efforts with EU institutions. That mix of legal detail and field insight makes this a standout reference.

The document is structured around the 12-section, 135-page guidelines published by the AI Office in February 2025. The authors extract and organize the most relevant clauses—spanning general-purpose AI, biometric-based inferences, and research exemptions—into a tailored index for researchers and practitioners in the emotion recognition space.

Key issues tackled include:

  • How the AI Act defines “use”, “deployment”, and “deployers”—and why contractual disclaimers won’t protect you if your AI is used illegally.
  • What counts as emotion recognition (beyond just facial expression inference) and how the Act applies to inferred intentions, not just emotions.
  • The boundaries of the research exemption—with a strict ban on real-world classroom or workplace testing.
  • The distinction between biometric and non-biometric emotion inference—a critical point for text-based sentiment tools.
  • The scope of prohibited practices under Articles 5(1)(e) and 5(1)(f), including guidelines on scraping, well-being monitoring, AI in hiring, and employer-deployed coaching tools.
  • Clarifications around what’s still permissible—like emotion detection in customer interactions or for therapeutic use with CE-marked medical devices.

The document also offers practical examples: classroom engagement monitors? Banned. Role-play tools for actor training? Allowed (if not used for grading). Voice stress detectors in call centers? It depends on who is being monitored. The result is a rare, grounded interpretation of abstract legal texts.

💡 Why it matters?

Emotion recognition is one of the AI Act’s most clearly prohibited areas—but it’s also one of the most misunderstood. This paper bridges the gap between black-letter regulation and applied affective computing. It also underscores how the EU is drawing hard lines around power asymmetries in workplaces and classrooms. With high regulatory stakes and global ripple effects, this is required reading for developers, compliance officers, and academic researchers alike.

What’s Missing?

This isn’t a full critique of the guidelines themselves—it’s focused on helping practitioners interpret and comply, not challenge. There’s little exploration of the scientific or ethical debate over emotion inference validity, or the long-term risks of chilling effects on research. The paper also doesn’t directly address how this new regulatory climate might impact investment, commercial deployment, or AI competitiveness in Europe beyond emotion recognition. And while it’s sharp on regulation, readers seeking practical implementation strategies (like product adjustments or compliance workflows) will need complementary materials.

Best For:

Researchers, product managers, or compliance leads working on or adjacent to emotion AI. If you’re developing tools that analyze facial expressions, voice, keystrokes, or biometric patterns, and you operate in the EU (or export there), this guide will help you stay out of legal trouble—and understand what’s still possible.

Source Details:

Full Citation: Iren, D., Noldus, L.P.J.J., & Brouwer, A-M. (2025). AI Act & Guidelines on Prohibited Artificial Intelligence (AI) Practices: An Analysis for the Emotion Recognition Field. Open Universiteit, Radboud University, TNO Human Factors.

Author Credentials:

  • Dr. Deniz Iren is a senior researcher at Open Universiteit, with a focus on affective computing and AI regulation.
  • Prof. dr. Lucas Noldus is affiliated with Radboud University and founder of Noldus Information Technology, a pioneer in behavioral research tools.
  • Prof. dr. Anne-Marie Brouwer works at TNO Human Factors and Radboud University, focusing on emotion science and human-AI interaction.

All three authors have been actively involved in EU-level dialogue around emotion AI, including direct input into the guideline drafting process.

About the author
Jakub Szarmach

