AI Governance Library

Fundamental Rights Impact Assessments: What are they? How do they work?

Fundamental Rights Impact Assessments (FRIAs) are required for certain high-risk AI systems under Article 27 of the AI Act. They help ensure that AI deployment doesn’t violate key EU rights.

What’s Covered?

This short paper from the CEDPO AI and Data Working Group spells out the purpose, scope, and practical use of Fundamental Rights Impact Assessments (FRIAs), now mandatory for some high-risk AI deployments under the EU AI Act.

What’s a FRIA?

A FRIA assesses how an AI system might affect a person’s fundamental rights—not just privacy, but a full range of rights under the EU Charter (freedom of expression, right to work, non-discrimination, etc.). Unlike traditional risk assessments that aim to eliminate risks, the FRIA is about identifying, managing, and mitigating those risks responsibly.

Who needs to do one?

Only certain deployers must conduct FRIAs—specifically:

  • Public sector bodies
  • Private entities providing public services
  • Deployers of high-risk systems under Annex III, points 5(b) and (c), of the AI Act (creditworthiness assessments and risk assessment and pricing in life and health insurance)

When?

FRIAs must be completed before a high-risk AI system is put into use by the deployer. The logic is simple: while developers (providers) do their own risk reviews, actual risks often emerge in specific, local use cases—which only deployers can assess.

What’s required?

Article 27 outlines the core elements of a FRIA. Deployers must document:

  • The system’s intended use and context
  • How long and how often it will be used
  • Who is likely to be affected
  • What specific risks of harm are involved
  • Oversight mechanisms
  • What happens if things go wrong (complaint channels, governance)
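The elements above can be thought of as a structured record that a deployer fills in before go-live. A minimal sketch in Python, assuming field names of our own choosing (Article 27 prescribes the content of a FRIA, not any particular format):

```python
from dataclasses import dataclass, field

# Illustrative only: the field names below are hypothetical;
# they mirror the Article 27 documentation elements listed above.
@dataclass
class FriaRecord:
    intended_use: str                 # the system's purpose and deployment context
    usage_period: str                 # how long and how often it will be used
    affected_groups: list[str]        # categories of persons likely to be affected
    specific_risks: list[str]         # concrete risks of harm to those groups
    oversight_measures: list[str]     # human oversight mechanisms
    remediation: list[str] = field(default_factory=list)  # complaint channels, governance

    def is_complete(self) -> bool:
        """Rough completeness check: every required element is documented."""
        return all([self.intended_use, self.usage_period,
                    self.affected_groups, self.specific_risks,
                    self.oversight_measures, self.remediation])
```

A structure like this makes gaps visible early: an empty `specific_risks` or `remediation` entry flags an assessment that is not yet defensible.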

How does it interact with DPIAs or DSA risk assessments?

The paper does a great job mapping the FRIA into a broader ecosystem of regulatory tools:

  • DPIA (under GDPR): Focuses on data and privacy. A FRIA adds a layer for broader human rights.
  • Systemic risk assessments (under DSA): Broader platform-level concerns. A FRIA is more specific, tied to the use of an individual high-risk system.

The AI Act even allows the FRIA and DPIA to be combined, so long as the broader rights are also addressed.

💡 Why does it matter?

FRIAs are a practical safeguard that put human rights at the center of AI system deployment—especially in sensitive areas like recruitment, education, law enforcement, or migration. For practitioners, this isn’t just about compliance—it’s about building ethical, transparent systems that are defensible in the public eye. They also force organizations to think through context-specific harms before rollout, something DPIAs alone can’t cover.

What’s Missing?

The paper lays out what must be done, but says less about how to do it. For example:

  • No template or sample questions (though these are promised under Art. 27(5))
  • No guidance on balancing risks or setting thresholds for severity
  • No support for cross-border or multi-deployer scenarios

Also missing is concrete help for small organizations or municipalities with limited capacity to conduct a full FRIA.

The paper doesn’t address generative AI use cases, though these could soon become common in education, employment, and public services. Lastly, the role of the provider in sharing documentation that feeds into a FRIA is mentioned, but not developed.

Best For:

This guide is ideal for DPOs, compliance officers, and public-sector AI teams preparing for the EU AI Act. It’s also useful for legal teams, AI deployers in education or employment, and digital policy advisors designing governance frameworks.

Source Details:

Title: Fundamental Rights Impact Assessments: What Are They? How Do They Work?

Published by: Confederation of European Data Protection Organisations (CEDPO)

Series: Micro-Insights Series, January 2025

Authors: Thomas Ajoodha, Jared Browne

Publisher Info: CEDPO AI and Data Working Group – based across EU capitals, including Bonn, Bucharest, Dublin, Lisbon, Madrid, Milan, Paris, The Hague, Vienna, Warsaw

About the author
Jakub Szarmach
