AI Governance Library

AI Privacy Risks & Mitigations – Large Language Models

This report, produced under the EDPB’s Support Pool of Experts (SPE) programme, offers structured guidance on managing privacy risks in LLM systems. It lays out risk identification, evaluation, and control strategies tailored to GDPR and AI Act obligations, supporting both developers and deployers.

🔹 What’s Covered

The report presents a full-spectrum privacy risk management methodology for LLMs, grounded in both EU data protection and AI regulatory frameworks. It starts with an accessible introduction to how LLMs function, including emerging architectures like agentic AI, and provides a clear explanation of common use cases.

From there, it transitions into a granular examination of LLM-specific data flows and the associated privacy risks, such as data leakage, memorisation of personal data, unintentional profiling, and re-identification. The report aligns these risks with different LLM service models (e.g. APIs, open source deployments) and links them to roles defined by both the GDPR (controller/processor) and the AI Act (provider/deployer).
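To make that mapping concrete, here is a minimal sketch (not taken from the report) of how a risk register entry could tie an LLM privacy risk to its data flow, service model, and GDPR/AI Act roles. All field names and example values below are illustrative assumptions.

```python
# Hypothetical risk-register entry for an LLM privacy risk; the fields mirror
# the dimensions the report discusses (risk, data flow, service model, roles,
# mitigations), but the structure itself is an assumption for illustration.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str                      # e.g. "memorisation of personal data"
    data_flow: str                 # where in the LLM pipeline the risk arises
    service_model: str             # "API", "open-source deployment", ...
    gdpr_role: str                 # "controller" or "processor"
    ai_act_role: str               # "provider" or "deployer"
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        risk="memorisation of personal data",
        data_flow="training and fine-tuning data",
        service_model="open-source deployment",
        gdpr_role="controller",
        ai_act_role="deployer",
        mitigations=["data minimisation", "training-data deduplication", "output filtering"],
    ),
]

for entry in register:
    print(f"{entry.risk} ({entry.service_model}) -> {entry.gdpr_role} / {entry.ai_act_role}")
```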

The risk assessment process is broken into stages (a minimal scoring sketch follows the list):

  • Identification of risks using criteria like context, function, and data type
  • Estimation of likelihood and severity using custom scales
  • Evaluation to determine acceptability and guide prioritisation
  • Treatment using concrete mitigations including privacy-enhancing technologies, human oversight, transparency features, and logging
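The estimation and evaluation stages lend themselves to a simple worked example. The sketch below assumes ordinal 1–4 scales, a product-based score, and an acceptability threshold; the report defines its own custom scales and criteria, so the values and function names here are illustrative, not the report's.

```python
# Minimal sketch of likelihood/severity estimation and acceptability evaluation.
# The scales, the product-based combination, and the threshold are assumptions.
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3, "very likely": 4}
SEVERITY = {"limited": 1, "significant": 2, "serious": 3, "maximum": 4}

ACCEPTABLE_MAX = 4  # example acceptability threshold for the combined score

def risk_score(likelihood: str, severity: str) -> int:
    """Combine the two estimates into a single score (here: a simple product)."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def evaluate(likelihood: str, severity: str) -> str:
    """Evaluation step: decide whether the estimated risk needs treatment."""
    score = risk_score(likelihood, severity)
    return "acceptable" if score <= ACCEPTABLE_MAX else "requires treatment"

# Example: a data-leakage risk judged "possible" with "serious" impact.
print(evaluate("possible", "serious"))  # -> requires treatment
```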

It further guides readers on residual risk analysis and how to conduct ongoing monitoring throughout the AI lifecycle. Importantly, the report includes three detailed use cases—an LLM-powered customer chatbot, a student learning assistant, and a travel scheduler—that demonstrate how these principles apply in practice.
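Residual risk and ongoing monitoring can likewise be illustrated with a short sketch. The structure, numbers, and threshold below are assumptions for illustration only and do not reproduce the report's method.

```python
# Hypothetical residual-risk bookkeeping: reduce the initial score by the
# estimated effect of mitigations and log the result for periodic review.
from datetime import date

def residual_risk(initial_score: int, mitigation_effect: int) -> int:
    """Residual risk = initial score reduced by the estimated mitigation effect."""
    return max(initial_score - mitigation_effect, 0)

monitoring_log = []

def log_review(risk: str, initial: int, effect: int, acceptable_max: int = 4) -> None:
    """Record a review entry so residual risks stay under observation over the lifecycle."""
    remaining = residual_risk(initial, effect)
    monitoring_log.append({
        "date": date.today().isoformat(),
        "risk": risk,
        "residual_score": remaining,
        "status": "acceptable" if remaining <= acceptable_max else "re-treat",
    })

log_review("data leakage via chatbot outputs", initial=6, effect=3)
print(monitoring_log[-1])
```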

The final section provides curated links to metrics (e.g. REDACT, METR), methodologies (e.g. ISO 27559, 42001), and regulatory guidance (e.g. EDPB, ENISA, NIST) to support implementation.

🔹 Why It Matters

This is a rare document that combines legal rigour with technical relevance. It equips organisations with a concrete process to ensure LLM deployments meet privacy-by-design expectations. Especially in the context of GDPR Article 25 and the AI Act’s serious incident obligations, this kind of guidance is essential for building trustworthy AI systems.

🔹 What’s Missing

The report remains high-level in parts and does not offer templates, scoring matrices, or automated tools to speed up DPIA-aligned assessments. Additionally, risks tied to foundation models in non-text modalities (e.g. vision, multi-modal LLMs) or context-specific domains (e.g. health, finance) are underexplored. Operational complexity in large enterprises is acknowledged but not addressed with concrete governance structures.

🔹 Best For

This guidance will benefit privacy engineers, AI governance teams, DPOs, and legal counsel responsible for deploying or auditing LLM systems. It is particularly valuable to those preparing conformity assessments or DPIAs for high-risk AI under the AI Act.

🔹 Source Details

Title: AI Privacy Risks & Mitigations – Large Language Models (LLMs)

Author: Isabel Barberá

Programme: EDPB Support Pool of Experts (SPE)

Date: Submitted February 2025, updated March 2025

Commissioned by: European Data Protection Board (EDPB)

Disclaimer: Views expressed are those of the author and do not reflect the official position of the EDPB. The document may contain redactions to protect privacy or commercial interests.

About the author
Jakub Szarmach

AI Governance Library
