AI Governance Library

GDPR when using Generative AI

Public authorities can use generative AI responsibly if they follow GDPR. That’s the message behind this clear and practical guide from Sweden’s data protection authority.

What’s Covered?

1. Foundational Compliance

The guide sets the tone by affirming that GDPR is not an obstacle, but a baseline. It clarifies when data protection laws apply (anytime personal data is processed) and stresses that even when data isn’t obviously personal, GDPR should be considered as a precaution.

2. Core Principles in Action

The bulk of the report translates GDPR’s abstract principles—like lawfulness, purpose limitation, and data minimization—into AI-specific recommendations:

  • Document usage and purposes before deployment.
  • Minimize personal data both in prompts and outputs.
  • Ensure transparency through updated privacy notices and internal documentation.
  • Guard against hallucinations and biased results by limiting reliance on model output.
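The data-minimization point above can be made concrete with a small sketch. This is a hypothetical helper, not something the IMY guide prescribes: a pre-processing step that strips common personal-data patterns (email addresses, Swedish personnummer, phone numbers) from a prompt before it reaches a generative AI service. Real deployments would need far more robust detection, but the pattern illustrates the idea.

```python
import re

# Hypothetical redaction patterns (illustrative only, not from the IMY guide).
# Order matters: later patterns run over already-redacted text.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Swedish personal identity number (personnummer), e.g. 19900101-1234
    "personnummer": re.compile(r"\b\d{6,8}[-+]\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt
```

A gate like this only reduces, rather than eliminates, personal data in prompts, which is why the guide pairs minimization with documentation, transparency, and output monitoring.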

3. Role Clarification

It reinforces the distinction between data controllers (the agency deploying the AI) and processors (the AI tool providers). This means contracts, accountability, and responsibility structures must be crystal clear—especially when models are trained or fine-tuned externally.

4. Legal Bases & Risk

The document doesn’t stop at saying “you need a legal basis”—it breaks down how efficiency gains can justify data use if aligned with the agency’s mission. But higher risk = higher scrutiny. For example, using AI in social services decisions? That demands stronger legal grounding than summarizing meeting notes.

5. Decision-Making & Data Transfers

Automated decisions and cross-border data transfers are hot-button issues under GDPR. The guide emphasizes:

  • Agencies must check for automated decision-making, even when AI is just assisting.
  • If data leaves the EU/EEA, strict transfer mechanisms must apply.

6. Data Subject Rights

Ensuring rights to information, access, correction, and deletion becomes trickier with generative AI. The guide flags the challenge—but insists rights must still be respected. Agencies are advised to update privacy policies, monitor outputs, and ensure data subjects can make sense of how their data was used.

7. Risk Management & Security

The report advocates a risk-based approach—don’t treat AI like other software. Agencies must:

  • Conduct Data Protection Impact Assessments (DPIAs).
  • Implement governance, access controls, human oversight, and technical safeguards (like temperature tuning, retrieval-augmented generation, PETs).

8. Publicly Available & Integrated Tools

A standout feature is its separate guidance for publicly accessible tools (like ChatGPT) and integrated AI in enterprise systems (e.g., Microsoft Copilot). It urges:

  • Clear internal policies on usage.
  • Restrictions on what types of information can be entered.
  • Strong access control when business data is involved.
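An internal usage policy of the kind listed above can be enforced in tooling as well as on paper. The sketch below is a hypothetical policy gate (the category names are invented for illustration, not taken from the guide): it blocks prompts that mention data categories an agency has disallowed in publicly accessible tools.

```python
# Hypothetical usage-policy gate (illustrative only, not from the IMY guide).
# An agency would maintain its own list of disallowed data categories.
BLOCKED_CATEGORIES = {
    "personnel file",
    "health record",
    "case number",
    "confidential",
}

def allowed_in_public_tool(prompt: str) -> bool:
    """Return False if the prompt mentions any blocked category."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKED_CATEGORIES)
```

Keyword matching is crude, but even a simple gate like this turns a written policy into a checkpoint that users hit before business data leaves the organization.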

💡 Why it matters

Public administrations are under pressure to use AI efficiently—but responsibly. This guide provides a principled yet pragmatic roadmap for doing just that. It balances innovation with accountability, highlighting how GDPR’s foundational values—transparency, proportionality, and respect for individual rights—can be upheld in modern AI systems.

For policymakers outside Sweden, this resource offers a useful model: how to regulate AI in government without stalling progress. For civil servants, it’s a checklist for building AI governance that’s legally sound and socially legitimate.

What’s Missing?

While the report is strong on high-level guidance, it stops short of offering templates or operational playbooks. Public sector teams looking for hands-on help (e.g., DPIA templates for generative AI use, model classification guides, or audit tools) will need to look elsewhere or wait for future extensions. Similarly, it doesn’t dive into model-level transparency techniques, such as interpretability tooling or internal auditing of model behaviors—topics increasingly relevant for public procurement.

Best For:

  • Public sector AI leads looking to implement or audit generative AI systems under GDPR.
  • DPOs and legal advisors in government exploring AI adoption.
  • AI governance professionals wanting an EU-compatible benchmark for responsible AI in the public sector.

Source Details:

Swedish Authority for Privacy Protection (2025). GDPR when using Generative AI.

Document number IMY-2024-9162, published February 5, 2025.

This paper is part of the Swedish government’s broader initiative to develop trustworthy and legally compliant uses of generative AI in public administration. Authored by IMY (Integritetsskyddsmyndigheten), it is intended to guide agencies under Swedish and EU law. While not legally binding, it reflects official expectations and offers practical direction.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources

