NIST Guidelines for Evaluating Differential Privacy Guarantees

Aimed at helping technical and policy audiences evaluate privacy guarantees in practice, NIST SP 800-226 offers tools to reason about parameters, algorithms, and trust assumptions in differentially private systems.

What’s Covered?

This publication breaks down how to evaluate claims of differential privacy (DP) in real-world systems. It’s not just about understanding ε and δ; it’s about the full privacy “pyramid”: guarantees depend on everything from algorithmic correctness and statistical bias to side-channel resilience and trust models. The document introduces a structure for evaluating DP systems, helps readers compare the protections different deployments actually provide, and warns about hazards like hidden systemic bias or the misuse of aggregation.
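For readers who want the notation pinned down, the guarantee the guide evaluates is the standard one: a randomized mechanism M is (ε, δ)-differentially private if, for every pair of datasets D and D′ differing in a single record and every set of outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ

Setting δ = 0 recovers pure ε-DP, and smaller values of ε and δ mean a stronger guarantee.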

It walks through:

  • Core definitions and promises of DP
  • Mathematical variants (pure and approximate DP, bounded/unbounded models)
  • Privacy parameters and how to assess their strength (with examples and risk flowcharts)
  • Common DP algorithms, their uses (counts, averages, ML, synthetic data), and limitations (see the code sketch after this list)
  • Bias risks: systemic, human, and statistical—how DP can hide or amplify them
  • Deployment guidance: central vs local models, trust assumptions, side-channel concerns
  • Evaluation tools like Jupyter notebooks for hands-on testing
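The companion notebooks let readers experiment with these ideas directly. As a minimal sketch of two of them, counting queries and the central-versus-local deployment choice (illustrative only, not code from the publication; the function names, data, and ε values here are invented), a private count looks like this in Python:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_count(data, predicate, epsilon):
    """Central model: a trusted curator sees the raw data and adds noise.
    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so Laplace noise with scale 1/epsilon yields
    pure epsilon-DP."""
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

def randomized_response(bit, epsilon):
    """Local model: each person randomizes their own answer, so no one
    ever holds the raw data. Tell the truth with probability
    e^eps / (e^eps + 1), otherwise lie; each response is epsilon-DP."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

ages = [23, 35, 62, 71, 58, 67, 29, 44]

# Central model: one noisy statistic computed over the raw data.
print(dp_count(ages, lambda a: a >= 65, epsilon=1.0))

# Local model: every bit is perturbed, and the analyst must debias
# the aggregate, which costs far more accuracy per unit of epsilon.
responses = [randomized_response(int(a >= 65), epsilon=1.0) for a in ages]
p = np.exp(1.0) / (np.exp(1.0) + 1.0)
print((sum(responses) - len(ages) * (1 - p)) / (2 * p - 1))
```

The contrast makes the guide’s trust-model point concrete: the central model requires a trusted curator but loses little accuracy, while the local model removes that trust requirement at a steep cost in utility.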

The guide also outlines when and why aggregation and redaction fail as privacy tools, and how DP systems can still leak if the surrounding infrastructure (data storage, access control) is weak. There’s a strong emphasis on practical hazards, not just theoretical models.
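To make the aggregation hazard concrete, here is a toy differencing attack (the records and field names are invented for illustration). Two counts that each look harmless combine to expose one person, which is exactly the failure DP’s calibrated noise is meant to blunt:

```python
# Hypothetical data: each record is one person.
records = [
    {"name": "Alice", "has_condition": True},
    {"name": "Bob", "has_condition": False},
    {"name": "Carol", "has_condition": True},
]

# Two aggregate queries, each plausibly approved as "just a count".
count_all = sum(r["has_condition"] for r in records)
count_minus_alice = sum(
    r["has_condition"] for r in records if r["name"] != "Alice"
)

# Their difference pinpoints Alice's private attribute exactly.
print("Alice has the condition:", count_all - count_minus_alice == 1)
```

With Laplace noise added to each count, the difference would no longer reveal Alice with certainty, and ε would quantify exactly how much it could reveal.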

This isn’t a how-to for implementing DP from scratch. It’s more like a risk compass—designed to help engineers, privacy officers, and policymakers make informed decisions about the systems they’re building or approving. It closes with a call for standardized evaluation methods and certification pathways to give non-experts confidence that a DP claim means something real.

💡 Why it matters?

As differential privacy moves from academic hype to deployment in public services and commercial products, there’s a growing need to separate credible guarantees from checkbox compliance. This guide doesn’t just teach you what DP is—it gives you a framework to audit whether a system’s privacy claims actually hold up. With U.S. agencies exploring DP for everything from census data to health analytics, this is likely to influence procurement, regulation, and trust frameworks.

What’s Missing?

While the guide provides conceptual clarity and helpful evaluation tools, it stops short of standardization. There’s no checklist or benchmark for what counts as “strong” ε or whether trade-offs are justified in specific settings. The flowcharts help, but they’re interpretive rather than normative.

Also, the treatment of equity is mostly limited to bias correction at the algorithmic level. It doesn’t dig into socio-technical dynamics like who gets to set privacy budgets or how DP may entrench power imbalances if oversight is weak. Lastly, there’s no comparison to international frameworks—something important as global DP deployment accelerates.

Best For:

  • Engineers building or integrating DP into systems
  • Privacy officers evaluating AI/analytics deployments
  • Government agencies weighing open data against individual privacy
  • Researchers and legal experts trying to bridge math and policy

Source Details:

Joseph P. Near (University of Vermont) and David Darais (Galois, Inc.) are both well-known in the privacy engineering community. Near co-leads work on verified privacy systems and DP tooling, while Darais brings a formal methods perspective. Naomi Lefkovitz is a key figure at NIST behind the Privacy Framework and numerous PETs efforts. Gary S. Howarth, also from NIST’s Applied Cybersecurity Division, contributes expertise in federal data practices.

This document builds on NIST’s PET research, the U.S. Census Bureau’s real-world DP deployments, and insights from leading figures like Damien Desfontaines and Nicolas Papernot. It comes with companion Jupyter notebooks hosted by NIST, enabling hands-on learning.

Full citation:

Near JP, Darais D, Lefkovitz N, Howarth GS (2025) Guidelines for Evaluating Differential Privacy Guarantees. NIST SP 800-226. https://doi.org/10.6028/NIST.SP.800-226

About the author
Jakub Szarmach
