AI Governance Library

A Legal Framework for eXplainable Artificial Intelligence

This paper introduces the Legal-XAI taxonomy—an interdisciplinary framework to guide policymakers, lawyers, and developers in choosing the right kind of explanations for AI decisions.

What’s Covered?

The paper argues that without clear, tailored legal frameworks for explainability, automated decision systems risk violating due process, eroding trust, and producing unjust outcomes. It tackles this in three major steps:

1. The Taxonomy

The authors propose four dimensions that categorize explanation types:

  • Scope: Global (system-wide logic) vs. Local (individual decisions)
  • Depth: Comprehensive (full logic) vs. Selective (key factors only)
  • Alternatives: Contrastive (why X not Y?) vs. Non-contrastive
  • Flow: Conditional control (if-then logic) vs. Correlation-based (statistical association)

This taxonomy helps match explanation types to policy goals—like correcting errors, increasing transparency, or helping subjects understand decisions.

2. Interdisciplinary Synthesis

The paper brings together:

  • Legal principles: Explainability supports rights like appeal, transparency, and due process.
  • Technical limits: Not all AI models can produce intuitive explanations; interpretability methods vary in feasibility and clarity.
  • Behavioral factors: The effectiveness of an explanation depends on how users perceive and process it. For example, showing feature weights may satisfy developers but confuse end users.

3. Practical Applications

The authors map their taxonomy to real-world contexts:

  • Medicaid eligibility (Arkansas case): Local, selective, and conditional explanations are needed to support appeals.
  • University admissions: May require contrastive explanations to justify why one candidate was accepted over another.
  • California privacy law: Vague terms like “logic” and “key parameters” need better operationalization using the taxonomy.
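To make the taxonomy concrete, the four dimensions and the context mappings above can be sketched as a small data structure. This is a minimal illustration, not code from the paper or its software package; all class and field names are hypothetical, and taxonomy values the summary does not state (e.g. the Alternatives and Flow dimensions for admissions) are filled in purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum

# The four dimensions of the Legal-XAI taxonomy, encoded as enums.
# Names and values are illustrative, drawn from the summary above.
class Scope(Enum):
    GLOBAL = "system-wide logic"
    LOCAL = "individual decision"

class Depth(Enum):
    COMPREHENSIVE = "full logic"
    SELECTIVE = "key factors only"

class Alternatives(Enum):
    CONTRASTIVE = "why X, not Y"
    NON_CONTRASTIVE = "no comparison"

class Flow(Enum):
    CONDITIONAL = "if-then logic"
    CORRELATIONAL = "statistical association"

@dataclass(frozen=True)
class ExplanationRequirement:
    scope: Scope
    depth: Depth
    alternatives: Alternatives
    flow: Flow

# Mapping the paper's example contexts onto the taxonomy.
# Dimensions not stated in the summary are guesses for illustration.
REQUIREMENTS = {
    "medicaid_appeal": ExplanationRequirement(
        Scope.LOCAL, Depth.SELECTIVE, Alternatives.NON_CONTRASTIVE, Flow.CONDITIONAL
    ),
    "university_admissions": ExplanationRequirement(
        Scope.LOCAL, Depth.SELECTIVE, Alternatives.CONTRASTIVE, Flow.CORRELATIONAL
    ),
}

print(REQUIREMENTS["medicaid_appeal"].flow.value)  # if-then logic
```

Encoding requirements this way is one way the taxonomy could give lawyers and engineers a shared, checkable vocabulary: a regulator's mandate and a system's documentation can be compared dimension by dimension.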

They also provide a roadmap for field experiments to test which explanation formats are most helpful, using a software package built to compare multiple XAI methods across domains.

💡 Why it matters?

Laws are racing to demand “explainable AI,” but most lack guidance on how explainability should work in practice. This paper breaks that deadlock. It shows that legal rights to explanations only function if they’re grounded in how people understand decisions and what technical tools can actually deliver. Without that link, rights may become empty slogans.

What’s Missing?

  • Governance integration: While the taxonomy is robust, the paper doesn’t offer concrete proposals for how institutions (like courts or regulators) should enforce or oversee compliance with explanation requirements.
  • Trade-offs: There’s limited discussion of the potential tension between explanation quality and model performance, especially in complex models like deep neural nets.
  • Cross-jurisdictional comparison: The taxonomy could benefit from deeper discussion of how it maps onto international legal instruments beyond the EU GDPR and California's CCPA.
  • Automation in enforcement: No mention of how auditing tools or benchmarks might operationalize explanation mandates at scale.

Best For:

Legal technologists, digital rights advocates, AI policy staffers, and regulatory authorities looking for a structured way to define and enforce explainability requirements. It also provides a shared language for lawyers and AI engineers working on compliance.

Source Details:

Title: A Legal Framework for eXplainable Artificial Intelligence

Authors: Aniket Kesari (Fordham Law), Daniela Sele, Elliott Ash, Stefan Bechtold (ETH Zurich)

Published: September 2024

Institution: Center for Law & Economics, ETH Zurich

Credibility: The authors are leading figures in tech law and interdisciplinary AI policy research. Bechtold is a top EU IP and tech law scholar; Kesari is a well-cited U.S. academic at the intersection of privacy and algorithms.

Context: Released during a critical regulatory moment, with the EU AI Act and U.S. state-level algorithmic accountability rules gaining traction. The authors were active in policy discussions, including Emory’s AI legal roundtable.

About the author
Jakub Szarmach
