AI Governance Library

AI Liability Along the Value Chain

Who is liable when an AI system causes harm? This report breaks down why that question isn’t so easy to answer when AI is built and deployed by many hands, and proposes a framework for assigning civil liability across the AI value chain.

What’s Covered?

The report dives into the challenge of assigning responsibility in a world where AI systems are no longer the product of a single actor but the outcome of a complex value chain. Developers, data suppliers, model fine-tuners, integrators, and deployers all shape how an AI system functions—and all could contribute to downstream harm. The report’s core focus is civil liability: when and how harmed individuals can seek redress, and what legal tools can (or can’t) help them.

Abstract Summary:

Civil liability laws struggle with AI because responsibility is distributed across many actors—what the report calls the “problem of many hands.” The author analyzes how this challenge plays out across different types of AI value chains and shows how contracts, design decisions, and market power influence how liability is allocated in practice. The report argues that a baseline of fault-based liability (i.e. liability for negligence) is the best way to incentivize safety without stifling development, but it also makes space for strict liability in high-risk settings. It then compares this proposed model with the (now withdrawn) EU AI Liability Directive and the revised Product Liability Directive.

Document Contents:

  • Section 1 explains why traditional liability frameworks struggle with AI’s opacity and complexity.
  • Section 2 introduces the “problem of many hands” and maps out different AI value chain configurations.
  • Section 3 explores how responsibilities are structured via technical design choices, contracts, and market dynamics.
  • Section 4 discusses liability design choices: fault vs. strict liability, whether to divide liability, and how to deal with asymmetries of information.
  • Section 5 analyzes how EU instruments like the AI Act, Product Liability Directive, and AI Liability Directive approach these issues.
  • The Annex offers a useful summary of how liability is contractually structured in practice, something that’s rarely visible in policy discussions.

💡 Why it matters?

The report offers a much-needed bridge between technical design practices, legal theory, and policy implementation. By analyzing how liability is structured in the real world—through contracts, software choices, and power asymmetries—it shows that law isn’t the only thing that governs accountability. It also makes clear that designing fair AI liability rules requires understanding how AI is actually built and used.

What’s Missing?

While the report proposes a thoughtful baseline—fault-based liability with exceptions—it doesn’t fully tackle how enforcement should work in practice. For example, how should courts handle the technical complexity of AI systems? Who provides the necessary expert evidence? Also, while the report references different types of value chains, it’s still grounded primarily in high-resource, Western contexts. Broader global perspectives—especially from jurisdictions with less regulatory capacity—would enrich the framework. Finally, more empirical case studies would help test the theory against real-world harms.

Best For:

Policymakers looking to regulate AI accountability, legal scholars working on tech and tort law, and AI governance professionals seeking a conceptual map of how responsibility is distributed in complex AI systems. Especially useful for those engaging with EU AI Act implementation or liability reforms.

Source Details:

Beatriz Botero Arcila is a legal scholar and assistant professor at Sciences Po Law School in Paris. She is also a faculty associate at Harvard’s Berkman Klein Center and is known for her research on technology, governance, and law in global and Latin American contexts. Her interdisciplinary background gives the report a strong mix of conceptual depth and policy pragmatism. This work was supported by Mozilla and is published under a Creative Commons license (CC BY-SA 4.0), which invites reuse and adaptation. The report builds on comparative tort law literature (including the work of the European Group on Tort Law) and responds directly to current policy debates in the EU.

About the author
Jakub Szarmach
