AI Governance Library

Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs

This study digs into how AI ethics actually gets implemented inside tech companies. It shows how "ethics entrepreneurs," employees trying to integrate ethical practices from within, face structural hurdles, weak leadership support, and personal risk. Despite formal policies, outcomes rarely match the public commitments.

What’s Covered?

This is one of the most detailed empirical analyses yet of how AI ethics plays out behind closed doors in big tech. Drawing on 25 in-depth interviews and observational data, the authors uncover the internal friction that ethics workers face. They introduce the term "ethics entrepreneurs": individuals embedded in tech companies who try to push for responsible AI from within, often with little formal authority and at considerable personal cost.

The paper maps how these workers operate in an environment where public-facing ethics commitments (“talking the talk”) are often disconnected from actual implementation (“walking the walk”). It identifies four core barriers:

  • Ethics isn’t a priority: Launching products quickly dominates decision-making. If ethics slows things down, it gets sidelined.
  • Ethics is hard to measure: Teams are incentivized by metrics (users, revenue, speed), but fairness or harms can’t always be quantified.
  • Organizational churn disrupts ethics work: Frequent “reorgs” and shifting team structures make relationship-building — which is crucial — hard to sustain.
  • Ethics is personalized and risky: With weak structures in place, it’s up to individuals to speak up. Doing so can harm their careers — especially if they’re from underrepresented groups.

The authors apply neo-institutional theory — particularly “decoupling” (when formal rules diverge from actual practices) — to explain how ethics initiatives get symbolically adopted without real power. They also use the concept of institutional entrepreneurship to describe how ethics workers attempt to recouple values and actions through persuasion, framing, and coalition-building.

The findings are sobering but not nihilistic. While formalization is still limited, some companies are starting to implement impact assessments and governance checkpoints. But most change still depends on individual labor, not systemic reform.

💡 Why it matters?

This study gives us a rare and honest look at the invisible labor of AI ethics work in industry. It helps explain why tech firms keep making the same mistakes, and why diversity in AI teams matters beyond the checkbox: the risk of speaking up falls disproportionately on employees from underrepresented groups. For regulators, this offers evidence that internal ethics teams aren't enough on their own. For practitioners, it's a clear-eyed view of what pushing for change really involves.

What’s Missing?

The paper powerfully captures barriers and politics, but it doesn’t deeply explore what works. It documents frustration more than success, even though it hints at moments where early engagement or framing ethics as product quality had positive effects.

It also leaves out comparative data — how do companies with formalized ethics review processes perform differently? What organizational models (like independent ethics boards or regulatory-style audits) might reduce personal risk? There’s little discussion of global perspectives or small-to-mid-sized firms, where dynamics might vary.

Finally, the authors briefly note that regulation could help but don’t unpack which kinds of regulation would be most effective or least likely to backfire.

Best For:

This is a must-read for AI ethics professionals working in industry, especially those in responsible AI, trust & safety, or fairness roles. It’s also valuable for regulators, researchers, and digital rights groups who want to understand the limits of voluntary ethics inside corporations.

Source Details:

Title: Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs

Authors: Sanna J. Ali (Stanford), Andrew Smart (Google), Angèle Christin (Stanford), Riitta Katila (Stanford)

Published at: ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 2023

Context: First-hand accounts of the institutional frictions that make ethical AI hard to operationalize. Bridges academic theory with real-world tech dynamics. A critical resource for understanding why internal ethics teams so often struggle to make lasting change.

About the author
Jakub Szarmach

AI Governance Library

