AI Governance Library

A Taxonomy of Trustworthiness for Artificial Intelligence

What does it actually mean for an AI system to be trustworthy? That’s the question this report tackles, offering one of the most detailed attempts so far to break down “trustworthy AI” into 150 actionable properties across the AI lifecycle. The taxonomy isn’t just a list: each property is tied to a specific lifecycle stage and comes with guiding questions.

What’s Covered?

The report builds directly on the NIST AI Risk Management Framework (AI RMF) and expands its categories—validity, safety, security, accountability, explainability, privacy, fairness—into a working inventory of questions and practices. To this foundation, the author adds an eighth category: Responsible Practice and Use, emphasizing AI’s social and organizational dimensions. Each property in the taxonomy is matched to a specific lifecycle stage, from business case design to monitoring, and is accompanied by guiding questions.
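To make this structure concrete, here is a minimal sketch of how a team might encode a taxonomy entry for internal use, pairing a trust dimension with a lifecycle stage and its guiding questions. The field names, sample entry, and helper below are illustrative assumptions, not a format prescribed by the report.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of one taxonomy entry; field names and the sample
# question are illustrative, not taken verbatim from the report.
@dataclass
class TrustProperty:
    dimension: str          # one of the eight trust dimensions, e.g. "Privacy"
    lifecycle_stage: str    # e.g. "Business case design" or "Monitoring"
    property_name: str      # short label for the property
    guiding_questions: list[str] = field(default_factory=list)

# Example entry a team might add to an internal checklist (illustrative only)
example = TrustProperty(
    dimension="Responsible Practice and Use",
    lifecycle_stage="Monitoring",
    property_name="Incident reporting process",
    guiding_questions=[
        "Is there a defined channel for reporting harms observed in deployment?",
    ],
)

# Filtering the inventory by lifecycle stage keeps a review scoped to the work at hand
def properties_for_stage(inventory: list[TrustProperty], stage: str) -> list[TrustProperty]:
    return [p for p in inventory if p.lifecycle_stage == stage]
```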

Document Highlights:

  • Intro & Framing: Identifies the fragmentation in how “trustworthy AI” is interpreted across sectors.
  • Comparative Analysis: Summarizes existing frameworks like ALTAI, the AI Act, the AI Bill of Rights, and the NIST RMF.
  • Taxonomy Structure: Maps 150 trustworthiness properties across seven AI lifecycle stages and eight trust dimensions.
  • Human-AI Engagement Spectrum: Recognizes that not all systems are human-facing and adjusts the taxonomy accordingly.
  • Use Cases: Designed to support developers, teams writing model/system cards, policymakers, and auditors.

💡 Why it matters?

This paper stands out by translating high-level goals into specific, operational properties that can guide real development and assessment work. Instead of offering yet another set of values or guidelines, it asks: What does it actually mean to be trustworthy, here, now, in this part of the process? And it tries to answer that with targeted prompts and context-aware framing. The inclusion of “Responsible Practice and Use” is a valuable reminder that trust doesn’t only live in code or models—it’s about people and organizations too.

What’s Missing?

This taxonomy is not meant to be a checklist—and the author is clear about that. But users unfamiliar with the NIST RMF or new to lifecycle-based governance might need additional scaffolding to make use of it. There’s also little discussion of how to prioritize among the 150 properties, or how resource-constrained teams might work with them. And although the taxonomy is designed to support diverse AI systems (not just human-facing ones), many properties still skew toward systems that affect individuals.

More practical guidance or toolkits based on this taxonomy would be a welcome next step.

Best For:

  • AI teams developing internal trust checklists or model documentation templates
  • Standards-setting bodies and regulators aligning with the NIST RMF
  • Auditors and independent reviewers needing a comprehensive question set
  • Policy researchers mapping trust and safety dimensions across AI use cases

Source Details:

Jessica Newman is Director of the AI Security Initiative at UC Berkeley’s Center for Long-Term Cybersecurity (CLTC), where she works at the intersection of cybersecurity, responsible AI, and governance. This white paper was informed by a 2022 expert workshop co-hosted with Intel and builds on dozens of interviews, policy sources, and prior toolkits. The document is part of CLTC’s white paper series and is available under open access.

About the author
Jakub Szarmach
