AI Governance Library

Understanding Responsibilities in AI Practices

This guidance from New South Wales outlines role-specific responsibilities for implementing responsible AI. It supports public agencies in assigning accountability using ISO-aligned frameworks and practical RACI structures. A useful anchor for everyday governance.

📘 What’s Covered

This document is part of the broader NSW AI Assessment Framework and focuses on one of the thorniest parts of responsible AI: who is actually responsible for what.

Rather than prescribing rules, it outlines a flexible accountability structure built around five role types:

  • Executives: Set strategic direction, sign off on initiatives, and hold ultimate accountability.
  • Managers: Operationalise governance and ensure compliance with ethical, security, and policy requirements.
  • Product Owners: Bridge the business and technical layers, embedding AI risks and controls into the delivery pipeline.
  • Users: Follow system guidance, flag performance issues, and report anomalies or harms.
  • Everyone: Maintain a shared culture of ethical AI use and participate in training and feedback loops.

The responsibilities are not framed as legal obligations but as practical behavioural expectations, grounded in ISO/IEC standards. They are also mapped to RACI matrices — which are particularly useful for larger agencies dealing with overlapping roles or shifting structures.
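The RACI mapping described above can be sketched in a few lines of code. The matrix below is a hypothetical illustration — the roles echo the guidance's role types, but the activities and assignments are invented, not drawn from the NSW document — together with a check for the core RACI invariant: every activity has exactly one Accountable role.

```python
# Hypothetical sketch of a RACI matrix as a simple mapping.
# Activities and assignments are illustrative, not from the NSW guidance.
# Codes: "R" Responsible, "A" Accountable, "C" Consulted, "I" Informed.

RACI = {
    "Approve AI initiative": {"Executive": "A", "Manager": "R", "Product Owner": "C", "User": "I"},
    "Embed risk controls":   {"Executive": "I", "Manager": "A", "Product Owner": "R", "User": "C"},
    "Report anomalies":      {"Executive": "I", "Manager": "C", "Product Owner": "A", "User": "R"},
}

def check_single_accountable(matrix):
    """Return activities that do not have exactly one Accountable ('A') role."""
    return [
        activity
        for activity, roles in matrix.items()
        if list(roles.values()).count("A") != 1
    ]

print(check_single_accountable(RACI))  # an empty list means the matrix is well-formed
```

A check like this is useful precisely in the situation the guidance flags — larger agencies with overlapping roles or shifting structures — where accountability can silently end up duplicated or unassigned.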

Agencies are encouraged to:

  • Integrate these responsibilities into performance plans;
  • Incorporate them into training and awareness programs;
  • Use them as scaffolding when designing or auditing AI governance structures.

The guidance fits squarely into a broader culture-change agenda: governance is not just compliance — it’s practice. This emphasis on embedding responsible AI through teams, structures, and behaviours is what sets it apart from more compliance-heavy checklists.

💡 Why It Matters

This guide does something many AI governance documents skip: it assigns roles across the full org chart — not just tech and legal. It reinforces that responsible AI isn’t the job of the ethics team or CDO alone. And it gives public sector teams a flexible, standards-based way to structure accountability before something goes wrong. Clear enough for frontline teams, mature enough for audit trails.

🔍 What’s Missing

While the role definitions are solid, the guidance stops short of providing specific examples — e.g., what does good look like in a use-of-force AI model versus a recruitment one? There’s also little advice on handling conflicts or trade-offs between departments. Additionally, performance measures are mentioned but not exemplified — a missed opportunity for benchmarking.

🎯 Best For

Designed for public sector AI leaders, digital ethics teams, and heads of service delivery. Also useful for HR, risk, and procurement officers embedding AI roles into governance frameworks. The tone and structure are especially suited to governments adopting ISO-aligned AI policies.

📚 Source Details

  • Title: AI Assessment Framework Guidance – Understanding Responsibilities in AI Practices
  • Published by: NSW Department of Customer Service
  • Date: December 2024
  • Standards alignment: ISO/IEC AI governance standards
  • Use case: Role assignment, performance planning, training
  • Document type: Official state government guidance
About the author
Jakub Szarmach
