What’s Covered?
The FAIRA framework is structured into three main sections: Part A (Components Analysis), Part B (Values Assessment), and Part C (Controls for AI Risks). It guides public sector teams through a granular breakdown of AI solutions, including technical design, data sources, human-machine interaction, legal boundaries, and system outputs. It identifies 12 domains of risk—from human rights and information privacy to procurement and corruption—and prompts agencies to map their AI solution against these.
Part A focuses on mapping the AI system across nine tables, asking questions like: what does the AI do, how is it trained, what data is used, who is impacted, and how are outputs monitored? Part B then evaluates these answers using the National Framework for the Assurance of AI in Government (NFAAIG) principles: human wellbeing, human-centred values, fairness, privacy and security, reliability, transparency, contestability, and accountability. Part C offers a practical table of actions for different business functions (executive, management, technical, policy, engagement) to mitigate risk through training, evaluation, transparency, and stakeholder communication.
Document Contents:
– Instructions and Use Context
– Background: Policy and Governance Basis
– Domains of Risk (12 thematic areas)
– Part A:
• Table 1: AI solution description
• Table 2: Human-machine interface
• Table 3: AI use inputs
• Table 4: AI use outputs
• Table 5: Object of AI action
• Table 6: Design inputs
• Table 7: Sector and context
• Table 8: Broader governance
• Table 9: Monitoring and evaluation
– Part B: Values Assessment (based on Australia’s 8 AI Ethics Principles)
– Part C: Controls for AI Risks (practical responsibilities per business function)
– Licensing and document history
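FAIRA itself is a document template, not software, but teams sometimes find it easier to picture the three-part structure as a data model. The sketch below is purely illustrative: the class and field names are hypothetical paraphrases of Parts A–C, not anything defined by the framework.

```python
from dataclasses import dataclass, field

# Illustrative sketch only — FAIRA is a document template, not software.
# Names below are hypothetical paraphrases of the framework's parts.

# The eight NFAAIG values evaluated in Part B.
NFAAIG_PRINCIPLES = [
    "human wellbeing", "human-centred values", "fairness",
    "privacy and security", "reliability", "transparency",
    "contestability", "accountability",
]

@dataclass
class ComponentTable:
    """Part A: one of the nine component-mapping tables."""
    name: str                                   # e.g. "AI solution description"
    responses: dict[str, str] = field(default_factory=dict)

@dataclass
class Control:
    """Part C: a risk-mitigation action assigned to a business function."""
    business_function: str   # executive, management, technical, policy, engagement
    action: str

@dataclass
class FairaAssessment:
    part_a: list[ComponentTable] = field(default_factory=list)
    part_b: dict[str, str] = field(default_factory=dict)  # principle -> evaluation
    part_c: list[Control] = field(default_factory=list)

    def unassessed_principles(self) -> list[str]:
        """NFAAIG principles not yet evaluated in Part B."""
        return [p for p in NFAAIG_PRINCIPLES if p not in self.part_b]

# Usage: a partially completed assessment flags what Part B still owes.
assessment = FairaAssessment(part_b={"fairness": "bias audit planned"})
print(assessment.unassessed_principles())  # the seven remaining principles
```

The point of the sketch is simply that Parts A, B, and C are distinct artefacts with different owners, and that an assessment is incomplete until every NFAAIG value has a documented evaluation.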
Why It Matters?
FAIRA operationalises high-level ethical AI guidance by embedding it into day-to-day government workflows. It assigns accountability across teams and prompts them to identify risks before systems go live. With its structured templates, it supports transparency, safeguards rights, and keeps AI aligned with public interest duties.
What’s Missing?
FAIRA is comprehensive but assumes a relatively high level of institutional capacity. There is no worked example of a completed assessment, which would help teams understand what is expected. It also does not cover private sector applications or cross-jurisdictional use in detail, limiting its portability beyond Queensland's public sector.
Best For:
FAIRA is built for public sector executives, ICT governance teams, policy advisors, and AI project leads working in Australian government settings. It’s particularly helpful for those navigating procurement, legal compliance, or assurance requirements tied to the NFAAIG.
Source Details:
Queensland Government Customer and Digital Group, Department of Transport and Main Roads (2024). Foundational Artificial Intelligence Risk Assessment Framework (FAIRA), Version 1.0.0. Finalised in September 2024 under the Queensland Government Enterprise Architecture (QGEA) policy team. Developed in line with Australia's National Framework for the Assurance of AI in Government, the document was prepared with contributions from the AI Unit and key departmental stakeholders. No named authors are listed, but the policy team comprises specialists in ICT governance, risk, and ethical AI implementation across public sector contexts.