What’s Covered?
This policy brief, authored by Yasmin Afina and Giacomo Persi Paoli of UNIDIR’s Security and Technology Programme, sets the stage for a global discussion on responsible AI in the military context. It was prepared as input for the upcoming 2024 Responsible AI in the Military Domain (REAIM) summit and as a launchpad for the Roundtable for AI, Security and Ethics (RAISE).
The brief makes the case that military uses of AI, given their complexity and potential impact on international peace and stability, require governance approaches shaped by a broad range of stakeholders, not national governments alone.
The six priority areas outlined as foundational for future governance initiatives are:
- Building a Knowledge Base – improving understanding of military AI applications and their implications across sectors.
- Trust Building – fostering transparency and shared responsibility between governments, industry, and civil society.
- The Human Element – clarifying roles, oversight responsibilities, and ethical obligations in human-AI decision chains.
- Data Practices – focusing on security, integrity, and ethical use of data in defence systems.
- Life Cycle Management – developing standards and oversight throughout the AI system’s lifespan, from R&D to decommissioning.
- Destabilization Risks – addressing how AI can unintentionally escalate conflict, intensify geopolitical competition, or undermine arms control.
The brief underscores the benefits of inclusive governance, in which academic institutions, private sector developers, humanitarian actors, and technical experts all participate in shaping AI norms. This approach is presented as essential for legitimacy, for compliance with international law, especially international humanitarian law (IHL) and international human rights law (IHRL), and for building scalable governance mechanisms.
RAISE, co-convened by UNIDIR and Microsoft, is positioned as a long-term, neutral forum to facilitate this cross-sectoral conversation. The March 2024 inaugural edition focused specifically on military applications of AI, with the goal of generating shared recommendations that sidestep traditional silos and geopolitical rivalries.
💡 Why It Matters?
Military AI carries immense risks, from accidents and escalation to accountability gaps and civilian harm. Yet most innovation in this space comes from the private sector, not states, which makes traditional treaty-making harder. The brief pushes a multi-stakeholder model as a way to craft credible, practical governance strategies in which every actor, not just diplomats, has a seat at the table.
What’s Missing?
The brief outlines broad strategic priorities but stops short of offering concrete governance tools, indicators, or operational criteria. While it nods to principles like IHL and accountability, it doesn’t provide examples of best practices or existing efforts to guide implementation. Case studies—especially from the Global South or ongoing military AI projects—are also absent, which limits applicability for less-resourced governments or local civil society groups. There’s a lot of vision, but not yet a clear roadmap for action.
Best For:
Policymakers working on defence, peace, or AI policy; researchers tracking AI and international security; tech companies seeking responsible innovation pathways; and civil society actors aiming to influence military AI norms. It is especially relevant ahead of multilateral gatherings such as the REAIM summits or discussions under the Convention on Certain Conventional Weapons (CCW).
Source Details:
This policy brief was published by the United Nations Institute for Disarmament Research (UNIDIR), an independent think tank within the UN system known for its focus on disarmament, arms control, and international security. Authors Yasmin Afina and Giacomo Persi Paoli are affiliated with UNIDIR’s Security and Technology Programme, which studies the implications of emerging technologies for global security. The work is supported by multiple states—including Germany, the UK, and South Korea—as well as Microsoft. The authors contribute both strategic and technical expertise, drawing from ongoing multilateral policy efforts and prior disarmament research.