What’s Covered?
The report draws on two decades of international cyber policy to propose actionable strategies for the global governance of AI-driven cyber risks, especially those with adversarial intent. It argues that although AI has not yet changed the nature of cyber risks, it intensifies their speed, volume, and reach. The key point: existing cybersecurity frameworks can, and should, be adapted for AI use cases rather than reinventing the governance wheel.

The report begins with a detailed landscape scan of current AI governance efforts, highlighting overlaps, gaps, and instruments ranging from the binding EU AI Act to non-binding mechanisms such as the UNESCO Recommendation on the Ethics of AI and the G7 Hiroshima Process. Drawing parallels to the cyber domain, it identifies successes like the Budapest Convention and the UN GGE norms, as well as challenges like multilateral slowdowns, weak enforcement, and normative fragmentation.

Section 2 introduces a five-step methodology for assessing whether new AI-specific frameworks are needed or existing cybersecurity structures can be adapted. This pragmatic, “use-based” approach emphasizes accountability, transparency, and developer disclosure. The report calls for broader mandatory incident reporting, robust cyber defense integration, and expanded investment in AI-aligned cyber capacity. Notably, it warns of risks tied to agentic AI and open-source misuse, urging anticipatory action even though no genuinely “novel” AI-enabled cyberattacks have yet been observed. Its final recommendations include strengthening global coordination, aligning funding tools, and scaling up R&D in AI-powered cyber defense.

Document structure:
– Executive Summary
– Purpose, Audience, Scope, and Development Process
– Section 1: Shaping AI Risk Governance via Cyber Lessons
– Section 2: Scalable Model for Tackling AI Cyber Risks
– Concluding Highlights: Regulation, Developer Transparency, and Cyber Defense
– Visualizations and survey insights from 50+ stakeholders across 20 countries
💡 Why It Matters
This is one of the first comprehensive efforts to link cyber policy frameworks directly to AI risk governance. It offers a grounded, institution-aware path forward at a time when many proposals are still abstract. By focusing on real mechanisms, such as CERTs, OEWG norms, and developer disclosure, it gives AI governance efforts a head start toward global interoperability, accountability, and resilience.
What’s Missing?
The report doesn’t explore regional differences in enforcement capacity or legal culture, which could impact its proposed scaling model. There’s also little treatment of Global South perspectives beyond participation metrics. The recommended five-step framework is a helpful starting point but lacks specificity on how political will, power asymmetries, or non-cooperation would be handled in practice. Its call for developer transparency, while vital, doesn’t address competitive disincentives or liability barriers to disclosure.
Best For:
Policy teams working on AI risk regulation, cybersecurity leaders, intergovernmental organizations, standards bodies, and legal advisors seeking a bridge between AI governance and existing cyber norms. It’s also useful for research institutions drafting operational governance proposals.
Source Details:
Forging Global Cooperation on AI Risks: Cyber Policy as a Governance Blueprint. Paris Peace Forum, Strategic Foresight Hub, February 2025. Lead author: Pablo Rice, Head of Cyber & Emerging Tech Governance at the Paris Peace Forum. Contributors include experts from UC Berkeley, Georgetown CSET, Microsoft, Fortinet, Google Cloud/Mandiant, INTERPOL, MIT FutureTech, and others. The report draws on the Paris Call community and was developed through roundtables and workshops held between 2023 and 2025. Pablo Rice is known for cross-sector work on digital cooperation and strategic governance. Contributors such as Krystal Jackson (UC Berkeley), Giacomo Persi Paoli (UNIDIR), and Nicholas Butts (Microsoft) bring a mix of academic, intergovernmental, and industry expertise, reflecting a deeply multistakeholder approach.