What’s Covered?
The brief acknowledges that both institutions are tasked with overlapping responsibilities — especially around monitoring risks from general-purpose AI, developing evaluation tools, and engaging with global AI safety initiatives. Yet they operate in different legal, strategic, and political environments.
It offers a pragmatic engagement model:
- Collaboration is suggested for shared standards, joint participation in international forums, and joint research (e.g., common evaluations or talent development).
- Coordination fits evaluation design and execution, including distributing responsibilities, sharing tools, and creating common performance benchmarks.
- Communication is key for non-sensitive areas like incident reporting and risk trend analysis, particularly where definitions and reporting structures may differ.
- Separation is necessary for proprietary or security-sensitive activities, differing risk thresholds, and politically sensitive international alignments (e.g., UK-US bilateral ties vs. EU multilateralism).
The brief backs these proposals with a matrix mapping engagement type to activity (e.g., shared standards = collaboration; CBRN evaluations = separation) and a comparative table of institutional functions. It also gives concrete examples from each side — e.g., the UK’s early evaluation work, the EU’s multi-stakeholder code of practice, or past joint involvement in forums like the G7 and the AISI network.
💡 Why It Matters?
There’s growing institutional sprawl in global AI governance. This piece helps cut through the noise by offering a well-reasoned model for bilateral cooperation between two of the most active public AI governance bodies. It’s especially timely as both institutions scale their activities and define their global identities in parallel.
What’s Missing?
The document is rich in principles but light on operational pathways. There's no roadmap for initiating joint programs, resolving legal incompatibilities (e.g., GDPR vs. the UK privacy regime), or addressing resource asymmetries. It also underplays the risk that political divergence, say over AI export controls or state access to models, could overshadow the cooperation agenda.
Best For:
Policymakers in Brussels and London, international AI policy analysts, governance researchers, and anyone designing coordination architectures for national AI safety institutes.
Source Details:
Published March 2025. Authored by a team of researchers and policy experts affiliated with the Future of Life Institute, Oxford, Chatham House, and CeSIA. Key contributors include Risto Uuk (Future of Life Institute, EU AI Act analyst), Marta Ziosi (Oxford Martin, AI standards expert), and Charles Martinet (CeSIA, GPAI expert). The work reflects ongoing efforts to map functional interoperability across jurisdictions and offers a model extendable to the broader AISI network.