What’s Covered?
This working paper, published by the Centre for International Governance Innovation (CIGI), tackles a niche but increasingly urgent governance challenge: how to manage AI adoption inside research institutions. The author makes a strong case for using international standards, rather than treaties or legislation, as the preferred first step in building global AI governance for the research sector.
The argument is built in six sections:
- Global AI Risks in Research – da Mota explains that academic institutions are deeply networked and vulnerable to AI misuse through data poisoning, overreliance on tools like LLMs, and content scraping. He warns that academic freedom makes these institutions hard to govern, creating a “low-governance, high-risk” space.
- Why Global Governance Is Needed – International cooperation is difficult but necessary. Because of their shared values and history of collaboration, research institutions may offer an easier entry point for global AI governance than other sectors.
- Critiques of Standards – Standards are neither apolitical nor foolproof: they can embed political values, serve corporate interests, or become tools of “ethics washing.” The paper argues, however, that these risks can be reduced if standards are built on stakeholder consensus, peer-review-based assessments, and transparency.
- How to Internationalize CAN/DGSI 128 – Canada’s draft standard for AI in research institutions could be submitted to the ISO or IEEE. The paper reviews the ISO process and recommends involving international research organizations (like IFLA, UNESCO, and the International Council on Archives) to balance neutrality and relevance.
- What Should Be in the Standard – Core features include strong privacy safeguards, alignment with open science values, data sovereignty protections, and a clear path for certification and peer-based conformity assessments using ISO 17029.
- Call to Action – da Mota calls on Canadian institutions, standard bodies, and the international research community to help finalize and scale this standard, making it a model for other governance efforts.
💡 Why It Matters
AI governance in research is often overlooked, even though research data underpins everything from medicine to climate science. This paper offers a concrete, international approach that sidesteps gridlocked politics and avoids over-regulation. If adopted, it could set a precedent for other domains where AI touches sensitive or highly collaborative environments.
What’s Missing?
While da Mota makes a compelling case for standardization, there is little discussion of economic or enforcement incentives: what motivates institutions to adopt voluntary standards, especially when they are under pressure to innovate quickly? The paper also does not explore in detail the tension between open science and institutional IP protection. And while peer-review-based conformity assessments sound ideal, questions about their cost, capacity, and accountability remain open.
Finally, the paper could do more to engage with broader AI governance developments, such as the OECD's forthcoming frameworks, UNESCO's AI guidance, and the G7's Hiroshima Process, rather than merely referencing them.
Best For:
Ideal for academic leaders, research librarians, standards professionals, and AI governance scholars seeking a practical model for cross-border coordination. It is also valuable for government bodies and international consortia (such as UNESCO or the OECD) interested in AI policy that does not hinge on big-tech dynamics.
Source Details:
Title: Standards as a Basis for the Global Governance of AI in Research
Author: Matthew da Mota, CIGI Digital Policy Hub Fellow (Summer 2024)
Published by: Centre for International Governance Innovation (CIGI)
Date: 2024
Context: da Mota leads the expert drafting committee behind Canada’s CAN/DGSI 128 standard for AI in research institutions. His paper draws on his broader work at CIGI’s Global AI Risks Initiative, and builds on earlier papers analyzing AI risks in academic ecosystems.