
Governance in the Age of Generative AI: A 360º Approach for Resilient Policy and Regulation

The report lays out a three-pillar governance framework to help regulators handle generative AI: build on existing rules, shape inclusive and cross-sector practices now, and prepare for future disruptions through foresight, agility, and international cooperation.

What’s Covered?

The paper argues that resilient generative AI governance needs to go beyond regulation—it must build on what already exists, involve all relevant players, and stay flexible enough to adapt. It’s structured around a 360º framework with three pillars: Harness the past, Build the present, and Plan the future.

In Pillar 1, the report pushes policymakers to evaluate and adapt existing laws before writing new ones. It addresses how generative AI complicates privacy, copyright, liability, and competition. It also calls for clarity in responsibility allocation across the AI lifecycle and a reassessment of the enforcement capacities of current regulators. A key tension is flagged: should enforcement sit with a central AI authority, or be distributed across sectoral regulators?

Pillar 2 focuses on building multistakeholder governance. It identifies specific pain points and practical steps for industry, academia, and civil society. For businesses, it encourages incentives (such as tax credits and procurement benefits) to support responsible AI. For academia, it highlights the need for compute access and R&D funding. For civil society organizations (CSOs), it stresses early inclusion and transparency. The report also highlights risks to children, especially from overexposure to AI systems, and calls for specific safeguards.

Pillar 3 shifts to forward-looking tools: horizon scanning, strategic foresight, and agile regulation. It flags threats from frontier AI (like multi-agent systems and emotional entanglement), warns of synthetic data feedback loops, and explores AI’s convergence with quantum computing, synthetic biology, and neurotech. The paper makes a strong case for international collaboration to avoid fragmentation—especially urging richer countries to support AI capacity-building in lower-resource nations.

This document is packed with tables and examples that outline not just problems but also possible responses—always grounded in existing laws, real-world pilot projects, or ongoing initiatives. It ends with a call for harmonized global approaches that are inclusive, transparent, and sensitive to power imbalances.

💡 Why it matters?

This is one of the most practically useful frameworks out there for policymakers wrestling with generative AI. It sidesteps abstract principles and digs into the granular work of governance—identifying which actors should do what, where gaps exist, and how to fund and structure responsible AI capacity. It’s especially relevant in 2025, as governments move from strategy to enforcement.

What’s Missing?

Despite a strong structure, the paper doesn’t give enough attention to institutional accountability: how to measure whether regulators are actually keeping up, or whether governance tools are working in practice. There’s also limited focus on worker protection, especially where AI drives job automation or enables workplace surveillance. The report leans on incentives but is lighter on enforcement. Finally, while the paper mentions human rights frameworks, it doesn’t embed them deeply into the proposed governance instruments.

Best For:

Policy teams in government, regulators, and global institutions trying to build or refine AI rules. Also helpful for CSOs and academics looking to understand where they can plug into policy design and what’s coming next.

Source Details:

Citation: World Economic Forum & Accenture. Governance in the Age of Generative AI: A 360º Approach for Resilient Policy and Regulation. October 2024.

Lead authors: Rafi Lazerson, Manal Siddiqui, Karla Yee Amezaga (Accenture & WEF), supported by over 100 contributors from industry, academia, civil society, and governments. Notable voices include Gary Marcus, Renée Cummings, Nita Farahany, and officials from UNICEF, Meta, Salesforce, EU Commission, and more.

Author context: This paper builds on the AI Governance Alliance Briefing Paper Series and reflects the work of WEF’s Resilient Governance and Regulation working group. Accenture supported the project as knowledge partner, and contributors range from corporate executives to UN advisers to grassroots CSO leaders—bringing legal, technical, ethical, and geopolitical insights.

About the author

Jakub Szarmach
AI Governance Library, a curated library of AI governance resources

