AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models

This resource is a versioned profile published by UC Berkeley’s Center for Long-Term Cybersecurity (CLTC). It provides targeted risk-management guidance for developers of general-purpose AI (GPAI) and foundation models, aligned with the NIST AI Risk Management Framework (AI RMF).

What’s Covered?

The profile is designed to guide developers of large-scale GPAI and foundation models, such as GPT-4 or Claude 3, through practical application of the NIST AI RMF. It bridges high-level risk principles and concrete practices suited to the distinctive architecture, scale, and use-case diversity of foundation models.

The document builds on input from the NIST Generative AI Public Working Group and adapts earlier guidance from the NIST AI RMF Playbook and previous Berkeley publications. It addresses both upstream developers and downstream integrators. Key topics include governance, system architecture, misuse risks, and trustworthy AI characteristics, with attention to specific harms such as manipulation, bias, misinformation, and catastrophic misuse.

The document also introduces the concept of the “foundation model frontier,” referencing thresholds for compute and model capabilities. This framing is especially relevant in the context of US Executive Order 14110 and the EU AI Act.
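To make the frontier concept concrete: such thresholds are typically expressed in training compute. The minimal sketch below, which is not from the profile itself, estimates training FLOPs using the widely cited heuristic of roughly 6 FLOPs per parameter per training token and compares the result to the 10^26-operation reporting threshold in Executive Order 14110. The model figures are hypothetical.

```python
# Minimal sketch (not from the profile): estimate training compute with the
# common heuristic C ~ 6 * N * D (N = parameters, D = training tokens) and
# compare it to the 1e26-operation reporting threshold cited in EO 14110.

EO_14110_THRESHOLD_FLOPS = 1e26  # reporting threshold for dual-use foundation models

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def crosses_frontier_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the EO 14110 reporting threshold."""
    return estimated_training_flops(n_params, n_tokens) >= EO_14110_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.3e24
    print(f"Crosses EO 14110 threshold: {crosses_frontier_threshold(70e9, 15e12)}")
```

Actual threshold definitions, in the profile as in the EU AI Act, also weigh capability benchmarks and other criteria; the compute check above is only the simplest case.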

The profile includes:

• Abstract

• Introduction and Objectives

• Definitions of Key Terms (e.g., GPAI, foundation model, GPAIS, frontier models)

• Intended Audience and Use Cases

• Core Risk Themes and Categories

• Adapted Measures from NIST AI RMF Playbook

• Mapping of Measures to AI Trustworthiness Characteristics (e.g., safety, transparency, fairness)

• Appendices on future updates and standards alignment

Why It Matters?

This profile turns abstract governance goals—like safety, accountability, and proportionality—into a working set of measures specifically tuned to general-purpose and foundation model use. It gives structure to a high-risk part of the ecosystem that’s still often handled ad hoc or with inconsistent standards.

What’s Missing?

The profile leans heavily on voluntary adoption and doesn’t address how implementation could be externally verified. While focused on developers, it only briefly addresses how end-users or impacted communities might evaluate or influence compliance. Enforcement, market incentives, and cross-jurisdictional challenges are left out.

Best For:

The profile is especially useful for technical leads, compliance officers, and AI governance advisors working on or with frontier models. It also serves as a translation tool between policy expectations and development practices.

Source Details:

Barrett, A.M., Newman, J., Nonnecke, B., Madkour, N., Hendrycks, D., Murphy, E.R., Jackson, K., & Raman, D. (2025). AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models, Version 1.1. Center for Long-Term Cybersecurity, UC Berkeley.

Anthony M. Barrett and Jessica Newman are affiliated with the AI Security Initiative at UC Berkeley.

Brandie Nonnecke directs the CITRIS Policy Lab and holds a faculty appointment at the Goldman School of Public Policy.

Dan Hendrycks is part of the Berkeley AI Research Lab.

Each contributor brings policy, technical, or interdisciplinary expertise to the intersection of AI governance and real-world deployment risks.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources
