What’s Covered?
This document is a curated snapshot of AI literacy practices submitted by companies that signed the EU’s AI Pact—a voluntary, non-binding commitment to prepare for upcoming AI Act obligations. It focuses specifically on Article 4 of the AI Act, which requires providers and deployers to ensure a “sufficient level of AI literacy” for people using AI systems on their behalf.
The repository covers 15 organizations, grouped by implementation status:
- Fully implemented practices (e.g., Generali, Telefónica, Booking.com)
- Partially rolled-out practices (e.g., INECO, Collibra, TIM)
- Planned practices (e.g., Milestone Systems)
Each case includes details such as:
- AI systems in use
- Target groups (from general staff to AI developers)
- Training formats (e-learning, internal academies, partnerships with universities)
- Customization based on user experience and context
- Metrics for impact: participation rates, skill gains, role evolution, satisfaction
- Challenges (e.g., keeping pace with AI innovation)
The review format makes it easy to spot patterns. Most literacy programs are tiered by user roles and technical familiarity, and many draw on internal or hybrid learning ecosystems (like Generali’s WeLearn + New Roles Schools). Others lean into external collaborations, especially for advanced technical training. Some have sector-specific flavors—for example, health and insurance firms train on use cases directly tied to compliance and risk.
While participation is voluntary and doesn’t prove AI Act compliance, the repository sets a helpful baseline for what “reasonable efforts” toward AI literacy might look like under the law.
💡 Why Does It Matter?
This is one of the first public glimpses into how companies are trying to turn a vague legal requirement (“sufficient AI literacy”) into concrete actions. With Article 4 applying to both providers and deployers, this isn’t just a compliance issue for engineers or lawyers. It’s an organizational readiness challenge. The examples here give companies a practical reference point—and a chance to learn from peers rather than reinventing the wheel.
What’s Missing?
The current version doesn’t cover smaller deployers, NGOs, or public sector bodies outside of INECO. That limits its usefulness for the broader ecosystem. It also skips analysis—there’s no synthesis of what “good” literacy looks like across industries, or what minimum thresholds the EU might expect. And while some metrics are mentioned, there’s no standard framework for measuring AI literacy gains, which could be key to compliance verification later on.
Another gap: there’s little discussion of bias, safety, or ethics literacy. Most programs focus on tooling, not judgment—which is surprising given how tightly Article 4 is linked to trust and responsible use.
Best For:
- Compliance teams preparing for Article 4 of the AI Act
- HR and L&D departments building internal AI training
- SMEs looking for scalable literacy models from larger peers
- Public sector organizations seeking inspiration for training programs
- Researchers and auditors mapping early AI governance practices
Source Details:
Full Citation:
Living Repository of AI Literacy Practices – v. 31.01.2025, published by the EU AI Office as part of the AI Pact initiative.
📍 Context: This repository supports Article 4 of the AI Act, which requires providers and deployers to ensure "sufficient AI literacy" among users, with attention to roles, technical familiarity, and application context.
👥 Curated By: EU AI Office, drawing from voluntary contributions by AI Pact signatories. Examples include major players like Generali, Booking.com, Telefónica, and smaller outfits like Studio Deussen.
✳️ Important Note: Participation in this repository and in the AI Pact is voluntary. Implementing these practices does not create a presumption of compliance with the AI Act, but the repository serves as a de facto benchmark for what regulators and peers are watching.