What’s Covered?
Titled “The Impact of Human-AI Interaction on Discrimination”, this large-scale study from the EU Policy Lab investigates how human professionals interact with AI-based Decision Support Systems (DSS) in high-stakes domains like hiring and credit lending.
Using a mixed-methods approach, the research team conducted quantitative experiments with 1,411 HR and banking professionals in Germany and Italy, followed by qualitative interviews, focus groups, and participatory design workshops with fair AI experts and EU policymakers.
Key features of the methodology:
- Two AI models tested: a generic accuracy-optimized DSS and a fairness-optimized DSS (the distinction is illustrated in the sketch after this list).
- Decision-makers: real-life professionals asked to evaluate job or loan applicants on the basis of profile attributes, including gender and nationality.
- Behavioral metrics: trust games, real-effort tasks, and ranking exercises to assess judgment patterns.
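The report doesn’t publish the two models’ code, so the snippet below is only a hypothetical illustration of the distinction, on synthetic data and in Python with scikit-learn: the “generic” model simply optimizes accuracy on biased historical labels, while the “fair” variant post-processes scores so that both groups are recommended at equal rates. The report’s actual fairness-optimization method may well differ.

```python
# Minimal sketch (not the study's actual models): an accuracy-optimized
# classifier vs. a simple fairness-adjusted variant that equalizes
# selection rates across a sensitive attribute.
# All data, groups, and the 0.6 bias term are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)       # hypothetical sensitive attribute (0/1)
skill = rng.normal(0, 1, n)
# Historical labels carry a group-correlated bias, as biased training data would.
y = (skill + 0.6 * group + rng.normal(0, 1, n) > 0).astype(int)
X = np.column_stack([skill, group])

# "Generic" DSS: optimizes accuracy on the biased labels, so it can
# reproduce the label bias in its recommendations.
generic = LogisticRegression().fit(X, y)
scores = generic.predict_proba(X)[:, 1]
generic_rec = (scores >= 0.5).astype(int)

# "Fair" DSS (one simple construction): group-specific thresholds chosen so
# both groups are recommended at the same overall rate (demographic parity).
target_rate = generic_rec.mean()
fair_rec = np.zeros(n, dtype=int)
for g in (0, 1):
    mask = group == g
    thresh = np.quantile(scores[mask], 1 - target_rate)
    fair_rec[mask] = (scores[mask] >= thresh).astype(int)

for name, rec in [("generic", generic_rec), ("fair", fair_rec)]:
    rates = [rec[group == g].mean() for g in (0, 1)]
    print(f"{name} DSS selection rates by group: {rates[0]:.2f} vs {rates[1]:.2f}")
```

Equalizing selection rates (demographic parity) is just one of several fairness criteria the study’s fairness-optimized DSS could plausibly have used; the point of the sketch is only that the two systems hand the human different recommendations for the same applicants.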
Main results:
- Participants followed both fair and biased AI advice at similar rates.
- The generic AI reinforced gender and nationality biases (e.g., favoring German men).
- The fair AI reduced discriminatory outcomes, but participants’ own biases still affected choices.
- Human oversight didn’t correct bias when the system was already flawed, and sometimes made outcomes worse.
- Oversight was often shaped by organizational priorities: respondents put company expectations ahead of their own ethical judgment.
Follow-up workshops with experts and regulators produced six themes for better oversight:
- Define and operationalize fairness across human and algorithmic levels.
- Establish override protocols grounded in rules, not just gut instinct (see the sketch after this list).
- Support mutual accountability between humans and machines.
- Implement feedback loops to adjust systems post-deployment.
- Train users and developers on bias detection and fairness logic.
- Translate findings into flexible, scenario-based regulatory guidance.
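The workshops stop at themes rather than tooling, but the “rules, not gut instinct” idea for overrides can be made concrete. Below is a hypothetical sketch of an override record that forces the reviewer to cite a predefined reason code and logs every override for later audit; the reason codes, field names, and structure are illustrative assumptions, not taken from the report.

```python
# Hypothetical sketch of a rule-grounded override protocol: a human may
# depart from the DSS recommendation only by citing a predefined reason
# code, and every override is logged so it can feed post-deployment audits.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative reason codes; a real deployment would define its own.
ALLOWED_REASONS = {
    "R1": "DSS input data known to be incomplete or outdated",
    "R2": "Applicant provided verifiable information the DSS cannot use",
    "R3": "Recommendation conflicts with a documented legal requirement",
}

@dataclass
class OverrideRecord:
    case_id: str
    dss_recommendation: str   # e.g. "reject"
    human_decision: str       # e.g. "accept"
    reason_code: str          # must be one of ALLOWED_REASONS
    free_text: str = ""       # optional elaboration, never sufficient alone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_override(log: list, rec: OverrideRecord) -> None:
    """Accept an override only if it cites a predefined reason code."""
    if rec.reason_code not in ALLOWED_REASONS:
        raise ValueError(
            f"Override rejected: '{rec.reason_code}' is not a recognised reason code"
        )
    log.append(rec)

# Usage: a decision that departs from the DSS must name its rule.
audit_log: list[OverrideRecord] = []
record_override(audit_log, OverrideRecord("case-042", "reject", "accept", "R2"))
```

Periodically reviewing which reason codes are actually invoked, and how often, is one way to close the post-deployment feedback loop the workshops call for.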
💡 Why it matters?
This report challenges a key assumption baked into the EU AI Act: that human oversight alone guarantees fairness. The findings show that effective oversight must be designed as a system, not left to individual judgment. Without clear override rules, feedback channels, and bias audits that include human decisions, AI fairness can’t be achieved just by inserting a human into the loop.
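As a concrete reading of “bias audits that include human decisions”: an audit shouldn’t stop at the model’s outputs but should compare group-level outcomes before and after the human steps in. A minimal sketch follows, with hypothetical data and group labels that are not from the report.

```python
# Minimal sketch of a bias audit covering the whole human-in-the-loop
# pipeline: compare group selection rates for the AI recommendation and
# for the final human decision. Data and group labels are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":          ["A", "A", "A", "B", "B", "B", "B", "A"],
    "ai_recommend":   [1,   0,   1,   0,   0,   1,   0,   1],
    "final_decision": [1,   0,   1,   0,   0,   0,   0,   1],
})

def selection_rates(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of positive outcomes per group for the given decision column."""
    return df.groupby("group")[column].mean()

ai_rates = selection_rates(decisions, "ai_recommend")
human_rates = selection_rates(decisions, "final_decision")

# Disparate-impact style ratio: min group rate / max group rate (1.0 = parity).
print("AI recommendation parity ratio:", round(ai_rates.min() / ai_rates.max(), 2))
print("Final decision parity ratio:   ", round(human_rates.min() / human_rates.max(), 2))
```

In this toy example the human step widens the gap between groups, which is exactly the pattern the study observed; an audit that only looks at the model’s recommendations would miss it.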
What’s Missing?
While the study is rigorous, it doesn’t include cross-sector comparisons, and its jurisdictional scope doesn’t extend beyond Germany and Italy. There’s limited attention to how differences in organizational culture, legal requirements, or AI literacy might influence outcomes in other settings. And while the qualitative insights are rich, a deeper analysis of power dynamics (how institutional pressure and incentives shape oversight) would round out the picture. The report offers recommendations but stops short of proposing binding governance models or tools for implementation.
Best For:
EU policymakers shaping post-AI Act guidance, compliance officers in high-risk sectors (HR, finance, healthcare), AI system designers, and researchers studying bias, fairness, and socio-technical systems. Also useful for civil society groups pushing for more accountable AI in public or regulated spaces.
Source Details:
This report was authored by Alexia Gaudeul, Ottla Arrigoni, Vasiliki Charisi, Marina Escobar-Planas, and Isabelle Hupont from the Joint Research Centre (JRC) of the European Commission, under the EU Policy Lab. It forms part of the JRC’s social science-focused workstreams on AI, particularly under the Collaborative Doctoral Partnership Agreement No. 35500. The JRC combines behavioral science, foresight, and policy design to inform EU-wide AI governance. Its participatory design approach and collaboration with policymakers ensure that findings feed directly into the operationalization of the EU AI Act, particularly Article 14 on human oversight and Article 10 on bias mitigation.