ULEID's Participation in the AI Lund Fika-to-Fika Workshop on High-Risk AI in the EU

On September 26, 2023, Carlotta Rigotti of ULEID, a partner in the BIAS consortium, took part in the AI Lund Fika-to-Fika Workshop on Regulating High-Risk AI in the EU, hosted by Lund University. The workshop responded to the European Commission’s proposed AI Act, which aims to harmonize the rules governing Artificial Intelligence (AI) across the EU. Its primary goal was to bring together legal experts to explore the intricacies of ‘high-risk’ AI systems and to shed light on the regulations set out in the AI Act ahead of its implementation.

The AI Act is founded on the recognition that AI systems pose new risks and potential negative consequences for individuals and society, including manipulative, exploitative, and socially controlling practices. To address these risks, the Act categorizes AI systems into four distinct groups based on their design, associated risk levels, and corresponding accountability obligations. High-risk AI systems, detailed in Annexes II and III, encompass diverse applications such as toys, medical devices, border control management, and AI systems for recruitment and selection. For high-risk systems, the Act mandates a comprehensive certification regime covering data governance, technical documentation, transparency, human oversight, and robustness.

In her presentation, Carlotta explained the legal rationale for designating AI applications for recruitment and selection as high-risk. According to Recital 36 of the AI Act, these applications can significantly affect individuals’ dignity, autonomy, and well-being, perpetuating social subordination and infringing on privacy and data protection rights. Consistent with these concerns, preliminary findings from the BIAS project highlight risks of data protection violations and social discrimination in AI-driven hiring processes, while issues of transparency and explicability compound these challenges and hinder accountability.

In response to these challenges, Carlotta turned to a key research objective of the BIAS project: the Debiaser. This innovative technology, built on natural language processing (NLP) and case-based reasoning (CBR), seeks to identify and mitigate diversity bias in the hiring process. The Debiaser is being designed to meet the AI Act’s requirements for high-risk AI systems. To that end, the ULEID team will apply the Assessment List for Trustworthy AI (ALTAI) in collaboration with BFH and NTNU, the partners responsible for the Debiaser’s design. Beyond mere legal compliance, the Debiaser could serve as a proactive risk management measure, helping HR practitioners uphold fundamental rights in the labor market.
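For readers curious how these two techniques can interact in principle, the following minimal Python sketch pairs a toy NLP step (flagging gender-coded wording in a job ad) with a toy CBR step (retrieving the most similar past case and its recorded outcome). The term list, case base, and function names are illustrative assumptions, not the BIAS project’s actual design.

```python
# Illustrative sketch only: a toy pairing of NLP-based term flagging and
# case-based reasoning (CBR) retrieval. The lexicon, case base, and names
# are hypothetical examples, not the Debiaser's actual implementation.

import math
from collections import Counter

# Hypothetical lexicon of gender-coded terms (toy example).
GENDER_CODED_TERMS = {"aggressive", "dominant", "nurturing", "rockstar", "ninja"}

def flag_biased_terms(text: str) -> list[str]:
    """NLP step: return any gender-coded terms found in the text."""
    tokens = text.lower().split()
    return sorted(set(tokens) & GENDER_CODED_TERMS)

def bow_vector(text: str) -> Counter:
    """Bag-of-words vector for crude text similarity."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# CBR step: a tiny case base of past job ads with recorded review outcomes.
CASE_BASE = [
    ("we need an aggressive rockstar developer", "flagged: masculine-coded wording"),
    ("seeking a collaborative, detail-oriented engineer", "approved"),
]

def retrieve_similar_case(query: str) -> tuple[str, str, float]:
    """Return the most similar past case, its outcome, and the similarity."""
    qv = bow_vector(query)
    best_text, best_outcome = max(
        CASE_BASE, key=lambda case: cosine(qv, bow_vector(case[0]))
    )
    return best_text, best_outcome, cosine(qv, bow_vector(best_text))

if __name__ == "__main__":
    ad = "we want a dominant rockstar engineer"
    print("Flagged terms:", flag_biased_terms(ad))
    case, outcome, score = retrieve_similar_case(ad)
    print(f"Nearest past case ({score:.2f}): {case!r} -> {outcome}")
```

In a real system, the lexicon would give way to learned language models and the case base to documented hiring decisions, but the division of labor sketched here, detection via language analysis and mitigation informed by comparable past cases, mirrors the NLP-plus-CBR pairing described above.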

In conclusion, Carlotta’s presentation offered a critical perspective within the context of the EU’s ambitious efforts to foster human-centric, trustworthy technological development aligned with fundamental rights. Extending beyond her specific focus, the workshop provided a dynamic platform for diverse stakeholders to engage in meaningful dialogue about the future of AI regulation in the EU. As the AI Act advances toward implementation, the insights shared during the workshop are well placed to shape the landscape of high-risk AI applications within the European single market.