Breaking Ground in AI Fairness: A Glimpse into the Lawtomation Days

In a dynamic intersection of law and technology, Carlotta Rigotti and Eduard Fosch-Villaronga recently took the stage at the second edition of the “IE Lawtomation Days” conference on September 29, 2023, hosted by IE University. This conference, dedicated to dissecting the evolving legal landscape of automated decision-making and Artificial Intelligence (AI), served as a critical rendezvous for professionals, academics, and researchers worldwide.

The BIAS Journey Towards Fairness

Through a comprehensive scoping literature review and qualitative research spanning gender studies, labor studies, law, and computer science, Carlotta and Eduard engaged in a critical examination of the current state-of-the-art concerning fairness in AI applications for recruitment and selection.

Defining and implementing fairness in the realm of AI proves to be a complex undertaking. As their presentation unfolded, a myriad of definitions and classifications emerged, each shaped by its authors' background and perspective. For instance, within the context of recruitment and selection processes, the concept of fairness becomes entangled with the fundamental question of "who is a good employer?". Tellingly, the answer varies depending on whether you ask the employee or someone else. This divergence underscores how difficult it is to pin down fairness in the ever-evolving landscape of AI-driven decision-making.

The ongoing discourse on fairness also brought to light nuanced perspectives within legal frameworks. Non-discrimination law and data protection law, considered the primary instruments to curb diversity biases in AI applications for recruitment and selection, approach fairness from different angles. Data protection scholars emphasize the need to address the asymmetry of power between the data subject and the data controller. Anti-discrimination law, for its part, holds that individuals in comparable situations should not receive less favorable treatment because of a particular characteristic. At the same time, it considers positive discrimination in favor of socially marginalized groups to be fair.

Against the backdrop of these diverse perspectives, Carlotta and Eduard brought attention to procedural fairness, establishing a crucial bridge between legal, ethical, and social requirements and the technical methodologies that form the foundation of AI systems. This interdisciplinary approach could forge new pathways to foster a balanced relationship between the law and alternative measures in the definition and design of trustworthy AI systems.

Unveiling the Debiaser

As the conference offered a melting pot of ideas and a valuable venue for disseminating the BIAS project, Carlotta and Eduard touched upon the innovative and trustworthy technology that the project Consortium is expected to develop. Based on extensive consultation and co-creation, the Debiaser will identify and mitigate diversity bias in AI applications for recruitment and selection. Not only will it align with applicable laws, but it also holds the potential to contribute to the fight against discrimination in the labor market, where conventional anti-discrimination and data protection laws may fall short.

Planned as one of the final outcomes of the BIAS project, the Debiaser represents a tangible step toward bridging the gap between theoretical underpinnings of fairness and its real-world implementation.

Towards a Trustworthy Tomorrow

The Lawtomation Days marked a significant chapter in the ongoing quest for fair and trustworthy AI systems. Carlotta Rigotti and Eduard Fosch-Villaronga's presentation not only propelled the BIAS project forward but also illustrated how multidisciplinary research can shape the future of trustworthy technology.

Stay tuned for more updates as we journey towards a fair tomorrow!