The BIAS project at the JSAI Symposium ‘24

On 28 and 29 May 2024, Carlotta Rigotti from ULEID led a workshop addressing fairness and diversity bias in AI-driven recruitment during the annual symposium of the Japanese Society for Artificial Intelligence (JSAI), held in Hamamatsu (Japan). Organized within the framework of the BIAS project in collaboration with Eduard Fosch-Villaronga (ULEID), the workshop provided a cutting-edge platform for scholars and experts to discuss the ambitions and limitations of AI applications in the labour market, with a specific focus on recruitment and selection processes.

Titled “The Ambitions and Limitations of AI-driven Recruitment and Selection: Unfolding Fairness and Diversity Bias,” the workshop attracted substantial attention. Following a call for proposals issued before Christmas, five papers were selected for presentation in February 2024. Impressively, two of these papers will also appear in the conference proceedings, soon to be published by Springer.

After months of preparation and revisions, Carlotta and the speakers gathered in Hamamatsu for the two-day workshop. The first day began with Carlotta’s keynote speech on the BIAS project, outlining its objectives and the interdisciplinary composition of its Consortium, and presenting research findings on the multifaceted definition of fairness in AI systems within the labour market. The keynote also offered an ideal opportunity to introduce the latest publication authored by Carlotta and Eduard on this subject, recently featured in the Computer Law & Security Review.

The series of presentations began with Muhammad Jibril and Theresia Averina Florentina from Universitas Gadjah Mada, who presented their paper titled “Governing AI in Hiring: An Effort to Eliminate Biased Decision”. They emphasized the need for policymakers to consider various facets, such as AI tool definitions, the extent of AI usage, permissible human involvement in the employment process, employment scope, and compliance measures, when regulating AI in employment.

Subsequently, Ardianto Budi Rahmawan, also from Universitas Gadjah Mada, presented his paper “Contextualization of AI in Labour Recruiting Process: A Study from Indonesia Administrative Law.” His presentation shed light on the ambitions and limitations of AI systems in the Indonesian labour market and legal framework, particularly considering the imbalanced population growth, the enactment of Law No. 6 Year 2023 on Job Creation, and the increasing adoption of AI systems.

Maka Alsandia, representing the Norwegian University of Science and Technology, presented her paper “Navigating the Artificial Intelligence Dilemma: Exploring Paths for Norway’s Future”. She identified critical knowledge gaps and a reliance on abstract ethical principles in Norway’s national strategy, urging alignment with emerging international standards. Highlighting systemic discrimination against ethnic minorities in the Norwegian labour market, Maka underscored the imperative of rigorous risk assessments before deploying AI systems in recruitment.

Akshaya Kamalnath, from The Australian National University, presented her paper “From NYC with love – What Can Companies Learn from NYC’s Law about the Use of AI in Recruitment?”. Her presentation underscored the importance of understanding both the risks and rewards from a corporate law perspective, advocating for transparency and the adoption of AI policies to address the associated risks.

Lastly, Marisa Cruzado and Maite Sáenz from IA+Igual shared insights from a pioneering collaborative project addressing biases in learning models used in HR. They presented an empirical study on the impact of biases in HR practices and shared experiences of conducting AI system audits.

On the second day, the workshop delved into how AI applications for HR practices could be made trustworthy, building on the Assessment List for Trustworthy AI (ALTAI). Emphasis was placed on human agency, oversight, and the role of the private sector in ensuring accountability for discrimination and harm caused by AI systems towards workers. Transparency, its boundaries, and the accessibility of AI systems for HR practices were also extensively discussed. This discussion was particularly valuable, given that the BIAS Consortium is currently conducting the ALTAI assessment to guarantee the trustworthiness of the Debiaser, the innovative technology it is developing to address diversity bias in AI applications within the labour market.

In conclusion, the workshop aimed to foster collaborative efforts towards creating a more equitable and just landscape in AI-driven recruitment practices. It facilitated the establishment of a network of stakeholders beyond the geographical scope of the BIAS Consortium, underscoring the global significance of addressing fairness and diversity bias in AI systems for recruitment.