Recap of the AI Fairness Cluster Inaugural Conference & Workshop on AI Bias

The AI Fairness Cluster Inaugural Conference & Workshop on AI Bias convened in Amsterdam on 19–20 March 2024, bringing together stakeholders from academia and industry to address the critical issue of algorithmic fairness. Organized by the AI Fairness Cluster, which comprises four projects (AEQUITAS, BIAS, MAMMOth, and FINDHR), the event aimed to explore the multifaceted dimensions of AI bias and foster dialogue on strategies for mitigating its adverse effects.

The conference featured a diverse array of presentations and discussions highlighting various aspects of AI bias and fairness. Discussions during the event underscored the complexity of addressing algorithmic bias and emphasized the need for nuanced approaches grounded in socio-technical contexts. Key themes included the tension between AI’s utility and its ethical implications, the societal roots of bias in information systems, and the challenges of defining and operationalizing fairness metrics. Participants also explored the intricate interplay between human and algorithmic biases, recognizing the need for holistic considerations when assessing AI systems’ impact.

The conference yielded several key takeaways that underscored the complexities inherent in addressing algorithmic bias:

Nuanced Understanding: Participants recognized the nuanced nature of bias, highlighting the interplay between systemic AI bias and individual human bias. This understanding emphasized the importance of contextual considerations in evaluating AI systems’ impact.

Imperfect Solutions: Discussions highlighted the pragmatic reality that perfect bias mitigation remains elusive. Participants acknowledged the limitations of current bias metrics and mitigation strategies, advocating instead for incremental progress bolstered by legislative frameworks.

Collaborative Endeavours: The event underscored the value of collaboration between research and industry in advancing algorithmic fairness. The successful integration of academic insights and industry expertise demonstrated the potential for collaborative endeavours to drive meaningful progress in this domain.

The BIAS project team made valuable contributions to the event. Roger A. Søraa (NTNU), the project’s Principal Investigator, provided a comprehensive overview of the BIAS project, while Alexandre Riemann Puttick, Postdoctoral Researcher at BFH, shared his expertise as a panel speaker on Representational Harms in Foundation Models.

The AI Fairness Cluster Inaugural Conference & Workshop on AI Bias provided a platform for stakeholders to engage in substantive discussions and share insights on addressing algorithmic bias. The event underscored the complexities inherent in this endeavour while highlighting the potential for collaborative efforts to drive meaningful progress.

Heartfelt gratitude is extended to all participants who contributed to the vibrancy and success of the conference. Their active engagement and insightful contributions enriched the discussions and furthered our collective understanding of algorithmic fairness.

As we navigate the evolving landscape of AI ethics, continued collaboration and dialogue will be essential in shaping a future where technology aligns with the values of inclusivity and equity.

AI Fairness Cluster