AI Fairness in Focus: Highlights from the AIMMES Workshop and Cluster Event in Barcelona

On 20–21 March 2025, Pompeu Fabra University in Barcelona hosted the AIMMES Workshop and AI Fairness Cluster Event, a two-day gathering that brought together researchers, practitioners, and policymakers to explore pressing issues surrounding AI fairness. Organised under the Horizon Europe framework, the event was a joint effort by the AEQUITAS, BIAS, FINDHR, and MAMMOth projects.

The AIMMES Workshop began with a dynamic series of paper presentations, immediately showcasing its strong interdisciplinary spirit and encouraging dialogue between experts in sociology, psychology, law, and AI. Participants actively engaged in thoughtful discussions around ethical frameworks and the ongoing challenges of bias in artificial intelligence. The talks covered a wide range of topics related to fairness, accountability, and discrimination in AI — from healthcare and recruitment to AI-generated content and regulatory concerns in HR practices. During the coffee break, poster presentations drew significant interest, sparking lively conversations that continued throughout the two-day event.

You can find the full workshop proceedings HERE.

“From our first joint conference in Amsterdam to our second one in Barcelona, it is clear that a community of scholars, professionals, and activists is coalescing around the topic of fairness in artificial intelligence. Something I would take away from the last meeting in particular was that there are a few key technical areas, including large language models and multimodal processing, and a few application areas, including AI in recruitment and AI for medicine, that are attracting a considerable amount of attention from the research community. This convergence was really interesting to observe.”
— Professor Carlos Castillo, Coordinator, FINDHR Project

A central theme of the workshop was algorithmic discrimination, with sessions addressing the complexities of auditing AI systems and identifying bias in automated decision-making processes. The BIAS project was represented by several of its consortium members: Alexandre Puttnick and Catherine Ikae from Bern Fachhochschule presented their research on value sensitive design and on bias measurement in German prompts, respectively, while Silvia Ecclesia from the Norwegian University of Science and Technology presented her research on recruiters’ expectations about AI use.

Drawing on experience from both academia and the private sector, the AIMMES Workshop gathered a range of perspectives on fairness frameworks and technical solutions for ensuring AI fairness. In addition, the Horizon Europe Projects Spotlight showcased initiatives complementary to the organising projects, such as ENFIELD and iDEM, which focus on building trustworthy AI and enhancing democratic participation through technology.

The workshop’s plenary session concluded with a set of reflections on the multifaceted challenges of AI fairness — technical, social, and legal — emphasising the continued need for cross-disciplinary collaboration to develop equitable AI systems.

The afternoon continued with a series of parallel hands-on sessions featuring project demonstrators, showcasing their developed products and gathering feedback on project outcomes. Guillem Escriba of Adevinta presented tools developed under the FINDHR project, offering practical insights into how AI systems can be made fairer and less biased in real-world decision-making environments. The BIAS project session immersed participants in the recruiter’s perspective, asking them to simulate a recruitment process supported by AI and reflect on the concept of value sensitive design. MAMMOth and AEQUITAS also presented their results during an interactive session, sparking meaningful exchanges with the audience. These hands-on demonstrations offered insights into the practical challenges of operationalising fairness and, through participants’ input and reflections, advanced participatory practices in AI development.

“Bringing together our four sister projects under one umbrella at the AI Fairness Cluster Conference and AIMMES Workshop in Barcelona was great for synergy and learning. It showed the diversity of approaches within AI fairness and the collective momentum we’re building across Europe. I was proud to witness how shared vision, interdisciplinary dialogue, and open collaboration can push the boundaries of AI fairness forward.”
— Professor Roger A. Søraa, Coordinator, BIAS Project

The following day, 21 March, the AI Fairness Cluster Conference took place at the same venue, headlined by two distinguished speakers.

Professor Linnet Taylor, of the Institute for Law, Technology, and Society at Tilburg University, tackled the ethical and practical dilemmas of using race and ethnicity data in AI. Her talk critically examined the construction of risk models and proposed pathways for achieving justice and equity in data-driven systems.

Dr Susan Leavy, Assistant Professor at the School of Information & Communication, University College Dublin, followed with an analysis of gender and intersectional bias in AI technologies. She discussed the societal impact of biased algorithms, particularly in large language and multimodal models, and reviewed ongoing regulatory efforts aimed at mitigating these risks.

“Having our four sister projects at the AI Fairness Cluster Conference and the AIMMES Workshop in Barcelona was, as always, an immensely enriching and growth-filled experience. The keynote experts ignited a provoking discussion on the enduring challenges in the field of AI fairness. The event’s success highlighted the strong community we have built and underscored the urgency with which this community seeks to address the challenges presented by AI. It was a day brimming with valuable insights and ideas, and I would add, filled with humanity and respect—a truly contrarian day in light of current global events. There is a beacon of hope; let us fuel it together as we continue to spread these valuable ideas and actions.”
— Professor Roberta Calegari, Coordinator, AEQUITAS Project

The morning continued with a series of inspiring presentations from each organising project, during which participants could learn about each project’s research and outcomes—from BIAS and FINDHR’s interdisciplinary research on AI bias in hiring and recruitment to MAMMOth’s technical developments in multi-discrimination mitigation and AEQUITAS’s bias mitigation efforts in different sectors.

After a lunch break filled with networking and engaging discussions, the conference concluded in the afternoon with an insightful panel bringing together practitioners and researchers to discuss the effects of the recently implemented AI Act on algorithmic fairness. Dr Eduard Fosch-Villaronga from Leiden University, Dr Nina Baranowska from Radboud University, Dr Ian Slesinger from Trilateral Research, and Dr Giulia Sudano from Period Think Tank Aps discussed topics including the new obligations imposed on AI developers and producers, the potential pitfalls of the AI Act’s risk-based approach given limited awareness of and resources for AI bias mitigation, and the role of different stakeholders in advancing the AI fairness agenda.

“Attending the joint AI Fairness conference in Barcelona was exciting and educational. It appears that AI bias and fairness are topics that attract growing interest, especially in light of the rapid developments in Generative AI and the wide adoption of AI systems in a growing number of professional sectors. During the conference, a number of new studies, methods and tools were showcased, and it appears that there is steady progress in the field. At the same time, the panel on the AI Act and its implications for AI bias and fairness revealed that there is still a lot of analysis to be done and a lot of discussions to be had before reaching a common understanding of how to best implement this ambitious regulation in practical contexts.”
— Principal Researcher Symeon Papadopoulos, Coordinator, MAMMOth Project

With over 150 attendees participating both in person and online, the event succeeded in creating a vibrant platform for dialogue on the future of fair and responsible AI. Organisers expressed their gratitude to all contributors and participants who helped shape this critical conversation.
