Fairness in AI-Based Decision Making

With the increasing use of artificial intelligence (AI) in many aspects of human life, a need has emerged to define operational fairness for the design and development of AI systems. Although fairness has long been of interest to scholars in disciplines such as philosophy, economics, and computer science, the literature does not provide a clear, unified definition of fairness; if anything, it has produced considerable terminological confusion. In the AI literature, fairness predominantly focuses on measuring (un)fairness in terms of equality between advantaged and disadvantaged groups, with mainstream machine learning (ML) methods as the primary tool. Evaluating the fairness of the decision-making process itself in these ML techniques is difficult because of their black-box nature: they are typically neither transparent nor explainable. The consequence is that only the output (the decisions taken) is evaluated, while the question of injecting fairness into the decision-making algorithm itself is largely left unexamined. This is precisely the gap we aim to fill by studying the so-called “in-process” mitigation of unfairness. In this project, we adopt the principle that “similar individuals should be treated similarly” and aim to develop an AI-based method that implements this principle in the context of recruitment. Crucially, our focus is on the mitigation of unfairness during the decision-making process itself.
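To make the outcome-oriented view concrete, the sketch below computes statistical (demographic) parity, one of the most common group fairness metrics, for hypothetical hiring decisions. The data and the binary group split are illustrative assumptions of ours, not BIAS project data.

```python
# Illustrative sketch (not the BIAS method): statistical parity, an
# outcome-oriented group fairness metric. The lists below are hypothetical
# hiring decisions (1 = hired, 0 = rejected) split by a binary protected
# attribute.

def selection_rate(decisions):
    """Fraction of positive (hire) decisions in a group."""
    return sum(decisions) / len(decisions)

advantaged = [1, 1, 0, 1, 0, 1]     # hypothetical outcomes, advantaged group
disadvantaged = [0, 1, 0, 0, 1, 0]  # hypothetical outcomes, disadvantaged group

# Statistical parity difference: the gap in selection rates between groups.
spd = selection_rate(advantaged) - selection_rate(disadvantaged)
print(f"Statistical parity difference: {spd:.2f}")  # 0.00 would mean parity
```

Note that the metric looks only at the decisions themselves; it says nothing about how the decisions were reached, which is exactly the limitation this project addresses.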

Our first task concerns existing metrics for measuring fairness. The bias- and fairness-related literature in AI has heavily focused on measuring the fairness of decisions or outcomes, while methods for “in-process” mitigation have been overlooked. More than twenty metrics exist for measuring the fairness of outcomes (Verma and Rubin, 2018), and discussions revolve around how and why these metrics conflict with one another. Much of the literature investigates which mainstream machine learning methods work better in particular situations, in terms of the existing evaluation metrics. The results are inevitably inconclusive, given the abundance of conflicting fairness notions under examination. In this fragmented landscape, we aim to identify the markers most relevant to the notion of fairness in the context of recruitment in the labour market, with the ultimate purpose of developing an operational account of fairness for AI and implementing it in a decision support system in the BIAS project.
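As a minimal illustration of such conflicts, the following sketch (with hypothetical qualifications and decisions) shows two widely used notions, demographic parity and equal opportunity, disagreeing on the very same set of decisions: satisfying one does not imply satisfying the other.

```python
# Illustrative sketch of why outcome metrics conflict (hypothetical data).
# y = ground-truth qualification (1 = qualified), d = hiring decision,
# recorded separately for two groups.

group_a = {"y": [1, 1, 0, 0], "d": [1, 0, 1, 0]}  # advantaged group
group_b = {"y": [1, 0, 0, 0], "d": [1, 1, 0, 0]}  # disadvantaged group

def rate(xs):
    return sum(xs) / len(xs)

def tpr(y, d):
    """True positive rate: share of the truly qualified who are hired."""
    return rate([di for yi, di in zip(y, d) if yi == 1])

# Demographic parity compares selection rates; equal opportunity compares TPRs.
for name, g in [("A", group_a), ("B", group_b)]:
    print(name, "selection rate:", rate(g["d"]), "TPR:", tpr(g["y"], g["d"]))

# Output: equal selection rates (0.5 vs 0.5), so demographic parity holds,
# but unequal TPRs (0.5 vs 1.0), so equal opportunity is violated.
```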

Our second (arguably more important) task is to shift focus from outcomes to processes. From a computer science perspective, measuring fairness after decisions are made corresponds to the “evaluation” step of the system development process. Our question is how fairness can be injected into the decision-making algorithm itself, i.e., the so-called “in-process” mitigation of unfairness. Similarity between individuals is the focal point of our approach, which emphasizes consistency, objectivity, and explainability as the main pillars of fairness. The AI method we employ, Case-Based Reasoning (CBR) (Aamodt and Plaza, 1994), solves new problems by studying how similar problems have been solved in the past. It provides transparency regarding how each piece of information is weighted and allows us to dynamically and flexibly add or remove the information used by the decision-making algorithm. In addition, consistency is built into CBR, since it relies on the similarity between individuals without aggregating individual data. CBR combines a data-driven and a knowledge-based decision process, which is especially relevant for the recruitment domain, a highly context-sensitive one: the context in which recruitment occurs changes across domains and tasks, across countries, and across the institutions that will deploy and use the AI system. There cannot be a one-size-fits-all solution. Human hiring agencies apply laws and policies specific to countries and institutions, and such rules are important from an explainability perspective. Taken together, these considerations point to an increasing need for fair AI approaches that ensure consistency, objectivity, and explainability. This is what we have set out to do in the BIAS project.
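To indicate what such a process-level approach can look like, the sketch below implements the retrieval step of a generic CBR cycle as weighted nearest-neighbour matching over past hiring cases. The feature names, weights, and case base are hypothetical placeholders; this is a minimal sketch of the CBR idea, not the BIAS system itself.

```python
# Minimal CBR retrieval sketch (our illustration, not the BIAS system):
# a new applicant is assessed by retrieving the most similar past cases and
# reusing their outcomes. Features and weights are hypothetical; in practice
# they would come from domain knowledge and local policy.

from math import sqrt

# Past cases: (features, outcome). Features are normalized to [0, 1].
case_base = [
    ({"experience": 0.8, "education": 0.6, "test_score": 0.9}, "hire"),
    ({"experience": 0.3, "education": 0.7, "test_score": 0.4}, "reject"),
    ({"experience": 0.7, "education": 0.5, "test_score": 0.8}, "hire"),
]

# Explicit, inspectable weights (summing to 1): transparency about how each
# piece of information counts, and easy to add or remove features.
weights = {"experience": 0.5, "education": 0.2, "test_score": 0.3}

def similarity(query, case):
    """Weighted similarity in [0, 1]; higher means more alike."""
    dist = sqrt(sum(w * (query[f] - case[f]) ** 2 for f, w in weights.items()))
    return 1.0 - dist

def retrieve(query, k=2):
    """Return the k past cases most similar to the query applicant."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c[0]), reverse=True)
    return ranked[:k]

new_applicant = {"experience": 0.75, "education": 0.55, "test_score": 0.85}
for features, outcome in retrieve(new_applicant):
    print(outcome, round(similarity(new_applicant, features), 3))
```

Because the weights are explicit, one can inspect exactly how each piece of information contributes to the similarity judgement, and features can be added or removed without retraining an opaque model; this is the sense in which consistency and explainability are built into the process rather than checked after the fact.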

References 

Sahil Verma and Julia Rubin (2018). Fairness Definitions Explained. Proceedings of the IEEE/ACM International Workshop on Software Fairness (FairWare). IEEE.

Agnar Aamodt and Enric Plaza (1994). Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications 7(1):39–59.