Paper ID: 2202.08536
Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning
Hilde Weerts, Lambèr Royakkers, Mykola Pechenizkiy
Fairness-aware machine learning (fair-ml) techniques are algorithmic interventions designed to ensure that individuals who are affected by the predictions of a machine learning model are treated fairly, typically measured in terms of a quantitative fairness metric. Despite the multitude of fairness metrics and fair-ml algorithms, there is still little guidance on the suitability of different approaches in practice. In this paper, we present a framework for moral reasoning about the justification of fairness metrics and explore the moral implications of the use of fair-ml algorithms that optimize for them. In particular, we argue that whether a distribution of outcomes is fair depends not only on the cause of inequalities but also on what moral claims decision subjects have to receive a particular benefit or avoid a burden. We use our framework to analyze the suitability of two fairness metrics under different circumstances. Subsequently, we explore moral arguments that support or reject the use of the fair-ml algorithm introduced by Hardt et al. (2016). We argue that under very specific circumstances, particular metrics correspond to a fair distribution of burdens and benefits. However, we also illustrate that enforcing a fairness metric by means of a fair-ml algorithm may not result in a fair distribution of outcomes and can have several undesirable side effects. We end with a call for a more holistic evaluation of fair-ml algorithms, beyond their direct optimization objectives.
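The abstract does not name the two fairness metrics it analyzes; as a hedged illustration of what "a quantitative fairness metric" means in this literature, the sketch below computes two commonly discussed group-fairness metrics, demographic parity difference and equal-opportunity (true-positive-rate) difference, the latter being one of the criteria associated with Hardt et al. (2016). All function names and the toy data are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch only: the paper's two metrics are not named in the
# abstract; demographic parity and equal opportunity are common examples.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups
    (one of the criteria discussed by Hardt et al., 2016)."""
    tprs = {}
    for g in set(group):
        # keep only truly positive instances of group g
        preds = [p for t, p, gg in zip(y_true, y_pred, group)
                 if gg == g and t == 1]
        tprs[g] = sum(preds) / len(preds)
    a, b = tprs.values()
    return abs(a - b)

# Hypothetical toy data: two groups "a" and "b", binary labels/predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(y_pred, group))          # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))   # ~0.167
```

On this toy data the two metrics disagree: positive-prediction rates are equal across groups (demographic parity holds exactly), yet true-positive rates differ, which echoes the paper's point that which metric is morally justified depends on the circumstances, not on the numbers alone.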
Submitted: Feb 17, 2022