Paper ID: 2205.00819
A Novel Approach to Fairness in Automated Decision-Making using Affective Normalization
Jesse Hoey, Gabrielle Chan
Any decision, such as one about whom to hire, involves two components. The first is a rational component based on job-relevant evidence, e.g., the candidate has a good education and speaks clearly. The second is an affective component, based on observables such as visual features of race and gender, and possibly biased by stereotypes. Here we propose a method for measuring the affective, socially biased component, thus enabling its removal. That is, given a decision-making process, these affective measurements can be used to remove the affective bias in the decision, rendering it fair across a set of categories defined by the method itself. We thus propose that this may solve three key problems in intersectional fairness: (1) the definition of categories over which fairness is a consideration; (2) the infinite regress into smaller and smaller subgroups; and (3) ensuring a fair distribution based on basic human rights or other prior information. The primary idea of this paper is that fairness biases can be measured using affective coherence, and that this measure can be used to normalize outcome mappings. We intend this conceptual work to present a novel method for handling fairness problems that uses emotional coherence as an independent measure of bias, one that goes beyond statistical parity.
Submitted: May 2, 2022
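The abstract's core mechanism (a decision score decomposed into a rational component plus an affective, stereotype-driven component, with the latter measured independently and subtracted) can be illustrated with a minimal sketch. The paper proposes affective coherence as the independent bias measure; since the abstract does not specify that computation, the sketch below substitutes a simple per-group mean offset as a stand-in, which is exactly the parity-style estimate the paper argues its coherence measure goes beyond. All variable names, group definitions, and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical model of "affective normalization": an observed decision
# score is a rational (merit) component plus an affective component driven
# by stereotypes about a perceived social category. If the affective
# component can be measured independently, subtracting it leaves a score
# that is fair across the induced categories.

rng = np.random.default_rng(0)
n = 1000

rational = rng.normal(0.0, 1.0, n)                 # job-relevant merit (unobserved)
group = rng.integers(0, 2, n)                      # perceived social category (illustrative)
stereotype_bias = np.where(group == 1, -0.5, 0.0)  # affective penalty against group 1

observed_score = rational + stereotype_bias        # biased decision score

# Stand-in for the paper's affective-coherence measurement: a per-group
# mean offset relative to the overall mean. The actual proposal would
# derive this estimate from emotional coherence, not from group statistics.
group_offset = np.array([observed_score[group == g].mean() for g in (0, 1)])
affective_estimate = group_offset[group] - observed_score.mean()

normalized_score = observed_score - affective_estimate

for g in (0, 1):
    print(f"group {g}: raw mean = {observed_score[group == g].mean():+.3f}, "
          f"normalized mean = {normalized_score[group == g].mean():+.3f}")
```

Running this prints a raw mean gap of roughly 0.5 between the groups that disappears after normalization; the paper's contribution is the claim that the correction term can be measured via affective coherence rather than inferred from outcome statistics as done here.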