Fairness Influence Function

Fairness influence functions quantify how individual data points or features contribute to the bias of a machine learning model, enabling the identification and mitigation of unfair outcomes. Current research focuses on decomposing a model's overall bias into contributions from specific features and subgroups, often leveraging techniques such as influence functions and global sensitivity analysis across a range of model types. This work matters because it gives a more granular picture of where bias originates, facilitating targeted interventions to improve fairness in high-stakes applications, though sometimes at a cost in accuracy. Developing algorithms to compute and interpret these influence functions efficiently remains a key area of ongoing investigation.
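To make the idea concrete, the sketch below estimates how much up-weighting each training point would change a group-fairness metric (a demographic-parity gap) for a small logistic-regression model, using the classical influence-function formula IF_i = -∇_θ gap(θ̂)ᵀ H⁻¹ ∇_θ ℓ(z_i, θ̂). This is a minimal illustration, not any specific paper's algorithm; the data, model, and all function names are made up for the example.

```python
import numpy as np

# Illustrative sketch: influence of each training point on a
# demographic-parity gap for a tiny logistic-regression model.
# All data and hyperparameters here are synthetic/assumed.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, reg=1e-2, iters=500, lr=0.1):
    # Plain gradient descent on L2-regularized logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + reg * w
        w -= lr * grad
    return w

def hessian(X, w, reg=1e-2):
    # Hessian of the regularized mean logistic loss at w.
    p = sigmoid(X @ w)
    diag = p * (1 - p)
    return (X.T * diag) @ X / len(X) + reg * np.eye(X.shape[1])

def dp_gap_grad(X, w, group):
    # Gradient w.r.t. w of the demographic-parity gap:
    # mean predicted score for group 1 minus group 0.
    p = sigmoid(X @ w)
    s = p * (1 - p)  # derivative of sigmoid w.r.t. the logit
    g1 = (X[group == 1] * s[group == 1, None]).mean(axis=0)
    g0 = (X[group == 0] * s[group == 0, None]).mean(axis=0)
    return g1 - g0

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
group = (rng.random(n) < 0.5).astype(int)
# Labels correlated with both a feature and group membership,
# so the fitted model picks up a group disparity.
y = (X[:, 0] + 0.8 * group + 0.3 * rng.normal(size=n) > 0).astype(float)

w = fit_logreg(X, y)
H_inv = np.linalg.inv(hessian(X, w))
fair_grad = dp_gap_grad(X, w, group)

# Per-sample loss gradients, then the influence of up-weighting
# point i on the fairness gap: -grad_gap^T H^-1 grad_loss_i.
p = sigmoid(X @ w)
per_sample_grads = X * (p - y)[:, None]
influences = -per_sample_grads @ (H_inv @ fair_grad)

# Points with the largest |influence| contribute most to the gap
# and are natural candidates for reweighting or removal.
top = np.argsort(-np.abs(influences))[:5]
print("most influential points:", top)
```

In practice, research implementations avoid forming H⁻¹ explicitly (e.g. via conjugate-gradient or stochastic Hessian-vector-product estimators), since the explicit inverse is only feasible for very small models like this one.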

Papers