Fairness Metric
Fairness metrics are quantitative measures for assessing and mitigating bias in machine learning models and other decision-making systems, with the goal of ensuring equitable outcomes across demographic groups. Current research focuses on developing metrics that address the limitations of existing ones, particularly around uncertainty, data drift, and the imbalance typical of real-world datasets. This work spans a range of model architectures and algorithms, including re-weighting, post-processing, and generative approaches, that aim to improve fairness while preserving accuracy. Robust fairness metrics are crucial for building trustworthy and equitable AI systems across diverse domains, shaping both the ethical development of AI and its practical deployment in sensitive areas such as healthcare and finance.
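To make the idea concrete, here is a minimal sketch of two widely used group-fairness metrics, demographic parity difference and equal opportunity difference, computed from scratch over binary predictions. The function names and the binary group encoding (0/1) are illustrative choices, not taken from any of the papers listed below.

```python
def positive_rate(y_pred, mask):
    """Fraction of positive predictions among the entries selected by mask."""
    selected = [p for p, m in zip(y_pred, mask) if m]
    return sum(selected) / len(selected)

def demographic_parity_difference(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|; 0 means parity."""
    r0 = positive_rate(y_pred, [g == 0 for g in group])
    r1 = positive_rate(y_pred, [g == 1 for g in group])
    return abs(r0 - r1)

def equal_opportunity_difference(y_true, y_pred, group):
    """True-positive-rate gap: compare positive rates restricted to y_true == 1."""
    r0 = positive_rate(y_pred, [g == 0 and t == 1 for g, t in zip(group, y_true)])
    r1 = positive_rate(y_pred, [g == 1 and t == 1 for g, t in zip(group, y_true)])
    return abs(r0 - r1)

# Toy example: group 0 receives positive predictions far more often.
y_true = [1, 1, 0, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))          # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.5
```

A value of 0 indicates the groups are treated identically under the chosen criterion; re-weighting and post-processing methods of the kind surveyed below typically aim to push these gaps toward 0 without degrading overall accuracy.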
Papers
(Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers
Manh Khoi Duong, Stefan Conrad
Is it Still Fair? A Comparative Evaluation of Fairness Algorithms through the Lens of Covariate Drift
Oscar Blessed Deho, Michael Bewong, Selasi Kwashie, Jiuyong Li, Jixue Liu, Lin Liu, Srecko Joksimovic
Fairness in Social Influence Maximization via Optimal Transport
Shubham Chowdhary, Giulia De Pasquale, Nicolas Lanzetti, Ana-Andreea Stoica, Florian Dorfler
Fairpriori: Improving Biased Subgroup Discovery for Deep Neural Network Fairness
Kacy Zhou, Jiawen Wen, Nan Yang, Dong Yuan, Qinghua Lu, Huaming Chen