Procedural Fairness
Procedural fairness in artificial intelligence concerns ensuring that AI systems make decisions equitably across demographic groups, mitigating biases that could otherwise produce discriminatory outcomes. Current research emphasizes developing and evaluating fairness-aware algorithms and models, including approaches based on adversarial learning, data-augmentation techniques such as mixup, and distributionally robust optimization, across applications such as healthcare, process analytics, and recommender systems. This work is crucial for building trustworthy AI and addressing societal concerns about algorithmic bias, informing both the development of ethical-AI guidelines and the practical deployment of AI in sensitive domains.
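Two of the ideas mentioned above can be made concrete with a minimal sketch: a group-fairness metric (here, the demographic parity gap, i.e. the difference in positive-prediction rates between two groups) and the mixup augmentation, which blends pairs of inputs with a Beta-sampled weight. The function names and the toy data are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def mixup(x1, x2, alpha=0.2, rng=None):
    """Convex combination of two inputs with a Beta(alpha, alpha) weight (mixup)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2

# Toy predictions from a classifier that favors group 0:
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # prints 0.5
```

A gap of 0 would mean both groups receive positive predictions at the same rate; fairness-aware training methods such as those surveyed here aim to shrink this kind of gap without sacrificing accuracy.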
Papers
Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning
Natalie Dullerud, Karsten Roth, Kimia Hamidieh, Nicolas Papernot, Marzyeh Ghassemi
Improving the Fairness of Chest X-ray Classifiers
Haoran Zhang, Natalie Dullerud, Karsten Roth, Lauren Oakden-Rayner, Stephen Robert Pfohl, Marzyeh Ghassemi
FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing
Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard
Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity
Kacper Sokol, Meelis Kull, Jeffrey Chan, Flora Dilys Salim