Procedural Fairness
Procedural fairness in artificial intelligence concerns ensuring that AI systems make decisions equitably across demographic groups and mitigating biases that can lead to discriminatory outcomes. Current research emphasizes developing and evaluating fairness-aware algorithms and models, including approaches based on adversarial learning, data-augmentation techniques such as mixup, and distributionally robust optimization, across applications such as healthcare, process analytics, and recommender systems. This work is crucial for building trustworthy AI systems and addressing societal concerns about algorithmic bias, informing both the development of ethical AI guidelines and the practical deployment of AI in sensitive domains.
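To make the mixup idea above concrete, the snippet below is a minimal sketch of fairness-oriented mixup: training examples are interpolated across demographic groups so the model sees points along the path between them. It assumes a binary sensitive attribute `s`, uses synthetic data, and draws mixing weights from a uniform Beta(1, 1); none of it is taken from the listed papers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: features x, binary labels y, binary sensitive attribute s.
n, d = 512, 16
x = torch.randn(n, d)
s = torch.randint(0, 2, (n,))                      # demographic group (assumed binary)
y = (x[:, 0] + 0.5 * s.float() + 0.1 * torch.randn(n) > 0).float()

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()                       # accepts soft (mixed) targets

x0, y0 = x[s == 0], y[s == 0]                      # group 0 examples
x1, y1 = x[s == 1], y[s == 1]                      # group 1 examples
m = min(len(x0), len(x1))

for step in range(200):
    # Pair random examples drawn from the two groups.
    i0 = torch.randint(0, len(x0), (m,))
    i1 = torch.randint(0, len(x1), (m,))
    lam = torch.distributions.Beta(1.0, 1.0).sample((m, 1))

    # Interpolate inputs and labels along the cross-group path.
    x_mix = lam * x0[i0] + (1 - lam) * x1[i1]
    y_mix = lam.squeeze(1) * y0[i0] + (1 - lam.squeeze(1)) * y1[i1]

    loss = bce(model(x_mix).squeeze(1), y_mix)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Published fair-mixup variants go further and regularize how a fairness metric changes along the interpolation path; this sketch keeps only the cross-group data-augmentation step for brevity.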
Papers
Towards measuring fairness in speech recognition: Fair-Speech dataset
Irina-Elena Veliche, Zhuangqun Huang, Vineeth Ayyat Kochaniyan, Fuchun Peng, Ozlem Kalinli, Michael L. Seltzer
Unlocking Intrinsic Fairness in Stable Diffusion
Eunji Kim, Siwon Kim, Rahim Entezari, Sungroh Yoon
Aligning (Medical) LLMs for (Counterfactual) Fairness
Raphael Poulain, Hamed Fayyaz, Rahmatollah Beheshti
Articulation Work and Tinkering for Fairness in Machine Learning
Miriam Fahimi, Mayra Russo, Kristen M. Scott, Maria-Esther Vidal, Bettina Berendt, Katharina Kinder-Kurlanda
On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness
Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Xiaodong Li, Yuan Yao, Zhiyong Peng