Group Fairness
Group fairness in machine learning aims to ensure that algorithms produce equitable outcomes across demographic groups, mitigating biases that disproportionately harm certain populations. Current research focuses on developing and evaluating fairness methods across model architectures and learning paradigms, including federated learning, graph neural networks, and large language models, often employing techniques such as post-processing, re-weighting, and adversarial training to address issues such as spurious correlations and intersectional bias. This work is crucial for building trustworthy and ethical AI systems, and it affects domains ranging from healthcare and criminal justice to hiring and lending by promoting fairness and reducing discrimination.
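Two of the ingredients mentioned above, a group fairness metric and re-weighting, are simple enough to sketch concretely. The following is a minimal, illustrative Python sketch, not taken from any of the listed papers: it computes the demographic parity difference between two groups and derives Kamiran-Calders style re-weighting factors that make the group attribute and the label statistically independent in the training distribution. The function names and toy data are assumptions for illustration only.

```python
# Minimal sketch: demographic parity difference (a group fairness metric)
# and Kamiran-Calders style re-weighting. Assumes binary labels, a binary
# group attribute, and that every (group, label) cell is non-empty.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def reweighting_weights(y_true, group):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y), so that group
    and label become independent under the re-weighted distribution."""
    weights = np.empty(len(y_true), dtype=float)
    for g in np.unique(group):
        for y in np.unique(y_true):
            cell = (group == g) & (y_true == y)
            expected = (group == g).mean() * (y_true == y).mean()
            weights[cell] = expected / cell.mean()  # assumes cell.mean() > 0
    return weights

# Toy data: a biased rule that always predicts positive for group 0.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(group == 0, 1, y_true)

print(demographic_parity_difference(y_pred, group))  # large gap, roughly 0.5
w = reweighting_weights(y_true, group)  # use as sample weights when training
```

In practice the computed weights would be passed to a learner's sample-weight argument during training, while post-processing methods instead adjust the predictions themselves, for example via group-specific decision thresholds.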
Papers
Towards Clinical AI Fairness: Filling Gaps in the Puzzle
Mingxuan Liu, Yilin Ning, Salinelat Teixayavong, Xiaoxuan Liu, Mayli Mertens, Yuqing Shang, Xin Li, Di Miao, Jie Xu, Daniel Shu Wei Ting, Lionel Tim-Ee Cheng, Jasmine Chiat Ling Ong, Zhen Ling Teo, Ting Fang Tan, Narrendar RaviChandran, Fei Wang, Leo Anthony Celi, Marcus Eng Hock Ong, Nan Liu
The Impossibility of Fair LLMs
Jacy Anthis, Kristian Lum, Michael Ekstrand, Avi Feller, Alexander D'Amour, Chenhao Tan
Post-Fair Federated Learning: Achieving Group and Community Fairness in Federated Learning via Post-processing
Yuying Duan, Yijun Tian, Nitesh Chawla, Michael Lemmon