Group Fairness
Group fairness in machine learning aims to ensure that algorithms produce equitable outcomes across demographic groups, mitigating biases that might disproportionately harm certain populations. Current research focuses on developing and evaluating fairness methods across a range of model architectures and learning paradigms, including federated learning, graph neural networks, and large language models, often employing techniques such as post-processing, re-weighting, and adversarial training to address issues such as spurious correlations and intersectional bias. This work is crucial for building trustworthy and ethical AI systems, promoting fairness and reducing discrimination in domains ranging from healthcare and criminal justice to hiring and lending.
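As a concrete illustration of the kinds of quantities and interventions mentioned above, the following is a minimal sketch, not drawn from any of the papers listed below, of two common group-fairness gaps (demographic parity and equal opportunity) and a simple inverse-frequency re-weighting heuristic. The function names and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of group-fairness gap metrics and inverse-frequency
# re-weighting. Function names and synthetic data are illustrative only.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rate across groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

def group_reweights(group):
    """Per-sample weights inversely proportional to group frequency
    (one simple re-weighting heuristic; not the method of any specific paper)."""
    _, inverse, counts = np.unique(group, return_inverse=True, return_counts=True)
    return len(group) / (len(counts) * counts[inverse])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)   # binary protected attribute (assumed)
    y_true = rng.integers(0, 2, size=1000)
    # A deliberately biased predictor: more positives for group 1.
    y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)

    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
    print("Example sample weights:", group_reweights(group)[:5])
```

Re-weighting schemes of this general flavor can be plugged into a standard training loss as per-sample weights; the papers below study more principled variants, such as classwise robust optimization.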
Papers
Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness
Zahra Ashktorab, Benjamin Hoover, Mayank Agarwal, Casey Dugan, Werner Geyer, Hao Bang Yang, Mikhail Yurochkin
Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization
Sangwon Jung, Taeeon Park, Sanghyuk Chun, Taesup Moon