Subgroup Fairness
Subgroup fairness in machine learning focuses on ensuring that algorithms make equitable predictions across demographic subgroups, addressing biases that can persist even when overall model accuracy is high. Current research emphasizes detecting and mitigating these biases, with techniques such as last-layer retraining with noise robustness and fairness-aware explainability tools like extended ALE plots. This work is crucial for building trustworthy AI systems: it improves the fairness and accuracy of predictions in sensitive applications such as loan approval, recidivism prediction, and healthcare, and advances our understanding of algorithmic bias.
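The core idea above — that a model can look accurate overall while underserving a subgroup — is usually checked by auditing performance per group. A minimal sketch of such an audit is below; the function name and the choice of accuracy as the metric are illustrative, not drawn from any specific paper on this page.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Report accuracy per demographic subgroup and the worst-case gap.

    A small gap means the model serves all groups roughly equally;
    a large gap flags a subgroup the aggregate accuracy is hiding.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    # Accuracy restricted to each subgroup's examples.
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    # Spread between the best- and worst-served groups.
    gap = max(accs.values()) - min(accs.values())
    return accs, gap
```

For example, predictions that are perfect on group "a" but mostly wrong on group "b" yield a high overall accuracy yet a large gap, which is exactly the failure mode subgroup-fairness methods target.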
Papers