Subgroup Fairness

Subgroup fairness in machine learning focuses on ensuring that models make equitable predictions across demographic subgroups, addressing biases that can persist even when overall accuracy is high. Current research emphasizes methods for detecting and mitigating these biases, including last-layer retraining with noise robustness and fairness-aware explainability tools such as extended accumulated local effects (ALE) plots. This work is crucial for building trustworthy AI systems, improving both the fairness and the accuracy of predictions in sensitive domains such as lending, recidivism prediction, and healthcare, and deepening our understanding of algorithmic bias.
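
To make the core failure mode concrete, below is a minimal sketch of a per-subgroup audit: it assumes binary labels and predictions and a single categorical sensitive attribute, and the helper name `subgroup_report` and the toy data are illustrative rather than drawn from any particular paper.

```python
import numpy as np

def subgroup_report(y_true, y_pred, groups):
    """Per-subgroup accuracy and selection rate for a binary classifier."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[g] = {
            "n": int(mask.sum()),
            "accuracy": float((y_true[mask] == y_pred[mask]).mean()),
            # Selection rate: P(prediction = 1 | group = g)
            "selection_rate": float(y_pred[mask].mean()),
        }
    # Demographic parity gap: spread of selection rates across subgroups.
    rates = [m["selection_rate"] for m in per_group.values()]
    return per_group, float(max(rates) - min(rates))

# A model can look acceptable in aggregate while failing one subgroup:
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])  # errors concentrated in group "b"
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

per_group, dp_gap = subgroup_report(y_true, y_pred, groups)
print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")  # 0.62
for g, m in per_group.items():
    print(g, m)  # group "a" accuracy 1.00 vs group "b" accuracy 0.25
print(f"demographic parity gap: {dp_gap:.2f}")  # 0.25
```

In this toy example the aggregate accuracy looks passable, but every metric for group "b" is far worse, which is exactly the kind of disparity the detection and mitigation methods surveyed above are designed to expose and correct.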

Papers