Group Fairness

Group fairness in machine learning aims to ensure that algorithms produce equitable outcomes across demographic groups, mitigating biases that would otherwise disproportionately harm certain populations. Current research focuses on developing and evaluating fairness methods across diverse model architectures and learning paradigms, including federated learning, graph neural networks, and large language models, often using techniques such as post-processing, re-weighting, and adversarial training to counter spurious correlations and intersectional bias. The field is crucial for building trustworthy, ethical AI systems, with stakes in applications ranging from healthcare and criminal justice to hiring and lending.
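To make the notion of "equitable outcomes across groups" concrete, a minimal sketch of two standard group-fairness metrics follows: demographic parity difference (gap in positive-prediction rates between groups) and equalized odds difference (largest gap in true- or false-positive rates). The function names and the toy data are illustrative, not drawn from any particular library.

```python
def _selection_rate(preds, keep):
    """Mean prediction over the examples where keep[i] is True."""
    sel = [p for p, k in zip(preds, keep) if k]
    return sum(sel) / len(sel)

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [_selection_rate(preds, [g == grp for g in groups])
             for grp in set(groups)]
    return max(rates) - min(rates)

def equalized_odds_difference(labels, preds, groups):
    """Largest gap across groups in either TPR (y=1) or FPR (y=0)."""
    gaps = []
    for y in (0, 1):  # y=0 compares FPRs, y=1 compares TPRs
        rates = [_selection_rate(preds,
                                 [g == grp and t == y
                                  for g, t in zip(groups, labels)])
                 for grp in set(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy classifier output: group "a" receives positive predictions
# more often than group "b" at every label value.
labels = [1, 1, 0, 0, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(preds, groups))          # 0.5
print(equalized_odds_difference(labels, preds, groups))      # 0.5
```

Post-processing methods mentioned above typically adjust per-group decision thresholds until metrics like these fall below a chosen tolerance, while re-weighting and adversarial training push them down during training instead.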

Papers