Fairness Problem
The fairness problem in machine learning focuses on mitigating biases in algorithms that lead to discriminatory outcomes against certain groups. Current research emphasizes developing methods that satisfy multiple fairness criteria simultaneously (e.g., demographic parity, equalized odds), often employing techniques such as post-processing or incorporating fairness constraints into model training, including in federated learning settings. This work is crucial for ensuring equitable outcomes in high-stakes domains such as lending and criminal justice, and for advancing the theoretical understanding of fairness in artificial intelligence.
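As a minimal sketch (not drawn from any of the papers listed below), the two criteria named above can be measured as group-wise rate gaps. The function names, group encoding, and toy data here are illustrative assumptions, not a standard API.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates between two groups."""
    gaps = []
    for label in (0, 1):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy example: binary predictions for two demographic groups,
# deliberately biased toward group 1 to make the gaps visible.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

Post-processing methods typically adjust group-specific decision thresholds until gaps like these fall below a tolerance, while in-training approaches add them (or smooth surrogates) as constraints or penalty terms to the learning objective.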
Papers