Fairness Problem

The fairness problem in machine learning concerns mitigating algorithmic biases that produce discriminatory outcomes for certain groups. Current research emphasizes methods that satisfy multiple fairness criteria simultaneously, such as demographic parity (equal positive-prediction rates across groups) and equalized odds (equal true- and false-positive rates across groups), often via post-processing of model outputs or fairness constraints incorporated into training, including in federated learning settings. This work is crucial for ensuring equitable outcomes in high-stakes applications such as loan approval and criminal justice, and for advancing the theoretical understanding of fairness in artificial intelligence.
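The two criteria mentioned above can be made concrete with simple metrics over binary predictions and a binary sensitive attribute. The following sketch (function names and the toy data are illustrative, not from any specific library) computes the demographic parity difference, the gap in positive-prediction rates between groups, and an equalized odds gap, the largest gap in true- or false-positive rates between groups; both are zero when the corresponding criterion is exactly satisfied.

```python
def rate(preds):
    """Fraction of positive predictions in a list."""
    return sum(preds) / len(preds)

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|; 0 means parity holds."""
    by_group = {g: [p for p, gg in zip(y_pred, group) if gg == g] for g in (0, 1)}
    return abs(rate(by_group[0]) - rate(by_group[1]))

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in TPR (over y=1) or FPR (over y=0)."""
    gaps = []
    for label in (1, 0):
        rates = []
        for g in (0, 1):
            preds = [p for t, p, gg in zip(y_true, y_pred, group)
                     if t == label and gg == g]
            rates.append(rate(preds))
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example: labels, model predictions, and a binary sensitive attribute.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))   # 0.25: groups differ in positive rate
print(equalized_odds_gap(y_true, y_pred, group))  # 0.5: TPR gap between groups
```

A post-processing approach of the kind the summary mentions would adjust group-specific decision thresholds until these gaps fall below a tolerance, typically trading off some accuracy in exchange.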

Papers