Fair Classifier

Fair classifier research aims to develop machine learning models that make unbiased predictions across different demographic groups, addressing the pervasive issue of algorithmic bias. Current research focuses on mitigating bias through three broad families of techniques: data preprocessing (e.g., re-weighting, data augmentation), in-processing methods (e.g., incorporating fairness constraints into model training), and post-processing adjustments to model outputs. These efforts are crucial for ensuring fairness and equity in high-stakes applications such as lending, hiring, and criminal justice, and they drive advances both in the theoretical understanding of fairness and in the development of practical, robust algorithms.
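As a concrete illustration of the preprocessing family, the sketch below implements instance re-weighting in the style of Kamiran and Calders' reweighing method: each example receives the weight P(group) * P(label) / P(group, label), which makes the label statistically independent of the protected attribute under the weighted distribution. This is a minimal sketch, not a reference implementation; the function name and the synthetic data are illustrative.

```python
import numpy as np

def reweigh(y, group):
    """Compute instance weights that decorrelate the label from the
    protected attribute (Kamiran-Calders-style reweighing).

    weight(g, c) = P(group = g) * P(y = c) / P(group = g, y = c)
    """
    y = np.asarray(y)
    group = np.asarray(group)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            # joint probability of this (group, label) cell
            p_joint = cell.mean()
            w[cell] = (group == g).mean() * (y == c).mean() / p_joint
    return w

# Illustrative synthetic data: group 1 has a much higher positive rate.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
y = (rng.random(n) < 0.2 + 0.5 * group).astype(int)

w = reweigh(y, group)

def weighted_positive_rate(g):
    mask = group == g
    return w[mask & (y == 1)].sum() / w[mask].sum()

# After reweighing, the weighted positive rate is equal across groups,
# so a classifier trained with these sample weights sees balanced labels.
print(weighted_positive_rate(0), weighted_positive_rate(1))
```

These weights can then be passed to any learner that accepts per-sample weights (e.g., a `sample_weight` argument), leaving the model and training loop otherwise unchanged, which is what makes preprocessing approaches attractive in practice.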

Papers