Fairness Constraint
Fairness constraints in machine learning aim to mitigate algorithmic bias by ensuring equitable outcomes across demographic groups. Current research focuses on algorithms and model architectures that incorporate fairness metrics (e.g., demographic parity, equal opportunity) into the learning process, often navigating the trade-off between fairness and accuracy through techniques such as constrained optimization, re-weighting, and data augmentation. This work is central to responsible AI development: it shapes applications ranging from loan approvals and hiring to healthcare and criminal justice by promoting equitable and trustworthy decision-making.
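To make the metrics and the re-weighting idea concrete, below is a minimal sketch (not drawn from the papers listed here) of how demographic parity and equal opportunity gaps can be measured, together with a Kamiran-Calders style re-weighting that can feed a weighted training loss. The function names and the synthetic data are illustrative assumptions, not an established API.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

def reweighting_weights(y_true, group):
    """Per-sample weights = expected / observed frequency of each (group, label)
    combination, so under-represented combinations are up-weighted."""
    weights = np.empty(len(y_true), dtype=float)
    for g in np.unique(group):
        for y in np.unique(y_true):
            mask = (group == g) & (y_true == y)
            expected = (group == g).mean() * (y_true == y).mean()
            weights[mask] = expected / mask.mean()
    return weights

# Synthetic example: group 0 receives positive predictions less often.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.binomial(1, 0.5, size=1000)
y_pred = rng.binomial(1, np.where(group == 1, 0.6, 0.4))

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
print("Mean reweighting weight:", reweighting_weights(y_true, group).mean())
```

In a constrained-optimization setup, gaps like these are typically added to the training objective as penalty terms or enforced as explicit constraints, whereas the re-weighting approach adjusts the data distribution before or during training.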
Papers
Minimax Optimal Fair Classification with Bounded Demographic Disparity
Xianli Zeng, Guang Cheng, Edgar Dobriban
Looking Beyond What You See: An Empirical Analysis on Subgroup Intersectional Fairness for Multi-label Chest X-ray Classification Using Social Determinants of Racial Health Inequities
Dana Moukheiber, Saurabh Mahindre, Lama Moukheiber, Mira Moukheiber, Mingchen Gao