Fairness Constraint
Fairness constraints in machine learning aim to mitigate algorithmic bias by ensuring equitable outcomes across demographic groups. Current research focuses on algorithms and model architectures that incorporate fairness metrics (e.g., demographic parity, equal opportunity) into the learning process, often navigating the trade-off between fairness and accuracy through techniques such as constrained optimization, re-weighting, and data augmentation. The field is central to responsible AI development: in applications ranging from loan approvals and hiring to healthcare and criminal justice, fairness constraints promote equitable and trustworthy decision-making systems.
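As a concrete illustration of the two metrics named above, the sketch below computes the demographic parity gap (difference in positive-prediction rates between groups) and the equal opportunity gap (difference in true-positive rates) for a binary classifier. This is a minimal, generic sketch with illustrative variable names and toy data; it is not drawn from the papers listed below.

```python
# Minimal sketch of two common group-fairness metrics for a binary
# classifier. Arrays and names here are illustrative assumptions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the
    two groups encoded by the boolean-like `group` array."""
    g = np.asarray(group, dtype=bool)
    p = np.asarray(y_pred, dtype=float)
    return abs(p[g].mean() - p[~g].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between
    groups, computed only over examples whose true label is positive."""
    g = np.asarray(group, dtype=bool)
    y = np.asarray(y_true, dtype=bool)
    p = np.asarray(y_pred, dtype=float)
    tpr_a = p[g & y].mean()   # TPR within group A's actual positives
    tpr_b = p[~g & y].mean()  # TPR within group B's actual positives
    return abs(tpr_a - tpr_b)

# Toy data: predictions and labels for two demographic groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = group A, 0 = group B

dp = demographic_parity_gap(y_pred, group)        # 0.5: rates 0.25 vs 0.75
eo = equal_opportunity_gap(y_true, y_pred, group) # 0.5: TPRs 0.5 vs 1.0
```

A constrained-optimization approach would add a penalty or hard bound on gaps like these during training; re-weighting instead adjusts example weights so that group-outcome combinations are balanced before fitting.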
Papers
Group Fairness with Uncertainty in Sensitive Attributes
Abhin Shah, Maohao Shen, Jongha Jon Ryu, Subhro Das, Prasanna Sattigeri, Yuheng Bu, Gregory W. Wornell
Preventing Discriminatory Decision-making in Evolving Data Streams
Zichong Wang, Nripsuta Saxena, Tongjia Yu, Sneha Karki, Tyler Zetty, Israat Haque, Shan Zhou, Dukka Kc, Ian Stockwell, Albert Bifet, Wenbin Zhang