Fair Machine Learning
Fair machine learning aims to develop algorithms whose predictions do not discriminate on the basis of sensitive attributes such as race or gender. Current research focuses on mitigating bias through various techniques, including modifying model architectures (e.g., using mixed-effects models or incorporating fairness penalties into neural network training objectives), developing fairness-aware data augmentation methods, and employing active learning strategies to improve data representation. This field is crucial for ensuring equitable outcomes in applications ranging from healthcare and loan approval to criminal justice, promoting the responsible and ethical use of AI.
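To make one of the techniques above concrete, the following is a minimal sketch of adding a fairness penalty to a standard classification loss, here a differentiable demographic-parity gap between two sensitive groups. It is not taken from the listed papers; the toy data, the simple linear model, and the trade-off weight lambda_fair are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data (assumed): 200 samples, 5 features, binary label y, binary sensitive attribute s.
X = torch.randn(200, 5)
y = (X[:, 0] + 0.3 * torch.randn(200) > 0).float()
s = torch.randint(0, 2, (200,)).float()

model = nn.Linear(5, 1)                  # simple logistic-regression-style classifier
bce = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
lambda_fair = 1.0                        # assumed trade-off weight between accuracy and fairness

for epoch in range(100):
    optimizer.zero_grad()
    logits = model(X).squeeze(1)
    p = torch.sigmoid(logits)

    # Relaxed demographic-parity gap: absolute difference in mean predicted
    # positive probability between the two sensitive groups (differentiable).
    gap = (p[s == 1].mean() - p[s == 0].mean()).abs()

    # Total objective = prediction loss + weighted fairness penalty.
    loss = bce(logits, y) + lambda_fair * gap
    loss.backward()
    optimizer.step()

print(f"final demographic-parity gap: {gap.item():.3f}")

Increasing lambda_fair tightens the parity constraint at some cost in predictive accuracy; setting it to zero recovers ordinary training.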
Papers
FADE: Towards Fairness-aware Augmentation for Domain Generalization via Classifier-Guided Score-based Diffusion Models
Yujie Lin, Dong Li, Chen Zhao, Minglai Shao, Guihong Wan
AIM: Attributing, Interpreting, Mitigating Data Unfairness
Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Yada Zhu, Hendrik Hamann, Hanghang Tong