Fairness Methods
Fairness methods in machine learning aim to mitigate algorithmic biases that produce discriminatory outcomes across demographic groups. Current research focuses on techniques that achieve fairness while preserving accuracy, intervening at one of three stages: transforming the data before training (pre-processing), modifying the training objective (in-processing), or adjusting model predictions after training (post-processing). These interventions are often developed within specific model architectures, such as diffusion models, or through optimization frameworks like optimal transport; a sketch of the post-processing idea follows below. Such advances are crucial for the ethical and equitable use of AI in high-stakes applications like healthcare, lending, and criminal justice, informing both the scientific understanding of algorithmic bias and the development of fairer AI systems.
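To make the post-processing category concrete, here is a minimal sketch of one common variant: choosing per-group decision thresholds so that each group receives positive predictions at the same rate (demographic parity). This is an illustrative assumption, not the method of any paper listed below; the function names and the choice of fairness criterion are hypothetical.

```python
import numpy as np

def demographic_parity_thresholds(scores, groups, target_rate=0.5):
    """Pick a per-group threshold on model scores so that each group's
    positive-prediction rate is approximately target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile admits roughly target_rate
        # of this group's examples as positive predictions.
        thresholds[g] = np.quantile(group_scores, 1.0 - target_rate)
    return thresholds

def predict_with_thresholds(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce binary predictions."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Toy usage: two groups whose score distributions differ.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 100), rng.normal(0.4, 0.1, 100)])
groups = np.array(["A"] * 100 + ["B"] * 100)

thresholds = demographic_parity_thresholds(scores, groups, target_rate=0.3)
preds = predict_with_thresholds(scores, groups, thresholds)
for g in ("A", "B"):
    print(g, preds[groups == g].mean())  # both rates ~0.3
```

The trade-off this illustrates is typical of post-processing: the underlying model is untouched, which makes the method cheap and model-agnostic, but equalizing rates by shifting thresholds can reduce overall accuracy, which is why in-processing and pre-processing alternatives remain active research areas.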
Papers
FADE: Towards Fairness-aware Augmentation for Domain Generalization via Classifier-Guided Score-based Diffusion Models
Yujie Lin, Dong Li, Chen Zhao, Minglai Shao
What is Fair? Defining Fairness in Machine Learning for Health
Jianhui Gao, Benson Chou, Zachary R. McCaw, Hilary Thurston, Paul Varghese, Chuan Hong, Jessica Gronsbell