Bias Mitigation
Bias mitigation in machine learning aims to create fairer and more equitable algorithms by addressing biases stemming from training data and model architectures. Current research focuses on developing and evaluating various bias mitigation techniques, including data augmentation strategies (like mixup and proximity sampling), adversarial training methods, and post-processing approaches such as channel pruning and dropout. These efforts span diverse applications, from computer vision and natural language processing to medical image analysis and recommender systems, highlighting the broad significance of this field for ensuring responsible and ethical AI development. The ultimate goal is to improve model fairness without sacrificing accuracy or utility, leading to more equitable outcomes across different demographic groups.
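Of the data augmentation strategies mentioned above, mixup is the most self-contained to illustrate: it trains on convex combinations of example pairs rather than raw samples, which can dilute spurious group-specific correlations. The sketch below is a generic mixup batch transform, not taken from any of the listed papers; the `alpha` value and array shapes are assumptions for illustration.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation: convex combinations of example pairs.

    x: (n, d) feature array; y: (n, k) one-hot label array.
    alpha: Beta distribution parameter (illustrative default, not
    prescribed by the papers listed on this page).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    perm = rng.permutation(n)         # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```

Because labels are mixed with the same coefficient as features, each mixed label row remains a valid probability distribution, so the transform drops into standard cross-entropy training without further changes.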
Papers
Mitigating Matching Biases Through Score Calibration
Mohammad Hossein Moslemi, Mostafa Milani
Equitable Length of Stay Prediction for Patients with Learning Disabilities and Multiple Long-term Conditions Using Machine Learning
Emeka Abakasanga, Rania Kousovista, Georgina Cosma, Ashley Akbari, Francesco Zaccardi, Navjot Kaur, Danielle Fitt, Gyuchan Thomas Jun, Reza Kiani, Satheesh Gangadharan
How Can We Diagnose and Treat Bias in Large Language Models for Clinical Decision-Making?
Kenza Benkirane, Jackie Kay, Maria Perez-Ortiz
Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation
Louis Hickman, Christopher Huynh, Jessica Gass, Brandon Booth, Jason Kuruzovich, Louis Tay