Bias Mitigation
Bias mitigation in machine learning aims to build fairer, more equitable algorithms by addressing biases that stem from training data and model architectures. Current research focuses on developing and evaluating mitigation techniques, including data augmentation strategies (such as mixup and proximity sampling), adversarial training methods, and post-processing approaches such as channel pruning and dropout. These efforts span diverse applications, from computer vision and natural language processing to medical image analysis and recommender systems, underscoring the field's broad significance for responsible and ethical AI development. The ultimate goal is to improve model fairness without sacrificing accuracy or utility, yielding more equitable outcomes across demographic groups. A sketch of one of the augmentation strategies named here follows below.
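To make one of these techniques concrete, the sketch below shows a group-aware variant of mixup: each training example from one demographic group is interpolated with a randomly paired example from another group, so the model sees samples along paths between groups rather than only group-typical inputs. This is a minimal illustrative sketch, not the method of any paper listed below; the function name `group_mixup` and all parameters are hypothetical.

```python
import numpy as np

def group_mixup(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Mixup across demographic groups (hypothetical helper):
    interpolate each sample from group A with a randomly paired
    sample from group B, encouraging smooth behavior between groups."""
    rng = rng or np.random.default_rng()
    n = len(x_a)
    idx = rng.integers(0, len(x_b), size=n)    # random pairing with group B
    lam = rng.beta(alpha, alpha, size=(n, 1))  # mixing weights ~ Beta(alpha, alpha)
    x_mix = lam * x_a + (1.0 - lam) * x_b[idx]                # interpolated features
    y_mix = lam[:, 0] * y_a + (1.0 - lam[:, 0]) * y_b[idx]    # interpolated (soft) labels
    return x_mix, y_mix

# Toy usage: 2-D features and binary labels for two demographic groups.
rng = np.random.default_rng(0)
x_a, y_a = rng.normal(0, 1, (8, 2)), rng.integers(0, 2, 8).astype(float)
x_b, y_b = rng.normal(1, 1, (8, 2)), rng.integers(0, 2, 8).astype(float)
x_mix, y_mix = group_mixup(x_a, y_a, x_b, y_b)
print(x_mix.shape, y_mix.shape)  # (8, 2) (8,)
```

The mixed batch would be fed to an ordinary training loop in place of (or alongside) the raw data; small `alpha` values keep most mixed samples close to one of the two originals, while larger values produce more aggressive interpolation.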
Papers
A Multi-LLM Debiasing Framework
Deonna M. Owens, Ryan A. Rossi, Sungchul Kim, Tong Yu, Franck Dernoncourt, Xiang Chen, Ruiyi Zhang, Jiuxiang Gu, Hanieh Deilamsalehy, Nedim Lipka
STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
Robert Morabito, Sangmitra Madhusudan, Tyler McDonald, Ali Emami
Using Backbone Foundation Model for Evaluating Fairness in Chest Radiography Without Demographic Data
Dilermando Queiroz, André Anjos, Lilian Berton
Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
Farzaneh Dehghani, Mahsa Dibaji, Fahim Anzum, Lily Dey, Alican Basdemir, Sayeh Bayat, Jean-Christophe Boucher, Steve Drew, Sarah Elaine Eaton, Richard Frayne, Gouri Ginde, Ashley Harris, Yani Ioannou, Catherine Lebel, John Lysack, Leslie Salgado Arzuaga, Emma Stanley, Roberto Souza, Ronnie Souza, Lana Wells, Tyler Williamson, Matthias Wilms, Zaman Wahid, Mark Ungrin, Marina Gavrilova, Mariana Bento