Model Debiasing
Model debiasing aims to mitigate biases learned by machine learning models from skewed training data, improving their fairness, robustness, and generalization capabilities. Current research focuses on various techniques, including data augmentation, contrastive learning, anomaly detection, and instruction tuning, applied to diverse model architectures such as deep neural networks, graph neural networks, and large language models. These efforts are crucial for addressing societal biases embedded in AI systems and ensuring reliable performance across different demographic groups and contexts, with applications ranging from loan applications to medical image analysis and cyberbullying detection.
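To make one of these techniques concrete, below is a minimal sketch of a simple reweighting baseline for training-set bias. It is not drawn from any of the surveyed papers; the synthetic dataset, the binary `group` attribute, and the inverse-frequency weighting scheme are illustrative assumptions. The idea is to upweight under-represented (group, label) combinations so the model cannot rely on a spurious shortcut between the group attribute and the label.

```python
# Minimal debiasing sketch (illustrative assumptions, not a method from the papers above):
# reweight training examples so every (group, label) cell contributes equally to the loss.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic skewed training data: the spurious attribute `group` matches the label
# 90% of the time, so a naive model can learn the shortcut instead of the true signal.
n = 5000
group = rng.integers(0, 2, size=n)
label = np.where(rng.random(n) < 0.9, group, 1 - group)   # label correlated with group
signal = label + rng.normal(0.0, 1.0, size=n)              # noisy true signal
X = np.column_stack([signal, group.astype(float)])          # group leaks into the features

# Inverse-frequency weights per (group, label) cell balance the skew.
weights = np.ones(n)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        if mask.any():
            weights[mask] = n / (4.0 * mask.sum())

baseline = LogisticRegression().fit(X, label)
debiased = LogisticRegression().fit(X, label, sample_weight=weights)

# Evaluate on group-balanced test data where the spurious correlation is absent.
m = 2000
g_test = rng.integers(0, 2, size=m)
y_test = rng.integers(0, 2, size=m)
X_test = np.column_stack([y_test + rng.normal(0.0, 1.0, size=m), g_test.astype(float)])
for name, model in [("baseline", baseline), ("reweighted", debiased)]:
    acc = (model.predict(X_test) == y_test).mean()
    print(f"{name}: balanced-test accuracy = {acc:.3f}")
```

Cell-wise inverse-frequency reweighting is among the simplest debiasing baselines; the other techniques mentioned above (data augmentation, contrastive learning, anomaly detection, instruction tuning) pursue the same goal through data- or model-level interventions rather than loss weights.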
Papers
The surveyed papers were published between March 8, 2023 and August 9, 2024.