Model Debiasing
Model debiasing aims to mitigate the biases that machine learning models absorb from skewed training data, improving their fairness, robustness, and generalization. Current research spans a range of techniques, including data augmentation, contrastive learning, anomaly detection, and instruction tuning, applied to diverse architectures such as deep neural networks, graph neural networks, and large language models. These efforts are crucial for addressing societal biases embedded in AI systems and for ensuring reliable performance across demographic groups and contexts, with applications ranging from loan approval to medical image analysis and cyberbullying detection.
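To make the idea concrete, here is a minimal sketch of one of the simplest debiasing techniques: inverse-frequency group reweighting, where samples from under-represented groups are upweighted so every group contributes equally to the training loss. The function name and the toy group labels are illustrative, not taken from any specific paper listed below.

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Assign each sample a weight inversely proportional to its group's
    frequency, so under-represented groups contribute equally in aggregate."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    n_samples = len(group_labels)
    # weight = n_samples / (n_groups * count[g]); perfectly balanced
    # data yields a weight of 1.0 for every sample
    return [n_samples / (n_groups * counts[g]) for g in group_labels]

# Skewed toy data: group "A" appears three times as often as group "B"
weights = inverse_frequency_weights(["A", "A", "A", "B"])
# each "B" sample receives 3x the weight of an "A" sample
```

These per-sample weights would typically be passed to a weighted loss (e.g. the `sample_weight` argument accepted by many training APIs), which is one standard way such reweighting is wired into model training.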
Papers
- October 17, 2022
- October 11, 2022
- June 24, 2022
- June 15, 2022
- June 1, 2022
- March 25, 2022
- December 2, 2021