Bias Annotation
Bias annotation focuses on identifying and mitigating biases present in the datasets used to train machine learning models, with the primary aim of improving model fairness, robustness, and generalization. Current research emphasizes methods that detect and correct for these biases, employing techniques such as multi-task learning with pre-trained models (e.g., RoBERTa) and neural network architectures designed to disentangle bias from the true signal. This work is crucial for improving the reliability and ethical soundness of AI systems across diverse applications, from medical image analysis to media bias detection and scene graph generation.
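To make the multi-task idea concrete, the following is a minimal sketch of one common pattern: a shared pre-trained RoBERTa encoder with two classification heads, one for the primary task and one for a bias-related attribute, trained with a joint loss. The head names, label counts, and loss weighting here are illustrative assumptions, not the setup of any specific paper cited on this page.

```python
# Minimal multi-task sketch: shared RoBERTa encoder with a task head and a bias head.
# All hyperparameters and head definitions below are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class MultiTaskBiasModel(nn.Module):
    def __init__(self, num_task_labels: int = 2, num_bias_labels: int = 2):
        super().__init__()
        # Shared pre-trained encoder
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        # Primary prediction head (e.g., sentence classification)
        self.task_head = nn.Linear(hidden, num_task_labels)
        # Auxiliary head predicting a bias-related attribute, encouraging the
        # shared representation to separate bias from the task signal
        self.bias_head = nn.Linear(hidden, num_bias_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token ([CLS]-style) representation
        return self.task_head(cls), self.bias_head(cls)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = MultiTaskBiasModel()
batch = tokenizer(["Example sentence to annotate."], return_tensors="pt", padding=True)
task_logits, bias_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint loss: supervision on the bias head regularizes the shared encoder
task_loss = nn.functional.cross_entropy(task_logits, torch.tensor([1]))
bias_loss = nn.functional.cross_entropy(bias_logits, torch.tensor([0]))
loss = task_loss + 0.5 * bias_loss  # 0.5 is an illustrative task weighting
```

In practice the bias labels may come from human annotation or from an auxiliary bias-detection model, and the weighting between the two losses is tuned per dataset.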