Annotation Bias
Annotation bias is the systematic error that human labelers introduce into machine learning datasets, and it degrades both the fairness and the accuracy of models trained on those datasets. Current research focuses on identifying and mitigating these biases, particularly in natural language processing and medical image segmentation, using techniques such as multi-variable causal inference and transformer-based models that disentangle an annotator's systematic preferences from their stochastic errors. Addressing annotation bias is crucial for building robust, reliable AI systems: it improves the trustworthiness and generalizability of machine learning models across domains.
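One common way to separate an annotator's systematic preference from stochastic error is to estimate a per-annotator confusion matrix, as in the classic Dawid-Skene model. The sketch below is a minimal EM implementation on hypothetical toy data; the `dawid_skene` function and the synthetic annotators are illustrative assumptions, not the method of any paper listed here. Systematic bias shows up as off-diagonal mass in the learned confusion matrices, while random noise spreads more evenly.

```python
# Minimal Dawid-Skene-style EM sketch (toy data, illustrative only).
# Estimates each annotator's confusion matrix, separating systematic
# bias (structured off-diagonal mass) from random labeling error.
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """labels: (n_items, n_annotators) int array; -1 marks a missing label."""
    n_items, n_annot = labels.shape
    # Initialize the posterior over true labels from per-item vote counts.
    post = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for c in labels[i][labels[i] >= 0]:
            post[i, c] += 1
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-annotator confusion matrices
        # conf[a, t, o] = P(annotator a reports o | true class t).
        prior = post.mean(axis=0)
        conf = np.full((n_annot, n_classes, n_classes), 1e-6)
        for a in range(n_annot):
            for i in range(n_items):
                if labels[i, a] >= 0:
                    conf[a, :, labels[i, a]] += post[i]
            conf[a] /= conf[a].sum(axis=1, keepdims=True)
        # E-step: recompute the posterior over true labels.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for a in range(n_annot):
            for i in range(n_items):
                if labels[i, a] >= 0:
                    log_post[i] += np.log(conf[a, :, labels[i, a]])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post, conf

# Toy example: annotator 2 systematically over-reports class 1.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)
labels = np.stack(
    [truth, truth, np.where(rng.random(200) < 0.4, 1, truth)], axis=1
)
post, conf = dawid_skene(labels, n_classes=2)
# Row 0 shows P(report | true class 0); the ~0.4 off-diagonal entry is
# the recovered systematic bias toward class 1.
print(conf[2].round(2))
```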