Individual Annotator
Individual annotator research focuses on understanding and improving the quality and consistency of human-provided labels in machine learning, particularly for natural language processing tasks. Current work explores methods to identify reliable annotators, mitigate biases introduced by individual annotators (including biases reflected in LLMs used as annotators), and model annotator variability to improve model accuracy and fairness. This work is crucial for building robust and reliable AI systems: the quality of training data directly shapes model performance, and better modeling of annotators can reduce reliance on expensive, time-consuming manual annotation.
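As a minimal sketch of the "identify reliable annotators" idea, the snippet below scores each annotator by agreement with the majority-vote consensus over a small categorical label matrix. The toy data and the majority-vote baseline are illustrative assumptions, not a method taken from any specific paper listed here.

```python
import numpy as np

# Toy label matrix (assumed data): rows = items, columns = annotators,
# entries = categorical labels (here 0/1); -1 would mark a missing annotation.
labels = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
])

def majority_vote(row):
    # Consensus label per item; ties break toward the lower label value.
    vals, counts = np.unique(row[row >= 0], return_counts=True)
    return vals[np.argmax(counts)]

consensus = np.array([majority_vote(row) for row in labels])

# Per-annotator reliability proxy: agreement rate with the consensus.
for a in range(labels.shape[1]):
    mask = labels[:, a] >= 0
    agreement = np.mean(labels[mask, a] == consensus[mask])
    print(f"annotator {a}: agreement with majority vote = {agreement:.2f}")
```

More elaborate approaches in this area replace the majority-vote baseline with latent-truth models (e.g., Dawid-Skene style estimators) or keep per-annotator labels as separate signals rather than collapsing them to a single consensus.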