Individual Annotator
Individual annotator research focuses on understanding and improving the quality and consistency of human-provided labels in machine learning, particularly for natural language processing tasks. Current research explores methods to identify reliable annotators, to mitigate biases introduced by individual annotators (including biases reflected in LLMs used as annotators), and to model annotator variability in order to improve model accuracy and fairness. This work is crucial for building robust and reliable AI systems: the quality of training data directly affects model performance, and better modeling of annotators can reduce reliance on expensive, time-consuming manual annotation.
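One common way to model annotator variability, sketched below under assumed names and dimensions (this is an illustrative pattern, not the method of the papers listed here), is to attach a separate prediction head per annotator to a shared encoder and train on each annotator's individual labels rather than a single aggregated "gold" label.

```python
# Minimal sketch: shared encoder with per-annotator heads (assumed design, toy data).
import torch
import torch.nn as nn

class MultiAnnotatorClassifier(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int, num_annotators: int):
        super().__init__()
        # Shared representation of the input (e.g., pooled text embeddings).
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # One lightweight head per annotator captures that annotator's labeling behavior.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in range(num_annotators)]
        )

    def forward(self, x: torch.Tensor, annotator_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        # Route each example through the head of the annotator who labeled it.
        logits = torch.stack(
            [self.heads[a](h[i]) for i, a in enumerate(annotator_ids.tolist())]
        )
        return logits

# Toy usage: 4 examples, each labeled by one of 3 annotators.
model = MultiAnnotatorClassifier(input_dim=16, hidden_dim=32, num_classes=5, num_annotators=3)
x = torch.randn(4, 16)                      # placeholder input features
annotator_ids = torch.tensor([0, 2, 1, 0])  # which annotator produced each label
labels = torch.tensor([1, 3, 0, 4])         # that annotator's (possibly disagreeing) label
loss = nn.CrossEntropyLoss()(model(x, annotator_ids), labels)
loss.backward()
```

At inference time, such a model can either average the per-annotator heads for a consensus prediction or expose the individual heads when per-annotator variability is itself of interest.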
Papers
The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability
James Tavernor, Yara El-Tawil, Emily Mower Provost
Estimating Contribution Quality in Online Deliberations Using a Large Language Model
Lodewijk Gelauff, Mohak Goyal, Bhargav Dindukurthi, Ashish Goel, Alice Siu