Crowd-Sourced Annotation
Crowd-sourced annotation leverages large numbers of individuals to label data, addressing the cost and time constraints of manual annotation in fields as diverse as natural language processing and image analysis. Current research focuses on mitigating the noise and biases inherent in crowd-sourced labels, employing techniques such as confusion-matrix correction and multi-view aggregation to improve model accuracy and robustness. This approach is crucial for scaling data annotation, enabling advances in areas such as commonsense knowledge base population, privacy policy analysis, and ecological monitoring, where large datasets are essential but manual labeling is impractical.
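The confusion-matrix correction mentioned above can be illustrated with a minimal sketch. The snippet below is an illustrative Python example, not an implementation from any of the surveyed papers: it aggregates crowd labels by majority vote, estimates a per-annotator confusion matrix against that aggregate, and then performs one EM-style re-scoring step that weights each annotator's vote by their estimated reliability. The function names (`majority_vote`, `annotator_confusion`, `weighted_posterior`) and the uniform class prior are assumptions made for the example.

```python
import numpy as np

def majority_vote(labels, n_classes):
    """Aggregate crowd labels per item by simple majority vote.

    labels: (n_items, n_annotators) array of class ids, -1 for a missing label.
    Returns an (n_items,) array of aggregated labels.
    """
    agg = np.zeros(labels.shape[0], dtype=int)
    for i in range(labels.shape[0]):
        votes = labels[i][labels[i] >= 0]
        agg[i] = np.bincount(votes, minlength=n_classes).argmax()
    return agg

def annotator_confusion(labels, agg, n_classes):
    """Estimate each annotator's confusion matrix against the aggregated labels.

    Returns (n_annotators, n_classes, n_classes); C[a, true, obs] approximates
    P(observed label | true label) for annotator a, with additive smoothing.
    """
    n_items, n_annotators = labels.shape
    C = np.full((n_annotators, n_classes, n_classes), 1e-6)
    for a in range(n_annotators):
        for i in range(n_items):
            if labels[i, a] >= 0:
                C[a, agg[i], labels[i, a]] += 1.0
    return C / C.sum(axis=2, keepdims=True)

def weighted_posterior(labels, C, n_classes):
    """Re-score each item by combining votes through the annotators' confusion
    matrices (one EM-style correction step, assuming a uniform class prior)."""
    n_items, n_annotators = labels.shape
    post = np.ones((n_items, n_classes))
    for i in range(n_items):
        for a in range(n_annotators):
            if labels[i, a] >= 0:
                post[i] *= C[a, :, labels[i, a]]  # likelihood per candidate true class
        post[i] /= post[i].sum()
    return post.argmax(axis=1)

# Toy usage: 4 items, 3 annotators, 2 classes; -1 marks a missing label.
crowd = np.array([[0, 0, 1],
                  [1, 1, 1],
                  [0, 1, -1],
                  [1, 0, 1]])
mv = majority_vote(crowd, n_classes=2)
C = annotator_confusion(crowd, mv, n_classes=2)
corrected = weighted_posterior(crowd, C, n_classes=2)
print(mv, corrected)
```

In practice, methods such as Dawid-Skene iterate the estimation and re-scoring steps to convergence rather than applying a single pass as this sketch does.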