Crowdsourced Labeling
Crowdsourced labeling leverages human annotators to label large datasets for machine learning, addressing the cost and time constraints of expert annotation. Current research focuses on improving label quality by mitigating annotator disagreement and bias through techniques like incorporating annotator confidence, context-aware labeling strategies, and novel data acquisition methods such as patch labeling. These advancements aim to create more reliable and efficient training datasets, ultimately improving the accuracy and robustness of machine learning models across various applications, from natural language processing to image classification.
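One common way to mitigate annotator disagreement mentioned above is to weight each annotator's vote by a confidence score when aggregating labels. The sketch below illustrates this idea; the function name, the tuple format, and the use of self-reported confidence as the weight are illustrative assumptions, not a specific method from the literature.

```python
from collections import defaultdict

def aggregate_labels(annotations):
    """Aggregate crowdsourced labels per item by confidence-weighted vote.

    `annotations` is a list of (item_id, label, confidence) tuples, where
    confidence is a per-annotation weight in (0, 1] (an assumption here;
    it could also come from an annotator-reliability model).
    """
    # Accumulate confidence mass per (item, label) pair.
    scores = defaultdict(lambda: defaultdict(float))
    for item_id, label, conf in annotations:
        scores[item_id][label] += conf
    # Choose the label with the highest total weight for each item.
    return {item: max(votes, key=votes.get) for item, votes in scores.items()}
```

With unit confidences this reduces to plain majority voting; raising the weight of trusted annotators lets their votes break ties or outvote larger groups of uncertain ones.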