Feedback Annotation
Feedback annotation, the process of labeling data to guide the training and evaluation of machine learning models, is a crucial research area with impact across diverse fields. Current efforts focus on improving the quality and efficiency of annotation, including methods for implicit evaluation, techniques for extracting underlying principles from existing feedback, and systems that handle diverse feedback types (e.g., rankings, ratings, and multi-level descriptions), as sketched in the example below. These advances aim to address issues such as bias in human annotations and the high cost of manual labeling, ultimately yielding more accurate, reliable, and efficient AI models across applications such as language learning, code review, and traffic control.
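To make "diverse feedback types" concrete, here is a minimal Python sketch of a unified annotation record that can carry a scalar rating, a ranking, or multi-level descriptions in one schema. All names (FeedbackAnnotation, aggregate_ratings, the field names) are hypothetical illustrations, not taken from any specific paper or library.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class FeedbackAnnotation:
    """One unit of human feedback attached to a model output.

    Hypothetical schema: field names are illustrative, not drawn
    from a specific paper or annotation tool.
    """
    item_id: str                         # ID of the annotated model output
    annotator_id: str                    # who provided the feedback
    rating: Optional[float] = None       # scalar score, e.g. 1-5
    ranking: Optional[List[str]] = None  # item IDs ordered best-to-worst
    descriptions: Dict[str, str] = field(default_factory=dict)
    # multi-level descriptions, e.g. {"overall": "...", "line-3": "..."}

def aggregate_ratings(annotations: List[FeedbackAnnotation]) -> Optional[float]:
    """Average the scalar ratings, skipping annotations without one.

    A plain mean; real systems often also model per-annotator bias,
    which is one of the quality issues noted above.
    """
    scores = [a.rating for a in annotations if a.rating is not None]
    return sum(scores) / len(scores) if scores else None

# Example: two annotators label the same output with different feedback types.
anns = [
    FeedbackAnnotation("out-1", "ann-A", rating=4.0,
                       descriptions={"overall": "Clear but verbose."}),
    FeedbackAnnotation("out-1", "ann-B", rating=3.0,
                       ranking=["out-1", "out-2"]),
]
print(aggregate_ratings(anns))  # -> 3.5
```

Keeping all feedback types in one record, with optional fields, lets downstream training code consume rankings, ratings, and free-form descriptions through a single interface rather than separate pipelines; the trade-off is that aggregation logic must check which fields are present.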