Subjective Task
Research on subjective tasks focuses on improving machine learning models' performance on tasks that lack a single, universally agreed-upon "correct" answer, such as sentiment analysis or moral judgment. Current work emphasizes handling annotator disagreement by incorporating human judgment directly into model calibration and prediction, through methods such as consensus-based benchmarking and explicit modeling of individual annotator perspectives, with the aim of building more robust and equitable models (a minimal illustration follows below). This line of work matters for natural language processing and other AI fields because it yields more reliable and nuanced systems that better reflect the diversity of human judgment and opinion.
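As a concrete, hypothetical illustration of modeling annotator disagreement rather than collapsing it away: the sketch below keeps each item's full annotator label distribution as a soft label instead of a single majority vote, and scores how contested the item is with normalized entropy. The function names (soft_label, disagreement) and the example data are assumptions for illustration, not taken from any specific paper listed here.

```python
# Minimal sketch (assumed example): represent annotator disagreement as a
# soft label distribution per item instead of a single majority-vote label.
from collections import Counter
import math

def soft_label(annotations, classes):
    """Turn one item's annotator labels into a probability distribution over classes."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return [counts.get(c, 0) / total for c in classes]

def disagreement(dist):
    """Normalized entropy of the label distribution: 0 = full consensus, 1 = maximal disagreement."""
    nonzero = [p for p in dist if p > 0]
    if len(nonzero) <= 1:
        return 0.0
    return -sum(p * math.log(p) for p in nonzero) / math.log(len(dist))

# Hypothetical item: five annotators rate the sentiment of one sentence.
labels = ["positive", "positive", "neutral", "negative", "positive"]
classes = ["negative", "neutral", "positive"]
dist = soft_label(labels, classes)
print(dist)                # [0.2, 0.2, 0.6]
print(disagreement(dist))  # ~0.86: a contested, genuinely subjective item
```

A soft label like this can be used directly as a training target (e.g., cross-entropy against the distribution) or as an evaluation reference, so the model is rewarded for matching the spread of human judgments rather than only the majority class.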