Disagreement Analysis Framework
Disagreement analysis frameworks aim to understand and leverage discrepancies in human or model judgments, particularly in tasks involving subjective assessments such as sentiment analysis or safety evaluations. Current research focuses on methods to quantify and interpret these disagreements, employing techniques such as multi-task learning architectures and large language models to analyze annotator perspectives and identify systematic biases. This work is crucial for improving the reliability and fairness of machine learning systems, especially in high-stakes applications where human judgment is involved, and for gaining a deeper understanding of human cognitive processes.
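As a minimal sketch of the quantification step described above, the snippet below computes two simple disagreement signals: per-item label entropy (how split the annotators are on each example) and each annotator's rate of deviation from the majority label (a rough flag for systematic differences in perspective). The function names and data layout are illustrative assumptions, not an API from any specific paper.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy of the labels one item received (0 = full agreement)."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def annotator_majority_deviation(annotations):
    """Fraction of items on which each annotator disagrees with the majority.

    `annotations` maps item id -> {annotator id: label}. A consistently high
    deviation rate for one annotator can hint at a systematic bias or a
    genuinely different (but coherent) perspective worth modeling separately.
    """
    deviations = Counter()
    totals = Counter()
    for item_labels in annotations.values():
        majority = Counter(item_labels.values()).most_common(1)[0][0]
        for annotator, label in item_labels.items():
            totals[annotator] += 1
            if label != majority:
                deviations[annotator] += 1
    return {a: deviations[a] / totals[a] for a in totals}
```

Entropy captures how contested an individual item is, while the per-annotator deviation rate separates random noise from annotators who disagree in a consistent direction; more rigorous analyses would replace the majority-vote baseline with a chance-corrected statistic such as Krippendorff's alpha.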