Disagreement Analysis Framework

Disagreement analysis frameworks aim to understand and leverage discrepancies in human or model judgments, particularly in subjective tasks such as sentiment analysis or safety evaluation. Current research focuses on methods to quantify and interpret these disagreements, using techniques such as multi-task learning architectures and large language models to analyze annotator perspectives and identify systematic biases. This work matters for improving the reliability and fairness of machine learning systems, especially in high-stakes applications that depend on human judgment, and for gaining a deeper understanding of human cognitive processes.
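As a concrete illustration of quantifying disagreement, one simple approach (not tied to any specific paper above) is to score each annotated item by the Shannon entropy of its label distribution: unanimous items score zero, and maximally split items score highest. The annotation data below is hypothetical.

```python
from collections import Counter
from math import log2

def label_entropy(labels):
    """Shannon entropy of one item's annotation distribution (0 = full agreement)."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical sentiment annotations: rows = items, columns = annotators.
annotations = [
    ["pos", "pos", "pos"],  # unanimous -> entropy 0.0
    ["pos", "neg", "neu"],  # three-way split -> entropy log2(3) ~ 1.585
    ["neg", "neg", "pos"],  # partial disagreement -> entropy ~ 0.918
]

# High-entropy items can be flagged for review of annotator perspectives
# rather than being silently resolved by majority vote.
for item in annotations:
    print(item, round(label_entropy(item), 3))
```

Entropy-based scores like this are often a first step; richer frameworks model *which* annotators disagree and why, rather than only how much.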

Papers