Opinion Disagreement

Opinion disagreement covers discrepancies both in human judgments (such as annotator label variation) and in the predictions of multiple machine learning models; it is a growing research area that seeks to understand and leverage disagreement to improve model performance and human-AI collaboration. Current research focuses on detecting, quantifying, and exploiting disagreement, using techniques such as graph convolutional networks, multi-agent reinforcement learning, and ensemble methods to analyze diverse perspectives and to improve model robustness and calibration. This work has significant implications for natural language processing, legal decision-making, and the reliability of machine learning explanations, because it accounts for inherent human label variability and for inconsistencies across models.
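
As a rough illustration of how disagreement can be quantified, the sketch below computes per-example vote entropy and the mean pairwise disagreement rate for a small ensemble of classifiers (or, equivalently, a group of annotators). The function names and the toy data are assumptions for illustration, not taken from any particular paper in this area.

```python
# Minimal sketch: two simple disagreement measures over categorical predictions.
# Shapes and names here are illustrative assumptions.
from itertools import combinations

import numpy as np


def vote_entropy(labels: np.ndarray) -> np.ndarray:
    """Per-example entropy of the label votes.

    labels: integer array of shape (n_voters, n_examples).
    Returns an array of shape (n_examples,); 0 means full agreement.
    """
    n_voters, n_examples = labels.shape
    entropies = np.zeros(n_examples)
    for i in range(n_examples):
        _, counts = np.unique(labels[:, i], return_counts=True)
        p = counts / n_voters
        entropies[i] = -np.sum(p * np.log2(p))
    return entropies


def pairwise_disagreement(labels: np.ndarray) -> float:
    """Fraction of voter pairs that disagree, averaged over examples."""
    n_voters, _ = labels.shape
    pairs = combinations(range(n_voters), 2)
    rates = [np.mean(labels[a] != labels[b]) for a, b in pairs]
    return float(np.mean(rates))


if __name__ == "__main__":
    # 4 hypothetical models (or annotators) labelling 5 examples with classes 0/1/2.
    votes = np.array([
        [0, 1, 2, 1, 0],
        [0, 1, 2, 0, 0],
        [0, 2, 2, 1, 0],
        [0, 1, 2, 1, 1],
    ])
    print("vote entropy per example:", vote_entropy(votes))
    print("mean pairwise disagreement:", pairwise_disagreement(votes))
```

Per-example entropy highlights which items are contested (useful for calibration or for routing hard cases to humans), while the pairwise rate gives a single ensemble-level summary of how much the voters diverge overall.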

Papers