Human Agreement
Human agreement, the degree to which AI model outputs match human judgments, is crucial for validating AI systems and ensuring reliable decision-making, and it is a central focus of current research. Studies explore methods to improve this alignment, employing techniques such as selective evaluation with large language models (LLMs) and counterfactual analysis to diagnose model reasoning. This research is vital for building trustworthy AI systems across diverse applications, from automated essay scoring and contract analysis to more complex tasks such as negotiation and legal reasoning, and it ultimately improves the reliability and fairness of AI-driven outcomes. Developing robust metrics for evaluating human-AI agreement remains a key challenge.
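Work in this area commonly reports chance-corrected agreement statistics rather than raw accuracy. As a concrete illustration, the sketch below computes Cohen's kappa, a standard measure of agreement between two raters on categorical labels; the label data, function name, and two-rater setup are hypothetical, and this is a minimal example rather than the metric used by any particular paper above.

```python
from collections import Counter

def cohens_kappa(human_labels, model_labels):
    """Chance-corrected agreement between two label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    from each rater's marginal label distribution.
    """
    assert len(human_labels) == len(model_labels)
    n = len(human_labels)

    # Observed agreement: fraction of items with identical labels.
    p_o = sum(h == m for h, m in zip(human_labels, model_labels)) / n

    # Expected agreement: sum over labels of the product of each
    # rater's marginal probability of assigning that label.
    human_freq = Counter(human_labels)
    model_freq = Counter(model_labels)
    p_e = sum(
        (human_freq[label] / n) * (model_freq[label] / n)
        for label in set(human_labels) | set(model_labels)
    )

    # Degenerate case: both raters always assign the same single label.
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: pass/fail essay scores from a human grader
# and an automated scoring model on the same six essays.
human = ["pass", "pass", "fail", "pass", "fail", "pass"]
model = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"Cohen's kappa: {cohens_kappa(human, model):.3f}")  # 0.667
```

A kappa of 1.0 indicates perfect agreement and 0.0 indicates agreement no better than chance; extensions such as Fleiss' kappa or Krippendorff's alpha handle more than two raters.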
Papers
Nine papers, published between January 12, 2022 and November 28, 2022.