Human Agreement
Human agreement is central to validating AI systems and ensuring reliable decision-making, and it is a major focus of current research. Studies explore methods for aligning AI model outputs with human judgments, using techniques such as selective evaluation with large language models (LLMs) and counterfactual analysis to diagnose model reasoning. This work underpins trustworthy AI across diverse applications, from automated essay scoring and contract analysis to more complex tasks such as negotiation and legal reasoning, ultimately improving the reliability and fairness of AI-driven outcomes. Developing robust metrics for evaluating human-AI agreement remains a key challenge.
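Chance-corrected statistics such as Cohen's kappa are a standard starting point for measuring human-AI agreement, so a minimal sketch follows. The scoring task and labels here are illustrative assumptions, not drawn from any of the papers below.

```python
# Minimal sketch: Cohen's kappa, a common chance-corrected measure
# of agreement between human labels and model outputs.
from collections import Counter

def cohen_kappa(human: list, model: list) -> float:
    """Chance-corrected agreement between two raters on the same items."""
    assert len(human) == len(model) and human
    n = len(human)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(h == m for h, m in zip(human, model)) / n
    # Expected agreement if the two raters labeled independently.
    h_counts, m_counts = Counter(human), Counter(model)
    labels = set(h_counts) | set(m_counts)
    p_e = sum((h_counts[l] / n) * (m_counts[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical example: a model agrees with a human grader
# on 4 of 5 essay scores.
human_scores = ["A", "B", "B", "C", "A"]
model_scores = ["A", "B", "C", "C", "A"]
print(f"kappa = {cohen_kappa(human_scores, model_scores):.3f}")  # ~0.706
```

A kappa of 0 indicates agreement no better than chance, while 1 indicates perfect agreement; raw percent agreement alone overstates alignment when label distributions are skewed.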
Papers
Nineteen papers on this topic, dated from December 8, 2022 to October 10, 2024.