Human Answer Mistakes
Human answer mistakes, spanning errors in reasoning, knowledge, and judgment, pose a central challenge for building reliable AI systems, particularly large language models (LLMs). Current research leverages error analysis to refine training data, design more robust model architectures, and develop algorithms that learn from mistakes, including methods such as knowledge distillation and mistake-correction data augmentation. Understanding and mitigating these errors is crucial for building trustworthy AI across diverse applications, from autonomous vehicles to educational tools, and for advancing our understanding of human-AI interaction.
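As a rough illustration of the mistake-correction data-augmentation idea mentioned above, the sketch below turns known wrong answers into supervised correction pairs that a model could be fine-tuned on. The function name, prompt wording, and field names are all illustrative assumptions, not from any specific paper.

```python
# Hedged sketch: building "mistake-correction" training pairs, where a known
# wrong answer is paired with its correction so a model can learn to detect
# and fix errors. All names and formats here are illustrative assumptions.

def make_correction_example(question, wrong_answer, correct_answer):
    """Format one mistake-correction training pair as a prompt/target dict."""
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {wrong_answer}\n"
        "Identify the mistake and give the corrected answer."
    )
    target = f"The proposed answer is incorrect. Correct answer: {correct_answer}"
    return {"prompt": prompt, "target": target}

# Toy usage: augment a small QA set with recorded wrong answers.
raw = [
    ("What is 7 * 8?", "54", "56"),
    ("What is the capital of Australia?", "Sydney", "Canberra"),
]
dataset = [make_correction_example(q, w, c) for q, w, c in raw]
```

Each resulting pair teaches the model both to flag the error and to supply the fix, rather than only to produce correct answers from scratch.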