Human Answer Mistakes
Human answer mistakes, encompassing errors in reasoning, knowledge, and judgment, are a central challenge in developing reliable AI systems, particularly large language models (LLMs). Current research focuses on improving model accuracy by using error analysis to refine training data, build more robust model architectures, and design algorithms that learn from mistakes, including techniques such as knowledge distillation and mistake-correction data augmentation. Understanding and mitigating these errors is crucial for building trustworthy AI systems across diverse applications, from autonomous vehicles to educational tools, and for advancing our understanding of human-AI interaction.
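As a rough illustration of the mistake-correction data augmentation idea mentioned above, the sketch below turns recorded (question, wrong answer, correct answer) triples into prompt/target training pairs that ask a model to fix the mistake. All names here (`make_correction_example`, `augment_with_corrections`, the prompt wording) are hypothetical, assumed for illustration rather than taken from any specific paper.

```python
# Hypothetical sketch of mistake-correction data augmentation.
# Each recorded mistake becomes a training example that asks the
# model to identify and correct the erroneous answer.

def make_correction_example(question, wrong_answer, correct_answer):
    """Build one (prompt, target) pair from a recorded mistake."""
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {wrong_answer}\n"
        "The proposed answer contains a mistake. "
        "Give the corrected answer."
    )
    return {"prompt": prompt, "target": correct_answer}

def augment_with_corrections(mistakes):
    """Expand a list of (question, wrong, correct) triples into
    correction-style training pairs."""
    return [make_correction_example(q, w, c) for q, w, c in mistakes]

# Toy examples of human answer mistakes (illustrative only).
mistakes = [
    ("What is 7 * 8?", "54", "56"),
    ("What is the capital of Australia?", "Sydney", "Canberra"),
]
dataset = augment_with_corrections(mistakes)
```

The resulting pairs can be appended to an ordinary fine-tuning dataset, so the model sees both the original error pattern and its correction.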