Reasoning Errors
Reasoning errors in large language models (LLMs) are a significant area of research, focusing on understanding the causes and developing methods to mitigate these inaccuracies in tasks requiring logical deduction and problem-solving. Current research investigates LLMs' internal representations to detect and predict errors, explores improved training methods using error-correction data, and examines the effectiveness of different prompting strategies to enhance reasoning accuracy, particularly within mathematical and logical domains. Addressing these errors is crucial for improving the reliability and trustworthiness of LLMs across various applications, from educational tools to industrial quality control.
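To make the error-detection line of work above concrete, the sketch below shows one common setup: a linear probe trained on hidden-state vectors to predict whether a model's answer will be wrong. This is an illustrative example only, not the method of any paper listed here; the hidden states and correctness labels are synthetic placeholders standing in for real LLM activations and graded answers.

```python
# Illustrative sketch: a linear probe over (synthetic) LLM hidden states that
# predicts whether the corresponding answer is erroneous.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,000 "hidden states" of dimension 64 with a weak
# linear signal separating correct (0) from erroneous (1) answers.
n, d = 1000, 64
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
labels = (X @ w_true + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0
)

# The probe itself: plain logistic regression on the hidden-state vector.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# Evaluate how well the probe ranks erroneous answers above correct ones.
scores = probe.predict_proba(X_test)[:, 1]
print(f"Error-prediction AUROC: {roc_auc_score(y_test, scores):.3f}")
```

In an actual study, the probe's inputs would be activations extracted from a chosen transformer layer for each question, and the labels would come from grading the model's answers against a benchmark; the probe's accuracy then indicates how much of the error signal is already encoded in the model's internal representations.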
Papers
SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
Ling Yang, Zhaochen Yu, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan
Logic Error Localization in Student Programming Assignments Using Pseudocode and Graph Neural Networks
Zhenyu Xu, Kun Zhang, Victor S. Sheng