Reasoning Trace
Reasoning traces are step-by-step explanations of a large language model's (LLM) problem-solving process, intended to improve the transparency and accuracy of LLM reasoning, particularly on complex tasks. Current research focuses on improving LLMs' ability to generate these traces, using methods such as chain-of-thought prompting and integration with external knowledge sources, while also critically evaluating how much the traces actually contribute to performance. This work matters because reliable reasoning traces can increase the trustworthiness and interpretability of LLMs, supporting more robust and dependable applications.
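As a concrete illustration of chain-of-thought prompting, the sketch below builds a prompt that asks a model to emit its reasoning trace before the final answer, then separates trace from answer in the completion. The instruction wording, the `Answer:` delimiter, and the mocked completion are illustrative assumptions, not any specific model's format; in practice, any LLM completion API would replace the mock.

```python
# Minimal chain-of-thought sketch (assumptions: instruction wording and the
# "Answer:" delimiter are illustrative conventions, not a standard format).

COT_INSTRUCTION = "Think step by step, then give the final answer after 'Answer:'."

def build_cot_prompt(question: str) -> str:
    """Wrap a question with an instruction that elicits a reasoning trace."""
    return f"{COT_INSTRUCTION}\n\nQuestion: {question}\n"

def parse_trace(completion: str) -> tuple[str, str]:
    """Split a completion into (reasoning trace, final answer)."""
    trace, _, answer = completion.rpartition("Answer:")
    return trace.strip(), answer.strip()

# Mocked model output showing the expected shape of a traced completion.
completion = (
    "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples.\n"
    "Answer: 12"
)
trace, answer = parse_trace(completion)
print(answer)  # prints "12"
```

Keeping the trace and the answer as separate fields is what makes the trace inspectable: the answer can be checked automatically while the trace is evaluated for faithfulness.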