Symbolic Reasoning Task
Symbolic reasoning tasks assess the ability of large language models (LLMs) to perform multi-step logical deduction and calculation, often involving mathematical formulas or symbolic manipulation. Current research focuses on improving LLM performance on these tasks through techniques such as chain-of-thought prompting, which elicits intermediate reasoning steps from the model, and data normalization methods that improve the handling of tabular inputs. These advances matter because stronger symbolic reasoning underpins broader applications that require complex logical inference, such as question answering, code generation, and scientific discovery.
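The chain-of-thought idea mentioned above can be sketched in a few lines: instead of asking for the answer directly, the prompt includes a worked exemplar and a trigger phrase that invites step-by-step reasoning. The exemplar and trigger below are illustrative, not taken from any specific paper's prompt template, and the function is a hypothetical helper, not part of any library.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a symbolic-reasoning question in a chain-of-thought style prompt.

    A minimal sketch: a single worked exemplar (one-shot) followed by the
    new question and a 'think step by step' trigger phrase.
    """
    exemplar = (
        "Q: If x + 3 = 10, what is x?\n"
        "A: Let's think step by step. Subtracting 3 from both sides "
        "gives x = 10 - 3 = 7. The answer is 7.\n\n"
    )
    # The model is expected to continue the completion after the trigger,
    # producing its own intermediate steps before the final answer.
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If 2y - 4 = 8, what is y?")
```

The resulting string would be sent to an LLM as-is; the only change from a direct prompt is the exemplar and the trigger phrase, which is what makes the technique cheap to apply.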