Logical Reasoning Capability
Logical reasoning capability in large language models (LLMs) is a burgeoning research area focused on evaluating and enhancing these models' ability to perform complex deductive, inductive, and abductive reasoning. Current research emphasizes developing robust benchmarks, such as those based on logic games, puzzles, and knowledge-graph question answering, to assess LLM performance and expose weaknesses in their reasoning processes, often employing techniques such as chain-of-thought prompting and contrastive learning. These efforts are crucial for improving the reliability and trustworthiness of LLMs across diverse applications, from legal and medical domains to general-purpose problem solving.
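To make the chain-of-thought evaluation setup mentioned above concrete, the sketch below shows one common pattern: prompt a model to reason step by step on a deductive puzzle and grade only its final "Answer:" line against the gold label. This is a minimal illustration, not the protocol of any specific benchmark; the `ask_model` callable, the prompt template, and the grading heuristic are assumptions introduced here for the example.

```python
# Minimal sketch of chain-of-thought evaluation on a deductive logic item.
# `ask_model` is any caller-supplied callable mapping a prompt string to the
# model's text output (e.g. a thin wrapper around a hosted LLM API); it is an
# assumption of this sketch, not part of any particular library.
from typing import Callable

COT_TEMPLATE = (
    "Solve the following logic puzzle. Think step by step, then give the "
    "final answer on a new line in the form 'Answer: <choice>'.\n\n{puzzle}"
)

def evaluate_with_cot(ask_model: Callable[[str], str],
                      puzzle: str,
                      gold_answer: str) -> bool:
    """Return True if the model's final answer matches the gold label."""
    completion = ask_model(COT_TEMPLATE.format(puzzle=puzzle))
    # Keep only the text after the last 'Answer:' marker as the prediction,
    # ignoring the intermediate reasoning steps when scoring.
    prediction = completion.rsplit("Answer:", 1)[-1].strip().lower()
    return prediction.startswith(gold_answer.strip().lower())

if __name__ == "__main__":
    puzzle = ("All researchers who publish benchmarks cite prior work. "
              "Dana publishes benchmarks. Does Dana cite prior work? (yes/no)")
    # Stub "model" used only so the sketch runs without network access.
    stub = lambda prompt: ("Dana publishes benchmarks, and all such "
                           "researchers cite prior work.\nAnswer: yes")
    print(evaluate_with_cot(stub, puzzle, gold_answer="yes"))
```

Scoring only the final answer line is a deliberate simplification; many benchmarks additionally inspect the intermediate reasoning to catch correct answers reached through flawed chains.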