Logical Reasoning Datasets
Logical reasoning datasets are crucial for evaluating and improving the ability of large language models (LLMs) to perform complex reasoning tasks, mirroring human-like deductive, inductive, and abductive inference. Current research focuses on developing more comprehensive and robust datasets, often incorporating symbolic logic representations and diverse reasoning types, and employing techniques like chain-of-thought prompting, bidirectional chaining, and integration with external symbolic solvers to enhance LLM performance. These advancements are significant because reliable logical reasoning is essential for building trustworthy AI systems with applications ranging from fact verification and fallacy detection to question answering and knowledge graph reasoning.
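The solver-integration idea mentioned above can be illustrated with a minimal sketch: an LLM translates natural-language premises into symbolic rules, and a small deterministic solver checks entailment. The forward-chaining solver and the rule set below are hypothetical toy examples, not taken from any of the listed papers or datasets.

```python
# Toy forward-chaining solver over Horn clauses: a minimal stand-in for the
# external symbolic solvers some systems pair with an LLM. In a real pipeline
# the LLM would produce the facts and rules from natural-language premises;
# here they are hard-coded hypothetical examples.

def forward_chain(facts, rules):
    """Return all facts derivable from `facts` via Horn rules.

    Each rule is a (premises, conclusion) pair, where `premises` is a set of
    fact strings and `conclusion` is a single fact string.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical translation of: "Polly is a bird. Birds have wings.
# Winged animals can fly (in this toy world)."
facts = {"bird(polly)"}
rules = [
    ({"bird(polly)"}, "has_wings(polly)"),
    ({"has_wings(polly)"}, "can_fly(polly)"),
]

print("can_fly(polly)" in forward_chain(facts, rules))  # True
```

Because the solver is deterministic, any reasoning error traces back to the LLM's translation step, which is one motivation for pairing LLMs with symbolic back-ends rather than relying on free-form chain-of-thought alone.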
Papers
Large Language Models are Better Reasoners with Self-Verification
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, Jun Zhao
APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning
Soumya Sanyal, Yichong Xu, Shuohang Wang, Ziyi Yang, Reid Pryzant, Wenhao Yu, Chenguang Zhu, Xiang Ren