Logical Reasoning
Logical reasoning in artificial intelligence focuses on developing models capable of complex deductive and inductive inference, mirroring human reasoning. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, process supervision during training, and integration with symbolic systems such as automated theorem provers. These advances aim to address LLMs' tendency to rely on superficial patterns rather than genuine logical understanding, improving the reliability and trustworthiness of AI systems in applications ranging from scientific discovery to legal reasoning. Robust benchmarks, such as those built on 3-SAT problems and logic games, are crucial for evaluating and driving progress in this field.
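To make the benchmark point concrete, below is a minimal Python sketch of a 3-SAT-style evaluation harness: it generates random 3-SAT instances and mechanically checks whether a model's proposed truth assignment satisfies them. The function names (random_3sat, check_assignment) and the DIMACS-style literal encoding are illustrative assumptions, not taken from any of the papers listed here.

```python
# Illustrative sketch of a 3-SAT-style benchmark harness (hypothetical;
# the papers below may construct and score their benchmarks differently).
import random

def random_3sat(num_vars: int, num_clauses: int, seed: int = 0):
    """Generate a random 3-SAT instance as a list of clauses.

    Each clause is a tuple of three literals in DIMACS style:
    a positive int k means variable k, a negative int means its negation.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in variables))
    return clauses

def check_assignment(clauses, assignment):
    """Return True iff `assignment` (dict: var -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

if __name__ == "__main__":
    instance = random_3sat(num_vars=5, num_clauses=10)
    # Score a model's proposed assignment; here a random baseline stands in
    # for the parsed output of an LLM.
    proposal = {v: random.choice([True, False]) for v in range(1, 6)}
    print(instance)
    print("satisfied:", check_assignment(instance, proposal))
```

One appeal of this style of benchmark is that grading is fully mechanical: whatever chain-of-thought a model produces is free-form, but the final assignment is verified against the formula, so superficial pattern matching cannot inflate the score.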
Papers
Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning
Qiming Bao, Gael Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu
Learning To Teach Large Language Models Logical Reasoning
Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li
GLoRE: Evaluating Logical Reasoning of Large Language Models
Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang
Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models
Shashank Sonkar, Richard G. Baraniuk
Exploring Self-supervised Logic-enhanced Training for Large Language Models
Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty
Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs
Siyuan Wang, Zhongyu Wei, Meng Han, Zhihao Fan, Haijun Shan, Qi Zhang, Xuanjing Huang