Natural Language Reasoning
Natural language reasoning (NLR) focuses on enabling computers to understand and reason over information expressed in human language, with the goal of replicating human-like logical deduction and inference. Current research relies heavily on large language models (LLMs), often augmented with symbolic AI techniques or external tools such as SQL databases and symbolic solvers, to improve accuracy and mitigate limitations such as hallucinations and inconsistent reasoning. The field is central to advancing AI's ability to process and interpret complex information, with applications ranging from question answering and decision-making in social simulations to medical diagnosis and robotics control.
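To make the "LLM augmented with a symbolic solver" pattern concrete, the sketch below (not taken from any of the listed papers) has a hypothetical LLM call translate a natural-language problem into propositional implications, which are then checked by the Z3 solver so the final verdict comes from formal deduction rather than free-form generation. The `call_llm` function and the formalization format are illustrative assumptions.

```python
# Minimal sketch of the common "LLM + symbolic solver" pattern:
# the LLM formalizes the problem, Z3 performs the actual deduction.
# `call_llm` is a hypothetical stand-in for a real LLM API.

from z3 import Bool, Implies, Not, Solver, unsat


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query an actual model."""
    # Assume the model returns premises and a conclusion as simple implications.
    return "rains -> wet_ground; wet_ground -> slippery; conclusion: rains -> slippery"


def prove(formalization: str) -> bool:
    """Parse the LLM's formalization and check the conclusion with Z3."""
    premises_part, conclusion_part = formalization.split("conclusion:")
    symbols = {}

    def atom(name):
        # Map each proposition name to a single Z3 Boolean constant.
        name = name.strip()
        if name not in symbols:
            symbols[name] = Bool(name)
        return symbols[name]

    solver = Solver()
    for clause in premises_part.split(";"):
        if "->" in clause:
            lhs, rhs = clause.split("->")
            solver.add(Implies(atom(lhs), atom(rhs)))

    lhs, rhs = conclusion_part.split("->")
    # The conclusion is entailed iff premises AND NOT(conclusion) is unsatisfiable.
    solver.add(Not(Implies(atom(lhs), atom(rhs))))
    return solver.check() == unsat


if __name__ == "__main__":
    formal = call_llm(
        "If it rains the ground gets wet; wet ground is slippery. "
        "Does rain imply slippery ground?"
    )
    print("Conclusion entailed:", prove(formal))
```

Offloading the deduction step to a solver in this way is one route to the accuracy and consistency gains described above, since the solver cannot hallucinate an inference the premises do not license.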
Papers
SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization
Wanhua Li, Zibin Meng, Jiawei Zhou, Donglai Wei, Chuang Gan, Hanspeter Pfister
Causal Interventions on Causal Paths: Mapping GPT-2's Reasoning From Syntax to Semantics
Isabelle Lee, Joshua Lum, Ziyi Liu, Dani Yogatama