Human Reasoning
Human reasoning research investigates how humans and artificial intelligence (AI) systems draw inferences and solve problems, with the goal of understanding and replicating human-like cognitive processes. Current work focuses on enhancing the reasoning capabilities of AI systems, particularly large language models (LLMs), through techniques such as chain-of-thought prompting, multi-model collaboration, and the integration of world and agent models; a minimal sketch of chain-of-thought prompting follows below. This research addresses limitations of current AI systems, has implications for AI decision-making in applications such as autonomous driving and claim verification, and offers insights into the nature of human cognition itself.
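As a concrete illustration of the chain-of-thought technique mentioned above, here is a minimal Python sketch. The `query_llm` function is a hypothetical placeholder for any text-completion client, and the prompt wording is illustrative rather than drawn from any of the papers listed below.

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# `query_llm` is a hypothetical stand-in for a real LLM client;
# wire in your provider's API before use.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model client."""
    raise NotImplementedError("connect a real LLM client here")

def chain_of_thought(question: str) -> str:
    # Core idea: ask the model to produce intermediate reasoning
    # steps before committing to a final answer.
    prompt = (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on the last line as 'Answer: <answer>'."
    )
    completion = query_llm(prompt)
    # Extract the final answer line; the reasoning trace precedes it.
    for line in reversed(completion.splitlines()):
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return completion.strip()
```

The prompt deliberately separates the reasoning trace from the machine-readable answer line, which makes the output easy to parse while still eliciting intermediate steps.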
Papers
Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought
Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, Louis Castricato, Jan-Philipp Fränken, Nick Haber, Chelsea Finn
IOLBENCH: Benchmarking LLMs on Linguistic Reasoning
Satyam Goyal, Soham Dan