Human Reasoning

Research on human reasoning investigates how people draw inferences and solve problems, and how artificial intelligence (AI) systems can understand and replicate these cognitive processes. Current work focuses on strengthening the reasoning capabilities of AI systems, particularly large language models (LLMs), through techniques such as chain-of-thought prompting, multi-model collaboration, and the integration of world and agent models. This line of research matters because it addresses known limitations of current AI systems, informs AI decision-making in applications such as autonomous driving and claim verification, and offers insights into the nature of human cognition itself.
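
Chain-of-thought prompting, for instance, simply asks the model to write out intermediate reasoning steps before committing to a final answer. The sketch below is a minimal illustration under stated assumptions: the `call_llm` helper, the prompt wording, and the answer-extraction step are hypothetical stand-ins for whatever LLM client and prompt format a given study actually uses.

```python
# Minimal illustration of chain-of-thought (CoT) prompting.
# `call_llm` is a hypothetical placeholder for any text-generation API;
# wire it to your own client (hosted API, local model, etc.).

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError("Connect this to an LLM client of your choice.")

def answer_directly(question: str) -> str:
    # Baseline: ask for the answer with no intermediate reasoning.
    return call_llm(f"Question: {question}\nAnswer:")

def answer_with_cot(question: str) -> str:
    # Chain-of-thought: ask the model to articulate intermediate steps
    # before stating a final answer, which tends to help on multi-step
    # arithmetic and logical problems.
    prompt = (
        f"Question: {question}\n"
        "Let's think step by step, then give the final answer "
        "on a new line prefixed with 'Answer:'."
    )
    completion = call_llm(prompt)
    # Keep only the text after the last 'Answer:' marker, if present.
    return completion.rsplit("Answer:", 1)[-1].strip()

if __name__ == "__main__":
    q = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
    print(answer_with_cot(q))
```

In practice, the reasoning trace produced before the final answer is what distinguishes this style of prompting from the direct-answer baseline shown in `answer_directly`.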

Papers