Human Reasoning
Human reasoning research investigates how humans and artificial intelligence (AI) systems draw inferences and solve problems, with the twin goals of understanding human cognition and reproducing human-like reasoning in machines. Current work focuses on improving the reasoning capabilities of AI systems, particularly large language models (LLMs), through techniques such as chain-of-thought prompting, multi-model collaboration, and the integration of world and agent models. This line of work matters because it addresses known limitations of today's AI systems, with practical implications for AI decision-making in applications such as autonomous driving and claim verification, and because it offers insights into the nature of human cognition itself.
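To make the chain-of-thought idea concrete, the sketch below shows a minimal zero-shot chain-of-thought loop: the prompt is augmented with a trigger phrase that elicits step-by-step reasoning, and the model is then asked for a final answer conditioned on that reasoning. This is a sketch of the general recipe, not the method of any paper listed below; `query_llm` is a hypothetical placeholder for whatever LLM API is available.

```python
# Minimal sketch of zero-shot chain-of-thought (CoT) prompting.
# NOTE: `query_llm` is a hypothetical stand-in for a real LLM API call;
# replace its body with your provider's client. It returns a canned
# string here only so the sketch runs end to end.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real completion API."""
    return "(model output would appear here)"


def chain_of_thought_answer(question: str) -> str:
    # Step 1: elicit intermediate reasoning with a CoT trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = query_llm(reasoning_prompt)

    # Step 2: ask for the final answer conditioned on the generated
    # reasoning, so the model commits to a conclusion.
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the final answer is:"
    )
    return query_llm(answer_prompt)


if __name__ == "__main__":
    print(chain_of_thought_answer(
        "If a train travels 60 miles in 90 minutes, "
        "what is its average speed in miles per hour?"
    ))
```

The two-stage structure mirrors the common zero-shot CoT recipe: the first call produces a reasoning trace, and the second extracts a concise answer from that trace rather than from the bare question.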
Papers
Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, Ajay Divakaran
Don't Ignore Dual Logic Ability of LLMs while Privatizing: A Data-Intensive Analysis in Medical Domain
Yanrui Du, Sendong Zhao, Muzhen Cai, Ming Ma, Danyang Zhao, Jiawei Cao, Bing Qin