Reasoning Ability
Reasoning ability in large language models (LLMs) is a burgeoning research area focused on evaluating and enhancing these models' capacity to perform multi-step inference and solve complex problems requiring logical deduction and inductive learning. Current research emphasizes benchmarking LLMs on diverse tasks, including mathematical reasoning, commonsense reasoning, and procedure following, and often employs techniques such as chain-of-thought prompting and knowledge distillation to improve performance. Understanding and improving LLM reasoning is crucial for building more reliable and trustworthy AI systems with broader applications across many fields, from scientific discovery to decision-making support.
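As a concrete illustration of one technique mentioned above, the following is a minimal sketch of zero-shot chain-of-thought prompting against an OpenAI-compatible chat API. The model name, example question, and prompt wording are illustrative assumptions and are not drawn from any of the papers listed below:

# Minimal sketch of chain-of-thought prompting; assumes the openai Python
# client (>= 1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct prompt: ask for the answer with no intermediate reasoning.
direct_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompt: ask the model to reason step by step before answering.
cot_prompt = f"{question}\nLet's think step by step, then state the final answer."

for label, prompt in [("direct", direct_prompt), ("chain-of-thought", cot_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)

In benchmark settings such as mathematical reasoning, the chain-of-thought variant typically elicits intermediate steps that can be inspected and scored, which is one reason it features prominently in the evaluation work surveyed here.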
Papers
MedG-KRP: Medical Graph Knowledge Representation Probing
Gabriel R. Rosenbaum, Lavender Yao Jiang, Ivaxi Sheth, Jaden Stryker, Anton Alyakin, Daniel Alexander Alber, Nicolas K. Goff, Young Joon (Fred) Kwon, John Markert, Mustafa Nasir-Moin, Jan Moritz Niehues, Karl L. Sangwon, Eunice Yang, Eric Karl Oermann
A recent evaluation on the performance of LLMs on radiation oncology physics using questions of randomly shuffled options
Peilong Wang, Jason Holmes, Zhengliang Liu, Dequan Chen, Tianming Liu, Jiajian Shen, Wei Liu
Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios
Shantanu Jaiswal, Debaditya Roy, Basura Fernando, Cheston Tan
Disentangling Memory and Reasoning Ability in Large Language Models
Mingyu Jin, Weidi Luo, Sitao Cheng, Xinyi Wang, Wenyue Hua, Ruixiang Tang, William Yang Wang, Yongfeng Zhang
Patience Is The Key to Large Language Model Reasoning
Yijiong Yu