Reasoning Benchmark
Reasoning benchmarks are standardized tests designed to evaluate the logical reasoning capabilities of large language models (LLMs). Current research focuses on developing more challenging benchmarks that go beyond simple question answering, including tasks that require multi-step reasoning, long-context handling, and diverse reasoning types (deductive, inductive, abductive, analogical). Methods evaluated on these benchmarks draw on techniques such as chain-of-thought prompting, in-context learning, and architectures incorporating generator-discriminator networks or hybrid thinking frameworks to improve LLM performance. Robust, comprehensive reasoning benchmarks are crucial for advancing artificial intelligence because they provide objective measures of progress and identify areas needing further research.
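In practice, chain-of-thought evaluation on such a benchmark is usually just a prompting and scoring harness around the benchmark items. The sketch below illustrates that idea under stated assumptions: `query_model`, the few-shot example, and the "The answer is" extraction marker are hypothetical placeholders for illustration, not part of any specific benchmark or API.

```python
# Minimal sketch of scoring an LLM on a multi-step reasoning benchmark with
# chain-of-thought prompting. `query_model` is a hypothetical stand-in for
# whatever LLM API is actually used; the few-shot example and dataset items
# are illustrative only.

from typing import Callable

COT_PROMPT = (
    "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "A: Let's think step by step. Speed = distance / time = 60 / 1.5 = 40 km/h. "
    "The answer is 40.\n\n"
    "Q: {question}\n"
    "A: Let's think step by step."
)


def extract_answer(completion: str) -> str:
    """Take the text after the final 'The answer is' as the model's answer."""
    marker = "The answer is"
    tail = completion.rsplit(marker, 1)[-1]
    return tail.strip().rstrip(".").strip()


def evaluate(dataset: list[dict], query_model: Callable[[str], str]) -> float:
    """Return exact-match accuracy of chain-of-thought answers on `dataset`.

    Each item is assumed to look like {"question": ..., "answer": ...}.
    """
    correct = 0
    for item in dataset:
        prompt = COT_PROMPT.format(question=item["question"])
        completion = query_model(prompt)
        if extract_answer(completion) == str(item["answer"]):
            correct += 1
    return correct / len(dataset) if dataset else 0.0
```

Real harnesses differ mainly in the answer-extraction rule and the number of few-shot exemplars, but the structure (format prompt, query model, parse answer, compare to gold label) is the same.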
Papers
OC-NMN: Object-centric Compositional Neural Module Network for Generative Visual Analogical Reasoning
Rim Assouel, Pau Rodriguez, Perouz Taslakian, David Vazquez, Yoshua Bengio
DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy
Hongda Sun, Weikai Xu, Wei Liu, Jian Luan, Bin Wang, Shuo Shang, Ji-Rong Wen, Rui Yan
How susceptible are LLMs to Logical Fallacies?
Amirreza Payandeh, Dan Pluth, Jordan Hosier, Xuesu Xiao, Vijay K. Gurbani
Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models
Navid Rajabi, Jana Kosecka
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang