Mathematical Reasoning
Mathematical reasoning in large language models (LLMs) is a fast-growing research area focused on evaluating and improving these models' ability to solve mathematical problems, spanning both symbolic and numerical reasoning. Current work emphasizes two directions: building more robust benchmarks that assess not only final-answer accuracy but also the reasoning process itself, including error detection and correction, and exploring training methods such as reinforcement learning from human feedback and instruction tuning to improve performance. The field matters because stronger mathematical reasoning in LLMs has broad implications for education, scientific discovery, and automated problem-solving.
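To make the distinction between final-answer accuracy and process-level evaluation concrete, here is a minimal sketch in Python. It is not drawn from any of the papers listed below; the answer-extraction regex, the "a + b = c" step checker, and the toy example are illustrative assumptions only.

```python
# Minimal sketch (illustrative assumptions, not any paper's method):
# contrast final-answer accuracy with a toy step-level (process) check.
import re

def extract_final_answer(solution: str) -> str | None:
    """Take the last number in the model's solution as its final answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", solution)
    return numbers[-1] if numbers else None

def final_answer_correct(solution: str, gold: str) -> bool:
    """Final-accuracy metric: only the extracted answer is compared to gold."""
    pred = extract_final_answer(solution)
    return pred is not None and float(pred) == float(gold)

def steps_consistent(solution: str) -> bool:
    """Toy process check: every 'a + b = c' step in the text must actually hold."""
    for a, b, c in re.findall(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)", solution):
        if int(a) + int(b) != int(c):
            return False
    return True

model_output = "Tom has 3 apples and buys 5 more: 3 + 5 = 9. The answer is 9."
print(final_answer_correct(model_output, "8"))  # False: final answer is wrong
print(steps_consistent(model_output))           # False: 3 + 5 != 9
```

A benchmark that reports only the first metric would score a lucky guess and a sound derivation identically; process-level checks like the second function are what the more recent benchmarks described above aim to capture.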
Papers
Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist
Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang, Qiufeng Wang, Kaizhu Huang
Self-training Language Models for Arithmetic Reasoning
Marek Kadlčík, Michal Štefánik
Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models -- The Story Goes On
Liang Zeng, Liangjun Zhong, Liang Zhao, Tianwen Wei, Liu Yang, Jujie He, Cheng Cheng, Rui Hu, Yang Liu, Shuicheng Yan, Han Fang, Yahui Zhou
LLMs Are Not Intelligent Thinkers: Introducing Mathematical Topic Tree Benchmark for Comprehensive Evaluation of LLMs
Arash Gholami Davoodi, Seyed Pouyan Mousavi Davoudi, Pouya Pezeshkpour
Robustness Assessment of Mathematical Reasoning in the Presence of Missing and Contradictory Conditions
Shi-Yu Tian, Zhi Zhou, Lin-Han Jia, Lan-Zhe Guo, Yu-Feng Li