Multi-Step Reasoning
Multi-step reasoning research aims to improve the ability of large language models (LLMs) to solve complex problems that require multiple sequential inference steps. Current efforts concentrate on helping LLMs plan, execute, and verify these steps, often through techniques such as chain-of-thought prompting, structured planning with world models, and the integration of external tools or knowledge graphs. This work underpins AI capabilities across many areas, from automated problem-solving and decision-making to more sophisticated question answering and human-computer interaction. Robust benchmarks and evaluation metrics are another key focus, enabling rigorous comparison of approaches and tracking of progress.
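To make the first of these techniques concrete, below is a minimal Python sketch of few-shot chain-of-thought prompting. It is illustrative only: call_llm is a hypothetical placeholder for any text-completion API, and the exemplar and answer-parsing convention are assumptions, not methods from the papers listed here.

# Minimal sketch of chain-of-thought prompting. `call_llm` is a placeholder
# for a real model API; wire it to your provider of choice.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call (e.g., an HTTP request to a hosted model)."""
    raise NotImplementedError("Connect this to an actual model endpoint.")

# One worked exemplar teaches the model to emit intermediate steps before
# committing to a final answer; this is the core of few-shot chain-of-thought.
COT_EXEMPLAR = (
    "Q: A store had 23 apples, sold 9, then received 12 more. "
    "How many apples does it have?\n"
    "A: Let's think step by step. 23 - 9 = 14 apples remain after the sale. "
    "14 + 12 = 26 apples after the delivery. The answer is 26.\n\n"
)

def chain_of_thought(question: str) -> tuple[str, str]:
    """Return (reasoning, final_answer) for a multi-step question."""
    prompt = COT_EXEMPLAR + f"Q: {question}\nA: Let's think step by step."
    completion = call_llm(prompt)
    # Convention set by the exemplar: the final answer follows "The answer is".
    reasoning, _, answer = completion.rpartition("The answer is")
    return reasoning.strip(), answer.strip(" .")

The exemplar's "Let's think step by step" cue and the "The answer is" marker make the model's intermediate steps explicit and the final answer machine-parseable, which is what later verification or tool-use stages typically build on.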
Papers
QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking
Liangming Pan, Xinyuan Lu, Min-Yen Kan, Preslav Nakov
KwaiYiiMath: Technical Report
Jiayi Fu, Lei Lin, Xiaoyang Gao, Pengli Liu, Zhengzong Chen, Zhirui Yang, Shengnan Zhang, Xue Zheng, Yan Li, Yuliang Liu, Xucheng Ye, Yiqiao Liao, Chao Liao, Bin Chen, Chengru Song, Junchen Wan, Zijia Lin, Fuzheng Zhang, Zhongyuan Wang, Di Zhang, Kun Gai