Reasoning Task
Research on reasoning in large language models (LLMs) focuses on improving their ability to perform multi-step inference and to solve complex problems that require logical deduction and induction. Current work emphasizes novel prompting techniques, such as those inspired by Bloom's taxonomy or those employing dynamic reasoning trajectories, alongside improved training methods such as knowledge distillation and learning from mistakes. These advances matter because stronger reasoning in LLMs has broad implications: better question answering, more effective personalized recommendation, and new applications in education and scientific discovery.
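As a concrete illustration of the prompting techniques surveyed here, the sketch below builds prompts in two styles echoed by the papers that follow: zero-shot chain-of-thought (appending a reasoning trigger phrase) and a hint-before-solving layout (surfacing relevant knowledge before requesting the answer). This is a minimal sketch, not any paper's exact method; the function names and template wording are illustrative assumptions, and the strings would be sent to whatever LLM client you use.

```python
def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot chain-of-thought: append a trigger phrase that
    elicits step-by-step reasoning before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."


def build_hint_prompt(question: str, hint: str) -> str:
    """Hint-before-solving style (illustrative template): state a
    relevant piece of knowledge as a hint, then ask the model to
    reason with it before answering."""
    return (
        f"Q: {question}\n"
        f"Hint: {hint}\n"
        "A: Using the hint, reason step by step, then state the answer."
    )


if __name__ == "__main__":
    # Example: the same arithmetic question under both prompting styles.
    print(build_zero_shot_cot_prompt("What is 7 * 8?"))
    print(build_hint_prompt("What is 7 * 8?", "7 * 8 = 7 * (10 - 2)"))
```

The prompts are plain strings, so the same templates work with any completion-style model; only the final call to the model (omitted here) is provider-specific.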
Papers
Divide-or-Conquer? Which Part Should You Distill Your LLM?
Zhuofeng Wu, He Bai, Aonan Zhang, Jiatao Gu, VG Vinod Vydiswaran, Navdeep Jaitly, Yizhe Zhang
How Ambiguous are the Rationales for Natural Language Reasoning? A Simple Approach to Handling Rationale Uncertainty
Hazel Kim
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge
Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, Xipeng Qiu
Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning
Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, Wei He, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang
Zero-Shot Chain-of-Thought Reasoning Guided by Evolutionary Algorithms in Large Language Models
Feihu Jin, Yifan Liu, Ying Tan