Reasoning Task
Research on reasoning in large language models (LLMs) focuses on improving their ability to perform multi-step inference and solve complex problems requiring logical deduction and induction. Current work emphasizes novel prompting techniques, such as those inspired by Bloom's taxonomy or employing dynamic reasoning trajectories, and improved model training through knowledge distillation and learning from mistakes. These advances matter because stronger reasoning in LLMs has broad implications across fields, including question answering, personalized recommendation, education, and scientific discovery.
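The dynamic-reasoning-trajectory idea mentioned above can be sketched minimally: given a query, select one of several candidate reasoning strategies ("trajectories") with a scoring heuristic, then build the prompt from the winner. Everything below (the strategy set, the `score_strategy` heuristic, and the `build_prompt` helper) is an illustrative assumption for exposition, not the actual method of any paper listed on this page.

```python
# Hedged sketch: pick a reasoning strategy per query, then prompt with it.
# The strategies and the complexity heuristic are toy assumptions.

STRATEGIES = {
    "direct": "Answer directly.",
    "chain_of_thought": "Think step by step, then answer.",
    "decompose": "Break the problem into sub-questions, solve each, then combine.",
}

def score_strategy(query: str, name: str) -> int:
    """Toy heuristic: longer, multi-clause queries favor more elaborate strategies."""
    complexity = query.count(",") + query.count("?") + len(query.split()) // 10
    weight = {"direct": 0, "chain_of_thought": 1, "decompose": 2}[name]
    # Highest score (closest match between query complexity and strategy weight) wins.
    return -abs(complexity - weight)

def build_prompt(query: str) -> str:
    best = max(STRATEGIES, key=lambda name: score_strategy(query, name))
    return f"{STRATEGIES[best]}\n\nQuestion: {query}"
```

In a real system the scorer would itself be learned (e.g. distilled from trajectories that succeeded on similar problems) rather than a hand-written heuristic.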
185 papers
October 10, 2024
October 8, 2024
October 4, 2024
DOTS: Learning to Reason Dynamically in LLMs via Optimal Reasoning Trajectories Search
Murong Yue, Wenlin Yao, Haitao Mi, Dian Yu, Ziyu Yao, Dong Yu

Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review
Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He

ProcBench: Benchmark for Multi-Step Reasoning and Following Procedure
Ippei Fujisawa, Sensho Nobe, Hiroki Seto, Rina Onda, Yoshiaki Uchida, Hiroki Ikoma, Pei-Chun Chien, Ryota Kanai

Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
Grant Wardle, Teo Susnjak
October 2, 2024
September 20, 2024
September 13, 2024
September 9, 2024