Reasoning Task
Reasoning tasks in large language models (LLMs) aim to improve these models' ability to perform multi-step inference and solve complex problems requiring logical deduction and induction. Current research emphasizes novel prompting techniques, such as those inspired by Bloom's taxonomy or employing dynamic reasoning trajectories, as well as improved model training through knowledge distillation and learning from mistakes. These advances matter because stronger reasoning capabilities in LLMs have broad implications across fields: better question answering systems, more effective personalized recommendation, and new applications in education and scientific discovery.
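The prompting techniques surveyed here generally share one pattern: the model is asked to produce intermediate reasoning steps before committing to a final answer, often preceded by worked exemplars. A minimal sketch of such prompt assembly (the function name, template wording, and exemplar are illustrative assumptions, not taken from any specific paper above):

```python
def build_reasoning_prompt(question, exemplars=None):
    """Assemble a step-by-step reasoning prompt: optional worked
    examples first, then the target question with an instruction
    to reason before answering. Purely illustrative template."""
    parts = []
    for q, worked_answer in (exemplars or []):
        # Each exemplar shows the reasoning format we want the model to imitate.
        parts.append(f"Q: {q}\nA: {worked_answer}")
    # The trailing cue elicits intermediate steps before the final answer.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

# Usage: one worked exemplar plus a new question.
prompt = build_reasoning_prompt(
    "If a train travels 60 km in 1.5 hours, what is its average speed?",
    exemplars=[(
        "What is 12 * 7?",
        "10 * 7 = 70 and 2 * 7 = 14, so 70 + 14 = 84. The answer is 84.",
    )],
)
print(prompt)
```

The exemplar-plus-cue structure is the common core; the methods listed below vary in how exemplars are chosen and how the reasoning trajectory is steered.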
185 papers
Papers - Page 9
June 19, 2024
June 16, 2024
A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models
On the Role of Entity and Event Level Conceptualization in Generalizable Reasoning: A Survey of Tasks, Methods, Applications, and Future Directions