Reasoning Ability
Reasoning ability in large language models (LLMs) is a fast-growing research area focused on evaluating and enhancing the capacity of these models to perform multi-step inference and solve complex problems requiring logical deduction and inductive generalization. Current work emphasizes benchmarking LLMs on diverse tasks, including mathematical reasoning, commonsense reasoning, and procedure following, and often employs techniques such as chain-of-thought prompting and knowledge distillation to improve performance. Understanding and improving LLM reasoning is crucial for building more reliable and trustworthy AI systems, with applications ranging from scientific discovery to decision support.
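As a concrete illustration of one technique named above, here is a minimal sketch of few-shot chain-of-thought prompting: the prompt prepends a worked exemplar whose answer spells out intermediate steps, which encourages the model to produce its own step-by-step reasoning before committing to a final answer. The exemplar is the tennis-ball problem from Wei et al. (2022); the prompt-construction code is illustrative only and assumes the returned string is sent to some generic text-completion API (not shown).

```python
# A minimal sketch of few-shot chain-of-thought (CoT) prompting.
# The exemplar is the worked tennis-ball problem from Wei et al. (2022);
# in practice, the returned prompt would be sent to any text-completion LLM API.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model imitates step-by-step reasoning."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

if __name__ == "__main__":
    # The model is expected to reply with intermediate steps before the answer,
    # e.g. "The cafeteria had 23 apples. They used 20, leaving 3. ... The answer is 9."
    print(cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch "
        "and bought 6 more, how many apples do they have?"
    ))
```

A zero-shot variant dispenses with curated exemplars and simply appends "Let's think step by step." after the question (Kojima et al., 2022); it is cheaper to set up, though few-shot exemplars often yield stronger results on harder benchmarks.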