Shot Reasoning
Shot reasoning, particularly few-shot reasoning, investigates the ability of large language models (LLMs) to solve complex problems from only a handful of in-context examples. Current research focuses on improving LLM performance on reasoning tasks such as linguistic puzzles, mathematical word problems, and visual reasoning, often through techniques like prompt engineering and self-reflection. These advances matter because they reveal how LLMs generalize from limited demonstrations and pave the way for more robust, data-efficient AI systems capable of handling diverse real-world problems.
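Few-shot prompting, as described above, works by prepending a small number of worked examples to the target question so the model can imitate the demonstrated reasoning pattern. The sketch below is a minimal, self-contained illustration of constructing such a prompt for math word problems; the example questions and the helper name `build_few_shot_prompt` are illustrative, not from any specific paper.

```python
# Minimal sketch of few-shot prompting: a handful of worked examples
# (question + step-by-step answer) precede the new question, and the
# resulting string would be sent to an LLM as a single prompt.
# The examples and function name here are hypothetical illustrations.

EXAMPLES = [
    ("Tom has 3 apples and buys 2 more. How many apples does he have?",
     "Tom starts with 3 apples and gains 2, so 3 + 2 = 5. Answer: 5"),
    ("A train travels 60 km in 1 hour. How far does it go in 3 hours?",
     "At 60 km per hour, 3 hours cover 60 * 3 = 180 km. Answer: 180"),
]

def build_few_shot_prompt(question: str) -> str:
    """Concatenate the worked examples, then append the new question
    with an empty answer slot for the model to complete."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXAMPLES]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "Sara reads 4 pages a day. How many pages does she read in 7 days?")
print(prompt)
```

The few demonstrations give the model both the output format ("Answer: N") and an example of intermediate arithmetic, which is what few-shot reasoning studies vary and measure.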