Self-Adaptive Prompting
Self-adaptive prompting aims to improve the performance of large language models (LLMs) by automatically generating or refining prompts, reducing reliance on manually crafted examples and extensive labeled datasets. Current research focuses on developing algorithms that leverage LLMs themselves to optimize prompts, incorporating heuristics, definitions, and chain-of-thought reasoning to enhance accuracy and generalization across diverse tasks, including information extraction and reasoning. This approach holds significant potential for improving the efficiency and effectiveness of LLMs in various applications, particularly where labeled data is scarce or task semantics shift dynamically.
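The core loop described above — generating candidate prompts and letting the model's own outputs guide selection — can be sketched minimally. This is an illustrative example, not any specific published algorithm: the `llm` function is a hypothetical stub standing in for a real model call, and the pseudo-labeled examples are invented data that in practice would come from the model's own high-confidence predictions.

```python
def llm(prompt: str) -> str:
    """Hypothetical stub for a real LLM API call (assumption, not a real library)."""
    # Toy behavior for demonstration: crude keyword-based sentiment.
    text = prompt.rsplit("Review:", 1)[-1].lower()
    return "positive" if ("great" in text or "love" in text) else "negative"

def score_template(template: str, examples) -> float:
    """Score a prompt template by accuracy on pseudo-labeled examples."""
    correct = sum(
        llm(template.format(review=review)).strip() == label
        for review, label in examples
    )
    return correct / len(examples)

# Candidate prompt templates the model (or a human) proposed.
candidate_templates = [
    "Classify the sentiment. Review: {review}",
    "Answer 'positive' or 'negative' only. Review: {review}",
]

# Pseudo-labeled examples; in self-adaptive prompting these would be
# model-generated rather than hand-labeled (hypothetical data here).
pseudo_labeled = [
    ("I love this phone, the battery is great", "positive"),
    ("Terrible build quality, broke in a week", "negative"),
]

# Select the template that scores best under the model's own judgments.
best = max(candidate_templates, key=lambda t: score_template(t, pseudo_labeled))
```

The key design choice is that no ground-truth labels are required: the scoring signal comes from the model's own consistent predictions, which is what lets these methods work when labeled data is scarce.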