Self-Adaptive Prompting

Self-adaptive prompting aims to improve the performance of large language models (LLMs) by automatically generating or refining prompts, reducing reliance on manually crafted examples and extensive labeled datasets. Current research focuses on developing algorithms that leverage LLMs themselves to optimize prompts, incorporating heuristics, definitions, and chain-of-thought reasoning to enhance accuracy and generalization across diverse tasks, including information extraction and reasoning. This approach holds significant potential for improving the efficiency and effectiveness of LLMs in various applications, particularly where labeled data is scarce or task semantics shift dynamically.
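One common instantiation of this idea is consistency-based selection: sample several chain-of-thought answers per unlabeled question, keep the questions where the model's answers agree most, and reuse those as pseudo-demonstrations in a few-shot prompt. The sketch below illustrates that loop under stated assumptions; `sample_fn` stands in for repeated LLM calls, and all function names and thresholds here are illustrative, not from any specific paper's implementation.

```python
from collections import Counter

def self_consistency_select(samples):
    # Majority-vote over sampled answers; agreement is the fraction
    # of samples that voted for the winning answer.
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(samples)

def build_adaptive_prompt(unlabeled, sample_fn, k=2, min_agreement=0.6):
    # Sketch of a consistency-based self-adaptive prompting loop:
    # score each unlabeled question by the self-consistency of the
    # model's sampled answers, keep the most consistent ones as
    # pseudo-demonstrations, and assemble a few-shot prompt.
    scored = []
    for q in unlabeled:
        answer, agreement = self_consistency_select(sample_fn(q))
        if agreement >= min_agreement:
            scored.append((agreement, q, answer))
    scored.sort(reverse=True)  # most self-consistent first
    demos = scored[:k]
    return "\n\n".join(f"Q: {q}\nA: {a}" for _, q, a in demos)
```

In practice `sample_fn` would draw multiple temperature-sampled completions from the LLM; the final prompt would then be prepended to the real test question, so the model conditions on its own most confident outputs rather than on hand-written examples.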

Papers