Efficient In-Context Learning

Efficient in-context learning (ICL) focuses on enabling large language models (LLMs) to perform new tasks using only a few examples provided within the input prompt, without explicit retraining. Current research emphasizes improving ICL's efficiency and robustness through techniques such as adaptive feature extraction, optimized prompt engineering (including template design and example selection), and the integration of smaller, fine-tuned models to augment LLMs. These advances aim to reduce the computational cost and improve the accuracy and generalizability of ICL, with impact on applications ranging from text summarization and question answering to entity resolution and dataless text classification.
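The core ideas above can be illustrated with a minimal sketch: a few-shot prompt is assembled from demonstration examples, and similarity-based example selection picks the demonstrations most relevant to the query. The function names (`build_icl_prompt`, `cosine_sim`) and the bag-of-words similarity are illustrative assumptions, not any specific paper's method; real systems typically use learned embeddings for retrieval.

```python
import math
import re
from collections import Counter


def _tokens(text: str) -> list[str]:
    """Lowercase word tokenization (illustrative; real systems use learned embeddings)."""
    return re.findall(r"[a-z']+", text.lower())


def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_icl_prompt(query: str, pool: list[tuple[str, str]], k: int = 2) -> str:
    """Select the k demonstrations most similar to the query and
    format them as a few-shot prompt ending at the model's completion point."""
    ranked = sorted(pool, key=lambda ex: cosine_sim(query, ex[0]), reverse=True)
    lines = [f"Input: {x}\nOutput: {y}" for x, y in ranked[:k]]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)
```

For example, given a small labeled pool for sentiment classification, `build_icl_prompt("loved the movie", pool, k=2)` places the most lexically similar demonstrations first and leaves the final `Output:` for the LLM to complete. Selecting fewer, more relevant demonstrations is one simple way to shorten prompts and thus reduce inference cost.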

Papers