Demonstration Selection

Demonstration selection in in-context learning (ICL) for large language models (LLMs) concerns choosing which examples to include in the prompt so as to improve model performance and fairness on downstream tasks. Current research explores methods based on semantic similarity between candidate demonstrations and the test input, iterative refinement driven by model confidence or misclassification, and techniques that balance diversity against task-specific relevance, often via LLM-based reranking or clustering strategies. These advances aim to make ICL more efficient and effective, reducing the need for extensive fine-tuning while mitigating biases and improving generalization across diverse tasks and datasets.
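The similarity-based retrieval idea above can be sketched as follows. This is a minimal, self-contained illustration, not any specific paper's method: the `embed` function is a stand-in for a real sentence encoder (in practice one would use a learned embedding model), replaced here by a toy bag-of-words vector so the example runs without external dependencies.

```python
# Sketch: pick the k demonstrations most semantically similar to the query,
# then format them into an ICL prompt. embed() is a toy stand-in for a
# real sentence encoder.
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words embedding; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_demonstrations(query, pool, k=2):
    """Rank candidate demonstrations by embedding similarity to the query."""
    q = embed(query)
    ranked = sorted(pool, key=lambda d: cosine(q, embed(d["input"])), reverse=True)
    return ranked[:k]

pool = [
    {"input": "translate cat to french", "output": "chat"},
    {"input": "translate dog to french", "output": "chien"},
    {"input": "sum 2 and 3", "output": "5"},
]
demos = select_demonstrations("translate bird to french", pool, k=2)
# Selected demonstrations are formatted into the prompt ahead of the query.
prompt = "\n".join(f"{d['input']} -> {d['output']}" for d in demos)
```

Diversity-aware and reranking approaches refine this basic scheme, e.g. by penalizing candidates that are redundant with already-selected demonstrations or by asking an LLM to rescore the retrieved set.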

Papers