Demonstration Selection
Demonstration selection in in-context learning (ICL) for large language models (LLMs) concerns choosing which examples to include in the prompt so as to improve model performance and fairness on downstream tasks. Current research explores methods based on semantic similarity to the test input, iterative refinement guided by model confidence or misclassification, and techniques that balance the diversity and task-specific relevance of demonstrations, often using LLM-based reranking or clustering strategies. These methods aim to make ICL more efficient and effective, reducing the need for extensive fine-tuning while mitigating bias and improving generalization across tasks and datasets.
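To make the similarity-plus-diversity idea concrete, below is a minimal sketch of one common recipe: embed the candidate pool and the test input, score candidates by cosine similarity to the query, and greedily select examples with a maximal-marginal-relevance (MMR) style trade-off between relevance and redundancy. This is an illustrative implementation, not any specific paper's method; the sentence-transformers library, the "all-MiniLM-L6-v2" model, and the `select_demonstrations` function are assumptions chosen for the example.

```python
# Illustrative sketch: similarity- and diversity-aware demonstration selection.
# Assumes `pip install sentence-transformers numpy`; model choice is arbitrary.
import numpy as np
from sentence_transformers import SentenceTransformer


def select_demonstrations(query, pool, k=4, diversity=0.3,
                          model_name="all-MiniLM-L6-v2"):
    """Pick k demonstrations for `query` from `pool` (a list of strings),
    trading off similarity to the query against redundancy among the
    examples already chosen (a maximal-marginal-relevance heuristic)."""
    model = SentenceTransformer(model_name)
    # Normalized embeddings make cosine similarity a plain dot product.
    q = model.encode([query], normalize_embeddings=True)[0]
    P = model.encode(pool, normalize_embeddings=True)
    relevance = P @ q                      # cosine similarity to the query
    selected = []
    while len(selected) < min(k, len(pool)):
        if selected:
            # Redundancy: each candidate's max similarity to chosen examples.
            redundancy = (P @ P[selected].T).max(axis=1)
        else:
            redundancy = np.zeros(len(pool))
        mmr = (1 - diversity) * relevance - diversity * redundancy
        mmr[selected] = -np.inf            # never pick the same example twice
        selected.append(int(mmr.argmax()))
    return [pool[i] for i in selected]


if __name__ == "__main__":
    pool = [
        "Translate 'chat' to English: cat",
        "Translate 'chien' to English: dog",
        "2 + 2 = 4",
        "Translate 'maison' to English: house",
    ]
    print(select_demonstrations("Translate 'oiseau' to English:", pool, k=2))
```

With `diversity=0`, this reduces to plain kNN retrieval over the embedding space; raising `diversity` penalizes near-duplicate demonstrations, which is one simple way to operationalize the diversity-versus-relevance tension discussed above.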