Context Demonstration

Context demonstration is a technique that supplies a few input-output examples in the prompt to guide large language models (LLMs) on new tasks without parameter updates, with the aim of improving the efficiency and adaptability of these models. Current research focuses on optimizing demonstration selection strategies, including methods based on influence analysis, contrastive learning, and nearest-neighbor retrieval, as well as on the use of diverse data modalities such as audio and video to enhance learning. The field is significant because, by reducing the need for extensive fine-tuning, effective context demonstration can unlock the potential of LLMs for applications such as robotics and information extraction, and can improve the safety and reliability of LLMs themselves.
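
To illustrate the nearest-neighbor selection strategy mentioned above, the sketch below retrieves the k pool examples whose embeddings are most similar to a test query and formats them into a few-shot prompt. It is a minimal sketch, not the method of any particular paper: the toy `embed` function, the example pool, and the prompt format are hypothetical placeholders, and a real system would use a learned sentence encoder and a task-specific template.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedder: a toy bag-of-characters vector.
    In practice a learned sentence encoder would be used instead."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def select_demonstrations(query: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Pick the k pool examples closest to the query by cosine similarity."""
    q = embed(query)
    sims = [float(np.dot(q, embed(example["input"]))) for example in pool]
    top = np.argsort(sims)[::-1][:k]
    return [pool[i] for i in top]

def build_prompt(query: str, demos: list[dict]) -> str:
    """Concatenate the selected input-output pairs ahead of the new query."""
    lines = [f"Input: {d['input']}\nOutput: {d['output']}\n" for d in demos]
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

if __name__ == "__main__":
    pool = [
        {"input": "The movie was wonderful.", "output": "positive"},
        {"input": "I hated every minute.", "output": "negative"},
        {"input": "An instant classic.", "output": "positive"},
        {"input": "Dull and far too long.", "output": "negative"},
    ]
    demos = select_demonstrations("A truly wonderful film.", pool, k=2)
    print(build_prompt("A truly wonderful film.", demos))
```

The resulting prompt string would then be sent to an LLM as-is; only the choice and ordering of demonstrations changes per query, and no model parameters are updated.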

Papers