LLM in Context

In-context learning (ICL) improves large language model (LLM) performance by placing worked examples directly in the input prompt, steering the model's responses without any retraining. Current research emphasizes optimizing demonstration selection strategies, extending models to longer sequences (e.g., through positional interpolation or novel positional embeddings), and adapting LLMs to modalities such as video and audio, sometimes incorporating user embeddings for personalized responses. The area matters because it makes LLMs more efficient and adaptable across tasks including code generation, translation, and automated assessment, while also addressing challenges such as fairness and cost-effectiveness.
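As a concrete illustration of the prompting pattern described above, the sketch below assembles a few-shot classification prompt and picks demonstrations by similarity to the query. It is a minimal sketch, not any paper's method: the names (`DEMO_POOL`, `select_demos`, `build_prompt`) are hypothetical, and the bag-of-words cosine is a toy stand-in for the learned sentence embeddings that real demonstration-selection systems typically use.

```python
from collections import Counter
import math

# Toy pool of labeled demonstrations; real pools are far larger and
# are usually indexed with learned embeddings rather than word counts.
DEMO_POOL = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want a refund; the product broke in two days.", "negative"),
    ("Service was slow but the food made up for it.", "positive"),
    ("The manual is confusing and support never replied.", "negative"),
]

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (toy embedding stand-in)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_demos(query: str, k: int = 2):
    """Return the k demonstrations most similar to the query."""
    return sorted(DEMO_POOL, key=lambda d: bow_cosine(query, d[0]), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a few-shot prompt: task instruction, selected demos, then the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in select_demos(query):
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]
    return "\n".join(lines)

print(build_prompt("The support team never answered my emails."))
```

The completed prompt string is sent to the model as ordinary input; no weights are updated, which is what distinguishes in-context learning from fine-tuning.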

Papers