LLM in Context
In-context learning (ICL) improves large language model (LLM) performance by providing demonstrations within the input prompt, guiding the model's responses without any retraining. Current research emphasizes optimizing demonstration selection strategies, exploring efficient methods for handling longer sequences (e.g., through positional-embedding interpolation or novel positional encodings), and adapting LLMs to diverse modalities such as video and audio, often incorporating user embeddings for personalized responses. The area is significant because it makes LLMs more efficient and adaptable across tasks including code generation, translation, and automated assessment, while also addressing challenges such as fairness and cost-effectiveness.
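At its core, ICL is prompt construction: a handful of labeled demonstrations are concatenated ahead of the query so the model infers the task pattern from context alone, with no weight updates. The sketch below illustrates this idea; the sentiment-classification task, the demonstrations, and the `query_model` callback (standing in for any prompt-in, text-out backend) are illustrative assumptions, not details from the papers listed here.

```python
# Minimal few-shot in-context learning sketch: labeled demonstrations are
# concatenated into the prompt so the model can infer the task without any
# parameter updates. The task and the `query_model` callback are
# illustrative assumptions, not drawn from the papers below.
from typing import Callable


def build_icl_prompt(demonstrations: list[tuple[str, str]], query: str) -> str:
    """Format labeled examples followed by the unlabeled query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in demonstrations:
        lines += [f"Review: {review}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]  # the model completes this line
    return "\n".join(lines)


def classify(query: str, query_model: Callable[[str], str]) -> str:
    """Run one ICL query; `query_model` is any prompt-in, text-out backend."""
    demos = [
        ("The plot was gripping from start to finish.", "Positive"),
        ("I walked out halfway through.", "Negative"),
    ]
    return query_model(build_icl_prompt(demos, query)).strip()
```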
Papers
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang
User-LLM: Efficient LLM Contextualization with User Embeddings
Lin Ning, Luyang Liu, Jiaxing Wu, Neo Wu, Devora Berlowitz, Sushant Prakash, Bradley Green, Shawn O'Banion, Jun Xie