Few-Shot In-Context Learning
Few-shot in-context learning (ICL) explores how large language models (LLMs) can perform tasks without explicit fine-tuning, leveraging a few examples provided within the input prompt. Current research focuses on improving ICL's robustness and efficiency, particularly through techniques like chain-of-thought prompting, multimodal task vectors for compressing many examples, and methods to mitigate biases and hallucinations. This research is significant because it promises more efficient and adaptable AI systems, impacting diverse fields from healthcare diagnostics and molecular design to environmental monitoring and multilingual natural language processing.
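To make the core idea concrete, here is a minimal sketch of how a few-shot ICL prompt is typically assembled: the "training" happens entirely inside the prompt, which pairs a handful of labeled examples with the new query. The function name, task, and prompt format below are illustrative assumptions, and the actual model call is omitted.

```python
def build_few_shot_prompt(examples, query,
                          instruction="Classify the sentiment as Positive or Negative."):
    """Assemble an in-context learning prompt from (input, label) pairs and a query.

    The example pairs act as the model's only "training data"; no weights
    are updated. The model is expected to continue the final line.
    """
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # left open for the model to complete
    return "\n".join(lines)

# Hypothetical two-shot example for a sentiment task
examples = [
    ("The plot was gripping from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise.")
print(prompt)
```

In practice this prompt string would be sent to an LLM, whose completion of the final `Sentiment:` line serves as the prediction; techniques like chain-of-thought prompting extend this pattern by including worked reasoning steps in each example.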
Papers
VLM Agents Generate Their Own Memories: Distilling Experience into Embodied Programs
Gabriel Sarch, Lawrence Jang, Michael J. Tarr, William W. Cohen, Kenneth Marino, Katerina Fragkiadaki
Improving Expert Radiology Report Summarization by Prompting Large Language Models with a Layperson Summary
Xingmeng Zhao, Tongnian Wang, Anthony Rios