Context Information
Context information, the surrounding data that shapes a system's response, is a crucial area of research across numerous fields, with the goal of improving model accuracy, robustness, and explainability. Current research focuses on how to effectively integrate contextual information into various models, including large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often employing techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work matters because effective contextualization is vital for building reliable and trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
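To make the RAG idea concrete, here is a minimal, self-contained sketch of the retrieve-then-prompt pattern: retrieve the passages most similar to a query, then prepend them as context for a language model. The bag-of-words similarity, the corpus, and all function names are illustrative assumptions, not any particular system's API; real pipelines use learned embeddings and a vector index.

```python
import math
import re
from collections import Counter

def embed(text):
    # Bag-of-words term-frequency vector: a crude stand-in for a
    # learned embedding model (assumption for this sketch).
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Return the k passages most similar to the query.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    # Inject the retrieved passages as context ahead of the question;
    # the resulting string would be sent to an LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts sunlight into chemical energy.",
    "Paris is the capital city of France.",
]
print(build_prompt("Where is the Eiffel Tower?", corpus, k=2))
```

The key design point is the separation of retrieval from generation: the model's parametric knowledge is supplemented at inference time, so updating the corpus updates the system's answers without retraining.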
Papers
Inferring Rewards from Language in Context
Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan
Can language models learn from explanations in context?
Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Mathewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill