Context Information
Context information — the surrounding data that shapes a system's response — is a crucial area of research across numerous fields, with the aim of improving model accuracy, robustness, and explainability. Current work focuses on how to effectively integrate contextual information into various models, including large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often employing techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work matters because effective contextualization is vital for building reliable and trustworthy AI systems, in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
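As a concrete illustration of one of the techniques above, here is a minimal sketch of the retrieval step in retrieval-augmented generation: candidate documents are scored against the query (here with simple bag-of-words cosine similarity, standing in for learned embeddings), and the best match is prepended to the prompt as context. The corpus, query, and prompt template are illustrative assumptions, not drawn from any of the papers listed below.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    # Return the corpus document most similar to the query.
    q = Counter(query.lower().split())
    return max(corpus, key=lambda doc: cosine_similarity(q, Counter(doc.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend the retrieved context so the model can ground its answer.
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Toy corpus for demonstration only.
corpus = [
    "Activation beacons compress context so an LLM can read longer inputs.",
    "Contrastive learning pulls similar pairs together in embedding space.",
]
print(build_prompt("How can an LLM handle longer context?", corpus))
```

A production system would replace the word-overlap scorer with dense embeddings and a vector index, but the pipeline shape (retrieve, then condition generation on the retrieved context) is the same.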
Papers
Re:Draw -- Context Aware Translation as a Controllable Method for Artistic Production
Joao Liborio Cardoso, Francesco Banterle, Paolo Cignoni, Michael Wimmer
Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon
Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou
An epistemic logic for modeling decisions in the context of incomplete knowledge
Đorđe Marković, Simon Vandevelde, Linde Vanbesien, Joost Vennekens, Marc Denecker
Emotion Based Prediction in the Context of Optimized Trajectory Planning for Immersive Learning
Akey Sungheetha, Rajesh Sharma R, Chinnaiyan R
Interpreting User Requests in the Context of Natural Language Standing Instructions
Nikita Moghe, Patrick Xia, Jacob Andreas, Jason Eisner, Benjamin Van Durme, Harsh Jhamtani
ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
Jon Saad-Falcon, Omar Khattab, Christopher Potts, Matei Zaharia