Context Information
The use of context information, the surrounding data that shapes a system's response, is an active area of research across numerous fields, with the aim of improving model accuracy, robustness, and explainability. Current work focuses on integrating contextual information effectively into a range of models, including large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often through techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This research matters because effective contextualization is essential for building reliable and trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
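Since the overview names retrieval-augmented generation as one way to integrate context, the sketch below illustrates the basic pattern: retrieve the documents most similar to a query and prepend them to the prompt as context. It is a minimal sketch only; the toy corpus, the bag-of-words similarity measure, and the prompt template are illustrative assumptions and are not drawn from the listed papers.

```python
# Minimal RAG-style sketch: retrieve relevant documents and build a
# context-augmented prompt. Corpus, scoring, and template are assumptions.
from collections import Counter
import math

CORPUS = [
    "Alt-text describes the visual content of an image for screen readers.",
    "Stylistic rewriting changes the tone of a sentence while keeping its meaning.",
    "Definition modeling generates a dictionary definition for a word in context.",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts for lowercased, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = bow(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a context-augmented prompt to pass to a language model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("How does alt-text help accessibility?"))
```

In practice the bag-of-words scorer would be replaced by a learned dense retriever and the assembled prompt passed to an LLM, but the overall retrieve-then-condition structure is the same.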
Papers
Alt-Text with Context: Improving Accessibility for Images on Twitter
Nikita Srivatsan, Sofia Samaniego, Omar Florez, Taylor Berg-Kirkpatrick
Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting
Akhila Yerukola, Xuhui Zhou, Elizabeth Clark, Maarten Sap
Exploiting Correlations Between Contexts and Definitions with Multiple Definition Modeling
Linhan Zhang, Qian Chen, Wen Wang, Yuxin Jiang, Bing Li, Wei Wang, Xin Cao