Context Information
Context information, the surrounding data that shapes a system's response, is an active research area across many fields, with the goal of improving model accuracy, robustness, and explainability. Current research focuses on how to effectively integrate contextual information into various models, including large language models (LLMs), vision-language models (VLMs), and other machine-learning architectures, often through techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work matters because effective contextualization is vital for building reliable, trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
Papers
LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
Kai Ruan, Xuan Wang, Jixiang Hong, Peng Wang, Yang Liu, Hao Sun
A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression
Chenlong Deng, Zhisong Zhang, Kelong Mao, Shuaiyi Li, Xinting Huang, Dong Yu, Zhicheng Dou
Popularity Estimation and New Bundle Generation using Content and Context based Embeddings
Ashutosh Nayak, Prajwal NJ, Sameeksha Keshav, Kavitha S.N., Roja Reddy, Rajasekhara Reddy Duvvuru Muni
A Deep Semantic Segmentation Network with Semantic and Contextual Refinements
Zhiyan Wang, Deyin Liu, Lin Yuanbo Wu, Song Wang, Xin Guo, Lin Qi
MAGIC: Mastering Physical Adversarial Generation in Context through Collaborative LLM Agents
Yun Xing, Nhat Chung, Jie Zhang, Yue Cao, Ivor Tsang, Yang Liu, Lei Ma, Qing Guo