Context Information
Context information, the surrounding data that shapes a system's response, is an active area of research across many fields, with the aim of improving model accuracy, robustness, and explainability. Current work focuses on how to integrate contextual information effectively into models such as large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often via techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work matters because effective contextualization is vital for building reliable and trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
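As an illustration of one technique named above, the core idea of retrieval-augmented generation is to fetch the most relevant context documents for a query and prepend them to the model's prompt. The following is a minimal sketch, not any paper's method: it assumes a toy bag-of-words retriever with cosine similarity and a hypothetical `build_prompt` helper; a real system would use learned embeddings and an actual LLM call.

```python
import math
from collections import Counter

def bow(text):
    # Toy bag-of-words vector: token -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    # Rank corpus documents by similarity to the query; return top k.
    q = bow(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    # Hypothetical helper: inject retrieved context ahead of the question,
    # as a RAG pipeline would before calling an LLM.
    context = "\n".join(retrieve(query, corpus, k=k))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Dice loss handles segmentation with missing or empty labels.",
    "Task-oriented communication transmits semantics rather than raw bits.",
    "Contrastive learning aligns image and text embeddings.",
]
print(build_prompt("How does Dice loss treat empty labels?", corpus))
```

Swapping the toy retriever for dense embeddings changes the ranking function but not the overall pattern: retrieval narrows the context, and the generator conditions on it.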
Papers
The Dice loss in the context of missing or empty labels: Introducing $\Phi$ and $\epsilon$
Sofie Tilborghs, Jeroen Bertels, David Robben, Dirk Vandermeulen, Frederik Maes
Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
Deniz Gunduz, Zhijin Qin, Inaki Estella Aguerri, Harpreet S. Dhillon, Zhaohui Yang, Aylin Yener, Kai Kit Wong, Chan-Byoung Chae