Context Information
Context information, the surrounding data that shapes a system's response, is a crucial area of research across numerous fields, aiming to improve model accuracy, robustness, and explainability. Current research focuses on how to integrate contextual information effectively into various models, including large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often employing techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work is significant because effective contextualization is vital for building reliable and trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
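To make one of the techniques above concrete, the following is a minimal sketch of the retrieval-augmented generation (RAG) pattern: retrieve the documents most relevant to a query, then prepend them as context to the prompt sent to a language model. The corpus, the word-overlap scoring, and the prompt template are illustrative placeholders, not the method of any paper listed below.

```python
def bag_of_words(text):
    """Lowercased word-count vector for a piece of text."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def overlap_score(query_vec, doc_vec):
    """Score a document by shared word counts with the query
    (a stand-in for a real dense or sparse retriever)."""
    return sum(min(n, doc_vec.get(w, 0)) for w, n in query_vec.items())

def retrieve(query, corpus, k=2):
    """Return the k corpus documents most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(corpus,
                    key=lambda d: overlap_score(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    """Prepend retrieved context to the user query; the augmented
    prompt would then be passed to the language model."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The mitral valve separates the left atrium and left ventricle.",
    "Transformers use attention to weight tokens in context.",
    "Autonomous navigation relies on sensor fusion.",
]
print(build_prompt("How do transformers use attention?", corpus, k=1))
```

The key design point is the separation of concerns: retrieval selects the context, and the model only sees the query plus the retrieved text, so the retriever can be swapped (e.g. for an embedding-based one) without changing the generation side.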
Papers
Memory transformers for full context and high-resolution 3D Medical Segmentation
Loic Themyr, Clément Rambour, Nicolas Thome, Toby Collins, Alexandre Hostettler
Transformers generalize differently from information stored in context vs in weights
Stephanie C. Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K. Lampinen, Felix Hill
CEFER: A Four Facets Framework based on Context and Emotion embedded features for Implicit and Explicit Emotion Recognition
Fereshteh Khoshnam, Ahmad Baraani-Dastjerdi, M. J. Liaghatdar
Bimanual rope manipulation skill synthesis through context dependent correction policy learning from human demonstration
T. Baturhan Akbulut, G. Tuba C. Girgin, Arash Mehrabi, Minoru Asada, Emre Ugur, Erhan Oztop