Context Information
Context information, the surrounding data that influences a system's response, is a crucial area of research across numerous fields, with the goal of improving model accuracy, robustness, and explainability. Current research focuses on how to integrate contextual information effectively into various models, including large language models (LLMs), vision-language models (VLMs), and other machine learning architectures, often employing techniques such as retrieval-augmented generation (RAG), attention mechanisms, and contrastive learning. This work is significant because effective contextualization is vital for building reliable and trustworthy AI systems in applications ranging from natural language processing and computer vision to medical diagnosis and autonomous navigation.
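As a minimal sketch of the RAG-style context integration mentioned above, a retriever scores stored documents against a query and the top matches are prepended to the prompt as context before a model generates its response. The document store, bag-of-words scoring, and prompt template below are illustrative assumptions, not the method of any of the listed papers.

```python
# Minimal sketch of retrieval-augmented generation (RAG)-style context injection.
# DOCUMENTS, the scoring scheme, and the prompt template are illustrative assumptions.
from collections import Counter
import math

DOCUMENTS = [
    "Crop diversification can reduce yield variability under climate stress.",
    "Sentiment of short-form text often depends on the surrounding conversation.",
    "Smartphone settings search benefits from models of user intent.",
]

def bag_of_words(text: str) -> Counter:
    """Tokenize text into a lowercase word-count vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Return the k stored documents most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine_similarity(q, bag_of_words(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context to the user query before passing it to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("How does context affect sentiment analysis of short text?"))
```

In practice the bag-of-words retriever would be replaced by a learned dense retriever, and the assembled prompt would be passed to an LLM; the sketch only shows where the contextual information enters the pipeline.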
Papers
AI for the Generation and Testing of Ideas Towards an AI Supported Knowledge Development Environment
Ted Selker
Understanding the impacts of crop diversification in the context of climate change: a machine learning approach
Georgios Giannarakis, Ilias Tsoumas, Stelios Neophytides, Christiana Papoutsa, Charalampos Kontoes, Diofantos Hadjimitsis
CIDER: Context sensitive sentiment analysis for short-form text
James C. Young, Rudy Arthur, Hywel T. P. Williams
Intuitive Access to Smartphone Settings Using Relevance Model Trained by Contrastive Learning
Joonyoung Kim, Kangwook Lee, Haebin Shin, Hurnjoo Lee, Sechun Kang, Byunguk Choi, Dong Shin, Joohyung Lee