Contextual Information
Contextual information, the surrounding data that shapes how an input is interpreted, is crucial for improving the performance and robustness of many AI models, particularly large language models (LLMs). Current research focuses on integrating such context effectively into model architectures, often through prompting, attention mechanisms, and graph neural networks, to strengthen understanding and decision-making in tasks ranging from question answering and trajectory prediction to recommendation systems and security applications. This work matters because it addresses limitations of current AI systems, yielding more accurate, reliable, and contextually aware outputs across diverse fields and ultimately improving the usability and trustworthiness of AI technologies.
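As a minimal illustration of the prompting technique mentioned above, the sketch below shows one common way contextual passages can be injected into an LLM prompt so the model grounds its answer in that context. The function name and prompt template are hypothetical, not taken from any of the papers listed here.

```python
def build_contextual_prompt(question: str, context_passages: list[str]) -> str:
    """Assemble a context-augmented prompt: numbered passages are
    prepended so the model can condition its answer on them."""
    context_block = "\n".join(
        f"[{i + 1}] {passage}" for i, passage in enumerate(context_passages)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Context can disambiguate an otherwise ambiguous question.
prompt = build_contextual_prompt(
    "What does 'bank' refer to here?",
    ["The kayak drifted toward the grassy bank of the river."],
)
print(prompt)
```

The same pattern underlies retrieval-augmented pipelines, where the passages come from a retriever rather than being supplied by hand.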
Papers
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models
Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap
Multiview Identifiers Enhanced Generative Retrieval
Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li
CONA: A novel CONtext-Aware instruction paradigm for communication using large language model
Nan Zhou, Xinghui Tao, Xi Chen