Contextual Information
Contextual information, the surrounding data that shapes how an input is interpreted, is crucial to the performance and robustness of many AI models, particularly large language models (LLMs). Current research focuses on integrating contextual information into model architectures, often through prompting, attention mechanisms, and graph neural networks, to improve understanding and decision-making in tasks ranging from question answering and trajectory prediction to recommendation systems and security applications. This work matters because it addresses limitations of current AI systems, yielding more accurate, reliable, and contextually aware outputs across diverse fields and ultimately improving the usability and trustworthiness of AI technologies.
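Of the integration techniques mentioned above, prompting is the simplest to illustrate: retrieved context passages are prepended to the question so the model can ground its answer in them. The sketch below is illustrative only; the helper name and prompt format are assumptions, not drawn from any of the listed papers.

```python
def build_contextual_prompt(question, context_passages, max_passages=3):
    """Assemble a context-augmented prompt for an LLM.

    Hypothetical helper: numbers and prepends up to `max_passages`
    retrieved passages so the model answers from supplied context
    rather than from its parametric knowledge alone.
    """
    selected = context_passages[:max_passages]
    context_block = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(selected))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )


prompt = build_contextual_prompt(
    "When did the treaty take effect?",
    ["The treaty was signed in 1992.", "It took effect on 1 January 1993."],
)
print(prompt)
```

In practice the returned string would be sent to an LLM API; capping the number of passages is one simple way to stay within the model's context window.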
Papers
Context Matters: An Empirical Study of the Impact of Contextual Information in Temporal Question Answering Systems
Dan Schumacher, Fatemeh Haji, Tara Grey, Niharika Bandlamudi, Nupoor Karnik, Gagana Uday Kumar, Jason Cho-Yu Chiang, Paul Rad, Nishant Vishwamitra, Anthony Rios
Improving the Expressiveness of $K$-hop Message-Passing GNNs by Injecting Contextualized Substructure Information
Tianjun Yao, Yingxu Wang, Kun Zhang, Shangsong Liang