Contextual Understanding
Contextual understanding in AI focuses on enabling models to accurately interpret and use information from surrounding text, images, or other modalities to improve performance across a range of tasks. Current research emphasizes developing robust evaluation benchmarks, improving long-context processing in large language models (LLMs) through techniques such as coreference resolution and contrastive decoding, and exploring multimodal approaches that integrate visual and textual information. This area is crucial for advancing AI capabilities in diverse applications, from autonomous driving and legal judgment prediction to question answering, as well as for building more reliable and ethical language models.
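As a concrete illustration of the contrastive decoding technique mentioned above, the sketch below scores each candidate token by the gap between an "expert" and an "amateur" model's log-probabilities, restricted to tokens the expert itself finds plausible. It is a minimal sketch, not code from any of the papers listed here; the GPT-2 model pair and the `alpha` threshold are placeholder assumptions.

```python
# Minimal, illustrative contrastive decoding sketch (assumed model pair and alpha).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

expert_name, amateur_name = "gpt2-large", "gpt2"  # assumed expert/amateur pair (shared vocabulary)
tokenizer = AutoTokenizer.from_pretrained(expert_name)
expert = AutoModelForCausalLM.from_pretrained(expert_name).eval()
amateur = AutoModelForCausalLM.from_pretrained(amateur_name).eval()

@torch.no_grad()
def contrastive_decode(prompt: str, max_new_tokens: int = 30, alpha: float = 0.1) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Next-token log-probabilities from both models.
        exp_logp = torch.log_softmax(expert(ids).logits[:, -1, :], dim=-1)
        ama_logp = torch.log_softmax(amateur(ids).logits[:, -1, :], dim=-1)
        # Plausibility mask: keep only tokens within alpha of the expert's top probability.
        cutoff = exp_logp.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
        keep = exp_logp >= cutoff
        # Score plausible tokens by the expert-minus-amateur log-probability gap.
        score = torch.where(keep, exp_logp - ama_logp, torch.full_like(exp_logp, float("-inf")))
        next_id = score.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(contrastive_decode("The key idea of contrastive decoding is"))
```

Greedy selection over the contrastive score is used here only to keep the sketch short; sampling-based variants follow the same expert-minus-amateur scoring idea.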
Papers
Tell me what I need to know: Exploring LLM-based (Personalized) Abstractive Multi-Source Meeting Summarization
Frederic Kirstein, Terry Ruas, Robert Kratel, Bela Gipp
Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents
Sabit Hassan, Hye-Young Chung, Xiang Zhi Tan, Malihe Alikhani
Extracting Paragraphs from LLM Token Activations
Nicholas Pochinkov, Angelo Benoit, Lovkush Agarwal, Zainab Ali Majid, Lucile Ter-Minassian
MIP-GAF: A MLLM-annotated Benchmark for Most Important Person Localization and Group Context Understanding
Surbhi Madan, Shreya Ghosh, Lownish Rai Sookha, M.A. Ganaie, Ramanathan Subramanian, Abhinav Dhall, Tom Gedeon