Long Context
Long-context research in large language models (LLMs) aims to extend the length of input sequences these models can process and reason over, well beyond traditional context windows. Current work emphasizes novel attention mechanisms (e.g., sparse attention, differential attention) and efficient memory-management techniques (e.g., KV-cache compression, retrieval augmentation) that address the computational and memory bottlenecks of longer contexts. Progress here is crucial for tasks that require a holistic understanding of extensive information, such as question answering, summarization, and multimodal reasoning, and it matters both for the scientific understanding of LLMs and for their practical applications.
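The sparse-attention and token-level KV-cache selection ideas mentioned above can be illustrated with a minimal sketch: at each decoding step, score the cached key vectors against the current query and attend only over the top-k most relevant tokens, shrinking the effective context the model must process. This is a generic illustration under assumed dot-product scoring, not the method of any specific paper listed below; the function names are hypothetical.

```python
import numpy as np

def select_kv_tokens(query, keys, values, k):
    """Keep only the k cached tokens whose keys score highest
    against the current query (dot-product relevance)."""
    scores = keys @ query                 # (n_tokens,) relevance per cached token
    top = np.argsort(scores)[-k:]        # indices of the k best-scoring tokens
    top.sort()                           # restore original token order
    return keys[top], values[top]

def attend(query, keys, values):
    """Standard scaled softmax attention over the (possibly pruned) cache."""
    scores = keys @ query / np.sqrt(keys.shape[1])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values

# Usage: prune a 100-token cache down to 16 tokens before attending.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(100, 8))
V = rng.normal(size=(100, 8))
K_sel, V_sel = select_kv_tokens(q, K, V, k=16)
out = attend(q, K_sel, V_sel)
```

In practice, methods differ mainly in how the relevance scores are computed and how often selection is refreshed; the payoff is that attention cost scales with k rather than with the full context length.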
Papers
Long Context RAG Performance of Large Language Models
Quinn Leng, Jacob Portes, Sam Havens, Matei Zaharia, Michael Carbin
TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection
Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun Fu, Zheng Wang, Hui Xiong
VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks
Lawrence Jang, Yinheng Li, Charles Ding, Justin Lin, Paul Pu Liang, Dan Zhao, Rogerio Bonatti, Kazuhito Koishida
LOGO -- Long cOntext aliGnment via efficient preference Optimization
Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, Min Zhang
Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs
Runchu Tian, Yanghao Li, Yuepeng Fu, Siyang Deng, Qinyu Luo, Cheng Qian, Shuo Wang, Xin Cong, Zhong Zhang, Yesai Wu, Yankai Lin, Huadong Wang, Xiaojiang Liu
MoDification: Mixture of Depths Made Easy
Chen Zhang, Meizhi Zhong, Qimeng Wang, Xuantao Lu, Zheyu Ye, Chengqiang Lu, Yan Gao, Yao Hu, Kehai Chen, Min Zhang, Dawei Song