Long Context
Long-context research in large language models (LLMs) aims to extend the input sequences these models can process and reason over, beyond the limits of traditional context windows. Current work emphasizes novel attention mechanisms (e.g., sparse attention, differential attention) and efficient memory management techniques (e.g., compression, retrieval augmentation) to overcome the computational and memory bottlenecks that longer contexts introduce. This area is crucial for advancing LLMs' capabilities on complex tasks that require holistic understanding of extensive information, such as question answering, summarization, and multi-modal reasoning, with impact on both the scientific understanding of LLMs and their practical applications.
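To make the cost argument concrete, here is a minimal sketch of one common sparse-attention pattern, a causal sliding window, in which each position attends only to the last few keys instead of all of them. This is an illustrative toy in NumPy (function name, window size, and shapes are assumptions, not any specific paper's method); full attention scores scale quadratically with sequence length, while a fixed window keeps the number of attended keys per position constant.

```python
import numpy as np

def sliding_window_attention(q, k, v, window=2):
    """Toy causal sliding-window (sparse) attention.

    Each query position i attends only to key positions j with
    i - window < j <= i, instead of all n keys, which is the basic
    idea behind window-style sparse attention for long contexts.
    q, k, v: arrays of shape (n, d). Returns an (n, d) array.
    """
    n, d = q.shape
    scores = (q @ k.T) / np.sqrt(d)  # full (n, n) scores for clarity
    idx = np.arange(n)
    # Mask future positions and positions outside the window.
    mask = (idx[None, :] > idx[:, None]) | (idx[:, None] - idx[None, :] >= window)
    scores[mask] = -np.inf
    # Numerically stable softmax over the unmasked keys in each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With `window=1` each position attends only to itself, so the output reproduces `v` exactly; widening the window trades compute for more context per position. A production implementation would avoid materializing the full (n, n) score matrix, which this sketch does only for readability.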
Papers
Data-Centric and Heterogeneity-Adaptive Sequence Parallelism for Efficient LLM Training
Yujie Wang, Shiju Wang, Shenhan Zhu, Fangcheng Fu, Xinyi Liu, Xuefeng Xiao, Huixia Li, Jiashi Li, Faming Wu, Bin Cui
LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations
Anian Ruoss, Fabio Pardo, Harris Chan, Bonnie Li, Volodymyr Mnih, Tim Genewein
Retrieval or Global Context Understanding? On Many-Shot In-Context Learning for Long-Context Evaluation
Kaijian Zou, Muhammad Khalifa, Lu Wang
LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios
Xiaodong Wu, Minhao Wang, Yichen Liu, Xiaoming Shi, He Yan, Xiangju Lu, Junmin Zhu, Wei Zhang
LongSafetyBench: Long-Context LLMs Struggle with Safety Issues
Mianqiu Huang, Xiaoran Liu, Shaojun Zhou, Mozhi Zhang, Chenkun Tan, Pengyu Wang, Qipeng Guo, Zhe Xu, Linyang Li, Zhikai Lei, Linlin Li, Qun Liu, Yaqian Zhou, Xipeng Qiu, Xuanjing Huang