Sequential Recommendation
Sequential recommendation aims to predict users' next interactions by analyzing their historical behavior sequences, focusing on capturing dynamic preferences and long-term patterns. Current research heavily utilizes transformer-based architectures, large language models (LLMs), and graph neural networks (GNNs), often incorporating techniques like contrastive learning, test-time training, and knowledge distillation to improve accuracy and efficiency, particularly for large-scale datasets. This field is crucial for personalized recommendations in various applications, driving improvements in user experience and business outcomes through more accurate and timely predictions. Addressing challenges like scalability, noise in user data, and the effective integration of diverse data sources (e.g., textual item descriptions, collaborative filtering signals) remains a key focus.
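To make the task concrete, below is a minimal sketch of a transformer-based next-item recommender in the SASRec style described above. It is an illustrative assumption, not the method of any paper listed here; the model name, hyperparameters, and the toy data are all hypothetical.

```python
import torch
import torch.nn as nn


class NextItemTransformer(nn.Module):
    """Scores candidate next items given a user's interaction sequence."""

    def __init__(self, num_items: int, d_model: int = 64,
                 n_heads: int = 2, n_layers: int = 2, max_len: int = 50):
        super().__init__()
        # Item id 0 is reserved as the padding index.
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, seq_len) of item ids, 0 = padding.
        seq_len = seq.size(1)
        positions = torch.arange(seq_len, device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(positions)
        # Causal mask: each position may only attend to earlier interactions.
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=seq.device),
            diagonal=1)
        h = self.encoder(x, mask=causal, src_key_padding_mask=(seq == 0))
        # Score every item by dot product of the last hidden state with the
        # (tied) item embedding table; shape: (batch, num_items + 1).
        return h[:, -1, :] @ self.item_emb.weight.T


# Toy usage: rank candidate next items for a small batch of sequences.
model = NextItemTransformer(num_items=1000)
batch = torch.randint(1, 1001, (4, 20))   # 4 users, 20 interactions each
scores = model(batch)                     # (4, 1001) next-item scores
top_k = scores.topk(10, dim=-1).indices   # top-10 recommended item ids
```

In practice such a model is trained with a next-item prediction loss (e.g., cross-entropy over the catalog or sampled negatives); the contrastive-learning and distillation techniques mentioned above are layered on top of this backbone.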
Papers
Lost in Sequence: Do Large Language Models Understand Sequential Recommendation?
Sein Kim, Hongseok Kang, Kibum Kim, Jiwan Kim, Donghyun Kim, Minchul Yang, Kwangjin Oh, Julian McAuley, Chanyoung Park
KAIST ● NAVER Corporation ● University of California San Diego
A Systematic Survey on Federated Sequential Recommendation
Yichen Li, Qiyu Qin, Gaoyang Zhu, Wenchao Xu, Haozhao Wang, Yuhua Li, Rui Zhang, Ruixuan Li