Large Time Series Model
Large Time Series Models (LTSMs) aim to provide universal forecasting models, analogous to Large Language Models in natural language processing, that can handle diverse time series data without task-specific training. Current research centers on transformer-based architectures, exploring pre-training strategies over massive, heterogeneous datasets and techniques such as contrastive learning and statistical prompting to improve generalization. By enabling zero-shot and few-shot forecasting, this approach promises to improve the efficiency and accuracy of time series analysis across fields ranging from finance and healthcare to energy and transportation.
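To make the pre-training recipe concrete, below is a minimal sketch of the common patch-based, next-patch-prediction setup used by generative time series transformers of this kind (e.g., the decoder-only style explored in Timer). All names here (PatchForecaster, PATCH_LEN, and the hyperparameters) are illustrative assumptions, not taken from any specific paper's released code.

```python
# Sketch: a series is split into fixed-length patches (tokens); a causal
# transformer is trained so each position predicts the *next* patch.
import torch
import torch.nn as nn

PATCH_LEN = 16   # points per patch/token (assumed value)
D_MODEL = 128

class PatchForecaster(nn.Module):
    def __init__(self, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(PATCH_LEN, D_MODEL)   # patch -> token embedding
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(D_MODEL, PATCH_LEN)    # token -> predicted next patch

    def forward(self, series):
        # series: (batch, length); length must be a multiple of PATCH_LEN
        patches = series.unfold(1, PATCH_LEN, PATCH_LEN)  # (B, n_patches, PATCH_LEN)
        tokens = self.embed(patches)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.backbone(tokens, mask=causal)       # causal self-attention
        return self.head(hidden)

# Next-patch prediction objective: position i predicts patch i+1.
model = PatchForecaster()
x = torch.randn(8, 256)                                   # toy batch of series
pred = model(x)
target = x.unfold(1, PATCH_LEN, PATCH_LEN)
loss = nn.functional.mse_loss(pred[:, :-1], target[:, 1:])
loss.backward()
```

After pre-training on a large, heterogeneous corpus, the same model can be applied to an unseen series for zero-shot forecasting by feeding the context window and taking the prediction at the final position.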
Papers
Unified Training of Universal Time Series Forecasting Transformers
Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, Doyen Sahoo
Timer: Generative Pre-trained Transformers Are Large Time Series Models
Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long