Time Series Representation Learning
Time series representation learning aims to extract meaningful features from sequential data, improving performance on downstream tasks such as forecasting and classification. Current research relies heavily on transformer-based architectures, often paired with contrastive or other self-supervised objectives to learn robust representations from noisy or irregularly sampled data, and sometimes incorporating domain knowledge or multi-modal information. These advances are shaping fields such as healthcare (e.g., disease prediction and patient monitoring) and industry (e.g., predictive maintenance and resource optimization) by enabling more accurate and efficient analysis of complex temporal patterns. A minimal sketch of the contrastive approach appears below.
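To make the contrastive self-supervised recipe concrete, here is a minimal sketch: a small transformer encoder embeds two augmented views of each series, and a SimCLR-style NT-Xent loss pulls the views of the same series together. All names, augmentations, and hyperparameters (TSEncoder, augment, jitter scale, temperature) are illustrative assumptions for this sketch, not drawn from any of the papers listed below.

```python
# Illustrative sketch of contrastive (SimCLR-style) time series representation
# learning with a transformer encoder. Hyperparameters are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Maps a batch of series (batch, length, channels) to an embedding."""
    def __init__(self, in_channels=1, d_model=64, n_heads=4, n_layers=2, emb_dim=32):
        super().__init__()
        self.input_proj = nn.Linear(in_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, emb_dim)  # projection head for the loss

    def forward(self, x):
        h = self.encoder(self.input_proj(x))  # (batch, length, d_model)
        return self.head(h.mean(dim=1))       # mean-pool over time steps

def augment(x):
    """Cheap stochastic augmentations: additive jitter and channel scaling."""
    jitter = x + 0.05 * torch.randn_like(x)
    scale = torch.rand(x.size(0), 1, x.size(2)) * 0.4 + 0.8  # in [0.8, 1.2]
    return jitter * scale

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: the two views of each series form the positive pair."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))  # never pair a sample with itself
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    encoder = TSEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    x = torch.randn(16, 100, 1)  # toy batch: 16 univariate series, 100 steps
    for step in range(5):
        z1, z2 = encoder(augment(x)), encoder(augment(x))
        loss = nt_xent(z1, z2)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: loss {loss.item():.3f}")
```

After pretraining with such an objective, the projection head is typically discarded and the pooled encoder output is reused for downstream forecasting or classification.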
Papers
Plots Unlock Time-Series Understanding in Multimodal Models
Mayank Daswani, Mathias M.J. Bellaiche, Marc Wilson, Desislav Ivanov, Mikhail Papkov, Eva Schnider, Jing Tang, Kay Lamerigts, Gabriela Botea, Michael A. Sanchez, Yojan Patel, Shruthi Prabhakara, Shravya Shetty, Umesh Telang
TrajGPT: Irregular Time-Series Representation Learning for Health Trajectory Analysis
Ziyang Song, Qingcheng Lu, He Zhu, David Buckeridge, Yue Li
Multi-Knowledge Fusion Network for Time Series Representation Learning
Sagar Srinivas Sakhinana, Shivam Gupta, Krishna Sai Sudhir Aripirala, Venkataramana Runkana
Multi-Source Knowledge-Based Hybrid Neural Framework for Time Series Representation Learning
Sagar Srinivas Sakhinana, Krishna Sai Sudhir Aripirala, Shivam Gupta, Venkataramana Runkana