Temporal Transformer
Temporal transformers are deep learning models designed to analyze and predict sequences of data with inherent spatio-temporal dependencies, aiming to improve upon traditional methods by capturing long-range interactions across both space and time. Current research applies these models to traffic prediction, weather forecasting, video analysis (deblurring, object segmentation, and action recognition), and human motion analysis, often combining transformers with convolutional or graph neural networks to exploit different data representations. These advances have significant implications for fields ranging from autonomous driving to medical diagnosis, where they offer improved accuracy and efficiency.
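The spatio-temporal modeling described above is often realized by factorizing attention into a spatial pass (over locations within each time step) and a temporal pass (over time steps for each location). The following PyTorch sketch illustrates one such block; the class name, tensor layout, and hyperparameters are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal sketch of a factorized spatio-temporal transformer block.
# Attention is applied first across spatial locations within each time step,
# then across time steps for each location (hypothetical example, not a
# reference implementation of any listed paper).
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, dim), e.g. traffic sensors observed over time.
        b, t, n, d = x.shape

        # Spatial attention: attend over nodes within each time step.
        xs = x.reshape(b * t, n, d)
        attn_s, _ = self.spatial_attn(xs, xs, xs)
        x = self.norm1((xs + attn_s).reshape(b, t, n, d))

        # Temporal attention: attend over time steps for each node.
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        attn_t, _ = self.temporal_attn(xt, xt, xt)
        xt = self.norm2(xt + attn_t)

        # Position-wise feed-forward with a residual connection.
        xt = self.norm3(xt + self.mlp(xt))
        return xt.reshape(b, n, t, d).permute(0, 2, 1, 3)


if __name__ == "__main__":
    block = SpatioTemporalBlock(dim=64, heads=4)
    x = torch.randn(2, 12, 20, 64)  # 2 sequences, 12 time steps, 20 spatial nodes
    print(block(x).shape)  # torch.Size([2, 12, 20, 64])
```

Factorizing the attention this way keeps the cost at roughly O(T·N² + N·T²) rather than O((T·N)²) for joint attention over all space-time positions, which is why many of the architectures below adopt a similar decomposition or replace the spatial pass with a graph or convolutional module.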
Papers
Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation
Shilin Yan, Renrui Zhang, Ziyu Guo, Wenchao Chen, Wei Zhang, Hongyang Li, Yu Qiao, Hao Dong, Zhongjiang He, Peng Gao
Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting
Zheng Sun, Yi Wei, Wenxiao Jia, Long Yu
DG-Trans: Dual-level Graph Transformer for Spatiotemporal Incident Impact Prediction on Traffic Networks
Yanshen Sun, Kaiqun Fu, Chang-Tien Lu
MSTFormer: Motion Inspired Spatial-temporal Transformer with Dynamic-aware Attention for long-term Vessel Trajectory Prediction
Huimin Qiang, Zhiyuan Guo, Shiyuan Xie, Xiaodong Peng