Transformer-Based
Transformer-based models leverage self-attention mechanisms to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as handling long sequences and endogeneity. These advances are influencing many scientific communities and practical applications, yielding more accurate, efficient, and robust models across numerous domains.
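The self-attention mechanism referred to above can be sketched in a few lines: each position's query is compared against every position's key, and the resulting softmax weights mix the value vectors, so every output token can attend to the whole sequence. This is a minimal single-head, NumPy-only illustration (the projection matrices and dimensions here are arbitrary placeholders, not taken from any of the papers listed below).

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) input token embeddings.
    Wq, Wk, Wv: (d_model, d_model) learned projection matrices
    (randomly initialized here for illustration).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Pairwise affinities between all positions, scaled for stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each row is a distribution over all positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of every position's value.
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because the attention weights couple every pair of positions, the cost grows quadratically with sequence length, which is why efficiency and long-sequence handling are active research directions noted above.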
Papers
Exploring Frequency-Inspired Optimization in Transformer for Efficient Single Image Super-Resolution
Ao Li, Le Zhang, Yun Liu, Ce Zhu
PETformer: Long-term Time Series Forecasting via Placeholder-enhanced Transformer
Shengsheng Lin, Weiwei Lin, Wentai Wu, Songbo Wang, Yongxiang Wang
Self-supervised Learning of Rotation-invariant 3D Point Set Features using Transformer and its Self-distillation
Takahiko Furuya, Zhoujie Chen, Ryutarou Ohbuchi, Zhenzhong Kuang
Efficient Bayesian Optimization with Deep Kernel Learning and Transformer Pre-trained on Multiple Heterogeneous Datasets
Wenlong Lyu, Shoubo Hu, Jie Chuai, Zhitang Chen
LATR: 3D Lane Detection from Monocular Images with Transformer
Yueru Luo, Chaoda Zheng, Xu Yan, Tang Kun, Chao Zheng, Shuguang Cui, Zhen Li
3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment
Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, Qing Li
SODFormer: Streaming Object Detection with Transformer Using Events and Frames
Dianze Li, Jianing Li, Yonghong Tian