Transformer-Based
Transformer-based models leverage self-attention mechanisms to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as handling long sequences and endogeneity. These advances are yielding more accurate, efficient, and robust models across numerous scientific and practical domains.
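To make the self-attention mechanism referenced above concrete, the following is a minimal, illustrative sketch of single-head scaled dot-product self-attention in NumPy; the function name, matrix shapes, and toy data are assumptions for demonstration and do not correspond to any specific paper listed below.

```python
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence x of shape (seq_len, d_model).

    Every position attends to every other position, which is how the
    mechanism captures long-range dependencies regardless of distance.
    """
    q = x @ w_q  # queries, (seq_len, d_k)
    k = x @ w_k  # keys,    (seq_len, d_k)
    v = x @ w_v  # values,  (seq_len, d_v)

    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise similarities, (seq_len, seq_len)
    # Softmax over the key dimension (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # attention-weighted sum of values, (seq_len, d_v)

# Toy usage: a sequence of 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 8)
```

Full transformer blocks add multiple heads, residual connections, normalization, and feed-forward layers on top of this core operation; the efficiency work mentioned above (quantization, optimized architectures) largely targets the quadratic cost of the (seq_len, seq_len) score matrix.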
Papers
In2SET: Intra-Inter Similarity Exploiting Transformer for Dual-Camera Compressive Hyperspectral Imaging
Xin Wang, Lizhi Wang, Xiangtian Ma, Maoqing Zhang, Lin Zhu, Hua Huang
CST-former: Transformer with Channel-Spectro-Temporal Attention for Sound Event Localization and Detection
Yusun Shul, Jung-Woo Choi
Unleashing the Power of CNN and Transformer for Balanced RGB-Event Video Recognition
Xiao Wang, Yao Rong, Shiao Wang, Yuan Chen, Zhe Wu, Bo Jiang, Yonghong Tian, Jin Tang
ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Gui-Song Xia, Dacheng Tao
Delving Deeper Into Astromorphic Transformers
Md Zesun Ahmed Mia, Malyaban Bal, Abhronil Sengupta
Polynomial-based Self-Attention for Table Representation learning
Jayoung Kim, Yehjin Shin, Jeongwhan Choi, Hyowon Wi, Noseong Park
Towards Equipping Transformer with the Ability of Systematic Compositionality
Chen Huang, Peixin Qin, Wenqiang Lei, Jiancheng Lv
Language-Guided Transformer for Federated Multi-Label Classification
I-Jieh Liu, Ci-Siang Lin, Fu-En Yang, Yu-Chiang Frank Wang
Can a Transformer Represent a Kalman Filter?
Gautam Goel, Peter Bartlett