Transformer Architecture
Transformer architectures are a dominant deep learning paradigm, known primarily for the self-attention mechanism, which enables effective, parallelizable processing of sequential data such as text and time series. Current research focuses on mitigating the quadratic complexity of self-attention in sequence length through alternative architectures (e.g., state space models such as Mamba) and optimized attention algorithms (e.g., local attention, quantized attention), and on extending transformers to diverse domains including computer vision, robotics, and blockchain technology. These efforts aim to improve the efficiency, scalability, and interpretability of transformers, broadening their applicability and enhancing performance across numerous fields.
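To make the quadratic-cost point concrete, the following is a minimal sketch of single-head scaled dot-product self-attention in NumPy; the (n, n) score matrix it builds is the source of the O(n²) cost that the alternative architectures and optimized algorithms above try to avoid. All names and shapes here are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single head.

    X: (n, d) token embeddings. The (n, n) score matrix computed below
    is what makes vanilla self-attention quadratic in sequence length n.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n, n) pairwise similarities -> O(n^2)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over all positions
    return weights @ V                               # each output mixes all value vectors

# Illustrative shapes only (hypothetical values).
n, d = 128, 64
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)   # output shape (128, 64); score matrix was (128, 128)
```

Local attention restricts each query to a window of nearby keys so the score matrix becomes roughly (n, w) with a fixed window w, while state space models replace the pairwise score matrix entirely with a recurrence that scales linearly in n.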
Papers
U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers
Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang
A Combination of BERT and Transformer for Vietnamese Spelling Correction
Hieu Ngo Trung, Duong Tran Ham, Tin Huynh, Kiem Hoang
Exploring Extreme Quantization in Spiking Language Models
Malyaban Bal, Yi Jiang, Abhronil Sengupta
Transfer Learning and Transformer Architecture for Financial Sentiment Analysis
Tohida Rehman, Raghubir Bose, Samiran Chattopadhyay, Debarshi Kumar Sanyal
Exploring the Robustness of In-Context Learning with Noisy Labels
Chen Cheng, Xinzhi Yu, Haodong Wen, Jingsong Sun, Guanzhang Yue, Yihao Zhang, Zeming Wei
Transformers, Contextualism, and Polysemy
Jumbly Grindrod
State Space Model for New-Generation Network Alternative to Transformers: A Survey
Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, Jin Tang