Transformer Models
Transformer models are a class of neural networks built on the attention mechanism, which lets them process sequential data such as text and time series with high effectiveness. Current research focuses on improving training stability (e.g., mitigating loss spikes), increasing expressiveness through novel attention mechanisms and embedding techniques, and optimizing performance for specific applications by exploring alternative architectures (e.g., hybrid Transformer-Mamba models) and parallelization strategies. This work matters because transformers are now widely adopted across diverse fields, from natural language processing and computer vision to scientific computing and engineering, driving advances in both theoretical understanding and practical applications.
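For context on the mechanism these papers build on, the sketch below shows standard scaled dot-product attention, the core operation of transformer models. This is a minimal NumPy illustration under our own naming and toy dimensions, not an implementation from any paper listed here:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    # to keep softmax gradients stable as dimensionality grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy self-attention example: 4 tokens, model dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Note that the score matrix is quadratic in sequence length, which is the cost that the efficient-attention work below (e.g., linear-complexity conversion and LevAttention) aims to reduce.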
Papers
Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity
Mutian He, Philip N. Garner
Optimizing Transformer based on high-performance optimizer for predicting employment sentiment in American social media content
Feiyang Wang, Qiaozhi Bao, Zixuan Wang, Yanlin Chen
LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions
Ravindran Kannan, Chiranjib Bhattacharyya, Praneeth Kacham, David P. Woodruff
Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes
Kosuke Nishida, Kyosuke Nishida, Kuniko Saito
DAPE V2: Process Attention Score as Feature Map for Length Extrapolation
Chuanyang Zheng, Yihang Gao, Han Shi, Jing Xiong, Jiankai Sun, Jingyao Li, Minbin Huang, Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li