Transformer Architecture
Transformer architectures are a dominant deep learning paradigm whose self-attention mechanism enables effective modeling of sequential data such as text and time series. Current research focuses on mitigating the quadratic time complexity of self-attention, both through alternative architectures (e.g., state space models such as Mamba) and through optimized algorithms (e.g., local attention, quantized attention), and on extending transformers to diverse domains including computer vision, robotics, and blockchain technology. These efforts aim to improve the efficiency, scalability, and interpretability of transformers, broadening their applicability and enhancing performance across numerous fields.
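To make the local-attention idea concrete, below is a minimal single-head sketch in PyTorch. The function name `local_attention` and its `window` parameter are illustrative, not taken from any of the listed papers; for clarity the score matrix is materialized densely and out-of-window positions are masked, whereas production kernels compute only the in-window blocks to actually realize the reduced O(n·w) cost instead of full attention's O(n²).

```python
import torch

def local_attention(q, k, v, window: int):
    """Windowed (local) self-attention: each position attends only to keys
    within `window` positions of itself. Shown here with a dense (n, n)
    score matrix plus masking for readability; efficient implementations
    compute only the in-window scores."""
    n, d = q.shape
    scores = q @ k.T / d ** 0.5                          # (n, n) attention logits
    idx = torch.arange(n)
    outside = (idx[None, :] - idx[:, None]).abs() > window  # True = outside window
    scores = scores.masked_fill(outside, float("-inf"))     # block distant positions
    return torch.softmax(scores, dim=-1) @ v             # (n, d) outputs

# Toy usage: 8 tokens, model dimension 4, each token sees +/- 2 neighbors.
q = k = v = torch.randn(8, 4)
out = local_attention(q, k, v, window=2)
print(out.shape)  # torch.Size([8, 4])
```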
Papers
CTA-Net: A CNN-Transformer Aggregation Network for Improving Multi-Scale Feature Extraction
Chunlei Meng, Jiacheng Yang, Wei Lin, Bowen Liu, Hongda Zhang, Chun Ouyang, Zhongxue Gan
Survey and Evaluation of Converging Architecture in LLMs based on Footsteps of Operations
Seongho Kim, Jihyun Moon, Juntaek Oh, Insu Choi, Joon-Sung Yang
Bypassing the Exponential Dependency: Looped Transformers Efficiently Learn In-context by Multi-step Gradient Descent
Bo Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song
Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix
Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yufa Zhou
Extra Global Attention Designation Using Keyword Detection in Sparse Transformer Architectures
Evan Lucas, Dylan Kangas, Timothy C. Havens
HorGait: A Hybrid Model for Accurate Gait Recognition in LiDAR Point Cloud Planar Projections
Jiaxing Hao, Yanxi Wang, Zhigang Chang, Hongmin Gao, Zihao Cheng, Chen Wu, Xin Zhao, Peiye Fang, Rachmat Muwardi
Fundamental Limitations on Subquadratic Alternatives to Transformers
Josh Alman, Hantao Yu
Transformers Utilization in Chart Understanding: A Review of Recent Advances & Future Trends
Mirna Al-Shetairy, Hanan Hindy, Dina Khattab, Mostafa M. Aref
Equivariant Neural Functional Networks for Transformers
Viet-Hoang Tran, Thieu N. Vo, An Nguyen The, Tho Tran Huu, Minh-Khoi Nguyen-Nhat, Thanh Tran, Duy-Tung Pham, Tan Minh Nguyen
Local Attention Mechanism: Boosting the Transformer Architecture for Long-Sequence Time Series Forecasting
Ignacio Aguilera-Martos, Andrés Herrera-Poyatos, Julián Luengo, Francisco Herrera
Enhanced Transformer architecture for in-context learning of dynamical systems
Matteo Rufolo, Dario Piga, Gabriele Maroni, Marco Forgione