Self Attention
Self-attention is a neural-network mechanism that lets a model weigh the importance of different parts of its input as it processes them, capturing long-range dependencies and contextual information. Current research focuses on making self-attention more efficient, particularly in vision transformers and other large models, through techniques such as low-rank approximations, selective attention, and grouped query attention that reduce computational cost while maintaining accuracy. These advances are shaping computer vision, natural language processing, and time series analysis by enabling more efficient and capable models for tasks such as image restoration, text-to-image generation, and medical image segmentation.
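To make the mechanism concrete, below is a minimal sketch of single-head scaled dot-product self-attention in NumPy. It is illustrative only: the function names, array shapes, and the single-head setup are assumptions for this example and are not taken from any of the papers listed below.

```python
# A minimal sketch of scaled dot-product self-attention (single head),
# using NumPy. Shapes and weight initialization are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence x of shape (seq_len, d_model)."""
    q = x @ w_q  # queries: (seq_len, d_k)
    k = x @ w_k  # keys:    (seq_len, d_k)
    v = x @ w_v  # values:  (seq_len, d_v)
    d_k = q.shape[-1]
    # Every token attends to every other token; scores are scaled by sqrt(d_k).
    scores = q @ k.T / np.sqrt(d_k)     # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v                  # (seq_len, d_v)

# Example usage with random inputs and weights.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 6, 16, 8
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (6, 8)
```

The (seq_len, seq_len) score matrix is the source of self-attention's quadratic cost in sequence length, which is the bottleneck that efficiency-oriented work such as the approaches surveyed above aims to reduce.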
Papers
Mechanics of Next Token Prediction with Self-Attention
Yingcong Li, Yixiao Huang, M. Emrullah Ildiz, Ankit Singh Rawat, Samet Oymak
CHAI: Clustered Head Attention for Efficient LLM Inference
Saurabh Agarwal, Bilge Acun, Basil Hosmer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu
Explainable Transformer Prototypes for Medical Diagnoses
Ugur Demir, Debesh Jha, Zheyuan Zhang, Elif Keles, Bradley Allen, Aggelos K. Katsaggelos, Ulas Bagci
Multi-Scale Implicit Transformer with Re-parameterize for Arbitrary-Scale Super-Resolution
Jinchen Zhu, Mingjian Zhang, Ling Zheng, Shizhuang Weng
MiKASA: Multi-Key-Anchor & Scene-Aware Transformer for 3D Visual Grounding
Chun-Peng Chang, Shaoxiang Wang, Alain Pagani, Didier Stricker
TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax
Tobias Christian Nauen, Sebastian Palacio, Andreas Dengel
Quantum Mixed-State Self-Attention Network
Fu Chen, Qinglin Zhao, Li Feng, Chuangtao Chen, Yangbin Lin, Jianhong Lin