Self-Attention
Self-attention is a mechanism that lets a neural network weigh the relevance of every part of its input to every other part, allowing it to capture long-range dependencies and contextual information. Current research focuses on making self-attention more efficient, particularly in vision transformers and other large models, through techniques such as low-rank approximations, selective attention, and grouped-query attention, with the goal of cutting computational cost while preserving accuracy. These advances are shaping fields such as computer vision, natural language processing, and time series analysis, enabling more efficient and powerful models for tasks like image restoration, text-to-image generation, and medical image segmentation.
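To make the weighting idea concrete, here is a minimal NumPy sketch of standard scaled dot-product self-attention over a single sequence; the projection matrices, dimensions, and the self_attention helper are illustrative assumptions, not code from any of the papers listed below.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q = x @ w_q  # queries, shape (seq_len, d_k)
    k = x @ w_k  # keys,    shape (seq_len, d_k)
    v = x @ w_v  # values,  shape (seq_len, d_v)
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise attention logits between all positions
    # Softmax over the key dimension: each row becomes a distribution over positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v  # each output is a weighted mix of value vectors from all positions

# Toy usage with hypothetical sizes: 5 tokens, model width 8, head width 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 4)
```

The quadratic cost comes from the (seq_len, seq_len) score matrix, which is exactly what the efficiency techniques mentioned above (low-rank approximations, selective attention, grouped-query attention) try to shrink or share.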
Papers
Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention
Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang
Skeletal Graph Self-Attention: Embedding a Skeleton Inductive Bias into Sign Language Production
Ben Saunders, Necati Cihan Camgoz, Richard Bowden
A cross-modal fusion network based on self-attention and residual structure for multimodal emotion recognition
Ziwang Fu, Feng Liu, Hanyang Wang, Jiayin Qi, Xiangling Fu, Aimin Zhou, Zhibin Li
ProSTformer: Pre-trained Progressive Space-Time Self-attention Model for Traffic Flow Forecasting
Xiao Yan, Xianghua Gan, Jingjing Tang, Rui Wang