Attention Mechanism
Attention mechanisms are computational processes that selectively focus on the most relevant parts of the input data, improving efficiency and performance across machine learning models. Current research emphasizes optimizing attention's computational cost (e.g., reducing the quadratic complexity of self-attention in sequence length to linear), enhancing its expressiveness (e.g., through convolutional operations on attention scores), and improving its robustness (e.g., mitigating hallucination in vision-language models and addressing overfitting). These advances are significantly impacting fields such as natural language processing, computer vision, and time series analysis, enabling more efficient and accurate models for diverse applications.
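As a concrete illustration of the mechanism these papers build on, here is a minimal sketch of standard scaled dot-product attention in NumPy (not taken from any paper above). The (n, m) score matrix it materializes is the source of the quadratic cost that linear-attention work aims to avoid; the function names and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n, d) queries, K: (m, d) keys, V: (m, dv) values.
    # The (n, m) score matrix is where the quadratic cost arises.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # (n, dv) attended output

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, with weights given by the softmax-normalized query-key similarities.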
Papers
Why Do Large Language Models (LLMs) Struggle to Count Letters?
Tairan Fu, Raquel Ferrando, Javier Conde, Carlos Arriaga, Pedro Reviriego
FLAMe: Federated Learning with Attention Mechanism using Spatio-Temporal Keypoint Transformers for Pedestrian Fall Detection in Smart Cities
Byeonghun Kim, Byeongjoon Noh
Emotional Vietnamese Speech-Based Depression Diagnosis Using Dynamic Attention Mechanism
Quang-Anh N.D., Manh-Hung Ha, Thai Kim Dinh, Minh-Duc Pham, Ninh Nguyen Van
Euclidean Fast Attention: Machine Learning Global Atomic Representations at Linear Cost
J. Thorben Frank, Stefan Chmiela, Klaus-Robert Müller, Oliver T. Unke
SurvBETA: Ensemble-Based Survival Models Using Beran Estimators and Several Attention Mechanisms
Lev V. Utkin, Semen P. Khomets, Vlada A. Efremenko, Andrei V. Konstantinov
3A-YOLO: New Real-Time Object Detectors with Triple Discriminative Awareness and Coordinated Representations
Xuecheng Wu, Junxiao Xue, Liangyu Fu, Jiayu Nie, Danlei Huang, Xinyi Yin