Attention Mechanism
Attention mechanisms selectively weight the most relevant parts of an input, improving efficiency and performance across a wide range of machine learning models. Current research focuses on reducing attention's computational cost (e.g., replacing quadratic complexity with linear alternatives), increasing its expressiveness (e.g., applying convolutional operations to attention scores), and improving its robustness (e.g., mitigating hallucination in vision-language models and curbing overfitting). These advances are reshaping natural language processing, computer vision, and time series analysis, yielding more efficient and accurate models across applications.
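The common starting point for this line of work is scaled dot-product attention. The sketch below contrasts it with a kernelized "linear attention" variant of the kind the quadratic-to-linear research direction pursues. It is a minimal illustration only: the function names, the ReLU-based feature map, and the toy shapes are assumptions, not the method of any paper listed below.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: O(n^2 * d) due to the n x n score matrix."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n, d_v)

def linear_attention(Q, K, V):
    """Kernelized attention: computing K^T V first reorders the cost to O(n * d^2)."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6       # non-negative feature map (assumed, ReLU-based)
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                                   # (d, d_v) summary of keys and values
    Z = Qf @ Kf.sum(axis=0)                         # per-query normalizer, shape (n,)
    return (Qf @ KV) / Z[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 8, 4
    Q, K, V = rng.normal(size=(3, n, d))            # toy shapes for illustration
    print(softmax_attention(Q, K, V).shape)         # (8, 4)
    print(linear_attention(Q, K, V).shape)          # (8, 4)
```

Both functions return one output vector per query; the linear variant avoids materializing the n x n attention matrix, which is what makes it attractive for long sequences.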
Papers
Local Attention Mechanism: Boosting the Transformer Architecture for Long-Sequence Time Series Forecasting
Ignacio Aguilera-Martos, Andrés Herrera-Poyatos, Julián Luengo, Francisco Herrera
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy
Rongzhi Zhang, Kuang Wang, Liyuan Liu, Shuohang Wang, Hao Cheng, Chao Zhang, Yelong Shen
Optimizing News Text Classification with Bi-LSTM and Attention Mechanism for Efficient Data Processing
Bingyao Liu, Jiajing Chen, Rui Wang, Junming Huang, Yuanshuai Luo, Jianjun Wei
A-VL: Adaptive Attention for Large Vision-Language Models
Junyang Zhang, Mu Yuan, Ruiguang Zhong, Puhan Luo, Huiyou Zhan, Ningkang Zhang, Chengchen Hu, Xiangyang Li
A dynamic vision sensor object recognition model based on trainable event-driven convolution and spiking attention mechanism
Peng Zheng, Qian Zhou
Optimizing food taste sensory evaluation through neural network-based taste electroencephalogram channel selection
Xiuxin Xia, Qun Wang, He Wang, Chenrui Liu, Pengwei Li, Yan Shi, Hong Men
Mastering Chess with a Transformer Model
Daniel Monroe, Philip A. Chalmers
Agent Aggregator with Mask Denoise Mechanism for Histopathology Whole Slide Image Analysis
Xitong Ling, Minxi Ouyang, Yizhi Wang, Xinrui Chen, Renao Yan, Hongbo Chu, Junru Cheng, Tian Guan, Sufang Tian, Xiaoping Liu, Yonghong He