Attention Pattern
Attention patterns in neural networks, particularly transformers, are the focus of intense research into how these models process information and make decisions. Current work investigates attention mechanisms across model architectures, including vision transformers and large language models, analyzing how attention weights relate to model performance, to human attention, and to the presence of adversarial examples or biases. Understanding and potentially controlling these patterns is crucial for improving model interpretability, robustness, and efficiency, and ultimately for building more reliable and trustworthy AI systems across applications such as medical image analysis and natural language processing.
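For concreteness, the sketch below computes a single-head scaled dot-product attention pattern: the row-stochastic weight matrix that this line of research inspects. It is a minimal illustration using only NumPy; the function name and toy dimensions are illustrative, not drawn from any particular paper.

```python
import numpy as np

def attention_pattern(Q, K, V):
    """Single-head scaled dot-product attention.

    Returns the output and the (seq_len x seq_len) attention weight
    matrix -- the "pattern" analyzed in interpretability work.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Toy example: 4 tokens, 8-dimensional head (self-attention, so Q=K=V=X)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
_, pattern = attention_pattern(X, X, X)
print(pattern.round(2))  # each row sums to 1: where each token "looks"
```

Each row of `pattern` shows how strongly one token attends to every other token; interpretability work studies how these distributions correlate with model behavior, human gaze, or adversarial inputs.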
Papers
(Listing of papers dated February 16, 2024 through November 14, 2024; titles not recovered.)