Paper ID: 2502.10927 • Published Feb 15, 2025
The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training
Matteo Saponati, Pascal Sager, Pau Vilimelis Aceituno, Thilo Stadelmann, Benjamin Grewe
Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices, and how different objective functions impact this process, remain unclear. We present a mathematical framework to analyze self-attention matrices by deriving the structures governing their weight updates.
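The abstract does not reproduce the derivation, but a compact sketch of where objective-dependent structure can enter the weight updates (our own notation for the pre-softmax scores S, the score gradient G, and head dimension d_k, none of it taken from the paper) follows from the chain rule. With scores S = (1/sqrt(d_k)) X W_Q W_K^T X^T and G := dL/dS,

    \frac{\partial L}{\partial W_Q} = \frac{1}{\sqrt{d_k}}\, X^\top G\, X W_K,
    \qquad
    \frac{\partial L}{\partial W_K} = \frac{1}{\sqrt{d_k}}\, X^\top G^\top X W_Q.

A causal mask constrains G to be lower-triangular, which is one way asymmetric, direction-dependent structure can accumulate in the product W_Q W_K^T; a bidirectional objective places no such triangular constraint on G.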
Using this framework, we demonstrate that bidirectional training induces symmetry in the weight matrices, while autoregressive training results in directionality and column dominance.
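As a minimal, self-contained sketch of how these signatures could be measured on a trained model (the metric definitions and function names below are illustrative assumptions, not the paper's exact diagnostics):

    import torch

    def symmetry_score(M: torch.Tensor) -> float:
        # Decompose M into symmetric and skew-symmetric parts; return a
        # score in [-1, 1]: +1 for perfectly symmetric, -1 for skew-symmetric.
        sym = 0.5 * (M + M.T)
        skew = 0.5 * (M - M.T)
        s, k = sym.norm(), skew.norm()
        return ((s - k) / (s + k)).item()

    def column_dominance(M: torch.Tensor) -> float:
        # Spread of column norms relative to row norms; values well above 1
        # suggest a few dominant columns carry most of the matrix's energy.
        return (M.norm(dim=0).std() / M.norm(dim=1).std()).item()

    # Score the combined query-key map M = W_Q @ W_K.T of a single head.
    d_model, d_head = 768, 64
    W_Q = torch.randn(d_model, d_head) / d_model ** 0.5
    W_K = torch.randn(d_model, d_head) / d_model ** 0.5
    M = W_Q @ W_K.T  # (d_model, d_model) bilinear form over token pairs
    print(f"symmetry {symmetry_score(M):+.3f}, dominance {column_dominance(M):.3f}")

Applied to real checkpoints, the expectation suggested by the abstract would be a symmetry score near +1 for bidirectionally trained encoders, and lower symmetry with larger column dominance for autoregressive models.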
Our theoretical findings are validated across multiple Transformer models, including ModernBERT, GPT, LLaMA3, and Mistral, and across input modalities such as text, vision, and audio. Finally, we apply these insights by showing that symmetric initialization improves the performance of encoder-only models on language tasks.
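The abstract does not describe the initialization scheme itself; one simple construction consistent with its description (an assumption on our part, with a hypothetical q_proj/k_proj interface) is to tie the key projection to the query projection at initialization, so that W_Q W_K^T = W_Q W_Q^T is symmetric by construction:

    import torch
    import torch.nn as nn

    def symmetric_attention_init(attn: nn.Module) -> None:
        # Copy the query projection into the key projection so the combined
        # query-key map starts out exactly symmetric (positive semidefinite).
        # Assumes the block exposes `q_proj` and `k_proj` nn.Linear layers.
        with torch.no_grad():
            attn.k_proj.weight.copy_(attn.q_proj.weight)
            if attn.k_proj.bias is not None:
                attn.k_proj.bias.copy_(attn.q_proj.bias)

    class ToyAttention(nn.Module):
        # Minimal stand-in for an encoder self-attention block.
        def __init__(self, d_model: int = 256):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)

    attn = ToyAttention()
    symmetric_attention_init(attn)
    M = attn.q_proj.weight @ attn.k_proj.weight.T
    print(torch.allclose(M, M.T))  # True: the query-key map starts symmetric

Training then proceeds as usual; only the starting point is constrained, so the scheme adds no parameters and no runtime cost.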
This mathematical analysis offers a novel theoretical perspective on how information is embedded through self-attention, thereby improving the interpretability of Transformer models.