Paper ID: 2501.19215 • Published Jan 31, 2025
Strassen Attention: Unlocking Compositional Abilities in Transformers Based on a New Lower Bound Method
Alexander Kozachinskiy, Felipe Urrutia, Hector Jimenez, Tomasz Steifer, Germán Pizarro, Matías Fuentes, Francisco Meza...
Abstract
We propose a novel method to evaluate the theoretical limits of Transformers,
allowing us to prove the first lower bounds against one-layer softmax
Transformers with infinite precision. We establish those bounds for three tasks
that require advanced reasoning. The first task, Match3 (Sanford et al., 2023),
requires looking at all triples of positions. The second and third tasks
address compositionality-based reasoning: one is composition of functions (Peng
et al., 2024) and the other is composition of binary relations. We formally
prove the inability of one-layer softmax Transformers to solve any of these
tasks. In an attempt to overcome these limitations, we introduce Strassen
attention and prove that with this mechanism a one-layer Transformer can in
principle solve all these tasks. We also show that it enjoys sub-cubic
running-time complexity, making it more scalable than similar previously
proposed mechanisms, such as higher-order attention (Sanford et al., 2023). To
complement our theoretical findings, we experimentally studied Strassen
attention and compared it against standard attention (Vaswani et al., 2017), higher-order
attention (Sanford et al., 2023), and triangular attention (Bergen et al., 2021).
Our results help to disentangle all these attention mechanisms, highlighting
their strengths and limitations. In particular, Strassen attention outperforms
standard attention significantly on all the tasks. Altogether, understanding
the theoretical limitations can guide research towards scalable attention
mechanisms that improve the reasoning abilities of Transformers.
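To make the three tasks concrete, the sketch below gives brute-force reference implementations under our reading of the cited works; the exact input encodings used in the paper may differ. The function names (`match3`, `compose_functions`, `compose_relations`), the modulus parameter `M`, and the example values are illustrative choices, not taken from the paper.

```python
from itertools import product

def match3(xs, M):
    """Match3 (our reading of Sanford et al., 2023): for each position i,
    output 1 if there exist positions j, k with x_i + x_j + x_k == 0 (mod M).
    The condition ranges over all triples of positions, which is what makes
    it hard for standard pairwise attention."""
    n = len(xs)
    return [
        int(any((xs[i] + xs[j] + xs[k]) % M == 0
                for j, k in product(range(n), repeat=2)))
        for i in range(n)
    ]

def compose_functions(f, g, x):
    """Composition of functions (our reading of Peng et al., 2024): given
    tabulated functions f and g over a finite domain, return g(f(x)).
    In the sequence formulation, both tables appear in the prompt and the
    model must chain two lookups."""
    return g[f[x]]

def compose_relations(R, S):
    """Composition of binary relations: (a, c) is in the composition iff
    there is some b with (a, b) in R and (b, c) in S -- again a triple-wise
    condition, now over (a, b, c)."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

# Tiny usage examples (illustrative values)
print(match3([1, 4, 2, 5], M=7))   # 1 + 4 + 2 = 7, so every position finds a triple
print(compose_functions({0: 2, 1: 0, 2: 1}, {0: 1, 1: 2, 2: 0}, 0))  # g(f(0)) = 0
print(compose_relations({(0, 1)}, {(1, 2)}))                          # {(0, 2)}
```

All three checks share the same obstacle: a naive evaluation inspects roughly n^3 triples, which is the regime where the paper contrasts cubic-time higher-order attention with the sub-cubic Strassen attention.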