Paper ID: 2307.07843
Transformers are Universal Predictors
Sourya Basu, Moulik Choraria, Lav R. Varshney
We find limits to the Transformer architecture for language modeling and show that it has a universal prediction property in an information-theoretic sense. We further analyze performance in non-asymptotic data regimes to understand the role of various components of the Transformer architecture, especially in the context of data-efficient training. We validate our theoretical analysis with experiments on both synthetic and real datasets.
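(For context, a sketch of the standard information-theoretic notion of universal prediction under log-loss — an assumption about the intended sense, not the paper's exact statement: a predictor $q$ is universal for a class of sources $\mathcal{P}$ if its per-symbol redundancy vanishes,
\[
\lim_{n\to\infty} \frac{1}{n}\left( \mathbb{E}_{p}\!\left[-\log q(X^n)\right] - \mathbb{E}_{p}\!\left[-\log p(X^n)\right] \right) = 0 \quad \text{for all } p \in \mathcal{P},
\]
i.e., $q$ asymptotically matches the per-symbol log-loss of the true source in the class.)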
Submitted: Jul 15, 2023