Token Mixing

Token mixing, a core component of many modern neural network architectures, aims to efficiently capture relationships between different parts of an input (e.g., pixels in an image, tokens in a sequence). Current research focuses on developing computationally efficient token mixers as alternatives to the expensive self-attention mechanism, including mixers based on convolutional layers, wavelets, and Fourier transforms. These efforts are driven by the need for faster, more resource-efficient models, particularly for deployment on mobile devices and in applications involving high-resolution inputs. Improved token-mixing techniques are significantly advancing the performance and scalability of models in computer vision and other machine learning tasks.
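
To make the idea concrete, here is a minimal PyTorch sketch of two attention-free token mixers of the kind mentioned above: a depthwise-convolution mixer (as used in ConvMixer-style blocks) and a parameter-free Fourier mixer in the spirit of FNet. The class names, shapes, and hyperparameters are illustrative assumptions, not any single paper's reference code.

```python
import torch
import torch.nn as nn


class ConvTokenMixer(nn.Module):
    """Depthwise-convolution token mixer: each channel is mixed across
    spatial positions with its own small kernel, costing O(N) in the
    number of tokens rather than self-attention's O(N^2)."""

    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        # groups=dim makes the convolution depthwise (one filter per channel)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map
        return self.dwconv(x)


class FourierTokenMixer(nn.Module):
    """Parameter-free spectral token mixer in the spirit of FNet: a 2D FFT
    over the sequence and hidden dimensions mixes every token with every
    other token in O(N log N); keeping the real part yields a real-valued
    tensor for the next layer."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden) token embeddings
        return torch.fft.fft2(x, dim=(-2, -1)).real


# Illustrative usage (shapes are assumptions):
tokens = torch.randn(2, 196, 384)          # 14x14 patches, 384-dim embeddings
feature_map = torch.randn(2, 64, 56, 56)   # CNN-style feature map
mixed_seq = FourierTokenMixer()(tokens)    # -> (2, 196, 384)
mixed_map = ConvTokenMixer(64)(feature_map)  # -> (2, 64, 56, 56)
```

Both mixers avoid the quadratic token-pair interactions of self-attention, which is what makes them attractive for high-resolution inputs and mobile deployment.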

Papers