Paper ID: 2405.09061
Improving Transformers using Faithful Positional Encoding
Tsuyoshi Idé, Jokin Labaien, Pin-Yu Chen
We propose a new positional encoding method for the Transformer neural network architecture. Unlike the standard sinusoidal positional encoding, our approach rests on a solid mathematical foundation and is guaranteed not to lose information about the positional order of the input sequence. We show that the new encoding approach systematically improves prediction performance on the time-series classification task.
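For reference, below is a minimal NumPy sketch of the standard sinusoidal positional encoding (Vaswani et al., 2017) that the abstract contrasts against; the paper's proposed faithful encoding is not reproduced here, and the function name and parameters are illustrative only.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding (Vaswani et al., 2017).

    PE[t, 2i]   = sin(t / 10000^(2i / d_model))
    PE[t, 2i+1] = cos(t / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]       # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]      # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

# Example: encodings for a length-50 sequence with model dimension 64
pe = sinusoidal_positional_encoding(50, 64)
print(pe.shape)  # (50, 64)
```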
Submitted: May 15, 2024