Encoder-Decoder Transformers
Encoder-decoder transformers are neural network architectures for sequence-to-sequence tasks, mapping input sequences (e.g., images, text, time series) to output sequences. The encoder builds a contextual representation of the input via self-attention, and the decoder generates the output step by step, attending to that representation through cross-attention. Current research focuses on improving efficiency and robustness, particularly through novel attention mechanisms (e.g., channel modulation self-attention) and architectural modifications such as incorporating local features or handling unbounded input lengths. These models have proven highly effective across diverse applications, including image deblurring, machine translation, and time series prediction, demonstrating their versatility and potential to advance both scientific research and practical technologies.
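The core mechanism linking encoder and decoder can be sketched with plain scaled dot-product attention. The toy vectors and dimensions below are illustrative assumptions, not from any particular paper; a real model would add learned projections, multiple heads, feed-forward layers, and positional encodings.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention.

    queries: one d-dim vector per output position
    keys, values: one d-dim vector per input position
    Returns one d-dim context vector per query.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        ctx = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(d)]
        out.append(ctx)
    return out

# Toy example: encoder input of 3 positions, decoder of 2 positions, d = 4.
enc = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.0]]
dec = [[1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 1.0]]

enc_out = attention(enc, enc, enc)      # encoder self-attention
ctx = attention(dec, enc_out, enc_out)  # decoder cross-attention over encoder output
print(len(ctx), len(ctx[0]))            # one context vector per decoder position
```

The cross-attention call is what makes the architecture sequence-to-sequence: the decoder may have a different length than the input, since each decoder position simply queries the full encoder output.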