Encoder-Decoder Models
Encoder-decoder models are a class of neural networks designed for sequence-to-sequence tasks: mapping an input sequence (e.g., an image, audio, or text) to an output sequence (e.g., a caption, translation, or code). Current research emphasizes improving efficiency and robustness, exploring architectures such as Transformers and LSTMs, and incorporating techniques such as contrastive learning, adversarial training, and direct preference optimization to enhance performance across diverse applications. These models have proven highly impactful, enabling advances in machine translation, speech recognition, image captioning, and even biological sequence analysis.
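The core idea above can be sketched in a few lines: an encoder compresses a variable-length input sequence into a fixed-size context vector, and a decoder unrolls that context into an output sequence. The toy below is a minimal illustrative sketch using vanilla RNN steps in NumPy; all dimensions, function names, and the simplification of decoding without input feeding are assumptions made for brevity, not the design of any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, W_x, W_h, b):
    """One vanilla-RNN step: h' = tanh(W_x x + W_h h + b)."""
    return np.tanh(W_x @ x + W_h @ h + b)

def encode(inputs, W_x, W_h, b):
    """Run the encoder over the input sequence; return the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x in inputs:
        h = rnn_step(x, h, W_x, W_h, b)
    return h  # fixed-size context vector, regardless of input length

def decode(context, steps, W_h, W_out, b, b_out):
    """Unroll the decoder from the context vector for a fixed number of steps."""
    h, outputs = context, []
    for _ in range(steps):
        h = np.tanh(W_h @ h + b)            # no input feeding, kept short on purpose
        outputs.append(W_out @ h + b_out)   # project hidden state to output space
    return outputs

# Illustrative sizes (assumptions): input dim 4, hidden dim 8, output dim 3.
d_in, d_h, d_out = 4, 8, 3
enc_Wx = rng.normal(size=(d_h, d_in))
enc_Wh = rng.normal(size=(d_h, d_h))
enc_b = np.zeros(d_h)
dec_Wh = rng.normal(size=(d_h, d_h))
dec_Wout = rng.normal(size=(d_out, d_h))
dec_b, dec_bout = np.zeros(d_h), np.zeros(d_out)

inputs = [rng.normal(size=d_in) for _ in range(5)]  # input sequence of length 5
context = encode(inputs, enc_Wx, enc_Wh, enc_b)
outputs = decode(context, steps=2, W_h=dec_Wh, W_out=dec_Wout, b=dec_b, b_out=dec_bout)
print(len(context), len(outputs))  # 8 2
```

Note how the input length (5) and output length (2) are independent; the fixed-size context is the bottleneck that attention mechanisms in Transformer-style encoder-decoders were later designed to relax.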