Non-Autoregressive Neural Machine Translation
Non-autoregressive neural machine translation (NAT) aims to speed up translation substantially by generating all target tokens in parallel in a single decoding pass, rather than one token at a time as autoregressive models do. Current research focuses on closing the quality gap with autoregressive methods through techniques such as reinforcement learning, latent variable modeling, and knowledge distillation, typically built on transformer architectures. These advances matter for applications that require fast translation, such as real-time communication and large-scale data processing, and they also contribute to a deeper understanding of efficient sequence generation in neural networks.
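The core contrast above can be sketched in a few lines. This is a minimal, hypothetical illustration: the "models" (`step_fn`, `parallel_fn`) are toy stand-ins, not real NMT networks, and the function names are assumptions made here for clarity. The point is the call pattern: autoregressive decoding makes one sequential model call per output token, while non-autoregressive decoding makes a single call that fills every position at once.

```python
def autoregressive_decode(src, step_fn, max_len):
    """One token per pass: O(max_len) sequential model calls,
    each conditioned on the previously generated target prefix."""
    tgt = []
    for _ in range(max_len):
        tok = step_fn(src, tgt)  # sees the growing target prefix
        if tok == "<eos>":
            break
        tgt.append(tok)
    return tgt


def non_autoregressive_decode(src, parallel_fn, tgt_len):
    """All positions in a single pass: exactly one model call,
    with target positions predicted independently of each other."""
    return parallel_fn(src, tgt_len)


# Toy stand-in "translation": upper-case each source token.
def toy_step(src, prefix):
    i = len(prefix)
    return src[i].upper() if i < len(src) else "<eos>"


def toy_parallel(src, tgt_len):
    return [src[i].upper() for i in range(min(tgt_len, len(src)))]


src = ["guten", "morgen"]
print(autoregressive_decode(src, toy_step, max_len=8))          # ['GUTEN', 'MORGEN']
print(non_autoregressive_decode(src, toy_parallel, tgt_len=2))  # ['GUTEN', 'MORGEN']
```

The independence of target positions in the one-shot call is exactly what makes NAT fast, and also what causes its quality gap (no position can condition on its neighbors), which the techniques listed above try to mitigate.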