Seq2seq Generation
Seq2seq generation aims to produce output sequences (e.g., translations, summaries) from input sequences, primarily using transformer-based architectures. Current research focuses on improving efficiency and robustness: slow autoregressive decoding is addressed through methods such as non-autoregressive decoding and speculative execution, while biases and faithfulness issues in generated text are mitigated via adversarial training techniques. These advances matter because they improve the speed, accuracy, and reliability of natural language processing applications ranging from machine translation to conversational AI.
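To make the standard pipeline concrete, the snippet below is a minimal sketch of seq2seq generation, assuming the Hugging Face transformers library and a T5-style checkpoint; the model name, prompt, and generation settings are illustrative placeholders rather than the setup of any particular paper.

```python
# Minimal sketch of seq2seq generation with a pretrained encoder-decoder transformer.
# Assumes the Hugging Face `transformers` library; the checkpoint and prompt are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # hypothetical choice; any seq2seq checkpoint would work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Encode the input sequence (T5-style models use task prefixes such as "summarize:").
inputs = tokenizer(
    "summarize: The quick brown fox jumped over the lazy dog near the river bank.",
    return_tensors="pt",
)

# Standard autoregressive decoding: output tokens are produced one step at a time.
# This sequential loop is the latency bottleneck that non-autoregressive and
# speculative decoding methods aim to reduce.
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The `generate` call above runs conventional left-to-right decoding; the efficiency-oriented methods discussed here keep this interface but replace the step-by-step loop with parallel or draft-and-verify generation.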