Multilingual Sequence-to-Sequence
Multilingual sequence-to-sequence models aim to build a single system that can translate and process text across many languages, overcoming the limitations faced by resource-scarce languages. Current research focuses on improving efficiency through techniques such as low-rank adaptation and modular architectures that disentangle language-specific information, as well as on addressing challenges such as data scarcity and domain mismatch during fine-tuning. These advances are significant for extending the reach of natural language processing to a wider range of languages and applications, particularly in machine translation and cross-lingual understanding tasks.
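The low-rank adaptation idea mentioned above can be illustrated with a minimal sketch: a pretrained weight matrix is kept frozen, and only a small low-rank correction is trained on top of it. The names, shapes, and scaling below are illustrative assumptions, not a specific model's implementation.

```python
import numpy as np

# Minimal low-rank adaptation (LoRA-style) sketch, assuming a frozen
# weight matrix W and a trainable low-rank update B @ A.
# Shapes: W is (d_out, d_in); A is (r, d_in); B is (d_out, r).
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 8, 2, 16.0

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weights
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def adapted_forward(x):
    # Frozen layer output plus the scaled low-rank correction.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer matches the frozen layer,
# so fine-tuning starts from the pretrained behavior.
assert np.allclose(adapted_forward(x), W @ x)
```

Because only `A` and `B` are updated during fine-tuning, the number of trainable parameters drops from `d_out * d_in` to `rank * (d_out + d_in)`, which is what makes this family of techniques attractive for adapting large multilingual models to new languages or domains.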
Papers
September 25, 2024
June 2, 2023
May 23, 2023
December 19, 2022
December 15, 2022
September 20, 2022
April 16, 2022
March 16, 2022