Multilingual Neural Machine Translation
Multilingual neural machine translation (MNMT) aims to build a single model capable of translating between many language pairs, improving efficiency and resource allocation compared to training separate bilingual models. Current research focuses on optimizing model architectures (such as Mixture-of-Experts) and training strategies to address challenges including parameter inefficiency, negative interference between languages during fine-tuning, and the "off-target" problem (output generated in the wrong language). These advances matter because they enable more efficient and effective translation for low-resource languages and improve the overall quality and robustness of machine translation systems.
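One widely used mechanism for letting a single model serve many language pairs is to prepend a target-language token to each source sentence, as in Johnson et al. (2017). The sketch below illustrates only this input-formatting convention; the function name and token format are illustrative, not taken from any particular library.

```python
# Minimal sketch of the target-language-token convention used by many
# MNMT systems: one shared model is trained on all pairs, and the desired
# output language is signalled by a token prepended to the source text.
# The "<2xx>" token format and function name are illustrative assumptions.

def tag_source(sentence: str, tgt_lang: str) -> str:
    """Prepend a target-language token so one model can serve many pairs."""
    return f"<2{tgt_lang}> {sentence}"

# The same English sentence, routed to two different target languages:
print(tag_source("Good morning", "yo"))  # -> "<2yo> Good morning"
print(tag_source("Good morning", "de"))  # -> "<2de> Good morning"
```

Because the routing signal lives in the input rather than the architecture, all parameters are shared across languages, which is also where the off-target problem arises: the model may ignore the tag and decode into a high-resource language instead.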
Papers
Exploiting Multilingualism in Low-resource Neural Machine Translation via Adversarial Learning
Amit Kumar, Ajay Pratap, Anil Kumar Singh
ɛ KÚ <MASK>: Integrating Yorùbá cultural greetings into machine translation
Idris Akinade, Jesujoba Alabi, David Adelani, Clement Odoje, Dietrich Klakow