Multilingual Machine Translation
Multilingual machine translation (MMT) aims to translate between many language pairs using a single model, improving cross-lingual communication and access to information. Current research focuses on leveraging large language models (LLMs), comparing encoder-decoder and decoder-only architectures, and optimizing training strategies such as supervised fine-tuning and data augmentation, with particular attention to low-resource languages. These advances matter because they address a key limitation of traditional NMT, which typically requires a separate model per language pair, and they improve translation accuracy and efficiency across diverse linguistic contexts, with potential impact on global communication, information access, and cross-cultural understanding.
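Many-to-many models such as M2M-100 (the basis of SMaLL-100) steer the translation direction by prepending a target-language token to the input, so one model serves every language pair. A minimal sketch of that interface, with hypothetical lookup tables standing in for a real neural model:

```python
# Toy illustration of the target-language-token interface used by
# many-to-many MMT models. The lexicons below are hypothetical stand-ins
# for a trained network; only the interface shape is the point.

LEXICONS = {
    ("en", "fr"): {"hello": "bonjour", "world": "monde"},
    ("en", "de"): {"hello": "hallo", "world": "welt"},
}

def translate(text: str, src: str, tgt: str) -> str:
    """One entry point for all language pairs: the prepended target
    token (e.g. __fr__) selects the direction, as in M2M-100-style
    models; here a lookup table plays the role of the decoder."""
    tagged = [f"__{tgt}__"] + text.lower().split()  # target-language prefix
    lexicon = LEXICONS[(src, tgt)]
    return " ".join(lexicon.get(tok, tok) for tok in tagged[1:])

print(translate("hello world", "en", "fr"))  # bonjour monde
print(translate("hello world", "en", "de"))  # hallo welt
```

The same call signature handles both directions shown; in a real system only the target token changes, not the model.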
Papers
SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier
The VolcTrans System for WMT22 Multilingual Machine Translation Task
Xian Qian, Kai Hu, Jiaqiang Wang, Yifeng Liu, Xingyuan Pan, Jun Cao, Mingxuan Wang