Multilingual Machine Translation
Multilingual machine translation (MMT) aims to translate between many language pairs with a single model, broadening cross-lingual communication and access to information. Current research focuses on leveraging large language models (LLMs), comparing encoder-decoder and decoder-only architectures, and refining training strategies such as supervised fine-tuning and data augmentation, particularly to raise translation quality for low-resource languages. These advances matter because they address the limitations of traditional NMT approaches, which typically require a separate model per language pair, and they improve translation accuracy and efficiency across diverse linguistic contexts, with potential impact on global communication, information access, and cross-cultural understanding.
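A common way a single model serves many translation directions is to tag each input with source- and target-language codes, a convention used by many-to-many systems such as M2M-100 and mBART. The sketch below illustrates the idea in plain Python; the helper names (`format_mt_input`, `build_training_pairs`) and the exact tag format are illustrative assumptions, not any specific system's API:

```python
# Sketch of language-tagged inputs for many-to-many MT.
# The tag format "__xx__" and the helpers below are hypothetical,
# chosen to illustrate the convention, not to match a real toolkit.

def format_mt_input(text: str, src_lang: str, tgt_lang: str) -> str:
    """Prepend language tags so one shared model can serve all directions."""
    return f"__{src_lang}__ __{tgt_lang}__ {text}"

def build_training_pairs(parallel_data: dict) -> list:
    """Turn {(src, tgt): [(src_sent, tgt_sent), ...]} into tagged examples,
    emitting both directions so the model sees X->Y and Y->X."""
    examples = []
    for (src, tgt), pairs in parallel_data.items():
        for s_sent, t_sent in pairs:
            examples.append((format_mt_input(s_sent, src, tgt), t_sent))
            examples.append((format_mt_input(t_sent, tgt, src), s_sent))
    return examples

data = {("en", "pt"): [("Hello", "Olá")]}
for model_input, reference in build_training_pairs(data):
    print(model_input, "->", reference)
# __en__ __pt__ Hello -> Olá
# __pt__ __en__ Olá -> Hello
```

Because every direction shares one set of parameters, high-resource pairs can transfer knowledge to low-resource ones, which is one motivation behind the fine-tuning and data-augmentation strategies mentioned above.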
Papers
Towards Cross-Cultural Machine Translation with Retrieval-Augmented Generation from Multilingual Knowledge Graphs
Simone Conia, Daniel Lee, Min Li, Umar Farooq Minhas, Saloni Potdar, Yunyao Li
NLIP_Lab-IITH Multilingual MT System for WAT24 MT Shared Task
Maharaj Brahma, Pramit Sahoo, Maunendra Sankar Desarkar
IKUN for WMT24 General MT Task: LLMs Are here for Multilingual Machine Translation
Baohao Liao, Christian Herold, Shahram Khadivi, Christof Monz
Expanding FLORES+ Benchmark for more Low-Resource Settings: Portuguese-Emakhuwa Machine Translation Evaluation
Felermino D. M. Antonio Ali, Henrique Lopes Cardoso, Rui Sousa-Silva