Machine Translation
Machine translation (MT) aims to automatically translate text between languages, and current research focuses heavily on leveraging large language models (LLMs) and on comparing architectures such as encoder-decoder and decoder-only models. Key areas of investigation include improving translation quality (particularly for low-resource languages and specialized domains such as medicine), mitigating biases such as gender bias, and developing evaluation methods that go beyond simple correlation with human judgments. These advances have significant implications for cross-cultural communication, information access, and the development of more equitable and effective multilingual technologies.
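As a rough illustration of the evaluation point above, the sketch below scores candidate translations with the chrF metric and checks how closely the metric tracks human ratings via Spearman correlation. This is a minimal sketch, not taken from any of the listed papers: it assumes the sacrebleu and SciPy packages, and the sentences and human scores are invented for demonstration.

```python
# Illustrative sketch: compare an automatic MT metric (chrF) against
# hypothetical human adequacy ratings using Spearman correlation.
# Assumes `sacrebleu` and `scipy` are installed; all data below is made up.
from sacrebleu.metrics import CHRF
from scipy.stats import spearmanr

references = [
    "The patient should take the medication twice a day.",
    "The river floods every spring.",
    "Please keep this door closed at all times.",
]
hypotheses = [
    "The patient must take the medicine two times per day.",
    "The river is flooding each spring.",
    "Keep closed the door always please.",
]
# Hypothetical human adequacy ratings (0-100) for the same hypotheses.
human_scores = [92.0, 80.0, 55.0]

# Segment-level chrF for each hypothesis against its reference.
chrf = CHRF()
metric_scores = [
    chrf.sentence_score(hyp, [ref]).score
    for hyp, ref in zip(hypotheses, references)
]

# How well does the metric's ranking agree with the human ranking?
rho, p_value = spearmanr(metric_scores, human_scores)
print(f"chrF scores: {[round(s, 1) for s in metric_scores]}")
print(f"Spearman rho vs. human judgments: {rho:.2f} (p={p_value:.2f})")
```

In practice, segment-level correlations like this are computed over thousands of judged segments, and the papers in this area argue that correlation alone is an incomplete picture of metric quality.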
Papers
Towards Tailored Recovery of Lexical Diversity in Literary Machine Translation
Esther Ploeger, Huiyuan Lai, Rik van Noord, Antonio Toral
InkubaLM: A small language model for low-resource African languages
Atnafu Lambebo Tonja, Bonaventure F. P. Dossou, Jessica Ojo, Jenalea Rajab, Fadel Thior, Eric Peter Wairagala, Aremu Anuoluwapo, Pelonomi Moiloa, Jade Abbott, Vukosi Marivate, Benjamin Rosman
FLEURS-ASL: Including American Sign Language in Massively Multilingual Multitask Evaluation
Garrett Tanzer
Cultural Adaptation of Menus: A Fine-Grained Approach
Zhonghe Zhang, Xiaoyu He, Vivek Iyer, Alexandra Birch
Generative-Adversarial Networks for Low-Resource Language Data Augmentation in Machine Translation
Linda Zeng