Machine Translation
Machine translation (MT) aims to automatically translate text between languages, and current research is heavily focused on leveraging large language models (LLMs) and on exploring architectures such as encoder-decoder and decoder-only models. Key areas of investigation include improving translation quality, particularly for low-resource languages and specialized domains such as medicine; mitigating biases (e.g., gender bias); and enhancing evaluation methods so that they go beyond simple correlation with human judgments, for example through fine-grained error analysis. These advances have significant implications for cross-cultural communication, information access, and the development of more equitable and effective multilingual technologies.
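For context on "correlation with human judgments": MT metrics are conventionally meta-evaluated by measuring how well their scores track human quality ratings over a set of translated segments, e.g. with Kendall's tau or Pearson's r. A minimal sketch of that baseline computation, using placeholder metric scores and human ratings (illustrative values only, not drawn from any of the papers below):

```python
from scipy.stats import kendalltau, pearsonr

# Placeholder data: automatic metric scores and human quality ratings
# for the same set of translated segments (illustrative values only).
metric_scores = [0.71, 0.42, 0.88, 0.65, 0.30, 0.93]
human_ratings = [80, 55, 90, 72, 40, 95]

# Segment-level agreement between the metric and human judgments:
# the conventional way MT metrics are meta-evaluated.
tau, _ = kendalltau(metric_scores, human_ratings)
r, _ = pearsonr(metric_scores, human_ratings)
print(f"Kendall tau: {tau:.3f}, Pearson r: {r:.3f}")
```

Fine-grained evaluation research asks what such a single correlation number misses, such as which error types a metric rewards or penalizes.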
Papers
SOTASTREAM: A Streaming Approach to Machine Translation Training
Matt Post, Thamme Gowda, Roman Grundkiewicz, Huda Khayrallah, Rohit Jain, Marcin Junczys-Dowmunt
The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation
Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat
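Fine-grained evaluation frameworks such as MQM replace a single holistic score with annotated error spans, each carrying a category and severity, and derive a segment score by weighting those errors. The sketch below illustrates that severity-weighted scoring in general; the weights and categories are illustrative assumptions, not the exact scheme of any paper listed above.

```python
from dataclasses import dataclass

# Illustrative severity weights (assumed; MQM-style schemes define
# their own categories and weights).
SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0}

@dataclass
class ErrorSpan:
    start: int        # character offset of the error in the translation
    end: int
    category: str     # e.g. "accuracy/mistranslation", "fluency/grammar"
    severity: str     # "minor" or "major"

def segment_penalty(errors: list[ErrorSpan]) -> float:
    """Sum severity weights over all annotated error spans.

    Lower is better; a translation with no marked errors scores 0.
    """
    return sum(SEVERITY_WEIGHTS[e.severity] for e in errors)

# Example: one major accuracy error and one minor fluency error.
errors = [
    ErrorSpan(10, 18, "accuracy/mistranslation", "major"),
    ErrorSpan(34, 37, "fluency/grammar", "minor"),
]
print(segment_penalty(errors))  # -> 6.0
```

Because the score is built from labeled spans, this style of evaluation supports error-level analysis (by category, severity, or position) rather than only a single number per translation.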