Machine Translation
Machine translation (MT) aims to automatically translate text between languages, with current research heavily focused on leveraging large language models (LLMs) and exploring architectures such as encoder-decoder and decoder-only models. Key areas of investigation include improving translation quality, particularly for low-resource languages and specialized domains such as medicine; mitigating biases (e.g., gender bias); and strengthening evaluation methods beyond simple correlation with human judgments. These advances have significant implications for cross-cultural communication, information access, and the development of more equitable and effective multilingual technologies.
Papers
FAME-MT Dataset: Formality Awareness Made Easy for Machine Translation Purposes
Dawid Wiśniewski, Zofia Rostek, Artur Nowakowski
Chasing COMET: Leveraging Minimum Bayes Risk Decoding for Self-Improving Machine Translation
Kamil Guttmann, Mikołaj Pokrywka, Adrian Charkiewicz, Artur Nowakowski
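Minimum Bayes Risk (MBR) decoding, which the paper above leverages, generates several candidate translations and selects the one with the highest expected utility when each of the other candidates is treated as a pseudo-reference. The sketch below is a generic illustration, not the paper's implementation: it uses a toy token-overlap F1 as the utility function, where the paper's title suggests a learned metric such as COMET would be used instead.

```python
def utility(hyp: str, ref: str) -> float:
    # Toy utility: token-set F1 overlap (a stand-in for COMET/BLEU).
    h, r = set(hyp.split()), set(ref.split())
    overlap = len(h & r)
    return 2 * overlap / (len(h) + len(r)) if h and r else 0.0

def mbr_decode(candidates: list[str]) -> str:
    # Score each candidate by its average utility against all other
    # candidates (pseudo-references); return the highest scorer.
    def expected_utility(c: str) -> float:
        others = [r for r in candidates if r is not c]
        return sum(utility(c, r) for r in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

# Hypothetical sampled translations of the same source sentence.
samples = [
    "the cat sat on the mat",
    "a cat sat on the mat",
    "the cat is sitting on a mat",
    "dogs run in the park",
]
print(mbr_decode(samples))
```

The outlier candidate scores poorly against every pseudo-reference, so MBR favors a "consensus" translation; in self-improvement setups, such selected outputs can then serve as training targets.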
Beyond MLE: Investigating SEARNN for Low-Resourced Neural Machine Translation
Chris Emezue
(Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts
Minghao Wu, Yulin Yuan, Gholamreza Haffari, Longyue Wang
LLM-Assisted Rule Based Machine Translation for Low/No-Resource Languages
Jared Coleman, Bhaskar Krishnamachari, Khalil Iskarous, Ruben Rosales
Enhancing Gender-Inclusive Machine Translation with Neomorphemes and Large Language Models
Andrea Piergentili, Beatrice Savoldi, Matteo Negri, Luisa Bentivogli
Relay Decoding: Concatenating Large Language Models for Machine Translation
Chengpeng Fu, Xiaocheng Feng, Yichong Huang, Wenshuai Huo, Baohang Li, Hui Wang, Bin Qin, Ting Liu
Sentiment Analysis Across Languages: Evaluation Before and After Machine Translation to English
Aekansh Kathunia, Mohammad Kaif, Nalin Arora, N Narotam