Neural Machine Translation
Neural Machine Translation (NMT) uses deep learning models to translate text between languages automatically, with the primary goals of improving translation accuracy and fluency. Current research emphasizes making models more robust, for example by using contrastive learning to reduce repetition; leveraging translation memories and large language models (LLMs) for greater accuracy and efficiency; and mitigating data scarcity in low-resource languages through data augmentation and transfer learning. These advances have significant implications for cross-lingual communication, in fields ranging from international commerce to multilingual education and accessibility.
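To make the repetition problem concrete, here is a minimal, self-contained sketch of a repetition penalty applied during greedy decoding. This is a simple inference-time heuristic, not the training-time contrastive-learning approaches studied in the papers below; the function names, the toy 4-token vocabulary, and the penalty value are all illustrative assumptions.

```python
# Toy illustration: repetition-penalised greedy decoding.
# NMT decoders sometimes emit the same token over and over; one simple
# inference-time remedy (distinct from contrastive learning, but aimed
# at the same symptom) is to down-weight the logits of tokens that have
# already been generated.

def penalize_repeats(logits, generated, penalty=1.5):
    """Shrink the logits of already-generated tokens: divide positive
    logits (or multiply negative ones) by `penalty`."""
    adjusted = list(logits)
    for tok in set(generated):
        if adjusted[tok] > 0:
            adjusted[tok] /= penalty
        else:
            adjusted[tok] *= penalty
    return adjusted

def greedy_decode(step_logits, penalty=1.5):
    """Pick the argmax token at each step after applying the penalty."""
    generated = []
    for logits in step_logits:
        adjusted = penalize_repeats(logits, generated, penalty)
        generated.append(max(range(len(adjusted)), key=adjusted.__getitem__))
    return generated

# Hypothetical 4-token vocabulary. Without the penalty the decoder
# would pick token 2 at both steps; with it, token 2's second-step
# logit drops to 0.9 / 1.5 = 0.6, so token 3 (0.7) wins instead.
steps = [
    [0.1, 0.2, 0.9, 0.3],
    [0.1, 0.2, 0.9, 0.7],
]
print(greedy_decode(steps))  # [2, 3]
```

The same idea appears in practice as the `repetition_penalty` sampling parameter exposed by common generation libraries; training-time methods such as the contrastive objectives referenced above instead teach the model to assign low probability to degenerate repetitions in the first place.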
Papers
Improving Long Context Document-Level Machine Translation
Christian Herold, Hermann Ney
Improving Language Model Integration for Neural Machine Translation
Christian Herold, Yingbo Gao, Mohammad Zeineldeen, Hermann Ney
T3L: Translate-and-Test Transfer Learning for Cross-Lingual Text Classification
Inigo Jauregi Unanue, Gholamreza Haffari, Massimo Piccardi
CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine Translation
Md Mahfuz Ibn Alam, Sina Ahmadi, Antonios Anastasopoulos
On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss
Yihong Liu, Alexandra Chronopoulou, Hinrich Schütze, Alexander Fraser
Songs Across Borders: Singable and Controllable Neural Lyric Translation
Longshen Ou, Xichu Ma, Min-Yen Kan, Ye Wang
Do GPTs Produce Less Literal Translations?
Vikas Raunak, Arul Menezes, Matt Post, Hany Hassan Awadalla