Neural Machine Translation
Neural Machine Translation (NMT) uses deep learning models to automatically translate text between languages, with the primary goals of improving translation accuracy and fluency. Current research emphasizes enhancing model robustness (for example, contrastive learning to reduce repetition), leveraging translation memories and large language models (LLMs) for better accuracy and efficiency, and mitigating data scarcity in low-resource languages through data augmentation and transfer learning. These advances have significant implications for cross-lingual communication, from international commerce to multilingual education and accessibility.
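At inference time, most NMT systems share the same core loop: a decoder repeatedly scores candidate next tokens given the source sentence and the partial translation, and a search procedure (greedy or beam) picks among them until an end-of-sequence token is produced. The sketch below illustrates only that decoding loop; the scoring function is a deterministic stub standing in for a trained encoder-decoder network, and all names are illustrative, not from any of the papers listed here.

```python
# Minimal sketch of the greedy decoding loop common to NMT systems.
# `toy_next_token_scores` is a stub replacing a neural decoder step:
# it "translates" by copying source tokens in order, then emitting EOS.

BOS, EOS = "<s>", "</s>"

def toy_next_token_scores(src_tokens, prefix):
    # A real model would return a probability distribution over the
    # target vocabulary; here we score deterministically for illustration.
    generated = len(prefix) - 1  # prefix includes the BOS marker
    if generated < len(src_tokens):
        return {src_tokens[generated]: 1.0, EOS: 0.0}
    return {EOS: 1.0}

def greedy_decode(src_tokens, score_fn, max_len=20):
    # Start from BOS and extend one token at a time, always taking the
    # highest-scoring candidate, until EOS or the length limit.
    prefix = [BOS]
    while len(prefix) < max_len:
        scores = score_fn(src_tokens, prefix)
        best = max(scores, key=scores.get)
        if best == EOS:
            break
        prefix.append(best)
    return prefix[1:]  # drop BOS

print(greedy_decode(["hallo", "welt"], toy_next_token_scores))
# → ['hallo', 'welt']
```

Beam search generalizes this loop by keeping the k highest-scoring prefixes at each step instead of a single one, trading extra computation for better translations.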
Papers
Gender-specific Machine Translation with Large Language Models
Eduardo Sánchez, Pierre Andrews, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussà
Epi-Curriculum: Episodic Curriculum Learning for Low-Resource Domain Adaptation in Neural Machine Translation
Keyu Chen, Di Zhuang, Mingchen Li, J. Morris Chang
Syntax-Aware Complex-Valued Neural Machine Translation
Yang Liu, Yuexian Hou
Improving End-to-End Speech Translation by Imitation-Based Knowledge Distillation with Synthetic Transcripts
Rebekka Hubert, Artem Sokolov, Stefan Riezler
Enhancing Supervised Learning with Contrastive Markings in Neural Machine Translation Training
Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler