Neural Machine Translation
Neural Machine Translation (NMT) uses deep learning models to translate text between languages automatically, with the central goals of improving translation accuracy and fluency. Current research emphasizes enhancing model robustness through techniques such as contrastive learning to reduce repetition, leveraging translation memories and large language models (LLMs) for greater accuracy and efficiency, and mitigating data scarcity in low-resource languages via data augmentation and transfer learning. These advances have significant implications for cross-lingual communication, with applications ranging from international commerce to multilingual education and accessibility.
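As a concrete illustration of NMT inference with a pretrained encoder-decoder model (a minimal sketch not drawn from the papers listed below, assuming the Hugging Face transformers library and the Helsinki-NLP/opus-mt-en-de English-to-German checkpoint as an example model):

    # Minimal NMT inference sketch; assumes `pip install transformers sentencepiece`.
    # The model name below is an illustrative choice, not tied to any paper here.
    from transformers import pipeline

    # Load a pretrained English-to-German translation model.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

    # Translate a sentence; the pipeline returns a list of dicts
    # with the generated text under "translation_text".
    result = translator("Neural machine translation maps sentences between languages.")
    print(result[0]["translation_text"])

In practice, the techniques surveyed above (contrastive training objectives, translation-memory retrieval, data augmentation for low-resource pairs) modify how such models are trained or prompted rather than this basic inference loop.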
Papers
Confidence Based Bidirectional Global Context Aware Training Framework for Neural Machine Translation
Chulun Zhou, Fandong Meng, Jie Zhou, Min Zhang, Hongji Wang, Jinsong Su
LCP-dropout: Compression-based Multiple Subword Segmentation for Neural Machine Translation
Keita Nonaka, Kazutaka Yamanouchi, Tomohiro I, Tsuyoshi Okita, Kazutaka Shimada, Hiroshi Sakamoto