Neural Machine Translation
Neural Machine Translation (NMT) uses deep learning models to automatically translate text between languages, with the field primarily focused on improving translation accuracy and fluency. Current research emphasizes enhancing model robustness, for example through contrastive learning to reduce repetitive output, leveraging translation memories and large language models (LLMs) for improved accuracy and efficiency, and mitigating data scarcity in low-resource languages via data augmentation and transfer learning. These advances have significant implications for cross-lingual communication, impacting fields ranging from international commerce to multilingual education and accessibility. A minimal illustrative sketch of NMT inference with a pretrained transformer model appears below.
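As a quick, hedged illustration (not drawn from any of the papers listed here), the sketch below translates an English sentence with a pretrained transformer NMT checkpoint via the Hugging Face transformers pipeline; the checkpoint name "Helsinki-NLP/opus-mt-en-de" is simply an assumed example model and can be swapped for any other translation model.

    # Minimal sketch: English-to-German translation with a pretrained NMT model.
    # Assumes the `transformers` library is installed and the example checkpoint
    # "Helsinki-NLP/opus-mt-en-de" is available for download.
    from transformers import pipeline

    translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")

    result = translator("Neural machine translation maps sentences between languages.")
    print(result[0]["translation_text"])  # prints the German translation

The same pipeline pattern applies to other language pairs by choosing a different checkpoint; this is only a generic usage example, not the method of any specific paper above.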
Papers
DICTDIS: Dictionary Constrained Disambiguation for Improved NMT
Ayush Maheshwari, Preethi Jyothi, Ganesh Ramakrishnan
Low-resource Neural Machine Translation with Cross-modal Alignment
Zhe Yang, Qingkai Fang, Yang Feng
Categorizing Semantic Representations for Neural Machine Translation
Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
Exploring Segmentation Approaches for Neural Machine Translation of Code-Switched Egyptian Arabic-English Text
Marwa Gaser, Manuel Mager, Injy Hamed, Nizar Habash, Slim Abdennadher, Ngoc Thang Vu
Viterbi Decoding of Directed Acyclic Transformer for Non-Autoregressive Machine Translation
Chenze Shao, Zhengrui Ma, Yang Feng
Improving Robustness of Retrieval Augmented Translation via Shuffling of Suggestions
Cuong Hoang, Devendra Sachan, Prashant Mathur, Brian Thompson, Marcello Federico
QUAK: A Synthetic Quality Estimation Dataset for Korean-English Neural Machine Translation
Sugyeong Eo, Chanjun Park, Hyeonseok Moon, Jaehyung Seo, Gyeongmin Kim, Jungseob Lee, Heuiseok Lim
Blur the Linguistic Boundary: Interpreting Chinese Buddhist Sutra in English via Neural Machine Translation
Denghao Li, Yuqiao Zeng, Jianzong Wang, Lingwei Kong, Zhangcheng Huang, Ning Cheng, Xiaoyang Qu, Jing Xiao