Machine Translation
Machine translation (MT) aims to automatically translate text between languages. Current research focuses heavily on leveraging large language models (LLMs) and on comparing architectures such as encoder-decoder and decoder-only models. Key areas of investigation include improving translation quality (particularly for low-resource languages and specialized domains such as medicine), mitigating biases such as gender bias, and developing evaluation methods that go beyond simple correlation with human judgments. These advances have significant implications for cross-cultural communication, information access, and the development of more equitable and effective multilingual technologies.
Papers
Exploring Segmentation Approaches for Neural Machine Translation of Code-Switched Egyptian Arabic-English Text
Marwa Gaser, Manuel Mager, Injy Hamed, Nizar Habash, Slim Abdennadher, Ngoc Thang Vu
Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting
Zifan Jiang, Amit Moryossef, Mathias Müller, Sarah Ebling
On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation
Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao
Facilitating Global Team Meetings Between Language-Based Subgroups: When and How Can Machine Translation Help?
Yongle Zhang, Dennis Asamoah Owusu, Marine Carpuat, Ge Gao