Machine Translation
Machine translation (MT) aims to translate text automatically between languages, with current research heavily focused on leveraging large language models (LLMs) and on comparing architectures such as encoder-decoder and decoder-only models. Key areas of investigation include improving translation quality, particularly for low-resource languages and specialized domains such as medicine; mitigating biases such as gender bias; and developing evaluation methods that go beyond simple correlation with human judgments. These advances have significant implications for cross-cultural communication, information access, and the development of more equitable and effective multilingual technologies.
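As context for the evaluation theme above, the standard way MT metrics are meta-evaluated is by measuring how strongly their scores correlate with human judgments at the segment or system level. The sketch below illustrates that correlation step only; the score lists are illustrative placeholders, not data from any of the papers listed here.

```python
# Minimal sketch of correlation-based MT meta-evaluation:
# how well an automatic metric's scores track human ratings.
# The numbers below are hypothetical placeholders for illustration.
from scipy.stats import pearsonr, kendalltau

# Hypothetical per-segment scores for the same set of translations.
metric_scores = [0.72, 0.55, 0.91, 0.40, 0.68, 0.83]
human_ratings = [0.70, 0.60, 0.88, 0.35, 0.75, 0.80]

# Pearson measures linear agreement; Kendall's tau measures ranking agreement.
pearson_r, _ = pearsonr(metric_scores, human_ratings)
tau, _ = kendalltau(metric_scores, human_ratings)

print(f"Pearson r:     {pearson_r:.3f}")
print(f"Kendall's tau: {tau:.3f}")
```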
Papers
An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation
Cheng Yang, Guoping Huang, Mo Yu, Zhirui Zhang, Siheng Li, Mingming Yang, Shuming Shi, Yujiu Yang, Lemao Liu
Teaching LLMs at Charles University: Assignments and Activities
Jindřich Helcl, Zdeněk Kasner, Ondřej Dušek, Tomasz Limisiewicz, Dominik Macháček, Tomáš Musil, Jindřich Libovický
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models
Kenza Benkirane, Laura Gongas, Shahar Pelles, Naomi Fuchs, Joshua Darmon, Pontus Stenetorp, David Ifeoluwa Adelani, Eduardo Sánchez
Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words
Yijie Chen, Yijin Liu, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou