Generating Translation
Translation generation, encompassing machine translation (MT) and simultaneous machine translation (SiMT), aims to automatically convert text or speech from one language to another with high accuracy and fluency. Current research focuses on improving the faithfulness of large language model (LLM)-based translation, for example by mitigating unfaithful renderings of the source context and by incorporating domain-specific terminology, often through techniques such as knowledge distillation and attention re-weighting. These advances are crucial for cross-lingual communication in diverse settings, from scientific literature to multimedia content localization, and for improving the efficiency of tasks such as automatic dubbing.
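To make the attention-based idea concrete, below is a minimal, illustrative sketch (not the method of any paper listed here) of re-weighting attention toward source-context positions during decoding, so the model attends relatively more to the source sentence than to its own partial output. The function names, the `source_boost` factor, and the toy scores are assumptions for illustration only.

```python
# Illustrative sketch: boost attention logits on source-context positions
# before the softmax, shifting attention mass toward the source sentence.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reweighted_attention(scores, source_mask, source_boost=1.5):
    """Amplify attention paid to source tokens.

    scores:       (num_queries, num_keys) raw attention logits
    source_mask:  (num_keys,) 1.0 where the key position belongs to the source
    source_boost: factor > 1 increasing attention mass on source positions
    """
    boosted = scores + np.log(source_boost) * source_mask
    return softmax(boosted)

# Toy example: one decoding step attending over 4 source tokens
# followed by 2 target-prefix tokens.
scores = np.array([[0.2, 1.1, 0.7, 0.3, 1.5, 1.4]])
source_mask = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])

plain = softmax(scores)
boosted = reweighted_attention(scores, source_mask)
print("source attention mass (plain):  ", plain[0, :4].sum().round(3))
print("source attention mass (boosted):", boosted[0, :4].sum().round(3))
```

Running the sketch shows the share of attention on the source tokens rising once the boost is applied, which is the intuition behind steering LLM decoders away from unfaithful, prefix-driven continuations.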
Papers
Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model
Hongbin Zhang, Kehai Chen, Xuefeng Bai, Yang Xiang, Min Zhang
Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models
Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng