Literal Translation

Literal translation, the direct word-for-word conversion between languages, is a core challenge in machine translation (MT) research: word-for-word renderings preserve source fidelity but break down on idioms and divergent syntax, so systems must balance literalness against fluency. Current research focuses on large language models (LLMs) and neural machine translation (NMT) models, often employing techniques such as minimum Bayes risk (MBR) decoding, zero-shot ensembling, and post-editing with LLMs to refine translations and handle idiomatic expressions (e.g., rendering "kick the bucket" as "die" rather than translating each word). These advances are significant because they improve the quality and fluency of translations, particularly for low-resource languages and specialized domains, with impact on fields ranging from cross-cultural communication to multilingual information access.
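
As a rough illustration of one of these techniques, the sketch below shows MBR decoding over a pool of sampled candidate translations: instead of taking the single most probable hypothesis, it selects the candidate with the highest expected utility against the rest of the pool. The use of sacrebleu's sentence-level BLEU as the utility function and the toy candidate pool are assumptions for the example; in practice, neural metrics such as COMET are often used as the utility.

```python
from sacrebleu import sentence_bleu

def mbr_decode(candidates):
    """Minimum Bayes risk decoding: return the candidate with the
    highest expected utility against the rest of the pool.
    Sentence-level BLEU stands in for the utility function here."""
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        # Estimate expected utility by averaging hyp's similarity
        # to every other sampled candidate translation.
        refs = [c for c in candidates if c is not hyp]
        if not refs:
            return hyp  # degenerate one-candidate pool
        score = sum(sentence_bleu(hyp, [r]).score for r in refs) / len(refs)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Toy pool of sampled translations of an idiom; MBR favors the
# consensus, non-literal renderings over the overly literal outlier.
pool = ["He kicked the bucket.", "He passed away.", "He passed on."]
print(mbr_decode(pool))
```

Because the over-literal hypothesis shares little surface overlap with the rest of the pool, its expected utility is low and MBR selects one of the consensus renderings instead.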

Papers