Document Level

Document-level machine translation (DocMT) aims to improve translation quality by leveraging the context of an entire document rather than translating sentences in isolation. Current research focuses on adapting large language models (LLMs) and transformer architectures for DocMT, exploring techniques such as in-context learning, context-aware prompting, and efficient attention mechanisms that handle long sequences while maintaining coherence and accuracy. This line of work is significant because it addresses weaknesses of sentence-level systems, such as inconsistent pronoun resolution, terminology, and discourse structure across sentences, yielding more natural and nuanced translations that are particularly valuable for literary works and other context-rich texts. Improved DocMT has implications for a range of applications, including cross-lingual communication, information retrieval, and multilingual content creation.
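
As a concrete illustration, the sketch below shows one common way context-aware prompting can be applied to DocMT with an LLM: preceding source sentences and their translations are included in the prompt so the model can keep pronouns and terminology consistent across the document. The client, model name, prompt wording, and helper function are illustrative assumptions, not the method of any specific paper.

```python
# Minimal sketch of context-aware prompting for document-level MT.
# Assumptions: an OpenAI-style chat API; the model name, prompt format, and
# context-window size are placeholders, not drawn from any particular paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_document(sentences, src_lang="German", tgt_lang="English", context_size=3):
    """Translate a list of source sentences, feeding each request the
    preceding source/target pairs as document context."""
    translations = []
    for i, sentence in enumerate(sentences):
        # Rolling context window of already-translated sentence pairs.
        start = max(0, i - context_size)
        context_pairs = zip(sentences[start:i], translations[start:i])
        context_block = "\n".join(
            f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in context_pairs
        )
        prompt = (
            f"Translate the following {src_lang} sentence into {tgt_lang}, "
            f"keeping pronouns and terminology consistent with the preceding context.\n\n"
            f"Context:\n{context_block or '(none)'}\n\n"
            f"{src_lang}: {sentence}\n{tgt_lang}:"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.0,
        )
        translations.append(response.choices[0].message.content.strip())
    return translations
```

The key design choice here is the rolling context window: rather than attending over the full document at once (which efficient attention mechanisms aim to make tractable), the prompt carries only the last few source/target pairs, trading some global coherence for bounded prompt length.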

Papers