MT Evaluation

Machine translation (MT) evaluation aims to assess the quality of automatically generated translations objectively; automatic metrics are typically validated by how well their scores correlate with human judgments. Current research focuses on improving the efficiency and robustness of learned metrics, such as those based on large language models, and on addressing their weaknesses on low-resource languages and multi-turn conversations. These advances underpin more accurate and reliable MT systems, shaping both research on MT algorithms and the practical deployment of translation technologies.
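
As a concrete illustration of the score-vs-judgment correlation described above, the sketch below scores a few invented translation segments with sentence-level BLEU (via sacrebleu, standing in here for any metric, learned or otherwise) and then measures how well those scores track equally invented human ratings. All segments and ratings are hypothetical.

    # Minimal meta-evaluation sketch: correlate an automatic metric's
    # segment-level scores with human quality ratings. All data below
    # is invented for illustration.
    import sacrebleu
    from scipy.stats import pearsonr

    references = [
        "The cat sat on the mat.",
        "He arrived late to the meeting.",
        "The report was published yesterday.",
    ]
    hypotheses = [
        "The cat sat on the mat.",
        "He was late for the meeting.",
        "Report published the yesterday.",
    ]
    # Hypothetical human adequacy ratings on a 0-100 scale
    # (direct-assessment style).
    human_scores = [95.0, 80.0, 40.0]

    # Score each hypothesis against its reference with sentence-level BLEU.
    metric_scores = [
        sacrebleu.sentence_bleu(hyp, [ref]).score
        for hyp, ref in zip(hypotheses, references)
    ]

    # Meta-evaluation: how well does the metric track the human judgments?
    r, p = pearsonr(metric_scores, human_scores)
    print(f"Segment-level Pearson r = {r:.3f} (p = {p:.3f})")

In shared-task meta-evaluation such as the WMT metrics task, segment-level agreement is more often reported with Kendall's tau or pairwise accuracy rather than Pearson correlation, but the overall workflow is the same.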

Papers