MT Evaluation
Machine translation (MT) evaluation aims to objectively assess the quality of automatically generated translations, typically by correlating automated scores with human judgments. Current research focuses on improving the efficiency and robustness of learned metrics, such as those based on large language models, while also addressing limitations in evaluating low-resource languages and multi-turn conversations. These advancements are crucial for developing more accurate and reliable MT systems, impacting both research on MT algorithms and the practical deployment of translation technologies in various applications.
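The core of meta-evaluation described above — correlating a metric's scores with human judgments — can be sketched with a minimal, self-contained example. The segment scores and human ratings below are hypothetical placeholders, and Pearson correlation is just one of several correlation statistics (Kendall's tau and Spearman's rho are also common in MT shared tasks):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one automatic metric score and one human
# adequacy rating per translated segment.
metric_scores = [0.82, 0.45, 0.91, 0.60, 0.30]
human_ratings = [4.5, 2.0, 4.8, 3.1, 1.5]

# A correlation near 1.0 means the metric ranks segments
# similarly to human annotators.
print(round(pearson(metric_scores, human_ratings), 3))
```

In practice, libraries such as SciPy (`scipy.stats.pearsonr`) are used instead of a hand-rolled implementation, and correlations are reported at both the segment and system level.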