Machine Translation Evaluation
Machine translation evaluation aims to assess the quality of automatically generated translations objectively, typically by comparing them against human-created references. Current research focuses heavily on leveraging large language models (LLMs) and other deep learning architectures to automate this process, exploring various prompting techniques and addressing challenges in low-resource language settings and in reference-free evaluation, where no human reference is available. These advances are crucial for improving machine translation systems and enabling more reliable, nuanced assessments of translation quality across diverse languages and domains.
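As a concrete illustration of the two paradigms mentioned above, the sketch below pairs reference-based scoring (using the sacrebleu library, a standard implementation of BLEU) with a reference-free LLM-as-judge prompt. The prompt wording and the example sentences are illustrative assumptions, not a specific system from the literature; the actual model call is left as a placeholder.

```python
# A minimal sketch of reference-based vs. reference-free MT evaluation.
# sacrebleu is a real, widely used metric library; the judge prompt
# below is a hypothetical example of LLM-based direct assessment.
import sacrebleu

hypotheses = ["The cat sits on the mat."]
references = [["The cat is sitting on the mat."]]  # one reference stream

# Reference-based: compare system output against human references.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")

# Reference-free: ask an LLM to rate the translation directly from
# the source sentence, with no human reference required.
JUDGE_PROMPT = (
    "Rate the following translation from German to English on a "
    "scale of 0-100 for adequacy and fluency. Reply with a single "
    "number.\n\nSource: {src}\nTranslation: {hyp}\n\nScore:"
)

def build_judge_prompt(src: str, hyp: str) -> str:
    """Build a direct-assessment prompt. Sending it to a model is
    left to whatever LLM client is available in practice."""
    return JUDGE_PROMPT.format(src=src, hyp=hyp)

prompt = build_judge_prompt("Die Katze sitzt auf der Matte.", hypotheses[0])
print(prompt)
```

In practice, such LLM judgments are typically averaged over multiple prompts or calibrated against human ratings, since raw scores can vary with prompt phrasing.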