Reference-Free Machine Translation
Reference-free machine translation evaluation aims to assess translation quality without relying on human-created reference translations, sidestepping the cost and limited coverage of reference-based metrics. Current research focuses on leveraging large language models (LLMs) and adapting existing metrics such as BERTScore, often through pairwise ranking of candidate translations or direct generation of quality scores to approximate human judgment. These advances matter because they enable more efficient and scalable evaluation of machine translation systems, particularly in low-resource settings and for languages lacking ample parallel corpora, ultimately improving the development and deployment of translation technologies.
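As a concrete illustration of the two strategies mentioned above, the sketch below scores a candidate translation without any reference: first by cosine similarity between multilingual sentence embeddings of the source and the hypothesis (a BERTScore-style adaptation), then by a direct-score prompt to an LLM judge. This is a minimal sketch, not any specific paper's method: the choice of the LaBSE encoder and the `ask_llm` helper are illustrative assumptions.

```python
# Minimal sketch of reference-free MT evaluation. Assumes the
# `sentence-transformers` package is installed; LaBSE is one
# multilingual encoder choice among many, not a prescribed model.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def embedding_score(source: str, hypothesis: str) -> float:
    """Cosine similarity between source and hypothesis embeddings.

    Because the encoder maps all languages into a shared space,
    no reference translation is needed: the source sentence itself
    serves as the comparison point (a BERTScore-style adaptation).
    """
    src_emb, hyp_emb = encoder.encode([source, hypothesis],
                                      convert_to_tensor=True)
    return util.cos_sim(src_emb, hyp_emb).item()

def llm_direct_score(source: str, hypothesis: str, ask_llm) -> int:
    """Direct score generation with an LLM judge.

    `ask_llm` is a hypothetical callable wrapping whatever LLM API
    is available; it takes a prompt string and returns the model's
    text reply.
    """
    prompt = (
        "Rate the following translation from 0 (useless) to 100 "
        "(perfect), judging it only against the source sentence.\n"
        f"Source: {source}\nTranslation: {hypothesis}\nScore:"
    )
    return int(ask_llm(prompt).strip())

if __name__ == "__main__":
    src = "Der Hund schläft auf dem Sofa."
    hyp = "The dog is sleeping on the couch."
    print(f"embedding score: {embedding_score(src, hyp):.3f}")
```

In practice, raw scores like these are typically validated against human judgments (for example, segment-level correlation on WMT metrics-task data) before being trusted for system comparison.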