Round Trip

Round-trip translation (RTT) translates text from one language (or programming language) to another and back again, then assesses whether the original and final versions are semantically equivalent. Current research focuses on using RTT to evaluate large language models (LLMs), particularly in software development and machine translation, either as a self-evaluating metric or as a method for hardening models against adversarial attacks. The technique offers a potentially powerful, less human-intensive alternative to traditional evaluation methods, with implications both for building more reliable LLMs and for advancing automatic program repair and translation.
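The RTT loop described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: `translate` is a hypothetical stand-in for an MT model or LLM call (here a toy dictionary-based translator so the example runs), and token-level Jaccard overlap stands in for a real semantic-equivalence score such as BLEU or embedding similarity.

```python
# Toy English<->French lexicon standing in for a real translation model.
TO_FR = {"the": "le", "cat": "chat", "sits": "assis", "on": "sur", "mat": "tapis"}
TO_EN = {fr: en for en, fr in TO_FR.items()}

def translate(text: str, table: dict) -> str:
    # Word-by-word lookup; unknown tokens pass through unchanged.
    return " ".join(table.get(tok, tok) for tok in text.split())

def rtt_score(original: str) -> float:
    """Round-trip the text and return token-level Jaccard similarity
    between the original and the back-translated version."""
    forward = translate(original, TO_FR)   # source -> pivot language
    back = translate(forward, TO_EN)       # pivot -> source again
    a, b = set(original.split()), set(back.split())
    return len(a & b) / len(a | b) if a | b else 1.0

print(rtt_score("the cat sits on the mat"))  # 1.0: round trip preserved every token
```

A score near 1.0 suggests the round trip preserved meaning; low scores flag inputs where the model's forward and backward passes disagree, which is the signal RTT-based self-evaluation exploits.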

Papers