Paper ID: 2410.00863

On the Implications of Verbose LLM Outputs: A Case Study in Translation Evaluation

Eleftheria Briakou, Zhongtao Liu, Colin Cherry, Markus Freitag

This paper investigates the impact of verbose LLM outputs on translation evaluation. We first demonstrate the prevalence of verbosity across the outputs of several LLMs submitted to the WMT 2024 general machine translation shared task. We then identify its primary triggers, including safety and copyright concerns and insufficient context in short input queries. Finally, we show that ignoring this behavior unfairly penalizes more verbose LLMs in both automatic and human evaluations, highlighting the need to account for it in future evaluations.

Submitted: Oct 1, 2024