Length Bias
Length bias, a pervasive issue in machine learning and especially in large language models (LLMs), is the tendency of models and evaluation metrics to favor longer outputs regardless of their actual quality. Current research focuses on identifying and mitigating this bias through techniques such as reward model calibration, algorithmic modifications like downsampled KL divergence in Direct Preference Optimization (DPO), and data-centric approaches such as length-controlled evaluation metrics. Addressing length bias is crucial for reliable evaluation and fair development of LLMs, and ultimately for more accurate and trustworthy AI systems.
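To make the reward-calibration idea concrete, here is a minimal, illustrative sketch: fit a linear trend of reward score versus response length over a batch, then subtract that trend from each score. This is a toy assumption of ours, not the specific calibration procedure of any paper surveyed here; the function name and approach are hypothetical.

```python
# Toy sketch of reward-model length debiasing (illustrative assumption,
# not a specific published method): fit an ordinary-least-squares line
# of reward on response length, then remove the length-correlated
# component while keeping the batch mean reward.

def length_debiased_rewards(rewards, lengths):
    n = len(rewards)
    mean_r = sum(rewards) / n
    mean_l = sum(lengths) / n
    # OLS slope of reward with respect to length.
    cov = sum((l - mean_l) * (r - mean_r) for r, l in zip(rewards, lengths))
    var = sum((l - mean_l) ** 2 for l in lengths)
    slope = cov / var if var > 0 else 0.0
    # Subtract the fitted length term so longer responses no longer
    # receive a systematic reward advantage.
    return [r - slope * (l - mean_l) for r, l in zip(rewards, lengths)]

# Example: rewards that grow linearly with length lose that trend.
raw_rewards = [0.2, 0.4, 0.6, 0.8]
resp_lengths = [10, 20, 30, 40]
print(length_debiased_rewards(raw_rewards, resp_lengths))
```

After debiasing, the scores no longer correlate with length; any remaining differences reflect quality variation the linear length term cannot explain.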