Preference-Based
Preference-based methods align large language models (LLMs) and other generative AI systems with human expectations by training them to produce the outputs users prefer over alternatives. Current research emphasizes overcoming biases in automated preference evaluation, particularly the length bias of LLM-based judges, and developing more robust and efficient ways to incorporate diverse human preferences, including multi-objective decoding algorithms. These advances are vital for improving the reliability and trustworthiness of AI systems across applications ranging from general text generation to more complex tasks such as long-form question answering.
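As a concrete illustration of the pairwise training objective these methods share, the sketch below implements a Bradley-Terry preference loss in the style of Direct Preference Optimization (DPO), one widely used preference-based method. The function name, argument names, and the `beta` default are illustrative assumptions, not drawn from any particular paper in this collection.

```python
# A minimal sketch of a DPO-style pairwise preference loss.
# All names and the beta value below are illustrative assumptions.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Bradley-Terry preference loss over (chosen, rejected) response pairs.

    Each argument is the summed log-probability a model assigns to one
    response; `beta` controls how far the policy may drift from the
    reference model.
    """
    # Implicit per-response rewards: log-ratio of policy to reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and dispreferred responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Taking log-probabilities rather than raw model outputs keeps the sketch agnostic to architecture: any model that can score a full response can supply the four tensors above.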