Learning Preference
Learning preference research investigates how artificial intelligence models, particularly large language models (LLMs), acquire and use information, with the goals of identifying biases and optimizing training processes. Current work examines how LLMs prioritize information based on factors such as data formality and consistency, and how these preferences can be steered to improve model performance and alignment with human needs, often through techniques such as synthetic data generation and knowledge distillation. This research has significant implications for the reliability and efficiency of AI systems across diverse applications, from personalized education and code generation to robotics and course recommendation.
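To make "preference" concrete, many alignment methods score a pair of candidate responses and train the model to rank the human-preferred one higher. The sketch below shows a generic Bradley-Terry-style pairwise loss in Python; it is not the method of any particular paper covered here, and the function name, the `beta` temperature, and the example log-probabilities are illustrative assumptions.

```python
import math

def pairwise_preference_loss(logp_preferred: float,
                             logp_rejected: float,
                             beta: float = 0.1) -> float:
    """Negative log-likelihood that the preferred response outranks the rejected one.

    The inputs are the total (summed) token log-probabilities the model assigns
    to each candidate response; beta controls how sharply the ranking is enforced.
    """
    margin = beta * (logp_preferred - logp_rejected)
    # -log(sigmoid(margin)), split by the sign of the margin for numerical stability
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# Example: the model already favors the preferred response, so the loss is small;
# swapping the two arguments would raise it.
print(pairwise_preference_loss(logp_preferred=-12.3, logp_rejected=-15.8))
```

Minimizing this loss over a dataset of human-ranked response pairs is one simple way a model's learned preferences can be shifted toward human judgments.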