Learning Preference

Learning preference research investigates how artificial intelligence models, particularly large language models (LLMs), acquire and use information, with a focus on identifying biases and optimizing training. Current work examines how LLMs prioritize information based on factors such as data formality and consistency, and how these preferences can be steered to improve performance and alignment with human needs, often through techniques such as synthetic data generation and knowledge distillation. These findings bear on the reliability and efficiency of AI systems across diverse applications, from personalized education and code generation to robotics and course recommendation.
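
To make the knowledge-distillation technique mentioned above concrete, the sketch below shows one common way a smaller "student" model is trained to mirror the output preferences of a larger "teacher" model. It is a minimal illustration, not a method from any particular paper listed here; the temperature, mixing weight, and toy tensor shapes are assumptions chosen for readability.

```python
# Minimal knowledge-distillation sketch (PyTorch). Temperature, alpha, and the
# toy tensor shapes are illustrative assumptions, not taken from a cited paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL divergence (teacher preferences) with hard-label CE."""
    # Soften both distributions; the KL term measures how far the student's
    # preferences over classes/tokens are from the teacher's.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kl = kl * (temperature ** 2)  # standard scaling so gradients match CE magnitude

    ce = F.cross_entropy(student_logits, labels)  # ground-truth supervision
    return alpha * kl + (1.0 - alpha) * ce

# Usage: given logits from a large teacher model and a smaller student,
# plus gold labels, compute the combined training loss.
if __name__ == "__main__":
    student_logits = torch.randn(4, 10)
    teacher_logits = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    print(distillation_loss(student_logits, teacher_logits, labels).item())
```

The design choice worth noting is the mixing weight: a higher alpha makes the student lean more heavily on the teacher's learned preferences, while a lower alpha keeps it anchored to the ground-truth labels.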

Papers