User Preference

User preference modeling aims to understand and predict individual choices, enabling personalized systems and experiences. Current research focuses on efficiently aligning large language models (LLMs) with diverse user preferences through techniques such as personalized reward modeling, direct preference optimization (DPO), and parameter-efficient fine-tuning, often conditioning on user history and feedback. These advances are central to personalizing LLMs across applications ranging from code generation and recommendation systems to robotic assistance and conversational AI. The ultimate goal is systems that not only understand but also adapt to the nuanced and evolving preferences of individual users.
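
To make one of these techniques concrete, the sketch below computes the standard DPO loss (Rafailov et al., 2023) for a batch of preference pairs. It is a minimal illustration, not the implementation from any paper listed here; the function signature, tensor names, and the beta value are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct preference optimization loss over a batch of preference pairs.

    Each argument is a 1-D tensor of per-sequence summed log-probabilities
    log pi(y | x), under the trainable policy and a frozen reference model,
    for the preferred (chosen) and dispreferred (rejected) responses.
    beta = 0.1 is an illustrative default, not a value from the papers.
    """
    # Implicit reward of each response: beta * (policy logp - reference logp).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: push chosen above rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs.
torch.manual_seed(0)
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(loss.item())
```

In practice the per-sequence log-probabilities would come from scoring each (prompt, response) pair with the policy and reference models; personalization variants condition these scores on a user representation or fit user-specific adapter parameters.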

Papers