User Response
User response research focuses on understanding and predicting how people interact with and react to various systems, particularly those employing large language models (LLMs). Current work emphasizes aligning LLM responses with human expectations, including empathy, accuracy, and stylistic consistency, often using techniques such as Direct Preference Optimization (DPO) and reinforcement learning to calibrate model outputs against human-generated data. This field is crucial for improving the user experience across numerous applications, from chatbots and smart-reply systems to online advertising and social media moderation, by enabling more natural, helpful, and ethically sound interactions between humans and AI.
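To make the DPO technique mentioned above concrete, here is a minimal sketch of the per-pair DPO loss in plain Python. The function name, argument names, and the beta value are illustrative assumptions, not from any specific paper in this collection; the inputs are summed log-probabilities of a human-chosen and a human-rejected response under the trainable policy and a frozen reference model.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one human preference pair (illustrative sketch).

    Each argument is the summed log-probability of a full response
    under the policy or the frozen reference model; beta scales the
    implicit reward derived from the policy/reference log-ratio.
    """
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): small when the policy prefers the chosen
    # response more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy identical to the reference: margin 0, loss = -log(0.5) ≈ 0.693
print(round(dpo_loss(-12.0, -15.0, -12.0, -15.0), 3))
# Policy shifted toward the chosen response: lower loss
print(round(dpo_loss(-10.0, -18.0, -12.0, -15.0), 3))
```

Minimizing this loss over a dataset of preference pairs nudges the model toward responses humans chose, without training a separate reward model.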