User Response
User response research focuses on understanding and predicting how people interact with and react to interactive systems, particularly those built on large language models (LLMs). Current work emphasizes aligning LLM responses with human expectations, covering aspects such as empathy, accuracy, and stylistic consistency, often using techniques like Direct Preference Optimization (DPO) and reinforcement learning to calibrate model outputs against human-generated preference data. This research is central to improving the user experience across applications ranging from chatbots and smart reply systems to online advertising and social media moderation, enabling more natural, helpful, and ethically sound interactions between humans and AI.
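Since the overview names Direct Preference Optimization (DPO) as a common way to calibrate model outputs against human preference data, the sketch below illustrates the standard pairwise DPO loss. It is a minimal example, not an implementation from any paper listed here; the function name, tensor shapes, and `beta` value are illustrative assumptions.

```python
# Minimal sketch of the pairwise DPO loss, assuming per-example summed
# log-probabilities for the chosen and rejected responses under both the
# trainable policy and a frozen reference model. All names and the beta
# value are illustrative assumptions.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_chosen | x), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_rejected | x), shape (batch,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_chosen | x), shape (batch,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_rejected | x), shape (batch,)
    beta: float = 0.1,                    # controls how far the policy may drift from the reference
) -> torch.Tensor:
    """Average DPO loss over a batch of human preference pairs."""
    # Log-ratio of policy to reference for each response.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # The loss increases the margin between chosen and rejected log-ratios.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Usage with random stand-in log-probabilities (for illustration only).
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
    print(f"DPO loss: {loss.item():.4f}")
```

A smaller `beta` keeps the tuned policy closer to the reference model, while a larger value lets preference data pull outputs further from it; the right setting depends on the quality of the human-generated comparisons.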