Human Preference Judgment

Human preference judgment research investigates how people make choices and rate options, with the goal of understanding the factors that drive these judgments and of modeling and predicting the resulting preferences. Current research uses large language models (LLMs) such as GPT-4 and analyzes human preference data to identify influential factors such as output length and informativeness, as well as biases like sycophancy, and has revealed inconsistencies between human and model preferences across a range of tasks. This work is crucial for improving the alignment of AI systems with human values, enhancing the fairness and trustworthiness of AI-driven decision-making, and informing the design of human-computer interaction.
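
A common way to model such pairwise preference data is a Bradley-Terry-style logistic model, in which differences in candidate features (for example, length or informativeness) predict which of two responses a human will prefer. The sketch below is illustrative only: the feature names, toy data, and training setup are assumptions for demonstration and are not drawn from any specific paper listed here.

```python
# Minimal sketch of a Bradley-Terry-style logistic preference model.
# Features and data are hypothetical, chosen only to illustrate the idea.
import numpy as np

# Each row: feature vector of response A minus response B
# (here: [length difference, informativeness difference]).
# Label: 1 if the human preferred A, 0 if they preferred B.
X = np.array([
    [ 120.0,  0.4],
    [ -80.0,  0.1],
    [  60.0, -0.3],
    [-150.0, -0.5],
    [  30.0,  0.6],
    [ -40.0,  0.2],
])
y = np.array([1, 1, 0, 0, 1, 1])

# Standardize features so the two scales are comparable.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weights by gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w)               # P(A preferred over B)
    grad = X.T @ (p - y) / len(y)
    w -= lr * grad

print("learned weights (length, informativeness):", w)
print("predicted P(prefer A):", sigmoid(X @ w).round(2))
```

Inspecting the learned weights gives a rough picture of how strongly each factor contributes to the modeled preference, which is one way effects such as length bias can be quantified.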

Papers