Paper ID: 2503.00248 • Published Feb 28, 2025
Human-AI Collaboration: Trade-offs Between Performance and Preferences
Lukas William Mayer, Sheer Karny, Jackie Ayoub, Miao Song, Danyang Tian, Ehsan Moradi-Pari, Mark Steyvers
University of California, Irvine • Honda Research Institute USA, Inc.
Despite the growing interest in collaborative AI, designing systems that
seamlessly integrate human input remains a major challenge. In this study, we
developed a task to systematically examine human preferences for collaborative
agents. We created and evaluated five collaborative AI agents whose strategies
differ in the manner and degree to which they adapt to human actions.
Participants interacted with a subset of these agents, evaluated their
perceived traits, and selected their preferred agent. We used a Bayesian model
to understand how agents' strategies influence Human-AI team performance, the
AI's perceived traits, and the factors shaping human preferences in pairwise
agent comparisons. Our results show that agents that are more considerate of
human actions are preferred over purely performance-maximizing agents.
Moreover, we show that such human-centric design can improve the likability of
AI collaborators without reducing performance. We find evidence that
inequality aversion is a driver of human choices, suggesting that people
prefer collaborative agents that allow them to meaningfully contribute to the
team. Taken together, these findings demonstrate how collaboration with AI can
benefit from development efforts that include both subjective and objective
metrics.
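The abstract does not specify the form of the Bayesian model for pairwise agent comparisons. As an illustrative assumption only (not the authors' implementation), a common choice for modeling pairwise preferences is a Bradley-Terry-style model with a Gaussian prior over each agent's latent preference strength. The minimal sketch below, with hypothetical toy data, fits a maximum-a-posteriori estimate of those strengths by gradient ascent.

```python
import numpy as np

# Hypothetical sketch of a Bayesian Bradley-Terry model for pairwise agent
# preferences (an assumption; the paper's actual model is not given here).
# Each agent i has a latent strength theta_i with a Gaussian prior, and
# P(i preferred over j) = sigmoid(theta_i - theta_j).

n_agents = 5

# Toy data: (winner, loser) pairs from hypothetical pairwise choices.
pairs = [(0, 1), (0, 2), (3, 1), (3, 4), (0, 4), (3, 2), (0, 3)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_posterior_grad(theta, pairs, prior_var=1.0):
    """Gradient of the log-likelihood plus the Gaussian log-prior."""
    grad = -theta / prior_var            # prior term: N(0, prior_var)
    for w, l in pairs:
        p = sigmoid(theta[w] - theta[l]) # P(winner beats loser)
        grad[w] += 1.0 - p               # likelihood term for the winner
        grad[l] -= 1.0 - p               # likelihood term for the loser
    return grad

theta = np.zeros(n_agents)
for _ in range(2000):                    # simple gradient ascent to the MAP
    theta += 0.05 * log_posterior_grad(theta, pairs)

print("MAP preference strengths:", np.round(theta, 2))
print("P(agent 0 preferred over agent 1):",
      round(sigmoid(theta[0] - theta[1]), 2))
```

A fully Bayesian treatment would place posterior uncertainty on the strengths (e.g., via MCMC) and could add predictors, such as an inequality-aversion term, to explain what shapes the comparisons; the MAP fit above is only the simplest version of that idea.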