Paper ID: 2503.00283 • Published Mar 1, 2025
Xpress: A System For Dynamic, Context-Aware Robot Facial Expressions using Language Models
Victor Nikhil Antony, Maia Stiber, Chien-Ming Huang
Johns Hopkins University
Abstract
Facial expressions are vital in human communication and significantly
influence outcomes in human-robot interaction (HRI), such as likeability,
trust, and companionship. However, current methods for generating robotic
facial expressions are often labor-intensive, lack adaptability across contexts
and platforms, and have limited expressive ranges, leading to repetitive
behaviors that reduce interaction quality, particularly in long-term scenarios.
We introduce Xpress, a system that leverages language models (LMs) to
dynamically generate context-aware facial expressions for robots through a
three-phase process: encoding temporal flow, conditioning expressions on
context, and generating facial expression code. We demonstrated Xpress as a
proof-of-concept through two user studies (n = 15 each) and a case study with
children and parents (n = 13) in storytelling and conversational scenarios to
assess the system's context-awareness, expressiveness, and dynamism. Results
demonstrate Xpress's ability to dynamically produce expressive and contextually
appropriate facial expressions, highlighting its versatility and potential in
HRI applications.
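The three-phase process named in the abstract (encoding temporal flow, conditioning expressions on context, and generating facial expression code) can be pictured as a simple pipeline. The sketch below is purely illustrative: every function, data shape, and the trivial rule-based stand-in for the language model are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Xpress's three-phase pipeline. All names and the
# rule-based LM stand-in are illustrative assumptions, not the paper's code.

def encode_temporal_flow(transcript):
    # Phase 1 (assumed): segment the interaction into time-ordered events.
    return [{"t": i, "text": line} for i, line in enumerate(transcript)]

def condition_on_context(events, scenario):
    # Phase 2 (assumed): attach scenario context (e.g., storytelling
    # vs. conversation) to each event before expression generation.
    return [{**event, "context": scenario} for event in events]

def generate_expression_code(event):
    # Phase 3 (assumed): in the real system a language model would emit
    # platform-specific facial-expression code; a trivial rule stands in here.
    if "!" in event["text"]:
        return "set_face('surprised')"
    return "set_face('neutral')"

transcript = ["Once upon a time...", "And then a dragon appeared!"]
events = condition_on_context(encode_temporal_flow(transcript), "storytelling")
codes = [generate_expression_code(event) for event in events]
print(codes)
```

In the actual system, phase 3 would prompt a language model with the context-conditioned event rather than apply a fixed rule, which is what allows the expressions to adapt across contexts and robot platforms.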