Paper ID: 2311.02009

Trust-Preserved Human-Robot Shared Autonomy enabled by Bayesian Relational Event Modeling

Yingke Li, Fumin Zhang

Shared autonomy functions as a flexible framework that empowers robots to operate across a spectrum of autonomy levels, allowing for efficient task execution with minimal human oversight. However, humans may be intimidated by the autonomous decision-making capabilities of robots due to perceived risks and a lack of trust. This paper proposes a trust-preserved shared autonomy strategy that allows robots to seamlessly adjust their autonomy level, striving to optimize team performance and enhance their acceptance among human collaborators. By augmenting the relational event modeling framework with Bayesian learning techniques, this paper enables dynamic inference of human trust based solely on time-stamped relational events communicated within human-robot teams. Adopting a longitudinal perspective on trust development and calibration in human-robot teams, the proposed trust-preserved shared autonomy strategy enables robots to actively establish, maintain, and repair human trust, rather than merely passively adapting to it. We validate the effectiveness of the proposed approach through a user study in a human-robot collaborative search and rescue scenario. Objective and subjective evaluations demonstrate its advantages in both task execution and user acceptance over a baseline approach that does not consider the preservation of trust.
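To make the event-based trust inference concrete, below is a minimal sketch of Bayesian posterior updating for a coefficient in a relational event model with exponential intensity, inferred from time-stamped events. This is not the authors' implementation: the event data, the single "trust" covariate, the prior, and the grid posterior are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): Bayesian inference
# over one relational-event-model coefficient from time-stamped events.
import numpy as np

# Hypothetical time-stamped relational events within a human-robot team:
# (event time, value of a single trust-related covariate for that event).
events = [(0.5, 1.0), (1.2, 0.8), (2.0, 1.1), (3.7, 0.9)]
horizon = 4.0  # length of the observation window

def log_likelihood(beta, events, horizon):
    """Log-likelihood of an event sequence under an exponential-intensity
    model lambda(t) = exp(beta * x(t)), with the covariate treated as
    piecewise-constant between events (a simplifying assumption)."""
    ll = 0.0
    prev_t = 0.0
    for t, x in events:
        lam = np.exp(beta * x)
        ll += np.log(lam)         # an event occurred at time t
        ll -= lam * (t - prev_t)  # survival: no event on (prev_t, t)
        prev_t = t
    # Survival over the remainder of the window, reusing the last covariate.
    ll -= np.exp(beta * events[-1][1]) * (horizon - prev_t)
    return ll

# Grid-based Bayesian update: prior times likelihood, then normalize.
betas = np.linspace(-3.0, 3.0, 301)
log_prior = -0.5 * betas**2  # standard normal prior on beta
log_post = log_prior + np.array(
    [log_likelihood(b, events, horizon) for b in betas])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, betas)

print("posterior mean of trust coefficient:", np.trapz(betas * post, betas))
```

In the same spirit as the paper's framework, the posterior over such coefficients can be updated as new relational events arrive, supporting dynamic, online inference of trust rather than a one-shot fit.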

Submitted: Nov 3, 2023