Paper ID: 2402.04206

Explaining Autonomy: Enhancing Human-Robot Interaction through Explanation Generation with Large Language Models

David Sobrín-Hidalgo, Miguel A. González-Santamarta, Ángel M. Guerrero-Higueras, Francisco J. Rodríguez-Lera, Vicente Matellán-Olivera

This paper introduces a system designed to generate explanations for the actions performed by an autonomous robot in Human-Robot Interaction (HRI). Explainability in robotics, encapsulated in the concept of an eXplainable Autonomous Robot (XAR), is a growing research area. The work described in this paper takes advantage of the capabilities of Large Language Models (LLMs) in natural language processing tasks. Specifically, the study focuses on generating explanations with such models in combination with a Retrieval-Augmented Generation (RAG) method that interprets data gathered from the logs of autonomous systems. The paper also presents a formalization of the proposed explanation system. The system has been evaluated through a navigation test from the European Robotics League (ERL), a Europe-wide social robotics competition, and a validation questionnaire was conducted to measure the quality of the explanations from the perspective of technical users. The results obtained during the experiment highlight the potential utility of LLMs in achieving explanatory capabilities in robots.
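To make the general idea concrete, the following is a minimal, illustrative sketch of an explanation pipeline of the kind the abstract describes: retrieve log entries relevant to a user's question, assemble them into a prompt, and hand the prompt to an LLM. It is not the authors' implementation; the function names (`retrieve_relevant_logs`, `build_prompt`, `query_llm`), the log format, and the word-overlap retrieval are assumptions standing in for the actual RAG components (e.g. embedding-based retrieval) used in the paper.

```python
# Illustrative sketch only: RAG-style explanation generation over robot logs.
# All names and the toy retrieval heuristic are placeholders, not the paper's code.

from dataclasses import dataclass
from typing import List


@dataclass
class LogEntry:
    timestamp: float   # seconds since mission start
    source: str        # e.g. the ROS node that emitted the line
    message: str       # raw log text


def retrieve_relevant_logs(question: str, logs: List[LogEntry], k: int = 5) -> List[LogEntry]:
    """Toy retrieval step: rank log lines by word overlap with the question.
    A real RAG pipeline would typically use vector embeddings and a similarity index."""
    q_words = set(question.lower().split())

    def score(entry: LogEntry) -> int:
        return len(q_words & set(entry.message.lower().split()))

    return sorted(logs, key=score, reverse=True)[:k]


def build_prompt(question: str, context: List[LogEntry]) -> str:
    """Assemble the retrieved log lines into a prompt for the LLM."""
    lines = "\n".join(f"[{e.timestamp:.1f}s] {e.source}: {e.message}" for e in context)
    return (
        "You are an autonomous robot explaining your own behaviour.\n"
        "Relevant excerpts from your execution logs:\n"
        f"{lines}\n\n"
        f"Question from the user: {question}\n"
        "Answer in natural language, referring only to the log evidence."
    )


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM service or local model."""
    raise NotImplementedError("Plug in an LLM client here.")


if __name__ == "__main__":
    logs = [
        LogEntry(12.3, "planner", "replanning: obstacle detected near waypoint 3"),
        LogEntry(12.9, "nav", "velocity reduced to 0.2 m/s in corridor"),
        LogEntry(40.1, "nav", "goal reached: kitchen"),
    ]
    question = "Why did you slow down in the corridor?"
    prompt = build_prompt(question, retrieve_relevant_logs(question, logs))
    print(prompt)  # pass `prompt` to query_llm(...) to obtain the explanation text
```

The key design point the sketch illustrates is that the LLM never sees the full log: a retrieval step first narrows the context to the entries most relevant to the user's question, which keeps the prompt small and grounds the generated explanation in recorded evidence.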

Submitted: Feb 6, 2024