Robot Explanation

Robot explanation research focuses on making a robot's decision-making process transparent and understandable to humans, with the goal of improving trust, collaboration, and acceptance in settings such as homes, workplaces, and public spaces. Current efforts develop methods that generate human-interpretable explanations, often using vision-language models and reward decomposition to produce high-level, context-aware justifications for a robot's actions. This work is essential for deploying robots safely and effectively in real-world human-robot interaction, and it is advancing both explainable AI and the design of human-centered robots.
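
As a concrete illustration of the reward-decomposition idea mentioned above, the minimal sketch below scores each candidate action with a separate Q-value per reward component, then explains the chosen action by contrasting its per-component values against the runner-up. All names and numbers here (`ACTIONS`, `COMPONENTS`, `Q_c`, the `explain_choice` helper) are hypothetical and illustrative, not taken from any specific system surveyed in this section.

```python
import numpy as np

# Hypothetical per-component Q-values for a navigation robot in one state.
# Rows are actions; columns are reward components. All values illustrative.
ACTIONS = ["go_left", "go_right", "wait"]
COMPONENTS = ["progress", "safety", "energy"]

# Q_c[a, c]: estimated return from reward component c if action a is taken.
Q_c = np.array([
    [0.9, -0.4, -0.1],   # go_left: fast but risky
    [0.6,  0.2, -0.1],   # go_right: slower, safer
    [0.0,  0.3,  0.0],   # wait: safe but makes no progress
])

def explain_choice(Q_c, actions, components):
    """Select the action with the highest total Q-value and report, per
    reward component, how much it gains or loses versus the runner-up."""
    totals = Q_c.sum(axis=1)                 # overall value of each action
    order = np.argsort(totals)
    best, runner_up = int(order[-1]), int(order[-2])
    deltas = Q_c[best] - Q_c[runner_up]      # per-component trade-offs
    print(f"Chose {actions[best]} over {actions[runner_up]} because it:")
    for name, delta in zip(components, deltas):
        verb = "gains" if delta >= 0 else "loses"
        print(f"  {verb} {abs(delta):.2f} expected {name}")

explain_choice(Q_c, ACTIONS, COMPONENTS)
# Example output:
#   Chose go_right over go_left because it:
#     loses 0.30 expected progress
#     gains 0.60 expected safety
#     gains 0.00 expected energy
```

The contrastive, per-component framing is what makes such explanations human-interpretable: instead of a single opaque value, the robot can state which objectives favored its choice and by how much.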

Papers