Robot Explanation
Robot explanation research focuses on making robots' decision-making processes transparent and understandable to humans, with the goal of improving trust, collaboration, and acceptance of robots across application settings. Current efforts develop methods for generating human-interpretable explanations, often using vision-language models and reward decomposition to produce high-level, context-aware justifications for robot actions. This work is crucial for deploying robots safely and effectively in real-world scenarios, particularly in human-robot interaction, and it is driving advances in explainable AI and human-centered robotics design.
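To make the reward-decomposition idea concrete, below is a minimal Python sketch of how per-component value estimates can justify an action choice. Everything here is a hypothetical illustration: the action names, the "progress"/"safety"/"energy" reward components, and the Q-values are assumptions for the example, not taken from any of the papers surveyed on this page.

```python
import numpy as np

# Minimal sketch of a reward-decomposition explanation (hypothetical
# actions, components, and Q-values; not from any specific paper).
ACTIONS = ["go_left", "go_right", "wait"]

# Per-component Q-values for the current state: Q[c][a] estimates the
# return from action a under reward component c alone. In practice each
# component's Q-function would be learned separately.
Q = {
    "progress": np.array([0.8, 0.9, 0.0]),
    "safety":   np.array([0.7, 0.2, 0.9]),
    "energy":   np.array([-0.1, -0.1, 0.0]),
}

def best_action(Q):
    """Pick the action that maximizes the summed (total) Q-value."""
    total = sum(Q.values())
    return int(np.argmax(total)), total

def explain(Q, chosen, alternative):
    """Justify `chosen` over `alternative` via per-component Q-value
    differences, the core of a reward-decomposition explanation."""
    lines = []
    for component, q in Q.items():
        delta = q[chosen] - q[alternative]
        if delta > 0:
            verdict = "favors"
        elif delta < 0:
            verdict = "disfavors"
        else:
            verdict = "is neutral on"
        lines.append(f"  {component}: {verdict} {ACTIONS[chosen]} ({delta:+.2f})")
    return "\n".join(lines)

action, total = best_action(Q)
runner_up = int(np.argsort(total)[-2])  # closest alternative action
print(f"Chose {ACTIONS[action]} over {ACTIONS[runner_up]} because:")
print(explain(Q, action, runner_up))
```

In a trained system the per-component Q-functions would come from learning against each reward term separately; the explanation then surfaces which terms drove the chosen action over its closest alternative (here, that "go_left" wins on safety despite slightly slower progress).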