Paper ID: 2206.10553

Uncertainty Quantification for Competency Assessment of Autonomous Agents

Aastha Acharya, Rebecca Russell, Nisar R. Ahmed

For safe and reliable deployment in the real world, autonomous agents must elicit appropriate levels of trust from human users. One method to build trust is to have agents assess and communicate their own competencies for performing given tasks. Competency depends on the uncertainties affecting the agent, making accurate uncertainty quantification vital for competency assessment. In this work, we show how ensembles of deep generative models can be used to quantify the agent's aleatoric and epistemic uncertainties when forecasting task outcomes as part of competency assessment.
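To make the abstract's central idea concrete, below is a minimal sketch (not the authors' implementation) of the standard way an ensemble of probabilistic forecasters can separate aleatoric from epistemic uncertainty via the law of total variance; the function name, array shapes, and toy data are illustrative assumptions only.

# Minimal sketch, assuming each ensemble member returns a predictive mean
# and variance for each forecasted task outcome (shapes are hypothetical).
import numpy as np

def decompose_uncertainty(means: np.ndarray, variances: np.ndarray):
    """means, variances: arrays of shape (n_members, n_outcomes).

    Splits predictive uncertainty using the law of total variance
    across ensemble members.
    """
    # Aleatoric: average of each member's own predicted (data) variance.
    aleatoric = variances.mean(axis=0)
    # Epistemic: disagreement between members' predicted means.
    epistemic = means.var(axis=0)
    return aleatoric, epistemic

# Toy usage with a 5-member ensemble forecasting 3 task outcomes.
rng = np.random.default_rng(0)
member_means = rng.normal(size=(5, 3))
member_vars = rng.uniform(0.1, 0.5, size=(5, 3))
aleatoric, epistemic = decompose_uncertainty(member_means, member_vars)
print("aleatoric:", aleatoric)
print("epistemic:", epistemic)

Under this decomposition, high epistemic variance flags outcomes the model ensemble disagrees on (and thus may be out of its competence), while high aleatoric variance reflects inherent task randomness that more training data would not remove.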

Submitted: Jun 21, 2022