Trust-Aware Systems
Trust-aware systems are AI agents and robots designed to interact effectively with humans by estimating human trust and adapting their behavior in response. Current research focuses on modeling trust dynamics with Bayesian methods, Markov decision processes, and neural networks, often paired with uncertainty quantification, to improve decision-making in collaborative settings. This work addresses a critical gap between optimal algorithmic performance and the practical need for humans to accept and appropriately rely on AI, with applications in human-robot interaction, healthcare, and multi-agent systems. The ultimate goal is to build more reliable and trustworthy AI systems that strengthen human-AI collaboration.
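To make the Bayesian flavor of trust modeling concrete, the sketch below maintains a Beta posterior over the probability that a human accepts an agent's recommendations and updates it from observed accept/reject decisions. This is a minimal, textbook-style simplification rather than any specific published model; the class name `BetaTrustModel` and the simulated 70% acceptance rate are illustrative assumptions.

```python
import numpy as np


class BetaTrustModel:
    """Minimal Bayesian trust estimator (illustrative sketch).

    Treats the human's probability of relying on the agent's
    recommendation as a Beta-distributed latent variable and
    updates it from observed accept/reject decisions.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) prior over the human's reliance probability;
        # alpha = beta = 1 is a uniform (uninformative) prior.
        self.alpha = alpha
        self.beta = beta

    def update(self, accepted: bool) -> None:
        # Conjugate Beta-Bernoulli update: acceptance counts as a success.
        if accepted:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean: the current point estimate of trust.
        return self.alpha / (self.alpha + self.beta)

    def variance(self) -> float:
        # Posterior variance: a simple uncertainty-quantification signal.
        ab = self.alpha + self.beta
        return (self.alpha * self.beta) / (ab ** 2 * (ab + 1.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = BetaTrustModel()
    # Simulate a human who accepts recommendations about 70% of the time.
    for accepted in rng.random(50) < 0.7:
        model.update(bool(accepted))
    print(f"estimated trust: {model.mean():.2f} "
          f"+/- {model.variance() ** 0.5:.2f}")
```

The posterior variance supplies the uncertainty-quantification signal mentioned above: an agent could, for instance, fall back to more conservative behavior or actively solicit feedback when estimated trust is low or still highly uncertain.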