Appropriate Trust
Appropriate trust in artificial intelligence (AI) systems, that is, trust calibrated to what a system can and cannot do reliably, is crucial for successful human-AI collaboration and widespread adoption. Current research focuses on the factors that shape trust, including AI model accuracy, explainability (e.g., using SHAP values or occlusion methods), human-computer interaction design, and the communication of uncertainty. This work involves developing and evaluating trust models, often built with machine learning techniques such as Bayesian methods and reinforcement learning, to improve AI system design and user experience. The ultimate goal is trustworthy AI systems that are reliable, transparent, and ethically sound, leading to safer and more effective applications across domains.
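To make the Bayesian trust modeling mentioned above concrete, the sketch below maintains a Beta-Bernoulli belief over an AI system's reliability, updating it after each observed success or failure and using the posterior mean to decide when reliance seems warranted. This is a minimal illustration only: the class name BetaTrustModel, the uniform Beta(1, 1) prior, and the should_rely comparison against a human accuracy estimate are assumptions made for the example, not a method taken from any of the listed papers.

```python
from dataclasses import dataclass


@dataclass
class BetaTrustModel:
    """Beta-Bernoulli belief over the probability that an AI output is correct.

    Trust is represented as a Beta(alpha, beta) distribution; each observed
    correct or incorrect AI outcome performs a conjugate update of that belief.
    """
    alpha: float = 1.0  # pseudo-count of observed correct outputs (prior)
    beta: float = 1.0   # pseudo-count of observed incorrect outputs (prior)

    def update(self, ai_was_correct: bool) -> None:
        # Conjugate update: a correct outcome raises alpha, an error raises beta.
        if ai_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def expected_reliability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

    def should_rely(self, human_accuracy: float) -> bool:
        # Illustrative reliance heuristic (an assumption of this sketch):
        # defer to the AI only when its estimated reliability exceeds the
        # human's own accuracy on the task.
        return self.expected_reliability > human_accuracy


if __name__ == "__main__":
    trust = BetaTrustModel()
    for outcome in [True, True, False, True, True]:  # observed AI outcomes
        trust.update(outcome)
    print(f"Estimated AI reliability: {trust.expected_reliability:.2f}")
    print(f"Rely on AI over a 70%-accurate human? {trust.should_rely(0.70)}")
```

The conjugate Beta-Bernoulli form keeps the model to two counters, which is one reason such models are a common starting point when simulating calibrated reliance on an AI teammate.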
Papers
"iCub, We Forgive You!" Investigating Trust in a Game Scenario with Kids
Francesca Cocchella, Giulia Pusceddu, Giulia Belgiovine, Linda Lastrico, Francesco Rea, Alessandra Sciutti
Robotic Exercise Trainer: How Failures and T-HRI Levels Affect User Acceptance and Trust
Maya Krakovski, Naama Aharony, Yael Edan
TRUST: An Accurate and End-to-End Table Structure Recognizer Using Splitting-based Transformers
Zengyuan Guo, Yuechen Yu, Pengyuan Lv, Chengquan Zhang, Haojie Li, Zhihui Wang, Kun Yao, Jingtuo Liu, Jingdong Wang
Are we measuring trust correctly in explainability, interpretability, and transparency research?
Tim Miller
The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction
Tim Schreiter, Lucas Morillo-Mendez, Ravi T. Chadalavada, Andrey Rudenko, Erik Alexander Billing, Achim J. Lilienthal