Appropriate Trust
Appropriate trust in artificial intelligence (AI) systems, that is, trust calibrated to a system's actual capabilities, is crucial for successful human-AI collaboration and widespread adoption. Current research examines the factors that shape trust, including model accuracy, explainability (e.g., SHAP values or occlusion methods), human-computer interaction design, and how uncertainty is communicated to users. Researchers develop and evaluate computational trust models, often based on machine learning techniques such as Bayesian methods and reinforcement learning, to inform AI system design and improve user experience. The ultimate goal is trustworthy AI: systems that are reliable, transparent, and ethically sound, enabling safer and more effective applications across domains.
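To make the idea of a Bayesian trust model concrete, here is a minimal sketch, not drawn from any of the papers below, that treats trust as a Beta-Bernoulli posterior over an AI assistant's per-task accuracy and derives a simple reliance rule from it. All names, the accuracy values, and the reliance threshold are illustrative assumptions.

```python
import random


class BetaBernoulliTrust:
    """Posterior over an AI assistant's per-task accuracy (Beta-Bernoulli model).

    Illustrative sketch only; not the model used in the papers listed below.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) prior: alpha counts observed successes, beta failures.
        self.alpha = alpha
        self.beta = beta

    def update(self, ai_was_correct: bool) -> None:
        # Conjugate update after observing the outcome of one AI recommendation.
        if ai_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def expected_accuracy(self) -> float:
        # Posterior mean of the AI's accuracy.
        return self.alpha / (self.alpha + self.beta)

    def should_rely(self, self_accuracy: float) -> bool:
        # One possible "appropriate reliance" rule: defer to the AI only when
        # its estimated accuracy exceeds the user's own unaided accuracy.
        return self.expected_accuracy > self_accuracy


if __name__ == "__main__":
    random.seed(0)
    true_ai_accuracy = 0.8   # hypothetical ground-truth reliability of the AI
    user_accuracy = 0.7      # hypothetical unaided user accuracy

    trust = BetaBernoulliTrust()
    for _ in range(50):
        trust.update(random.random() < true_ai_accuracy)

    print(f"Estimated AI accuracy: {trust.expected_accuracy:.2f}")
    print(f"Rely on AI? {trust.should_rely(user_accuracy)}")
```

Under this toy rule, trust becomes "appropriate" when the reliance decision tracks the AI's true reliability relative to the user's own competence; richer models in the literature additionally account for explanations, expressed uncertainty, and interaction history.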
Papers
Value Alignment and Trust in Human-Robot Interaction: Insights from Simulation and User Study
Shreyas Bhat, Joseph B. Lyons, Cong Shi, X. Jessie Yang
Trust and Terror: Hazards in Text Reveal Negatively Biased Credulity and Partisan Negativity Bias
Keith Burghardt, Daniel M. T. Fessler, Chyna Tang, Anne Pisor, Kristina Lerman
No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO
Skander Moalla, Andrea Miele, Razvan Pascanu, Caglar Gulcehre
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan