Appropriate Trust
Appropriate trust in artificial intelligence (AI) systems, meaning trust calibrated to what a system can actually do, is crucial for successful human-AI collaboration and widespread adoption. Current research examines the factors that shape trust, including model accuracy, explainability (e.g., SHAP values or occlusion methods), human-computer interaction design, and how uncertainty is communicated to users. Researchers develop and evaluate computational trust models, often built on machine learning techniques such as Bayesian methods and reinforcement learning, to improve AI system design and user experience. The ultimate goal is trustworthy AI: systems that are reliable, transparent, and ethically sound, enabling safer and more effective applications across domains.
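To make the occlusion idea mentioned above concrete, the sketch below scores each input feature by how much masking it changes a model's output; a large drop suggests the prediction leans on that feature. This is a minimal illustration, not any particular paper's method: the `predict_fn` interface and the zero baseline used for masking are assumptions made for the example.

import numpy as np

def occlusion_importance(predict_fn, x, baseline=0.0):
    """Score each feature by how much occluding it changes the model output.

    predict_fn: callable mapping a (n_features,) array to a scalar score
                (an assumed interface for this sketch).
    x:          input example, shape (n_features,).
    baseline:   value used to mask a feature (zero here; a design choice).
    """
    base_score = predict_fn(x)
    importances = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline  # mask one feature at a time
        importances[i] = base_score - predict_fn(occluded)
    return importances

# Toy usage: a linear "model" whose weights the occlusion scores recover.
weights = np.array([0.5, -1.0, 2.0])

def predict(v):
    return float(v @ weights)

x = np.ones(3)
print(occlusion_importance(predict, x))  # approx. [0.5, -1.0, 2.0]

The same masking idea, applied to image patches instead of single features, yields the occlusion maps often used to explain classifiers in domains such as dermatology.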
Papers
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
Tirtha Chanda, Katja Hauser, Sarah Hobelsberger, Tabea-Clara Bucher, Carina Nogueira Garcia, Christoph Wies, Harald Kittler, Philipp Tschandl, Cristian Navarrete-Dechent, Sebastian Podlipnik, Emmanouil Chousakos, Iva Crnaric, Jovana Majstorovic, Linda Alhajwan, Tanya Foreman, Sandra Peternel, Sergei Sarap, İrem Özdemir, Raymond L. Barnhill, Mar Llamas Velasco, Gabriela Poch, Sören Korsing, Wiebke Sondermann, Frank Friedrich Gellrich, Markus V. Heppt, Michael Erdmann, Sebastian Haferkamp, Konstantin Drexler, Matthias Goebeler, Bastian Schilling, Jochen S. Utikal, Kamran Ghoreschi, Stefan Fröhling, Eva Krieghoff-Henning, Titus J. Brinker
Trust in Shared Automated Vehicles: Study on Two Mobility Platforms
Shashank Mehrotra, Jacob G Hunter, Matthew Konishi, Kumar Akash, Zhaobo Zheng, Teruhisa Misu, Anil Kumar, Tahira Reid, Neera Jain
Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness
Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi
Clarifying Trust of Materials Property Predictions using Neural Networks with Distribution-Specific Uncertainty Quantification
Cameron Gruich, Varun Madhavan, Yixin Wang, Bryan Goldsmith