Appropriate Trust
Appropriate trust in artificial intelligence (AI) systems is crucial for successful human-AI collaboration and widespread adoption. Current research examines the factors that shape trust, including model accuracy, explainability (e.g., SHAP values or occlusion methods), human-computer interaction design, and how uncertainty is communicated to users. A common approach is to develop and evaluate computational trust models, often built on machine learning techniques such as Bayesian inference and reinforcement learning, and to use them to inform AI system design and user experience. The ultimate goal is trustworthy AI: systems that are reliable, transparent, and ethically sound, enabling safer and more effective deployment across domains.
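To give a concrete sense of the Bayesian flavor of such trust models, the sketch below maintains a Beta-Bernoulli belief over an AI system's reliability and updates it after each observed interaction. It is a minimal illustration only; the class and variable names are hypothetical and are not drawn from any of the papers listed below.

```python
from dataclasses import dataclass


@dataclass
class BetaTrust:
    """Illustrative Beta(alpha, beta) belief over the probability
    that an AI assistant gives a correct answer."""
    alpha: float = 1.0  # pseudo-count of observed successes (uniform prior)
    beta: float = 1.0   # pseudo-count of observed failures

    def update(self, success: bool) -> None:
        # Each interaction outcome shifts the posterior.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean: expected probability the system is correct.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    t = BetaTrust()
    for outcome in [True, True, False, True]:  # simulated interaction log
        t.update(outcome)
    print(f"Estimated reliability: {t.trust:.2f}")  # 0.67 under a uniform prior
```

Calibrating user trust to an estimate like this, rather than to a fixed assumption of correctness, is one way the literature operationalizes "appropriate" trust.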
Papers
In Generative AI we Trust: Can Chatbots Effectively Verify Political Information?
Elizaveta Kuznetsova, Mykola Makhortykh, Victoria Vziatysheva, Martha Stolze, Ani Baghumyan, Aleksandra Urman
Towards Machines that Trust: AI Agents Learn to Trust in the Trust Game
Ardavan S. Nobandegani, Irina Rish, Thomas R. Shultz
From Human to Robot Interactions: A Circular Approach towards Trustworthy Social Robots
Anna L. Lange, Murat Kirtay, Verena V. Hafner
The ART of LLM Refinement: Ask, Refine, and Trust
Kumar Shridhar, Koustuv Sinha, Andrew Cohen, Tianlu Wang, Ping Yu, Ram Pasunuru, Mrinmaya Sachan, Jason Weston, Asli Celikyilmaz