Appropriate Trust
Appropriate trust in artificial intelligence (AI) systems, meaning trust calibrated to what a system can actually do, is crucial for successful human-AI collaboration and widespread adoption. Current research examines the factors that shape trust, including AI model accuracy, explainability (e.g., SHAP values or occlusion methods), human-computer interaction design, and how uncertainty is communicated to users. A common approach is to develop and evaluate computational trust models, often built with machine learning techniques such as Bayesian methods and reinforcement learning, and to use them to improve AI system design and user experience. The ultimate goal is trustworthy AI: systems that are reliable, transparent, and ethically sound, enabling safer and more effective applications across domains.
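To make the idea of Bayesian trust modeling concrete, the sketch below maintains a Beta posterior over an AI system's per-task reliability and updates it as the system's outputs are verified. This is a minimal illustration, not a method from any of the papers listed here: the class name, the Beta-Bernoulli formulation, and the reliance rule comparing estimated AI reliability against human accuracy are all assumptions chosen for simplicity.

```python
# Minimal sketch of Bayesian trust calibration (illustrative; all names are
# hypothetical). Trust in an AI system is modeled as a Beta posterior over its
# per-task reliability, updated whenever an output is verified.

class BetaTrustModel:
    def __init__(self, prior_successes: float = 1.0, prior_failures: float = 1.0):
        # Beta(1, 1) is a uniform prior: no initial evidence either way.
        self.alpha = prior_successes
        self.beta = prior_failures

    def update(self, ai_was_correct: bool) -> None:
        # Conjugate Bayesian update: each verified outcome shifts the posterior.
        if ai_was_correct:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self) -> float:
        # Posterior mean of the system's reliability.
        return self.alpha / (self.alpha + self.beta)

    def should_rely(self, human_accuracy: float) -> bool:
        # Rely on the AI only when its estimated reliability exceeds the
        # human's own accuracy: one simple notion of "appropriate" trust.
        return self.trust() > human_accuracy


model = BetaTrustModel()
for outcome in [True, True, True, False, True]:  # 4 correct, 1 incorrect
    model.update(outcome)
print(round(model.trust(), 3))   # posterior mean = 5/7 ≈ 0.714
print(model.should_rely(0.6))    # True: estimated reliability exceeds 0.6
```

In this framing, over-trust (relying on the AI when the human would do better) and under-trust (ignoring a reliable AI) both correspond to miscalibration between the posterior and the human baseline, which is the gap uncertainty communication and explainability methods aim to close.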
Papers
Trust and Resilience in Federated Learning Through Smart Contracts Enabled Decentralized Systems
Lorenzo Cassano, Jacopo D'Abramo, Siraj Munir, Stefano Ferretti
Fast Distributed Optimization over Directed Graphs under Malicious Attacks using Trust
Arif Kerem Dayı, Orhan Eren Akgün, Stephanie Gil, Michal Yemini, Angelia Nedić