User Trust
User trust in artificial intelligence (AI) systems, particularly large language models (LLMs), is a critical area of research focused on understanding how users form trust, the factors that influence it (e.g., personality, explanation quality, system uncertainty), and how to design trustworthy AI. Current research employs a range of methods, including human-subject experiments, machine learning models for trust prediction and personalization, and novel metrics for evaluating trustworthiness and uncertainty. This work is crucial for responsible AI development and deployment, ensuring that AI systems are not only technically sound but also able to earn user confidence and acceptance across diverse applications, from healthcare to e-commerce.