User Trust
User trust in artificial intelligence (AI) systems, particularly large language models (LLMs), is a critical research area concerned with how users form trust, the factors that shape it (e.g., personality, explanation quality, system uncertainty), and how to design trustworthy AI. Current work combines human-subject experiments, machine learning models for trust prediction and personalization, and new metrics for evaluating trustworthiness and uncertainty. This research is central to responsible AI development and deployment, ensuring that AI systems are not only technically sound but also earn user confidence and acceptance across diverse applications, from healthcare to e-commerce.
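As one concrete illustration of the kind of uncertainty metric this line of work studies, the sketch below computes expected calibration error (ECE), a standard measure of how closely a model's stated confidence tracks its observed accuracy; the function name and sample data are illustrative only and are not drawn from any specific paper indexed here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and observed accuracy,
    weighted by how many predictions fall into each confidence bin.
    Lower ECE means confidence scores users can more reasonably trust."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Bin predictions by confidence; clip so a confidence of 1.0 lands in the top bin.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of predictions
    return ece

# Toy example: an overconfident model -- high stated confidence, mixed accuracy.
conf = [0.95, 0.90, 0.85, 0.99, 0.60, 0.55]
hit  = [1,    0,    1,    1,    0,    1]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```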
Papers
18 papers, dated May 5, 2023 through October 17, 2024.