User Trust

User trust in artificial intelligence (AI) systems, particularly large language models (LLMs), is a critical research area concerned with how users form trust, the factors that shape it (e.g., personality, explanation quality, system uncertainty), and how to design trustworthy AI. Current work combines human-subject experiments, machine learning models that predict and personalize trust, and new metrics for evaluating trustworthiness and uncertainty. This research is essential for responsible AI development and deployment, ensuring that systems are not only technically sound but also earn user confidence and acceptance across applications ranging from healthcare to e-commerce.
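
As a concrete illustration of the "uncertainty metric" side of this work, the sketch below computes expected calibration error (ECE), one standard way to check whether a model's stated confidence matches its actual accuracy. It is not drawn from any particular paper listed here; the function and data are synthetic and purely illustrative.

```python
# Illustrative sketch (assumption: not a method from any specific paper below):
# expected calibration error (ECE), a common measure of whether a model's
# reported confidence can be trusted.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Synthetic example: an overconfident model claims ~85% confidence
# while being right only ~60% of the time, yielding a large ECE.
rng = np.random.default_rng(0)
conf = rng.uniform(0.7, 1.0, size=1000)
correct = rng.random(1000) < 0.6
print(f"ECE: {expected_calibration_error(conf, correct):.3f}")
```

A large gap like this is exactly the kind of mismatch between stated and actual reliability that trust-calibration studies try to detect and communicate to users.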

Papers