Uncertainty Expression
Uncertainty expression in artificial intelligence focuses on enabling AI models, particularly large language models (LLMs) and deep neural networks, to accurately represent and communicate the uncertainty in their predictions. Current research emphasizes improving the calibration of uncertainty estimates, exploring methods for expressing uncertainty in natural language, and developing techniques that leverage uncertainty information to improve model performance and user trust. This is crucial for building reliable and trustworthy AI systems, particularly in high-stakes applications like medicine and autonomous systems, where understanding the limitations of AI predictions is paramount. The development of robust uncertainty quantification methods remains a key challenge driving ongoing research and shaping the broader field of AI safety and reliability.
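To make calibration concrete, the sketch below computes the Expected Calibration Error (ECE), a standard metric comparing a model's stated confidences to its empirical accuracy, and pairs it with an illustrative mapping from numeric confidence to hedged natural-language phrases. The function names, binning scheme, verbal phrases, and sample data are all hypothetical choices for illustration, not drawn from the papers listed here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by stated confidence,
    then average the gap between mean confidence and empirical accuracy
    in each bin, weighted by the fraction of samples falling in it."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each prediction to one of n_bins equal-width confidence bins.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by the bin's share of samples
    return ece

def verbalize(conf):
    """Map a numeric confidence to a hedged phrase (an illustrative
    mapping, not a standard taxonomy from the literature)."""
    if conf >= 0.9:
        return "I'm confident that"
    if conf >= 0.7:
        return "I believe"
    if conf >= 0.5:
        return "I'm not sure, but I think"
    return "I'm very uncertain, but possibly"

# Hypothetical data: confidences a model verbalized for six answers,
# alongside whether each answer turned out to be correct.
confs = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30]
right = [1, 1, 0, 1, 0, 0]
print(f"ECE: {expected_calibration_error(confs, right):.3f}")
print(verbalize(0.55), "the answer is B.")
```

A perfectly calibrated model would have an ECE of zero: among answers stated with 80% confidence, exactly 80% would be correct. The verbal mapping shows one way such scores might surface to users, which is precisely the kind of expression whose effect on reliance and trust the first paper below examines.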
Papers
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan
Enhanced Language Model Truthfulness with Learnable Intervention and Uncertainty Expression
Farima Fatahi Bayat, Xin Liu, H. V. Jagadish, Lu Wang