Paper ID: 2502.16820 • Published Feb 24, 2025
Uncertainty Quantification of Large Language Models through Multi-Dimensional Responses
Tiejin Chen, Xiaoou Liu, Longchao Da, Vagelis Papalexakis, Hua Wei
Arizona State University • University of California
Large Language Models (LLMs) have demonstrated remarkable capabilities across
various tasks thanks to large training datasets and the powerful transformer
architecture. However, the reliability of their responses remains an open
question. Uncertainty quantification (UQ) of LLMs is crucial for ensuring their
reliability, especially in areas such as healthcare, finance, and
decision-making. Existing UQ methods focus primarily on semantic similarity,
overlooking the deeper knowledge dimensions embedded in responses. We introduce
a multi-dimensional UQ framework that integrates semantic and knowledge-aware
similarity analysis. By generating multiple responses and leveraging auxiliary
LLMs to extract implicit knowledge, we construct separate similarity matrices
and apply tensor decomposition to derive a comprehensive uncertainty
representation. This approach disentangles overlapping information from the
semantic and knowledge dimensions, capturing both semantic variation and
factual consistency and thereby yielding more accurate UQ. Empirical
evaluations demonstrate that our method outperforms existing techniques in
identifying uncertain responses, offering a more robust framework for enhancing
LLM reliability in high-stakes applications.
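The pipeline above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the two response-similarity matrices (semantic and knowledge-aware) are already computed, stacks them into a third-order tensor, and uses an SVD of the mode-1 unfolding as a simple stand-in for a full tensor decomposition; the spectral entropy of the singular values serves as a hypothetical uncertainty score (uniform spectrum = diverse, inconsistent responses; concentrated spectrum = consistent responses).

```python
import numpy as np

def multidim_uncertainty(sem_sim: np.ndarray, know_sim: np.ndarray) -> float:
    """Hypothetical sketch of multi-dimensional UQ.

    sem_sim, know_sim: N x N similarity matrices over N sampled
    responses (e.g. from embedding cosine similarity and from an
    auxiliary LLM's knowledge-consistency judgments).
    """
    # Stack both similarity views into an N x N x 2 tensor.
    tensor = np.stack([sem_sim, know_sim], axis=2)
    # Mode-1 unfolding: N x 2N matrix combining both dimensions.
    unfolded = tensor.reshape(tensor.shape[0], -1)
    # Singular spectrum as a simple proxy for a tensor decomposition.
    s = np.linalg.svd(unfolded, compute_uv=False)
    s = s / s.sum()
    # Spectral entropy: higher = responses disagree = more uncertain.
    return float(-np.sum(s * np.log(s + 1e-12)))
```

For example, if all responses agree in both views (similarity matrices of all ones), the spectrum collapses to one dominant singular value and the entropy is near zero; if every response is unlike every other (near-identity similarity), the entropy approaches its maximum of log N.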