Paper ID: 2406.15627
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Lyudmila Rvanova, Akim Tsvigun, Daniil Vasilev, Rui Xing, Abdelrahman Boda Sadallah, Kirill Grishchenkov, Sergey Petrakov, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov, Artem Shelmanov
Uncertainty quantification (UQ) is a critical component of machine learning (ML) applications. The rapid proliferation of large language models (LLMs) has stimulated researchers to seek efficient and effective approaches to UQ for text generation. As with other ML models, LLMs are prone to making incorrect predictions, in the form of "hallucinations", whereby claims are fabricated or low-quality outputs are generated for a given input. UQ is a key element in dealing with these challenges. However, research to date on UQ methods for LLMs has been fragmented, both in the techniques proposed and in how they are evaluated. In this work, we tackle this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and provides an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks, and identify the most promising approaches. Code: this https URL
Submitted: Jun 21, 2024
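To give a concrete sense of the kind of information-based UQ baseline such a benchmark evaluates, below is a minimal illustrative sketch (not the paper's or LM-Polygraph's code) of the maximum-sequence-probability baseline: it scores a greedy generation by the negative log-probability the model assigns to its own output. The model name, prompt, and generation settings are arbitrary placeholders chosen for the example.

```python
# Illustrative sketch of a simple UQ baseline (maximum sequence probability),
# assuming a standard HuggingFace causal LM; not the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,  # keep per-step logits for the generated tokens
    )

# Log-probability of each generated token under the model.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = []
for step_scores, token_id in zip(out.scores, gen_tokens):
    step_logprobs = torch.log_softmax(step_scores[0], dim=-1)
    logprobs.append(step_logprobs[token_id].item())

# Uncertainty = negative log-probability of the whole generated sequence;
# higher values mean the model is less confident in its own output.
uncertainty = -sum(logprobs)
print(tokenizer.decode(gen_tokens, skip_special_tokens=True), uncertainty)
```

In a benchmark setting, such a score would be compared against a quality metric of the generated answer (e.g., via rejection-based evaluation), which is the kind of controllable, consistent comparison across UQ methods and tasks that the abstract describes.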