Paper ID: 2406.07457
Estimating the Hallucination Rate of Generative AI
Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei
This work is about estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and asked to answer a prediction question based on that dataset. Formally, an ICL problem is a tuple containing a CGM, a dataset, and a prediction question. One interpretation of ICL assumes that the CGM computes the posterior predictive of an unknown Bayesian model. The Bayesian model defines a joint distribution over observable datasets and latent mechanisms, which factorizes into the model likelihood over datasets given a mechanism and the model prior over mechanisms. It is assumed that an ICL dataset comprises independent samples from the model likelihood indexed by a specific mechanism, and that the prediction question and any valid response are distributed according to the same likelihood. With this perspective, we define a hallucination as a generated response to the prediction question that has low probability under the model likelihood indexed by the mechanism. We develop a new method that takes an ICL problem and estimates the probability that the CGM will generate a hallucination. Our method requires only generating prediction questions and responses from the CGM and evaluating their log probabilities. We empirically evaluate our method on synthetic regression and natural language ICL tasks using large language models.
Submitted: Jun 11, 2024
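
The abstract's description suggests a Monte Carlo procedure: sample prediction questions and responses from the CGM, score each response's log probability, and report the fraction falling below a low-probability threshold. The sketch below is a simplified illustration under those assumptions, not the paper's exact estimator; `cgm_sample`, `cgm_log_prob`, and `log_prob_threshold` are hypothetical placeholders for a CGM interface and cutoff that the abstract does not specify.

```python
from typing import Callable, List


def estimate_hallucination_rate(
    dataset_prompt: str,
    cgm_sample: Callable[[str, int], List[str]],   # (prompt, n) -> n sampled continuations
    cgm_log_prob: Callable[[str, str], float],     # (prompt, response) -> log p(response | prompt)
    n_questions: int = 20,
    n_responses: int = 50,
    log_prob_threshold: float = -10.0,             # assumed cutoff for "low probability"
) -> float:
    """Rough Monte Carlo estimate of how often a generated response has low
    probability under the model's own predictive distribution (a proxy for the
    likelihood indexed by the latent mechanism, per the Bayesian reading of ICL)."""
    low_prob_count = 0
    total = 0
    # Step 1: have the CGM generate prediction questions from the ICL dataset.
    questions = cgm_sample(dataset_prompt + "\nGenerate a prediction question:", n_questions)
    for question in questions:
        prompt = dataset_prompt + "\n" + question
        # Step 2: sample candidate responses to each question.
        responses = cgm_sample(prompt, n_responses)
        for response in responses:
            # Step 3: score each response under the CGM's predictive distribution.
            if cgm_log_prob(prompt, response) < log_prob_threshold:
                low_prob_count += 1
            total += 1
    # Step 4: report the fraction of low-probability responses as the rate estimate.
    return low_prob_count / max(total, 1)
```

In this sketch the CGM's own predictive log probability stands in for the unknown model likelihood; the paper's method addresses how to make that connection precise, which this illustration does not attempt.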