Sequence Likelihood

Sequence likelihood, the probability a language model assigns to a given text sequence, is central to evaluating and improving model performance, particularly in tasks such as text generation and summarization. Current research focuses on calibrating these likelihoods so they better reflect actual sequence quality and consistency, often using techniques such as contrastive learning and label smoothing, and leveraging human feedback to refine model estimates. Addressing inaccuracies in likelihood estimates, especially for less frequent sequences, is vital for improving the reliability and safety of language models in applications ranging from question answering to content generation.
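As a minimal sketch of the quantity itself: a model factorizes a sequence's probability into per-token conditional probabilities p(x_t | x_<t), and the sequence likelihood is their product, usually computed in log space for numerical stability. The per-token probabilities below are hypothetical stand-ins for what a real model would output; the length-normalized variant is a common adjustment, since raw likelihoods systematically penalize longer sequences.

```python
import math

def sequence_log_likelihood(token_probs):
    """Log-likelihood of a sequence: sum of log p(x_t | x_<t)
    over its tokens (equivalent to log of the product)."""
    return sum(math.log(p) for p in token_probs)

def length_normalized_ll(token_probs):
    """Average per-token log-likelihood, often used when comparing
    candidate sequences of different lengths."""
    return sequence_log_likelihood(token_probs) / len(token_probs)

# Hypothetical per-token conditional probabilities for a 4-token sequence.
probs = [0.5, 0.25, 0.8, 0.1]
print(sequence_log_likelihood(probs))   # log(0.5 * 0.25 * 0.8 * 0.1)
print(length_normalized_ll(probs))
```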

Papers