Sequence Likelihood
Sequence likelihood, the probability a language model assigns to a given text sequence, is crucial for evaluating and improving model performance, particularly in tasks like text generation and summarization. Current research focuses on calibrating these likelihoods so they better reflect actual sequence quality and consistency, often employing techniques such as contrastive learning and label smoothing, and leveraging human feedback to refine model estimates. Addressing inaccuracies in sequence likelihood estimates, especially for less frequent sequences, is vital for enhancing the reliability and safety of language models across applications ranging from question answering to content generation.
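Concretely, an autoregressive language model factorizes the probability of a sequence as the product of per-step conditional probabilities, so the sequence log-likelihood is the sum of per-token log-probabilities. The sketch below illustrates this with a hypothetical hand-set bigram table standing in for a trained neural model; the table values, token names, and the unseen-pair floor are assumptions for illustration only.

```python
import math

# Hypothetical bigram model: P(next | previous). A real system would query
# a trained language model; this fixed table is a stand-in for illustration.
BIGRAM = {
    "<s>": {"the": 0.7, "a": 0.3},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.8, "ran": 0.2},
    "sat": {"</s>": 1.0},
}

def sequence_log_likelihood(tokens):
    """Sum of log P(token_t | token_{t-1}) over the sequence.

    Autoregressive factorization: P(x) = prod_t P(x_t | x_{<t}); here the
    context is truncated to the previous token (bigram assumption).
    """
    logp = 0.0
    prev = "<s>"
    for tok in tokens:
        # Small probability floor for unseen pairs, so rare sequences get a
        # very low (but finite) likelihood rather than a zero.
        p = BIGRAM.get(prev, {}).get(tok, 1e-12)
        logp += math.log(p)
        prev = tok
    return logp

# log P("the cat sat") = log 0.7 + log 0.5 + log 0.8 + log 1.0 = log 0.28
print(sequence_log_likelihood(["the", "cat", "sat", "</s>"]))
```

Because these per-step probabilities come from a softmax, overconfident or miscalibrated steps compound across the sequence, which is why calibration methods target the summed log-likelihood rather than individual token scores.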