Token Prediction
Token prediction, the task of predicting the next word (or token) in a sequence, is central to many natural language processing (NLP) applications and underpins large language models (LLMs). Current research focuses on improving prediction accuracy, particularly for long-range dependencies and in the presence of misinformation or adversarial inputs, and explores techniques such as planning tokens, divergence-based calibration, and adaptive decoding to improve efficiency and robustness. These advances are important for building more reliable and efficient LLMs, with impact on fields ranging from question answering and text generation to code completion and image synthesis.
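To make the core task concrete, here is a minimal sketch of next-token prediction using a toy bigram count model with greedy decoding. This is an illustrative assumption on our part, not any specific method from the papers above: real LLMs use learned neural distributions over large vocabularies, but the prediction task itself is the same.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: return the most frequent successor token.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Hypothetical toy corpus for illustration.
corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Decoding strategies (greedy, sampling, adaptive methods) differ only in how they pick from this next-token distribution, which is why they can be swapped without retraining the model.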