Token Prediction
Token prediction, the task of predicting the next word (or token) in a sequence, is central to many natural language processing (NLP) applications and underpins the functionality of large language models (LLMs). Current research focuses on improving prediction accuracy, particularly for long-range dependencies and in the presence of misinformation or adversarial inputs; techniques under exploration include planning tokens, divergence-based calibration, and adaptive decoding methods that enhance efficiency and robustness. These advances are crucial for building more reliable and efficient LLMs, with impact on fields ranging from question answering and text generation to code completion and image synthesis.
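The core mechanic described above, scoring every vocabulary token and picking the most likely continuation, can be illustrated with a minimal sketch. The bigram-count "model" and the `predict_next_token` helper below are hypothetical stand-ins for an LLM's learned distribution; real models produce logits from a neural network, but the softmax-then-decode step is the same.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_token(context, bigram_counts, vocab):
    # Toy "model": logits are smoothed log-counts of tokens that follow
    # the last context token. A real LLM would compute these with a network.
    last = context[-1]
    logits = [math.log(bigram_counts.get((last, w), 0) + 1) for w in vocab]
    probs = softmax(logits)
    # Greedy decoding: return the highest-probability next token.
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

vocab = ["the", "cat", "sat", "mat"]
counts = {("the", "cat"): 5, ("the", "mat"): 2,
          ("cat", "sat"): 4, ("sat", "the"): 3}
token, prob = predict_next_token(["the"], counts, vocab)
# "cat" follows "the" most often in the counts, so greedy decoding picks it.
```

Greedy decoding is only one choice at this final step; sampling-based or adaptive decoding methods, as mentioned above, replace the `max` with a (possibly temperature-scaled) draw from the probability distribution.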