Exposure Bias
Exposure bias is the mismatch between the conditions a generative model sees during training, where it is conditioned on ground-truth context, and those it faces at inference, where it must condition on its own, potentially erroneous, predictions. This train/inference discrepancy degrades the performance of many generative models, including diffusion models and autoregressive models used in natural language processing and other domains. Current research focuses on mitigating the bias through techniques such as scheduled sampling, input perturbation, and epsilon scaling, often applied within architectures like Transformers and diffusion probabilistic models. Addressing exposure bias is crucial for improving the accuracy, diversity, and reliability of generated outputs across applications ranging from molecular conformation prediction to real-time music generation and recommendation systems.
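For intuition, the sketch below illustrates one of the mitigation techniques named above, scheduled sampling, for an autoregressive decoder: with a probability that decays over training, the ground-truth token is replaced by the model's own prediction as the next input, so the training-time context gradually resembles the inference-time one. This is a minimal, illustrative PyTorch sketch; ToyDecoder, the linear decay schedule, and all hyperparameters are assumptions for demonstration, not taken from any specific paper.

```python
# Minimal sketch of scheduled sampling for an autoregressive decoder.
# All names and hyperparameters here are illustrative assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDecoder(nn.Module):
    def __init__(self, vocab_size=32, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, targets, teacher_forcing_prob):
        # targets: (batch, seq_len) ground-truth token ids
        batch, seq_len = targets.shape
        h = targets.new_zeros(batch, self.cell.hidden_size, dtype=torch.float)
        inp = targets[:, 0]  # start token
        loss = 0.0
        for t in range(1, seq_len):
            h = self.cell(self.embed(inp), h)
            logits = self.out(h)
            loss = loss + F.cross_entropy(logits, targets[:, t])
            # Scheduled sampling: with probability p feed the gold token,
            # otherwise feed the model's own prediction, as at inference time.
            if random.random() < teacher_forcing_prob:
                inp = targets[:, t]
            else:
                inp = logits.argmax(dim=-1).detach()
        return loss / (seq_len - 1)

# Decay the teacher-forcing probability over training so the model
# increasingly conditions on its own outputs, narrowing the gap
# between training and inference contexts.
decoder = ToyDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for step in range(100):
    p = max(0.5, 1.0 - step / 100)         # linear decay schedule (illustrative)
    batch = torch.randint(0, 32, (8, 12))  # dummy token sequences
    loss = decoder(batch, teacher_forcing_prob=p)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Input perturbation and epsilon scaling pursue the same goal for diffusion models: they adjust the training inputs or the predicted noise so that the states the network is trained on better match the states it actually visits when sampling.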