Exposure Bias

Exposure bias is the mismatch between training and inference conditions in generative models: during training the model is conditioned on ground-truth context (e.g., via teacher forcing), but at inference it must condition on its own, potentially erroneous, outputs, so errors can compound over long generation chains. This affects autoregressive language models, diffusion models, and other sequential generators. Current research focuses on mitigating the bias through techniques such as scheduled sampling, input perturbation, and epsilon scaling, often applied within architectures such as Transformers and diffusion probabilistic models. Reducing exposure bias is crucial for improving the accuracy, diversity, and reliability of generated outputs across diverse applications, ranging from molecular conformation prediction to real-time music generation and recommendation systems.
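To make the scheduled-sampling idea concrete, here is a minimal framework-free sketch. It assumes a Bengio-et-al.-style inverse-sigmoid decay schedule; the function names (`teacher_forcing_prob`, `scheduled_sampling_inputs`) and the decay constant `k` are illustrative choices, not a standard API. At each training step, every decoder input position uses the ground-truth token with probability `p` (which decays over training) and the model's own previous prediction otherwise, gradually exposing the model to its own outputs.

```python
import math
import random

def teacher_forcing_prob(step: int, k: float = 10.0) -> float:
    """Inverse-sigmoid decay: starts near 1 (pure teacher forcing)
    and decays toward 0 (pure model feedback) as training proceeds."""
    return k / (k + math.exp(step / k))

def scheduled_sampling_inputs(gold, predicted, step, k=10.0, rand=random.random):
    """Build decoder inputs by mixing ground-truth tokens and the
    model's own predictions, position by position."""
    p = teacher_forcing_prob(step, k)
    return [g if rand() < p else y for g, y in zip(gold, predicted)]

# Early in training, most positions keep the gold token;
# late in training, most positions use the model's prediction.
mixed = scheduled_sampling_inputs([1, 2, 3], [7, 8, 9], step=0)
```

The same schedule shape (any monotonically decreasing `p`) works with linear or exponential decay; the key design choice is that the model is never switched abruptly from gold inputs to its own outputs, which would destabilize training.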

Papers