Paper ID: 2412.19104

Improving Generative Pre-Training: An In-depth Study of Masked Image Modeling and Denoising Models

Hyesong Choi, Daeun Kim, Sungmin Cha, Kwang Moo Yi, Dongbo Min

In this work, we dive deep into the impact of additive noise in pre-training deep networks. While various methods, inspired by the success of latent denoising diffusion models, have attempted to combine additive noise with masked image modeling, their gains on recognition tasks have been marginal. We therefore investigate why this is the case, in an attempt to find an effective way to combine the two ideas. Specifically, we identify three critical conditions: corruption and restoration must be applied within the encoder, noise must be introduced in the feature space, and the noised and masked tokens must be explicitly disentangled. By implementing these findings, we demonstrate improved pre-training performance across a wide range of recognition tasks, including those that require fine-grained, high-frequency information.
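To make the three conditions concrete, below is a minimal sketch of what such a corruption step could look like. This is not the paper's implementation; the class name FeatureNoiseMasking, the ratio/std hyperparameters, and the use of a learned mask token are all illustrative assumptions. The sketch operates on intermediate encoder features (condition 1), adds Gaussian noise in feature space (condition 2), and applies noising and masking to disjoint token sets (condition 3).

```python
import torch
import torch.nn as nn

class FeatureNoiseMasking(nn.Module):
    """Hypothetical sketch of the abstract's three conditions:
    (1) corruption happens inside the encoder (on intermediate features),
    (2) additive noise is applied in feature space,
    (3) noised and masked tokens are explicitly disentangled (disjoint sets)."""

    def __init__(self, dim, noise_std=0.1, mask_ratio=0.6, noise_ratio=0.2):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))  # learned [MASK] embedding
        self.noise_std = noise_std    # assumed noise scale
        self.mask_ratio = mask_ratio  # assumed fraction of tokens to mask
        self.noise_ratio = noise_ratio  # assumed fraction of tokens to noise

    def forward(self, tokens):
        # tokens: (B, N, D) features from an intermediate encoder block,
        # so corruption and restoration stay *within* the encoder.
        B, N, D = tokens.shape
        n_mask = int(N * self.mask_ratio)
        n_noise = int(N * self.noise_ratio)

        # Random per-sample token ordering to pick disjoint mask/noise sets.
        order = torch.rand(B, N, device=tokens.device).argsort(dim=1)
        mask_idx = order[:, :n_mask]                   # tokens to replace
        noise_idx = order[:, n_mask:n_mask + n_noise]  # disjoint tokens to perturb

        corrupted = tokens.clone()

        # (3) Masked tokens are fully replaced by the learned mask token...
        corrupted.scatter_(
            1,
            mask_idx.unsqueeze(-1).expand(-1, -1, D),
            self.mask_token.expand(B, n_mask, D).contiguous(),
        )

        # ...while noised tokens are additively perturbed in feature space (2).
        noise = torch.randn(B, n_noise, D, device=tokens.device) * self.noise_std
        corrupted.scatter_add_(1, noise_idx.unsqueeze(-1).expand(-1, -1, D), noise)

        # The remaining encoder blocks would then restore the clean features.
        return corrupted, mask_idx, noise_idx
```

In this reading, keeping the mask and noise index sets disjoint is what realizes the "explicit disentanglement" condition: each token is either replaced or perturbed, never both, so the restoration objective for the two corruption types never conflicts on the same token.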

Submitted: Dec 26, 2024