Paper ID: 2306.05497

Noise-Robust Loss Functions: Enhancing Bounded Losses for Large-Scale Noisy Data Learning

Max Staats, Matthias Thamm, Bernd Rosenow

Large annotated datasets inevitably contain noisy labels, which pose a major challenge for training deep neural networks, as these networks easily memorize such labels. Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to construct a robust loss function that is not susceptible to underfitting. Using a quantitative approach, this paper analyzes the limited overlap, during the initial learning phase, between the network output at initialization and the regions in which bounded loss functions have non-vanishing gradients. Building on these insights, we address the underfitting of the MAE loss with a novel method, denoted logit bias, which adds a real number $\epsilon$ to the logit at the position of the correct class. This method enables bounded losses to learn even on datasets such as WebVision, which consists of over a million images from 1000 classes. Extensive numerical experiments show that the logit bias enables MAE to compete with state-of-the-art noise-robust loss functions. In addition, we demonstrate that our method can be used to determine optimal parameters for other loss functions -- without having to train networks. Remarkably, our method determines the hyperparameters based solely on the number of classes, resulting in loss functions that require no dataset- or noise-dependent parameters.

Submitted: Jun 8, 2023
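
To make the logit-bias idea from the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a standard softmax network trained with the noise-robust MAE loss, and adds the bias $\epsilon$ to the correct-class logit before the softmax. The function name `mae_with_logit_bias` and the exact placement of the bias are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mae_with_logit_bias(logits, targets, epsilon):
    """MAE loss between softmax probabilities and one-hot targets,
    with a bias `epsilon` added to the logit of the correct class
    before the softmax (a sketch of the 'logit bias' idea)."""
    biased = logits.clone()
    # Shift the logit at the position of the correct class by epsilon.
    biased[torch.arange(logits.size(0)), targets] += epsilon
    probs = F.softmax(biased, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # L1 distance to the one-hot target per sample, averaged over the batch.
    return (probs - one_hot).abs().sum(dim=1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
    targets = torch.tensor([1, 3, 5, 7])   # (possibly noisy) labels
    print(mae_with_logit_bias(logits, targets, epsilon=3.0))
```

The bias increases the correct-class probability at the start of training, pulling the network output into a region where the bounded MAE loss has non-vanishing gradients; how $\epsilon$ should be chosen (e.g., as a function of the number of classes) is specified in the paper itself, not in this sketch.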