Paper ID: 2402.06855
For Better or For Worse? Learning Minimum Variance Features With Label Augmentation
Muthu Chidambaram, Rong Ge
Data augmentation has been pivotal in successfully training deep learning models on classification tasks over the past decade. An important subclass of data augmentation techniques - which includes both label smoothing and Mixup - involves modifying not only the input data but also the input labels during model training. In this work, we analyze the role played by the label augmentation aspect of such methods. We first prove that linear models on binary classification data trained with label augmentation learn only the minimum variance features in the data, while standard training (which includes weight decay) can learn higher variance features. We then use our techniques to show that even for nonlinear models and general data distributions, the label smoothing and Mixup losses are lower bounded by a function of the model output variance. An important consequence of our results is negative: label smoothing and Mixup can be less robust to spurious correlations in the data than standard training. We verify that our theory reflects practice via experiments on image classification benchmarks modified to have spurious correlations.
Submitted: Feb 10, 2024
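
For readers unfamiliar with the two label augmentation methods the abstract refers to, the following is a minimal NumPy sketch of how label smoothing and Mixup modify the training labels (and, for Mixup, the inputs). It is an illustrative sketch based on the standard definitions of these techniques, not code from the paper; the function names and alpha values are assumptions.

```python
import numpy as np

def label_smoothing(one_hot_labels: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Replace hard one-hot targets with smoothed targets (standard definition)."""
    num_classes = one_hot_labels.shape[-1]
    return (1.0 - alpha) * one_hot_labels + alpha / num_classes

def mixup(x: np.ndarray, y: np.ndarray, alpha: float = 1.0, rng=None):
    """Mix random pairs of inputs and their one-hot labels (standard Mixup)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # mixing coefficient drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(x))            # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]   # inputs are convexly combined
    y_mix = lam * y + (1.0 - lam) * y[perm]   # labels are modified as well
    return x_mix, y_mix
```

Both methods replace the hard one-hot targets with soft targets, which is the "label augmentation aspect" the paper isolates and analyzes.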