Invariance Regularization

Invariance regularization aims to improve the robustness and generalization of machine learning models by enforcing consistent predictions across variations of the input, such as rotations, translations, or adversarial perturbations. Current research focuses on developing new regularization techniques, often integrated into convolutional neural networks (CNNs) and transformers, to address issues such as the robustness-accuracy trade-off in adversarial training and the lack of equivariance in existing architectures. These advances matter because they improve model performance across applications including image reconstruction, knowledge graph completion, and weakly supervised learning, by reducing sensitivity to spurious correlations and improving generalization to unseen data.
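The core idea above can be sketched as an extra loss term that penalizes disagreement between a model's predictions on an input and on a transformed copy of it. The snippet below is a minimal, hypothetical illustration (the linear "model", the feature-reversal transform standing in for a rotation/translation, and the weight `lam` are all assumptions for demonstration, not any specific paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a linear map followed by softmax (weights are illustrative).
W = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    return softmax(x @ W)

def transform(x):
    # A simple input variation: reverse the feature order. This stands in
    # for a rotation or translation of an image in a real pipeline.
    return x[:, ::-1]

def invariance_penalty(x):
    # Mean squared L2 distance between predictions on the original and
    # transformed inputs. Adding lam * penalty to the task loss pushes
    # the model toward predictions that are consistent under transform().
    p, p_t = predict(x), predict(transform(x))
    return float(np.mean(np.sum((p - p_t) ** 2, axis=-1)))

x = rng.normal(size=(8, 16))
lam = 0.1  # regularization strength (hypothetical value)
task_loss = 0.0  # placeholder for e.g. cross-entropy on labels
total_loss = task_loss + lam * invariance_penalty(x)
```

In practice the same pattern appears with the transform drawn at random each step (data-augmentation consistency) or with an adversarially chosen perturbation; only the definition of `transform` changes.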

Papers