Representation Invariance
Representation invariance in machine learning concerns models that produce consistent outputs despite task-irrelevant variations in their input data, a property crucial for robust generalization and transfer learning. Current research emphasizes quantifying and enforcing invariance through several techniques: contrastive learning, information-theoretic objectives that minimize the mutual information between representations and nuisance factors, and analysis of neural network architectures for inherent invariance properties. These advances are improving model performance across diverse applications, such as medical image analysis and adversarial robustness, by yielding more reliable and generalizable representations.
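To make the contrastive approach concrete, below is a minimal sketch of an InfoNCE-style objective in PyTorch. The function name `invariance_contrastive_loss` and the toy data are illustrative assumptions, not drawn from any specific paper above; the idea is simply that embeddings of two augmented views of the same input are pulled together while all other pairs are pushed apart, which encourages the representation to be invariant to the augmentation (nuisance) factors.

```python
import torch
import torch.nn.functional as F

def invariance_contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss (a sketch): z1[i] and z2[i] are embeddings of
    two views of the same input. Matching pairs are treated as positives
    (diagonal of the similarity matrix); all other pairs are negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: stand-in "encoder" outputs for two views of a batch of 8 inputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    view1 = torch.randn(8, 32)
    view2 = view1 + 0.05 * torch.randn(8, 32) # lightly perturbed second view
    print(invariance_contrastive_loss(view1, view2).item())
```

Minimizing this loss drives the encoder toward representations whose similarity structure ignores the perturbation, which is one practical way the invariance objectives described above are operationalized.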