Augmentation Consistency

Augmentation consistency aims to improve the robustness and generalization of machine learning models by encouraging consistent predictions across differently augmented versions of the same input. Current research explores several routes to this goal, including contrastive learning, consistency-based losses (e.g., penalizing the Jensen-Shannon divergence between predictions on augmented views), and methods for generating semantically consistent augmentations, as sketched below. By making effective use of unlabeled or sparsely labeled data, the approach has proved valuable across diverse applications, from semi-supervised learning in text and image analysis to recommendation systems and robotic imitation learning.
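
As a concrete illustration of a consistency-based loss, the following is a minimal PyTorch sketch of a Jensen-Shannon consistency objective over one clean and two augmented views, in the style popularized by AugMix. The names `model`, `augment`, `labels`, and `lambda_jsd` are hypothetical placeholders, not taken from any specific paper covered here.

```python
import torch
import torch.nn.functional as F

def jensen_shannon_consistency(logits_clean, logits_aug1, logits_aug2):
    """Jensen-Shannon divergence between the model's predictive
    distributions on a clean input and two augmented views."""
    p_clean = F.softmax(logits_clean, dim=1)
    p_aug1 = F.softmax(logits_aug1, dim=1)
    p_aug2 = F.softmax(logits_aug2, dim=1)

    # Mixture distribution M = (P_clean + P_aug1 + P_aug2) / 3,
    # clamped before the log for numerical stability.
    log_m = torch.clamp((p_clean + p_aug1 + p_aug2) / 3.0, 1e-7, 1.0).log()

    # JSD = mean of KL(P_i || M) over the three views.
    return (F.kl_div(log_m, p_clean, reduction='batchmean') +
            F.kl_div(log_m, p_aug1, reduction='batchmean') +
            F.kl_div(log_m, p_aug2, reduction='batchmean')) / 3.0

# Hypothetical training step: supervised cross-entropy on the clean view
# plus the weighted consistency term. `augment` is any label-preserving
# augmentation; `lambda_jsd` trades off accuracy against consistency.
def training_step(model, x, labels, augment, lambda_jsd=12.0):
    logits = model(torch.cat([x, augment(x), augment(x)]))
    logits_clean, logits_aug1, logits_aug2 = logits.chunk(3)
    return (F.cross_entropy(logits_clean, labels) +
            lambda_jsd * jensen_shannon_consistency(
                logits_clean, logits_aug1, logits_aug2))
```

Because the JSD term is computed purely from model predictions, it needs no labels, which is what lets such losses exploit unlabeled data in semi-supervised settings.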

Papers