Invariant Representation
Invariant representation learning aims to produce data representations that are insensitive to nuisance transformations, such as rotations, translations, or shifts in data distribution, while preserving the information needed for downstream tasks. Current research focuses on algorithms and model architectures (including graph neural networks, convolutional neural networks, and capsule networks) that learn such invariant features, often using techniques like contrastive learning, minimax optimization, and causal inference. The field matters because invariant representations improve the robustness and generalizability of machine learning models, yielding better performance in applications such as object recognition, time-series forecasting, and domain adaptation.
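To make the contrastive-learning route mentioned above concrete, below is a minimal sketch in PyTorch of a SimCLR-style setup: two augmented views of each input are encoded, and an NT-Xent loss pulls the two views of the same sample together while pushing apart all other pairs in the batch. The encoder, augmentation, and names such as SmallEncoder and nt_xent_loss are illustrative assumptions, not the method of any specific paper listed below.

```python
# Illustrative sketch of contrastive invariance learning (SimCLR-style NT-Xent).
# All names here (SmallEncoder, nt_xent_loss, the noise augmentation) are
# hypothetical and chosen only to demonstrate the general technique.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder mapping raw inputs to an embedding space."""
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Two views of the same sample are positives; every other
    embedding in the batch serves as a negative."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d) unit vectors
    sim = z @ z.t() / temperature                       # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-pairs
    # The positive for row i is its other view: i+n for the first half,
    # i-n for the second half.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: treat a random perturbation (here, additive noise) as the nuisance
# transformation the representation should be invariant to.
encoder = SmallEncoder()
x = torch.randn(8, 32)                   # a batch of raw inputs
view1 = x + 0.1 * torch.randn_like(x)    # two "augmented" views of each input
view2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                          # gradients encourage invariance
```

Minimizing this loss drives the encoder to map both views of a sample to nearby embeddings, so the learned representation becomes invariant to the augmentation while remaining discriminative across samples.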
Papers
Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations
Haifeng Li, Jun Cao, Jiawei Zhu, Qinyao Luo, Silu He, Xuyin Wang
Learning Invariant Representation and Risk Minimized for Unsupervised Accent Domain Adaptation
Chendong Zhao, Jianzong Wang, Xiaoyang Qu, Haoqian Wang, Jing Xiao