Contrastive Disentanglement

Contrastive disentanglement aims to learn data representations that separate underlying factors of variation, improving model robustness and generalization. Current research focuses on applying contrastive learning within various architectures, including generative adversarial networks and graph neural networks, to achieve this disentanglement across diverse data modalities (images, sound, graphs). The approach addresses challenges such as domain shift, class imbalance, and spurious correlations, yielding improved performance in tasks like image classification, sound event detection, and clustering, particularly when labeled data is scarce or the data is highly heterogeneous. The resulting disentangled representations also offer enhanced interpretability, with implications for applications such as medical image analysis and fair machine learning.
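To make the general recipe concrete, below is a minimal, illustrative sketch (not any specific paper's method) of how a contrastive objective can be combined with a factorized latent space: an encoder splits its code into "content" and "style" partitions, an InfoNCE-style loss pulls together the content codes of two augmented views of the same input, and a simple cross-correlation penalty discourages the two partitions from sharing information. Names such as `SplitEncoder` and `contrastive_disentangle_loss` are hypothetical.

```python
# Illustrative sketch of contrastive disentanglement in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitEncoder(nn.Module):
    """Encoder whose output is split into content and style factors."""

    def __init__(self, in_dim=784, hidden=256, content_dim=64, style_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, content_dim + style_dim),
        )
        self.content_dim = content_dim

    def forward(self, x):
        z = self.backbone(x)
        # Split the latent code into the two factor groups.
        return z[:, :self.content_dim], z[:, self.content_dim:]


def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss between two batches of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def contrastive_disentangle_loss(encoder, x_view1, x_view2, decorr_weight=1.0):
    """Contrastive loss on content codes plus a decorrelation penalty on style."""
    c1, s1 = encoder(x_view1)
    c2, _ = encoder(x_view2)
    # Content codes of the two views form positive pairs; other samples
    # in the batch act as negatives.
    loss_contrast = info_nce(c1, c2)
    # Penalize cross-correlation between content and style dimensions,
    # a simple proxy for disentangling the two partitions.
    c = F.normalize(c1 - c1.mean(0), dim=0)
    s = F.normalize(s1 - s1.mean(0), dim=0)
    loss_decorr = (c.t() @ s).pow(2).mean()
    return loss_contrast + decorr_weight * loss_decorr


if __name__ == "__main__":
    encoder = SplitEncoder()
    x = torch.randn(32, 784)
    # Two noisy "augmentations" of the same batch stand in for real views.
    loss = contrastive_disentangle_loss(
        encoder,
        x + 0.1 * torch.randn_like(x),
        x + 0.1 * torch.randn_like(x),
    )
    loss.backward()
    print(float(loss))
```

In practice, the contrastive term and the factor-separation term vary by paper (e.g., adversarial critics in GAN-based methods or structure-aware positives in graph settings), but the pattern of pairing a view-invariance objective with an explicit factorization of the latent space is the common thread.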

Papers