Contrastive Masked Autoencoders

Contrastive masked autoencoders (CMAEs) are a class of self-supervised learning models that combine masked image modeling (MIM) with contrastive learning to learn robust feature representations from unlabeled data. Current research focuses on adapting CMAEs to various modalities (images, videos, point clouds, multi-modal data such as ECG and text, and audio-visual data) and tasks (classification, segmentation, retrieval), often incorporating architectural innovations such as hierarchical structures or split latent spaces to improve performance. The approach is particularly valuable in domains with limited labeled data, such as medical image analysis and remote sensing, where CMAEs show significant improvements over prior methods on downstream tasks. The resulting representations strengthen downstream applications including disease diagnosis and ecological mapping.
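The combined objective described above can be made concrete with a small sketch: a masked-patch reconstruction loss (the MIM term) plus an InfoNCE contrastive term between two embedding views, summed with a weighting factor. This is a minimal NumPy illustration of the general loss shape, not any specific paper's implementation; the helper names, the `lam` weight, and the `temperature` value are illustrative assumptions.

```python
import numpy as np

def mask_patches(patches, mask_ratio, rng):
    # Randomly split patch indices into visible and masked sets,
    # as in masked image modeling.
    n = patches.shape[0]
    n_mask = int(n * mask_ratio)
    perm = rng.permutation(n)
    return perm[n_mask:], perm[:n_mask]  # visible indices, masked indices

def info_nce(z1, z2, temperature=0.1):
    # InfoNCE loss: matching rows of z1 and z2 are positive pairs,
    # all other rows in the batch serve as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal.
    return -np.mean(np.diag(log_prob))

def cmae_loss(recon, target, masked_idx, z_online, z_target, lam=0.5):
    # MIM term: MSE computed only on the masked patches.
    rec = np.mean((recon[masked_idx] - target[masked_idx]) ** 2)
    # Contrastive term between the two embedding views.
    con = info_nce(z_online, z_target)
    return rec + lam * con  # lam balances the two objectives (assumed value)
```

In practice the two embedding views typically come from an online encoder and a momentum (or otherwise augmented) branch, and the reconstruction target may be raw pixels or normalized patch statistics; the loss structure, however, follows this two-term pattern.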

Papers