Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling similar data points together and pushing dissimilar ones apart in embedding space. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and explores tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across many applications, particularly in settings with limited annotations or significant domain shift.
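To make the "contrasting similar and dissimilar points" idea concrete, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss used by SimCLR. It assumes two batches of embeddings `z1` and `z2` where row `i` of each is a different augmented view of the same example; all function and variable names are illustrative, not from any specific library.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss for a batch of N positive pairs.

    z1, z2: (N, d) arrays; z1[i] and z2[i] are two views of example i.
    Every other embedding in the 2N batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    n = z1.shape[0]
    # Exclude self-similarity so it is neither a positive nor a negative.
    np.fill_diagonal(sim, -np.inf)
    # The positive for index i is its other view at index i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row's softmax against its positive index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

A quick sanity check of the intended behavior: embeddings whose paired views nearly coincide should incur a lower loss than pairings of unrelated vectors, since the positive similarity then dominates the softmax.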
Papers
Towards Robust Real-Time Scene Text Detection: From Semantic to Instance Representation Learning
ICPC: Instance-Conditioned Prompting with Contrastive Learning for Semantic Segmentation
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Contrastive Bi-Projector for Unsupervised Domain Adaption