Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling similar data points together in an embedding space while pushing dissimilar points apart. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and spans tasks such as object detection, speaker verification, and image dehazing. The approach is significant because it learns effectively from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly when annotated data is scarce or domain shifts are large.
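To make the "contrasting similar and dissimilar points" idea concrete, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective popularized by SimCLR. It assumes two batches of embeddings, `z1` and `z2`, where `z1[i]` and `z2[i]` come from two augmented views of the same example (a positive pair) and every other embedding in the batch acts as a negative; the function name and shapes are illustrative, not from any specific paper above.

```python
import numpy as np

def nt_xent_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """NT-Xent contrastive loss over a batch of positive pairs.

    z1, z2: (N, D) embeddings of two augmented views of the same N examples.
    Returns the mean cross-entropy of picking each anchor's positive partner
    among all 2N - 1 other embeddings in the batch.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = (z @ z.T) / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity

    n = z1.shape[0]
    # z[i]'s positive partner sits n positions away in the concatenated batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # cross-entropy: -log softmax(sim)[i, pos[i]], averaged over all 2N anchors
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float((logsumexp - sim[np.arange(2 * n), pos]).mean())
```

Minimizing this loss increases the similarity of each positive pair relative to all in-batch negatives; the temperature controls how sharply hard negatives are weighted.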
Papers
MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label Framing Detection with Contrastive Learning
Improving Speech Translation by Cross-Modal Multi-Grained Contrastive Learning
SARF: Aliasing Relation Assisted Self-Supervised Learning for Few-shot Relation Reasoning
Domain Generalization for Mammographic Image Analysis with Contrastive Learning
Effective Open Intent Classification with K-center Contrastive Learning and Adjustable Decision Boundary
Video-based Contrastive Learning on Decision Trees: from Action Recognition to Autism Diagnosis
ID-MixGCL: Identity Mixup for Graph Contrastive Learning