Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and with architectures such as MoCo and SimCLR, and explores tasks such as object detection, speaker verification, and image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly when annotated data is scarce or domain shifts are large.
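To make the "contrasting similar and dissimilar data points" idea concrete, here is a minimal NumPy sketch of an InfoNCE-style objective in the spirit of SimCLR's NT-Xent loss. It is an illustrative simplification, not the exact loss from any of the papers below: the function name, shapes, and temperature value are assumptions, and a full SimCLR implementation would also symmetrize the loss over both views.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) arrays where row i of z1 and row i of z2 are
    embeddings of two augmented "views" of the same example (a
    positive pair); every other row in the batch acts as a negative.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    # (N, N) similarity matrix, scaled by the temperature.
    logits = (z1 @ z2.T) / temperature

    # Log-softmax over each row, with the usual max-subtraction
    # trick for numerical stability.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positives sit on the diagonal: view i of z1 should be most
    # similar to view i of z2. Minimizing this pulls positive pairs
    # together and pushes all other (negative) pairs apart.
    return -np.mean(np.diag(log_probs))
```

As a sanity check, the loss should be small when the two views of each example are nearly identical and large when the pairing is scrambled:

```python
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
shuffled = info_nce_loss(z, np.roll(z, 1, axis=0))
# aligned < shuffled: matched views score a lower loss than mismatched ones
```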
Papers
A brief review of contrastive learning applied to astrophysics
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy
Attention Weighted Mixture of Experts with Contrastive Learning for Personalized Ranking in E-commerce
Contrastive Representation Disentanglement for Clustering
CoCo: A Coupled Contrastive Framework for Unsupervised Domain Adaptive Graph Classification
Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning
On the Generalization of Multi-modal Contrastive Learning
ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function
Rethinking Weak Supervision in Helping Contrastive Learning
Systematic Analysis of Music Representations from BERT
Subgraph Networks Based Contrastive Learning
BatchSampler: Sampling Mini-Batches for Contrastive Learning in Vision, Language, and Graphs
Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning
Unraveling Projection Heads in Contrastive Learning: Insights from Expansion and Shrinkage