Unsupervised Contrastive Learning
Unsupervised contrastive learning aims to learn effective data representations without labels by pulling similar data points together in embedding space and pushing dissimilar ones apart. Current research focuses on improving the quality of learned representations through techniques such as hard negative mining, data augmentation (including diffusion-based methods), and novel objectives such as focal-InfoNCE and other variants of the InfoNCE loss. These advances are impacting diverse fields, including image classification, time series analysis, speech processing, and natural language processing, by enabling robust model training with little or no labeled data. The resulting representations show improved performance on downstream tasks, particularly when labeled data are scarce or noisy.
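To make the core idea concrete, below is a minimal sketch of one common InfoNCE formulation (the NT-Xent variant popularized by SimCLR), assuming a setup with two augmented views per example where each embedding's positive is its counterpart view and all other embeddings in the batch act as negatives. The encoder, augmentations, and hyperparameters here are illustrative stand-ins, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z_i: torch.Tensor, z_j: torch.Tensor,
                  temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE over a batch of positive pairs (z_i[k], z_j[k]).

    Each embedding's positive is its augmented counterpart; the
    remaining 2N - 2 embeddings in the batch serve as negatives.
    """
    n = z_i.size(0)
    z = torch.cat([z_i, z_j], dim=0)      # (2N, d)
    z = F.normalize(z, dim=1)             # cosine similarity via dot product
    sim = z @ z.t() / temperature         # (2N, 2N) similarity logits
    # Mask self-similarity so an embedding is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive of index k is index k + N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: two noisy "views" of the same batch, a linear encoder.
    torch.manual_seed(0)
    encoder = torch.nn.Linear(32, 16)      # stand-in for a real encoder
    x = torch.randn(8, 32)
    view_1 = x + 0.1 * torch.randn_like(x)  # stand-in augmentations
    view_2 = x + 0.1 * torch.randn_like(x)
    loss = info_nce_loss(encoder(view_1), encoder(view_2))
    print(f"InfoNCE loss: {loss.item():.4f}")
```

The temperature controls how sharply the loss concentrates on the hardest negatives: lower values weight near-duplicates more heavily, which is the same lever that hard-negative-mining and focal-style reweighting methods adjust more explicitly.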
Papers
Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning
Alexander Jaus, Kailun Yang, Rainer Stiefelhagen
Few-Max: Few-Shot Domain Adaptation for Unsupervised Contrastive Representation Learning
Ali Lotfi Rezaabad, Sidharth Kumar, Sriram Vishwanath, Jonathan I. Tamir