Contrastive Learning Framework
Contrastive learning frameworks aim to learn robust and informative representations by pulling similar (positive) data points together and pushing dissimilar (negative) ones apart. Current research applies this technique across diverse modalities, including text, images, graphs, and multimodal data, often employing variations of contrastive loss functions within neural network architectures. These frameworks improve performance on downstream tasks such as classification, regression, clustering, and recommendation, particularly in scenarios with limited labeled data or complex data structures. The resulting advances have significant implications for fields including natural language processing, computer vision, and recommendation systems.
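The contrastive losses mentioned above are typically variants of InfoNCE: each anchor is scored against one positive and a batch of negatives, and cross-entropy rewards assigning high similarity to the positive. The sketch below is a minimal, illustrative NumPy implementation (function names, shapes, and the temperature value are assumptions, not taken from any paper listed here):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of `positives` is the
    positive for row i of `anchors`; all other rows act as negatives."""
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Pairwise similarity matrix, sharpened by the temperature
    logits = a @ p.T / temperature
    # Cross-entropy with "diagonal" labels (anchor i matches positive i)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# Positives that are slight perturbations of the anchors -> low loss
aligned = info_nce_loss(x, x + 0.01 * rng.normal(size=(8, 16)))
# Unrelated "positives" -> high loss
random_pairs = info_nce_loss(x, rng.normal(size=(8, 16)))
```

When the positive pairs are genuinely similar, the loss approaches zero; for random pairings it stays near log(batch size), which is what drives representations of similar points together during training.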
Papers
DisCo: Graph-Based Disentangled Contrastive Learning for Cold-Start Cross-Domain Recommendation
Hourun Li, Yifan Wang, Zhiping Xiao, Jia Yang, Changling Zhou, Ming Zhang, Wei Ju
Balanced Gradient Sample Retrieval for Enhanced Knowledge Retention in Proxy-based Continual Learning
Hongye Xu, Jan Wasilewski, Bartosz Krawczyk