Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and with architectures such as MoCo and SimCLR, and explores tasks such as object detection, speaker verification, and image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across many applications, particularly when annotated data is scarce or domain shifts are large.
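To make the core idea concrete, below is a minimal sketch of an InfoNCE / NT-Xent style contrastive loss of the kind used by SimCLR-like methods: embeddings of two augmented views of the same sample are pulled together, while all other samples in the batch act as negatives. The function name, batch and embedding sizes, and temperature value are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of two augmented views of the same batch."""
    z_a = F.normalize(z_a, dim=1)            # project embeddings onto the unit hypersphere
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature     # cosine similarities between every pair of views
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # The i-th row of z_a matches the i-th row of z_b (the positive pair);
    # every other row in the batch serves as a negative.
    return F.cross_entropy(logits, targets)

# Usage sketch: encode two augmentations of the same batch with a shared encoder,
# then minimize the loss so positive pairs attract and negatives repel.
z1 = torch.randn(32, 128)
z2 = torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```

In practice, methods such as SimCLR symmetrize this loss over both view orderings and MoCo draws negatives from a momentum-updated queue rather than the current batch; the sketch above only illustrates the shared contrastive objective.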
Papers
Enhancing Emotional Text-to-Speech Controllability with Natural Language Guidance through Contrastive Learning and Diffusion Models
Xin Jing, Kun Zhou, Andreas Triantafyllopoulos, Björn W. Schuller
LAMP: Learnable Meta-Path Guided Adversarial Contrastive Learning for Heterogeneous Graphs
Siqing Li, Jin-Duk Park, Wei Huang, Xin Cao, Won-Yong Shin, Zhiqiang Xu
Contrastive Federated Learning with Tabular Data Silos
Achmad Ginanjar, Xue Li, Wen Hua
Constrained Multi-Layer Contrastive Learning for Implicit Discourse Relationship Recognition
Yiheng Wu, Junhui Li, Muhua Zhu
Fine-Grained Representation Learning via Multi-Level Contrastive Learning without Class Priors
Houwang Jiang, Zhuxian Liu, Guodong Liu, Xiaolong Liu, Shihua Zhan
Dual-stream Feature Augmentation for Domain Generalization
Shanshan Wang, ALuSi, Xun Yang, Ke Xu, Huibin Tan, Xingyi Zhang
Towards Generative Class Prompt Learning for Few-shot Visual Recognition
Soumitri Chattopadhyay, Sanket Biswas, Emanuele Vivoli, Josep Lladós
Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images
Wenlin Li, Yucheng Xu, Xiaoqing Zheng, Suoya Han, Jun Wang, Xiaobo Sun
BEVNav: Robot Autonomous Navigation Via Spatial-Temporal Contrastive Learning in Bird's-Eye View
Jiahao Jiang, Yuxiang Yang, Yingqi Deng, Chenlong Ma, Jing Zhang