Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling similar data points (positives) together and pushing dissimilar ones (negatives) apart in an embedding space. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and in tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it learns effectively from unlabeled or weakly labeled data, improving model generalization and performance, particularly when annotated data is scarce or domain shifts are large.
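To make the core idea concrete, below is a minimal sketch of a normalized temperature-scaled cross-entropy (NT-Xent) contrastive loss in the style used by SimCLR, assuming PyTorch; the function name nt_xent_loss and the toy usage at the bottom are illustrative, not taken from any of the papers listed here.

import torch
import torch.nn.functional as F


def nt_xent_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """SimCLR-style contrastive loss: z_a[i] and z_b[i] are embeddings of two
    augmented views of the same sample (positives); every other pair in the
    batch serves as a negative."""
    batch_size = z_a.shape[0]
    # L2-normalize so the dot product is cosine similarity.
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)       # (2N, D)
    sim = torch.matmul(z, z.T) / temperature                    # (2N, 2N) similarity logits
    # Mask the diagonal so a sample is never contrasted with itself.
    mask = torch.eye(2 * batch_size, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # For row i in [0, N) the positive sits at column i + N, and vice versa.
    targets = torch.cat([
        torch.arange(batch_size, 2 * batch_size),
        torch.arange(0, batch_size),
    ]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: random "embeddings" standing in for two augmented views of 8 samples.
    view_a, view_b = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(view_a, view_b).item())

The temperature hyperparameter controls how sharply the loss concentrates on hard negatives; methods like MoCo keep the same loss form but draw negatives from a momentum-updated queue rather than the current batch.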
Papers
Unsupervised HDR Image and Video Tone Mapping via Contrastive Learning
Cong Cao, Huanjing Yue, Xin Liu, Jingyu Yang
Nearest-Neighbor Inter-Intra Contrastive Learning from Unlabeled Videos
David Fan, Deyu Yang, Xinyu Li, Vimal Bhat, Rohith MV
Twin Contrastive Learning with Noisy Labels
Zhizhong Huang, Junping Zhang, Hongming Shan
Molecular Property Prediction by Semantic-invariant Contrastive Learning
Ziqiao Zhang, Ailin Xie, Jihong Guan, Shuigeng Zhou
Dynamic Clustering and Cluster Contrastive Learning for Unsupervised Person Re-identification
Ziqi He, Mengjia Xue, Yunhao Du, Zhicheng Zhao, Fei Su
Learning Stationary Markov Processes with Contrastive Adjustment
Ludvig Bergenstråhle, Jens Lagergren, Joakim Lundeberg
TQ-Net: Mixed Contrastive Representation Learning For Heterogeneous Test Questions
He Zhu, Xihua Li, Xuemin Zhao, Yunbo Cao, Shan Yu
ESCL: Equivariant Self-Contrastive Learning for Sentence Representations
Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng
Distortion-Disentangled Contrastive Learning
Jinfeng Wang, Sifan Song, Jionglong Su, S. Kevin Zhou
Multi-Stage Coarse-to-Fine Contrastive Learning for Conversation Intent Induction
Caiyuan Chu, Ya Li, Yifan Liu, Jia-Chen Gu, Quan Liu, Yongxin Ge, Guoping Hu