Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and with architectures such as MoCo and SimCLR, and explores tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly when annotated data is scarce or domain shifts are significant.
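The core idea of contrasting similar and dissimilar points is usually implemented with an InfoNCE-style loss, as in SimCLR's NT-Xent: embeddings of two augmented views of the same example (a positive pair) are pulled together, while other examples in the batch act as negatives. The sketch below is a simplified, NumPy-only illustration of that loss (the full SimCLR objective also treats same-view examples as negatives); the function name and defaults are illustrative, not from any specific library.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """Simplified InfoNCE / NT-Xent loss for paired embeddings.

    z_a, z_b: (N, D) arrays where row i of each is one of two augmented
    "views" of the same example (the positive pair); the other N - 1
    rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = (z_a @ z_b.T) / temperature          # (N, N) similarity matrix
    # Cross-entropy where the target for row i is column i (its positive).
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Illustrative usage: aligned views give a lower loss than mismatched ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)                         # perfect positives
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))   # unrelated pairs
```

Minimizing this loss drives each embedding toward its positive and away from in-batch negatives; the temperature controls how sharply hard negatives are weighted.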
Papers
Instances and Labels: Hierarchy-aware Joint Supervised Contrastive Learning for Hierarchical Multi-Label Text Classification
Simon Yu, Jie He, Víctor Gutiérrez-Basulto, Jeff Z. Pan
SemST: Semantically Consistent Multi-Scale Image Translation via Structure-Texture Alignment
Ganning Zhao, Wenhui Cui, Suya You, C.-C. Jay Kuo
Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift
Yihao Xue, Siddharth Joshi, Dang Nguyen, Baharan Mirzasoleiman
An Investigation of Representation and Allocation Harms in Contrastive Learning
Subha Maity, Mayank Agarwal, Mikhail Yurochkin, Yuekai Sun
Towards Distribution-Agnostic Generalized Category Discovery
Jianhong Bai, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, Haoji Hu
STANCE-C3: Domain-adaptive Cross-target Stance Detection via Contrastive Learning and Counterfactual Generation
Nayoung Kim, David Mosallanezhad, Lu Cheng, Michelle V. Mancenido, Huan Liu
Contrastive Continual Multi-view Clustering with Filtered Structural Fusion
Xinhang Wan, Jiyuan Liu, Hao Yu, Ao Li, Xinwang Liu, Ke Liang, Zhibin Dong, En Zhu
Pre-training-free Image Manipulation Localization through Non-Mutually Exclusive Contrastive Learning
Jizhe Zhou, Xiaochen Ma, Xia Du, Ahmed Y. Alhammadi, Wentao Feng