Contrastive Learning
Contrastive learning is a self-supervised technique that learns robust data representations by pulling similar data points together in embedding space while pushing dissimilar ones apart. Current research applies it across diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and to tasks ranging from object detection and speaker verification to image dehazing. The approach matters because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance, particularly when annotated data is scarce or domain shift is significant.
Papers
Commonsense Knowledge Graph Completion Via Contrastive Pretraining and Node Clustering
Siwei Wu, Xiangqing Shen, Rui Xia
Three Towers: Flexible Contrastive Learning with Pretrained Image Models
Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank
Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Kai Chen, Rui Yan
ReConPatch: Contrastive Patch Representation Learning for Industrial Anomaly Detection
Jeeho Hyun, Sangyun Kim, Giyoung Jeon, Seung Hwan Kim, Kyunghoon Bae, Byung Jun Kang
Balanced Supervised Contrastive Learning for Few-Shot Class-Incremental Learning
In-Ug Yoon, Tae-Min Choi, Young-Min Kim, Jong-Hwan Kim
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, Baharan Mirzasoleiman
Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization
Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, Kun Kuang
Clinically Labeled Contrastive Learning for OCT Biomarker Classification
Kiran Kokilepersaud, Stephanie Trejo Corona, Mohit Prabhushankar, Ghassan AlRegib, Charles Wykoff
Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models
Geewook Kim, Hodong Lee, Daehee Kim, Haeji Jung, Sanghee Park, Yoonsik Kim, Sangdoo Yun, Taeho Kil, Bado Lee, Seunghyun Park
Contrastive Learning of Sentence Embeddings from Scratch
Junlei Zhang, Zhenzhong Lan, Junxian He
PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification
Yau-Shian Wang, Ta-Chung Chi, Ruohong Zhang, Yiming Yang
Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings
William Brannon, Suyash Fulay, Hang Jiang, Wonjune Kang, Brandon Roy, Jad Kabbara, Deb Roy
Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding
Yu Zhang, Hao Cheng, Zhihong Shen, Xiaodong Liu, Ye-Yi Wang, Jianfeng Gao
Federated Generalized Category Discovery
Nan Pu, Zhun Zhong, Xinyuan Ji, Nicu Sebe
Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification
Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun
Temporal Contrastive Learning for Spiking Neural Networks
Haonan Qiu, Zeyin Song, Yanqi Chen, Munan Ning, Wei Fang, Tao Sun, Zhengyu Ma, Li Yuan, Yonghong Tian
Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training
Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver
Optimizing Non-Autoregressive Transformers with Contrastive Learning
Chenxin An, Jiangtao Feng, Fei Huang, Xipeng Qiu, Lingpeng Kong