Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling similar data points together in embedding space and pushing dissimilar ones apart. Current research applies contrastive learning to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and explores tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly in scenarios with limited annotated data or significant domain shift.
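To make the core idea concrete, below is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss used in SimCLR, one of the architectures mentioned above. Each example in a batch is augmented twice; the two views of the same example form a positive pair, and every other example in the batch serves as a negative. The function name and temperature default here are illustrative, not drawn from any particular library.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (minimal sketch).

    z1, z2: embeddings of two augmented views of the same batch,
    each of shape (N, d). Row i of z1 and row i of z2 are a
    positive pair; all other rows act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = (z @ z.T) / temperature                      # cosine similarity logits
    np.fill_diagonal(sim, -np.inf)                     # exclude self-comparisons

    # The positive for row i is its other view: i+N for the first half,
    # i-N for the second half.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy against the positive: -log softmax(sim)[i, pos[i]].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

As expected for a contrastive objective, the loss is lower when the two views of each example agree (positives are close) than when the pairing is scrambled (positives are far apart).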
Papers
DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph
Kaike Zhang, Qi Cao, Gaolin Fang, Bingbing Xu, Hongjian Zou, Huawei Shen, Xueqi Cheng
CPL: Counterfactual Prompt Learning for Vision and Language Models
Xuehai He, Diji Yang, Weixi Feng, Tsu-Jui Fu, Arjun Akula, Varun Jampani, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang
MedCLIP: Contrastive Learning from Unpaired Medical Images and Text
Zifeng Wang, Zhenbang Wu, Dinesh Agarwal, Jimeng Sun
Unsupervised visualization of image datasets using contrastive learning
Jan Niklas Böhm, Philipp Berens, Dmitry Kobak
Universal hidden monotonic trend estimation with contrastive learning
Edouard Pineau, Sébastien Razakarivony, Mauricio Gonzalez, Anthony Schrapffer
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Kangning Liu, Weicheng Zhu, Yiqiu Shen, Sheng Liu, Narges Razavian, Krzysztof J. Geras, Carlos Fernandez-Granda
Improving Contrastive Learning on Visually Homogeneous Mars Rover Images
Isaac Ronald Ward, Charles Moore, Kai Pak, Jingdao Chen, Edwin Goh
Mars: Modeling Context & State Representations with Contrastive Learning for End-to-End Task-Oriented Dialog
Haipeng Sun, Junwei Bao, Youzheng Wu, Xiaodong He
HCL-TAT: A Hybrid Contrastive Learning Method for Few-shot Event Detection with Task-Adaptive Threshold
Ruihan Zhang, Wei Wei, Xian-Ling Mao, Rui Fang, Dangyang Chen
Unifying Graph Contrastive Learning with Flexible Contextual Scopes
Yizhen Zheng, Yu Zheng, Xiaofei Zhou, Chen Gong, Vincent CS Lee, Shirui Pan
Supervised Prototypical Contrastive Learning for Emotion Recognition in Conversation
Xiaohui Song, Longtao Huang, Hui Xue, Songlin Hu
Invariance-adapted decomposition and Lasso-type contrastive learning
Masanori Koyama, Takeru Miyato, Kenji Fukumizu
LEAVES: Learning Views for Time-Series Data in Contrastive Learning
Han Yu, Huiyuan Yang, Akane Sano
Closed-book Question Generation via Contrastive Learning
Xiangjue Dong, Jiaying Lu, Jianling Wang, James Caverlee
Language Agnostic Multilingual Information Retrieval with Contrastive Learning
Xiyang Hu, Xinchi Chen, Peng Qi, Deguang Kong, Kunlun Liu, William Yang Wang, Zhiheng Huang
Self-Attention Message Passing for Contrastive Few-Shot Learning
Ojas Kishorkumar Shirekar, Anuj Singh, Hadi Jamali-Rad
Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in RL
Chen Sun, Wannan Yang, Thomas Jiralerspong, Dane Malenfant, Benjamin Alsbury-Nealy, Yoshua Bengio, Blake Richards