Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points. Current research applies it across diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and explores tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly in scenarios with limited annotated data or significant domain shifts.
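The core idea of pulling similar pairs together and pushing dissimilar pairs apart can be made concrete with the NT-Xent loss used by SimCLR. The sketch below is a minimal NumPy illustration (function name and temperature value are chosen for the example, not taken from any of the papers listed here):

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z_a, z_b: (n, d) embeddings of two augmented views of the same n samples.
    Each row of z_a is a positive pair with the matching row of z_b; every
    other embedding in the batch serves as a negative.
    """
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)             # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = (z @ z.T) / temperature                      # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for index i is its other view at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: -log softmax probability of the positive pair.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Identical views yield a lower loss than unrelated ones, since each positive pair then has the maximum possible similarity; the loss is always strictly positive because the softmax probability of the positive can never reach 1 while negatives are present.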
Papers
Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations
Sihao Chen, Hongming Zhang, Tong Chen, Ben Zhou, Wenhao Yu, Dian Yu, Baolin Peng, Hongwei Wang, Dan Roth, Dong Yu
Sparse Contrastive Learning of Sentence Embeddings
Ruize An, Chen Zhang, Dawei Song
Counterfactual Data Augmentation with Contrastive Learning
Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
FLAP: Fast Language-Audio Pre-training
Ching-Feng Yeh, Po-Yao Huang, Vasu Sharma, Shang-Wen Li, Gargi Gosh
Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration
Yifan Xie, Jihua Zhu, Shiqi Li, Pengcheng Shi
AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning
Mohammadamin Tavakoli, Yin Ting T. Chiu, Alexander Shmakov, Ann Marie Carlton, David Van Vranken, Pierre Baldi
FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space
Shengzhong Liu, Tomoyoshi Kimura, Dongxin Liu, Ruijie Wang, Jinyang Li, Suhas Diggavi, Mani Srivastava, Tarek Abdelzaher
Bidirectional Captioning for Clinically Accurate and Interpretable Models
Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland