Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points: embeddings of related samples (for example, two augmented views of the same image) are pulled together, while embeddings of unrelated samples are pushed apart. Current research applies contrastive learning across diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and using methods such as MoCo and SimCLR, and explores tasks such as object detection, speaker verification, and image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly where annotated data are scarce or domain shifts are large.
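To make the pull-together/push-apart idea concrete, below is a minimal sketch in PyTorch of a SimCLR-style NT-Xent (InfoNCE) loss, the kind of objective the summary above refers to. It assumes two augmented views of the same batch have already been encoded; matching rows are positives and all other rows act as negatives. The function name nt_xent_loss and the temperature default are illustrative assumptions, not taken from any paper listed below.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    # z1, z2: (n, d) embeddings of two augmented views of the same n samples.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d) unit vectors
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # Row i's positive is row i+n (and vice versa): the other view of sample i.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    # Cross-entropy over similarities pulls positives together and
    # pushes the remaining 2n-2 in-batch negatives apart.
    return F.cross_entropy(sim, targets)

# Usage sketch: z1, z2 = encoder(augment(x)), encoder(augment(x))
#               loss = nt_xent_loss(z1, z2)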
Papers
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
Julien Denize, Jaonary Rabarisoa, Astrid Orcesi, Romain Hérault
ALCAP: Alignment-Augmented Music Captioner
Zihao He, Weituo Hao, Wei-Tsung Lu, Changyou Chen, Kristina Lerman, Xuchen Song
MoQuad: Motion-focused Quadruple Construction for Video Contrastive Learning
Yuan Liu, Jiacheng Chen, Hao Wu
Continual Contrastive Finetuning Improves Low-Resource Relation Extraction
Wenxuan Zhou, Sheng Zhang, Tristan Naumann, Muhao Chen, Hoifung Poon
Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval
John Wieting, Jonathan H. Clark, William W. Cohen, Graham Neubig, Taylor Berg-Kirkpatrick
Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding
Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu
WACO: Word-Aligned Contrastive Learning for Speech Translation
Siqi Ouyang, Rong Ye, Lei Li