Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by contrasting similar and dissimilar data points. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and with architectures such as MoCo and SimCLR, and extends it to tasks such as object detection, speaker verification, and image dehazing. The approach is significant because it enables effective learning from unlabeled or weakly labeled data, improving model generalization and performance across many applications, particularly when annotated data is scarce or domain shifts are large.
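To make the core idea concrete, below is a minimal sketch of a SimCLR-style NT-Xent (InfoNCE) objective in PyTorch: embeddings of two augmented views of the same sample are pulled together while all other pairs in the batch act as negatives. The function name nt_xent_loss, the temperature value, and the random tensors standing in for encoder outputs are illustrative assumptions, not code from any of the listed papers.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: embeddings of two augmented views of the same batch,
    each of shape (batch_size, dim).
    """
    batch_size = z1.shape[0]
    # L2-normalize so dot products are cosine similarities
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, dim)
    sim = (z @ z.T) / temperature                              # (2N, 2N)
    # Exclude self-similarity on the diagonal
    mask = torch.eye(2 * batch_size, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # For sample i, the positive is its other view at index i + N (or i - N)
    targets = torch.cat([torch.arange(batch_size) + batch_size,
                         torch.arange(batch_size)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    view1 = torch.randn(8, 128)   # stand-in for encoder output on view 1
    view2 = torch.randn(8, 128)   # stand-in for encoder output on view 2
    print(nt_xent_loss(view1, view2).item())

In practice the two views come from random augmentations of the same input passed through a shared encoder and projection head; the loss above is what pulls matching views together and pushes apart the rest of the batch.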
Papers
Multimodal Representation Learning using Adaptive Graph Construction
Weichen Huang
Contrastive Learning to Fine-Tune Feature Extraction Models for the Visual Cortex
Alex Mulrooney, Austin J. Brockmeier
ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning
Shiguang Wu, Yaqing Wang, Yatao Bian, Quanming Yao
FGCL: Fine-grained Contrastive Learning For Mandarin Stuttering Event Detection
Han Jiang, Wenyu Wang, Yiquan Zhou, Hongwu Ding, Jiacheng Xu, Jihua Zhu
Improving Object Detection via Local-global Contrastive Learning
Danai Triantafyllidou, Sarah Parisot, Ales Leonardis, Steven McDonagh
Improving Speaker Representations Using Contrastive Losses on Multi-scale Features
Satvik Dixit, Massa Baali, Rita Singh, Bhiksha Raj
WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning
Divine Joseph Appiah, Donghai Guan, Abdul Nasser Kasule, Mingqiang Wei
Contrastive Learning to Improve Retrieval for Real-world Fact Checking
Aniruddh Sriram, Fangyuan Xu, Eunsol Choi, Greg Durrett
Contrastive Abstraction for Reinforcement Learning
Vihang Patil, Markus Hofmarcher, Elisabeth Rumetshofer, Sepp Hochreiter
Decoding Emotions: Unveiling Facial Expressions through Acoustic Sensing with Contrastive Attention
Guangjing Wang, Juexing Wang, Ce Zhou, Weikang Ding, Huacheng Zeng, Tianxing Li, Qiben Yan
Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
Raja Kumar, Raghav Singhal, Pranamya Kulkarni, Deval Mehta, Kshitij Jadhav
TA-Cleaner: A Fine-grained Text Alignment Backdoor Defense Strategy for Multimodal Contrastive Learning
Yuan Xun, Siyuan Liang, Xiaojun Jia, Xinwei Liu, Xiaochun Cao
Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs
Mattia Segu, Luigi Piccinelli, Siyuan Li, Luc Van Gool, Fisher Yu, Bernt Schiele
DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data
Lucas Robinet, Ahmad Berjaoui, Ziad Kheil, Elizabeth Cohen-Jonathan Moyal
Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data
Kota Dohi, Aoi Ito, Harsh Purohit, Tomoya Nishida, Takashi Endo, Yohei Kawaguchi