Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling similar data points (positives) together and pushing dissimilar ones (negatives) apart in an embedding space. Current research applies it across diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and in tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it learns effectively from unlabeled or weakly labeled data, improving model generalization and performance, particularly in scenarios with limited annotated data or significant domain shifts.
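The contrast between similar and dissimilar pairs is typically implemented with an InfoNCE-style objective, as used by SimCLR's NT-Xent loss: each example's augmented view is treated as its positive, and all other examples in the batch act as negatives. The sketch below is a minimal, illustrative NumPy implementation under that batch-negatives assumption (function name and signature are ours, not from any specific paper):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """Illustrative InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (batch, dim) embeddings of two augmented views of the same
    inputs; row k of z_i and row k of z_j form a positive pair, and every
    other row in the combined batch serves as a negative.
    """
    # L2-normalize so dot products become cosine similarities.
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)

    z = np.concatenate([z_i, z_j], axis=0)      # (2B, dim)
    sim = z @ z.T / temperature                 # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity

    B = z_i.shape[0]
    # Index of each row's positive partner: row k pairs with row k + B.
    pos = np.concatenate([np.arange(B, 2 * B), np.arange(B)])

    # Cross-entropy of the positive against all candidates in the batch.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * B), pos] - logsumexp)
    return loss.mean()
```

When the two views are well aligned the loss is small; when positives are mismatched (e.g. the second view is shuffled against the first), the loss grows, which is the signal that drives representation learning.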
Papers
PairCFR: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning
Xiaoqi Qiu, Yongjie Wang, Xu Guo, Zhiwei Zeng, Yue Yu, Yuhong Feng, Chunyan Miao
Anomaly Multi-classification in Industrial Scenarios: Transferring Few-shot Learning to a New Task
Jie Liu, Yao Wu, Xiaotong Luo, Zongze Wu
Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language
Mark Hamilton, Andrew Zisserman, John R. Hershey, William T. Freeman
One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models
Hao Fang, Jiawei Kong, Wenbo Yu, Bin Chen, Jiawei Li, Shutao Xia, Ke Xu
Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated ReLU and Smooth K2 Loss
Bowen Zhang, Chunping Li
Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing
Yihan Wang, Yiwei Lu, Guojun Zhang, Franziska Boenisch, Adam Dziedzic, Yaoliang Yu, Xiao-Shan Gao
Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning
Chi-Sheng Chen, Chun-Shu Wei
CSS: Contrastive Semantic Similarity for Uncertainty Quantification of LLMs
Shuang Ao, Stefan Rueger, Advaith Siddharthan
RevRIR: Joint Reverberant Speech and Room Impulse Response Embedding using Contrastive Learning with Application to Room Shape Classification
Jacob Bitterman, Daniel Levi, Hilel Hagai Diamandi, Sharon Gannot, Tal Rosenwein
MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning
Shay Deutsch, Lionel Yelibi, Alex Tong Lin, Arjun Ravi Kannan
SMCL: Saliency Masked Contrastive Learning for Long-tailed Recognition
Sanglee Park, Seung-won Hwang, Jungmin So
No Captions, No Problem: Captionless 3D-CLIP Alignment with Hard Negatives via CLIP Knowledge and LLMs
Cristian Sbrolli, Matteo Matteucci
RAG-based Crowdsourcing Task Decomposition via Masked Contrastive Learning with Prompts
Jing Yang, Xiao Wang, Yu Zhao, Yuhang Liu, Fei-Yue Wang
Negative Prototypes Guided Contrastive Learning for WSOD
Yu Zhang, Chuang Zhu, Guoqing Yang, Siqi Chen
Contrastive Language Video Time Pre-training
Hengyue Liu, Kyle Min, Hector A. Valdez, Subarna Tripathi