Contrastive Loss
Contrastive loss is a representation-learning objective that pulls embeddings of similar data points (e.g., images of the same object) together while pushing embeddings of dissimilar points apart. Current research focuses on refining contrastive loss functions, often by adding constraints such as margins, integrating them with other learning paradigms like self-supervised and semi-supervised learning, and applying them to architectures including transformers and autoencoders. The approach has proven effective across diverse applications, including image classification, speaker verification, and graph anomaly detection, yielding gains in accuracy and robustness.
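As a concrete illustration of the pull-together/push-apart behavior described above, here is a minimal PyTorch sketch of the classic pairwise contrastive loss (Hadsell et al., 2006). The function name, margin default, and tensor shapes are illustrative assumptions, not drawn from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, label, margin=1.0):
    """Pairwise contrastive loss (Hadsell et al., 2006).

    z1, z2: embedding batches of shape (N, D)
    label:  1.0 for similar pairs, 0.0 for dissimilar pairs, shape (N,)
    margin: illustrative default; dissimilar pairs are only penalized
            while their distance is below this margin
    """
    dist = F.pairwise_distance(z1, z2)                 # Euclidean distance per pair
    pos = label * dist.pow(2)                          # pull similar pairs together
    neg = (1 - label) * F.relu(margin - dist).pow(2)   # push dissimilar pairs apart, up to the margin
    return (pos + neg).mean()

# Example usage with random embeddings (for illustration only):
# z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
# y = torch.randint(0, 2, (8,)).float()
# loss = contrastive_loss(z1, z2, y)
```

Variants such as the additive-margin and mini-batch formulations studied in the papers below modify this basic objective, e.g., by reshaping the margin term or by choosing which pairs appear together in a batch.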
Papers
Prototypical Contrastive Transfer Learning for Multimodal Language Understanding
Seitaro Otsuki, Shintaro Ishikawa, Komei Sugiura
Mini-Batch Optimization of Contrastive Loss
Jaewoong Cho, Kartik Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
Experimenting with Additive Margins for Contrastive Self-Supervised Speaker Verification
Theo Lepage, Reda Dehak
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach
Min Yan, Qianxiong Ning, Qian Wang
Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning
Chujie Zheng, Pei Ke, Zheng Zhang, Minlie Huang