Contrastive Loss
Contrastive loss is a loss function for representation learning: it trains a model to map similar data points (e.g., images of the same object) to nearby embeddings while pushing dissimilar points apart. Current research focuses on refining contrastive objectives, often adding constraints or combining them with other learning paradigms such as self-supervised and semi-supervised learning, and on applying them across architectures including transformers and autoencoders. The approach has proven effective in diverse applications, including image classification, speaker verification, and graph anomaly detection, improving both accuracy and robustness.
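To make the pull-together/push-apart idea concrete, here is a minimal sketch of the classic pairwise contrastive loss (Hadsell et al., 2006) in PyTorch; the function name `contrastive_loss` and the `margin` value are illustrative choices, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, label, margin=1.0):
    """Pairwise contrastive loss (Hadsell et al., 2006).

    z1, z2: (batch, dim) embeddings of the two inputs in each pair.
    label:  (batch,) 1 if the pair is similar, 0 if dissimilar.
    Similar pairs are pulled together; dissimilar pairs are pushed
    apart until they are at least `margin` apart.
    """
    label = label.float()
    d = F.pairwise_distance(z1, z2)              # Euclidean distance per pair
    pull = label * d.pow(2)                      # similar: penalize distance
    push = (1 - label) * F.relu(margin - d).pow(2)  # dissimilar: penalize closeness
    return 0.5 * (pull + push).mean()

# Toy usage with random embeddings (illustrative only)
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)
label = torch.randint(0, 2, (8,))  # 1 = similar pair, 0 = dissimilar
print(contrastive_loss(z1, z2, label))
```

Modern self-supervised variants such as InfoNCE/NT-Xent replace this margin-based pairwise form with a softmax over one positive and many negatives, but the underlying objective is the same.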
Papers
Single-Stream Multi-Level Alignment for Vision-Language Pretraining
Zaid Khan, Vijay Kumar BG, Xiang Yu, Samuel Schulter, Manmohan Chandraker, Yun Fu
CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning
Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain
Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition
Zhisheng Zhong, Jiequan Cui, Zeming Li, Eric Lo, Jian Sun, Jiaya Jia