Contrastive Loss
Contrastive loss is a training objective that learns representations by pulling embeddings of similar data points (e.g., images of the same object) together while pushing embeddings of dissimilar points apart. Current research focuses on refining contrastive loss functions, often by adding constraints or combining them with other learning paradigms such as self-supervised and semi-supervised learning, and on applying them to various architectures, including transformers and autoencoders. The approach has proven effective across diverse applications, including image classification, speaker verification, and graph anomaly detection, improving accuracy and robustness on many machine learning tasks.
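As a concrete illustration of the pull-together/push-apart behavior described above, here is a minimal sketch of the classic pairwise (margin-based) contrastive loss in NumPy. The function name and the `margin` default are illustrative choices, not from any specific paper listed below: similar pairs are penalized by their squared distance, while dissimilar pairs incur a penalty only when they are closer than the margin.

```python
import numpy as np

def contrastive_loss(z1, z2, y, margin=1.0):
    """Pairwise margin-based contrastive loss (illustrative sketch).

    z1, z2 : (n, d) arrays of paired embeddings.
    y      : (n,) array, 1 for similar pairs, 0 for dissimilar pairs.
    """
    d = np.linalg.norm(z1 - z2, axis=1)              # Euclidean distance per pair
    pos = y * d ** 2                                 # pull similar pairs together
    neg = (1 - y) * np.maximum(0.0, margin - d) ** 2  # push dissimilar pairs past the margin
    return np.mean(pos + neg)
```

Note the asymmetry: a dissimilar pair already separated by more than `margin` contributes zero loss, so the model is not rewarded for pushing already-distant points arbitrarily far apart.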
Papers
Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation
Junjie Huang, Qi Cao, Ruobing Xie, Shaoliang Zhang, Feng Xia, Huawei Shen, Xueqi Cheng
Spatiotemporal Decouple-and-Squeeze Contrastive Learning for Semi-Supervised Skeleton-based Action Recognition
Binqian Xu, Xiangbo Shu
MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain Adaptation for Breast MRI Segmentation in Small Datasets
Sheng Kuang, Henry C. Woodruff, Renee Granzier, Thiemo J. A. van Nijnatten, Marc B. I. Lobbes, Marjolein L. Smidt, Philippe Lambin, Siamak Mehrkanoon
Attribute-Centric Compositional Text-to-Image Generation
Yuren Cong, Martin Renqiang Min, Li Erran Li, Bodo Rosenhahn, Michael Ying Yang