Contrastive Loss
Contrastive loss is a training objective that learns representations by maximizing the similarity between related data points (e.g., images of the same object) while minimizing the similarity between unrelated points. Current research focuses on refining contrastive loss functions, often by incorporating additional constraints or integrating them with other learning paradigms such as self-supervised and semi-supervised learning, and on applying them to a range of architectures, including transformers and autoencoders. The approach has proven effective across diverse applications, including image classification, speaker verification, and graph anomaly detection, improving accuracy and robustness on many machine learning tasks.
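To make the idea concrete, here is a minimal sketch of one common contrastive objective, an InfoNCE-style loss, written in NumPy. This is an illustrative implementation, not the specific loss from any paper listed below: each row of `z_a` is paired with the same row of `z_b` as a positive, and all other rows in the batch act as negatives. The function name and the temperature value are choices made for this example.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss for paired embeddings.

    z_a, z_b: (N, D) arrays where row i of z_a and row i of z_b
    form a positive pair; every other row of z_b is a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    # All pairwise similarities, scaled by the temperature.
    logits = z_a @ z_b.T / temperature              # shape (N, N)
    # Cross-entropy with the diagonal (the true pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
# Positives: slightly perturbed copies of the anchors.
positives = anchors + 0.05 * rng.normal(size=(8, 16))
loss_matched = info_nce_loss(anchors, positives)
# Mismatched "positives": independent random vectors.
loss_random = info_nce_loss(anchors, rng.normal(size=(8, 16)))
print(loss_matched, loss_random)  # matched pairs should yield a lower loss
```

Minimizing this loss drives matching pairs toward high cosine similarity relative to all in-batch negatives, which is the "pull positives together, push negatives apart" behavior described above.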
Papers
What Has Been Overlooked in Contrastive Source-Free Domain Adaptation: Leveraging Source-Informed Latent Augmentation within Neighborhood Context
Jing Wang, Wonho Bae, Jiahong Chen, Kuangen Zhang, Leonid Sigal, Clarence W. de Silva
Temporally Consistent Object-Centric Learning by Contrasting Slots
Anna Manasyan, Maximilian Seitzer, Filip Radovic, Georg Martius, Andrii Zadaianchuk
Energy-Based Preference Model Offers Better Offline Alignment than the Bradley-Terry Preference Model
Yuzhong Hong, Hanshan Zhang, Junwei Bao, Hongfei Jiang, Yang Song
On the Utilization of Unique Node Identifiers in Graph Neural Networks
Maya Bechler-Speicher, Moshe Eliasof, Carola-Bibiane Schönlieb, Ran Gilad-Bachrach, Amir Globerson
GraphVL: Graph-Enhanced Semantic Modeling via Vision-Language Models for Generalized Class Discovery
Bhupendra Solanki, Ashwin Nair, Mainak Singha, Souradeep Mukhopadhyay, Ankit Jha, Biplab Banerjee