Supervised Contrastive Loss

Supervised contrastive loss is a training objective that improves model performance by learning representations in which examples of the same class lie closer together than examples of different classes. Current research focuses on making it robust to noisy labels, long-tailed data distributions, and class imbalance, often by combining it with other loss functions or pairing it with techniques such as data augmentation and class-aware attention mechanisms across a range of architectures, including autoencoders, graph convolutional networks, and transformers. The approach has yielded significant improvements in diverse applications such as image classification, speech emotion recognition, and recommendation systems, highlighting its value for learning effective, generalizable representations from complex data.
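
The formulation popularized by Khosla et al. (2020) treats every same-class sample in a batch as a positive for a given anchor and every other sample as a negative, with similarities scaled by a temperature. The sketch below is a minimal PyTorch implementation under that assumption; the function name, temperature value, and toy batch sizes are illustrative rather than taken from any particular paper's code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive (SupCon-style) loss.

    features: (N, D) embeddings, assumed to be L2-normalized.
    labels:   (N,) integer class labels.
    """
    n = features.shape[0]
    device = features.device

    # Pairwise similarities scaled by temperature.
    logits = features @ features.T / temperature
    # Subtract the per-row max for numerical stability.
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()

    # Positive pairs share a label; self-comparisons are excluded.
    same_class = (labels.view(-1, 1) == labels.view(1, -1)).float().to(device)
    self_mask = torch.eye(n, device=device)
    positive_mask = same_class - self_mask

    # Log-softmax over all other samples in the batch (denominator excludes self).
    exp_logits = torch.exp(logits) * (1.0 - self_mask)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    # Average the log-probability over each anchor's positives,
    # keeping only anchors that have at least one positive in the batch.
    pos_per_anchor = positive_mask.sum(dim=1)
    mean_log_prob_pos = (positive_mask * log_prob).sum(dim=1) / pos_per_anchor.clamp(min=1)
    valid = pos_per_anchor > 0
    return -mean_log_prob_pos[valid].mean()

# Toy usage: 8 normalized embeddings of dimension 128 drawn from 3 classes.
embeddings = F.normalize(torch.randn(8, 128), dim=1)
labels = torch.randint(0, 3, (8,))
loss = supervised_contrastive_loss(embeddings, labels)
```

In practice the embeddings are the output of a projection head and are L2-normalized before the loss is computed, so the dot products above act as cosine similarities.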

Papers