Contrastive Loss
Contrastive loss is a training objective for representation learning: it learns embeddings that maximize the similarity between similar data points (e.g., images of the same object) while minimizing the similarity between dissimilar ones. Current research focuses on refining contrastive loss functions, often by adding constraints or integrating them with other learning paradigms such as self-supervised and semi-supervised learning, and on applying them to architectures including transformers and autoencoders. The approach has proven effective across diverse applications, including image classification, speaker verification, and graph anomaly detection, improving both accuracy and robustness.
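To make the idea concrete, below is a minimal sketch of one common instantiation, the in-batch InfoNCE (NT-Xent) loss, written in PyTorch. It is illustrative rather than taken from any of the listed papers: the function name info_nce_loss, the default temperature of 0.1, and the convention that row i of each batch forms a positive pair are assumptions for this example.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """In-batch contrastive (InfoNCE) loss, a hypothetical minimal example.

    z_a, z_b: (batch, dim) embeddings of two views of the same examples.
    Row i of z_a and row i of z_b form a positive pair; every other row
    in the batch serves as a negative.
    """
    # Unit-normalize so the dot product equals cosine similarity.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # (batch, batch) similarity matrix; positives lie on the diagonal.
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrize: treat z_a -> z_b and z_b -> z_a retrieval equally.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy usage: 8 positive pairs of 64-dimensional embeddings.
if __name__ == "__main__":
    a, b = torch.randn(8, 64), torch.randn(8, 64)
    print(info_nce_loss(a, b).item())
```

The cross-entropy over the similarity matrix is what simultaneously pulls each positive pair together (the diagonal) and pushes the in-batch negatives apart (the off-diagonal entries); the temperature scales how sharply hard negatives are penalized, which is exactly the knob that work on automatic temperature individualization (listed below) seeks to tune per sample.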
Papers
Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training
Yibin Lei, Liang Ding, Yu Cao, Changtong Zan, Andrew Yates, Dacheng Tao
SamToNe: Improving Contrastive Loss for Dual Encoder Retrieval Models with Same Tower Negatives
Fedor Moiseev, Gustavo Hernandez Abrego, Peter Dornbach, Imed Zitouni, Enrique Alfonseca, Zhe Dong
Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion
Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang
Joint Generative-Contrastive Representation Learning for Anomalous Sound Detection
Xiao-Min Zeng, Yan Song, Zhu Zhuo, Yu Zhou, Yu-Hong Li, Hui Xue, Li-Rong Dai, Ian McLoughlin
Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization
Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang
Towards understanding neural collapse in supervised contrastive learning with the information bottleneck method
Siwei Wang, Stephanie E Palmer