Paper ID: 2305.10229

How does Contrastive Learning Organize Images?

Yunzhe Zhang, Yao Lu, Qi Xuan

Contrastive learning, a dominant self-supervised technique, pulls together the representations of augmentations of the same input while pushing apart those of different inputs. Although low contrastive loss often correlates with high classification accuracy, recent studies challenge this direct relationship, spotlighting the crucial role of inductive biases. We examine these biases from a clustering viewpoint and observe that contrastive learning produces locally dense clusters, in contrast to the globally dense clusters produced by supervised learning. To quantify this discrepancy, we introduce the Relative Local Density (RLD) metric. While this clustering property can hinder linear classification accuracy, a Graph Convolutional Network (GCN)-based classifier mitigates the issue, improving accuracy while requiring fewer parameters. The code is available \href{https://github.com/xsgxlz/How-does-Contrastive-Learning-Organize-Images/tree/main}{here}.

Submitted: May 17, 2023
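
To make the GCN-based classification idea concrete, below is a minimal, hypothetical sketch of a two-layer GCN head applied to frozen contrastive embeddings over a cosine-similarity k-NN graph. The graph construction, layer sizes, and hyperparameters here are illustrative assumptions, not the configuration used in the paper; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_graph(features: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Build a symmetrically normalized k-NN adjacency matrix from embeddings."""
    z = F.normalize(features, dim=1)          # cosine similarity between all pairs
    sim = z @ z.t()
    _, idx = sim.topk(k + 1, dim=1)           # +1: each point is its own nearest neighbour
    adj = torch.zeros_like(sim)
    adj.scatter_(1, idx, 1.0)
    adj = ((adj + adj.t()) > 0).float()       # symmetrize
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)     # D^{-1/2} A D^{-1/2} normalization
    return adj * d_inv_sqrt.unsqueeze(1) * d_inv_sqrt.unsqueeze(0)

class GCNClassifier(nn.Module):
    """Two-layer GCN head on top of frozen contrastive features (illustrative sizes)."""
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Each layer aggregates over the normalized k-NN graph, then transforms
        h = F.relu(self.fc1(adj @ x))
        return self.fc2(adj @ h)

# Usage: classify N frozen embeddings of dimension D into C classes
N, D, C = 512, 128, 10
feats = torch.randn(N, D)                     # stand-in for contrastive representations
adj = knn_graph(feats, k=10)
logits = GCNClassifier(D, 64, C)(feats, adj)
```

Because the GCN aggregates information from each point's nearest neighbours, it can exploit the locally dense structure of contrastive representations that a purely linear probe ignores.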