Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns data representations by pulling similar data points together and pushing dissimilar ones apart in an embedding space. Current research applies it across diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks built on architectures such as MoCo and SimCLR, and to tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it learns effectively from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly when annotated data is scarce or domain shifts are large. A minimal sketch of a representative contrastive objective follows.
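For concreteness, here is a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective popularized by SimCLR, which contrasts two augmented views of each example against all other examples in the batch. The function name, shapes, and default temperature are illustrative assumptions, not code from any of the papers listed below.

```python
# Minimal NT-Xent sketch (SimCLR-style contrastive loss).
# Assumes z1 and z2 are embeddings of two augmented views of the same batch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (batch, dim) projections of two views of the same inputs."""
    batch = z1.size(0)
    # L2-normalize so the dot product below is cosine similarity.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, d)
    sim = z @ z.t() / temperature                        # (2B, 2B) similarities
    # Mask self-similarity so each row contrasts only against other embeddings.
    sim.fill_diagonal_(float("-inf"))
    # Row i's positive is the other view of the same example: i + B (mod 2B).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

# Toy usage with random stand-ins for two augmented views of 8 inputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice the temperature and batch size strongly affect how hard the negatives are weighted; MoCo-style variants replace in-batch negatives with a momentum-updated queue of past embeddings.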
Papers
Learning Label Hierarchy with Supervised Contrastive Learning
Ruixue Lian, William A. Sethares, Junjie Hu
Rank Supervised Contrastive Learning for Time Series Classification
Qianying Ren, Dongsheng Luo, Dongjin Song
Optimizing contrastive learning for cortical folding pattern detection
Aymeric Gaudin, Louise Guillon, Clara Fischer, Arnaud Cachia, Denis Rivière, Jean-François Mangin, Joël Chavas
Customizing Language Model Responses with Contrastive In-Context Learning
Xiang Gao, Kamalika Das
Morality is Non-Binary: Building a Pluralist Moral Sentence Embedding Space using Contrastive Learning
Jeongwoo Park, Enrico Liscio, Pradeep K. Murukannaiah
M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation
Fotios Lygerakis, Vedant Dave, Elmar Rueckert
Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning
Chenan Wang, Pu Zhao, Siyue Wang, Xue Lin
ConFit: Improving Resume-Job Matching using Data Augmentation and Contrastive Learning
Xiao Yu, Jinzhong Zhang, Zhou Yu
PICL: Physics Informed Contrastive Learning for Partial Differential Equations
Cooper Lorsung, Amir Barati Farimani
MLEM: Generative and Contrastive Learning as Distinct Modalities for Event Sequences
Viktor Moskvoretskii, Dmitry Osin, Egor Shvetsov, Igor Udovichenko, Maxim Zhelnin, Andrey Dukhovny, Anna Zhimerikina, Evgeny Burnaev
Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings
Logan Hallee, Rohan Kapur, Arjun Patel, Jason P. Gleghorn, Bohdan Khomtchouk
RecDCL: Dual Contrastive Learning for Recommendation
Dan Zhang, Yangliao Geng, Wenwen Gong, Zhongang Qi, Zhiyu Chen, Xing Tang, Ying Shan, Yuxiao Dong, Jie Tang
DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning
Xinghao Wang, Junliang He, Pengyu Wang, Yunhua Zhou, Tianxiang Sun, Xipeng Qiu
Towards Efficient and Effective Deep Clustering with Dynamic Grouping and Prototype Aggregation
Haixin Zhang, Dong Huang
Learning Representations for Clustering via Partial Information Discrimination and Cross-Level Interaction
Hai-Xin Zhang, Dong Huang, Hua-Bao Ling, Guang-Yu Zhang, Wei-jun Sun, Zi-hao Wen
Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery
Yuanpeng Tu, Zhun Zhong, Yuxi Li, Hengshuang Zhao