Contrastive Learning
Contrastive learning is a self-supervised machine learning technique that learns robust data representations by pulling embeddings of similar (positive) pairs together while pushing dissimilar (negative) pairs apart. Current research applies it to diverse modalities, including images, audio, text, and time-series data, often within multimodal frameworks and with architectures such as MoCo and SimCLR, and explores tasks ranging from object detection and speaker verification to image dehazing. The approach is significant because it learns effectively from unlabeled or weakly labeled data, improving model generalization and performance across numerous applications, particularly in scenarios with limited annotated data or significant domain shift.
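To make the "contrast similar against dissimilar" idea concrete, below is a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective popularized by SimCLR, assuming PyTorch; the function name nt_xent_loss and the random embeddings standing in for encoder outputs are illustrative, not taken from any paper listed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired views.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    Positive pairs are (z1[i], z2[i]); every other sample in the batch
    acts as a negative.
    """
    n = z1.size(0)
    # Concatenate both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity logits
    sim.fill_diagonal_(float("-inf"))                    # mask out self-similarity
    # Row i's positive is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Example usage with random embeddings in place of a trained encoder.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice the two views come from stochastic augmentations of the same input passed through a shared encoder; methods like MoCo differ mainly in how negatives are sourced (a momentum-updated queue rather than the current batch).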
Papers
Federated Contrastive Learning of Graph-Level Representations
Xiang Li, Gagan Agrawal, Rajiv Ramnath, Ruoming Jin
Dissecting Misalignment of Multimodal Large Language Models via Influence Function
Lijie Hu, Chenyang Ren, Huanyi Xie, Khouloud Saadi, Shu Yang, Jingfeng Zhang, Di Wang
Relational Contrastive Learning and Masked Image Modeling for Scene Text Recognition
Tiancheng Lin, Jinglei Zhang, Yi Xu, Kai Chen, Rui Zhang, Chang-Wen Chen
Debias-CLR: A Contrastive Learning Based Debiasing Method for Algorithmic Fairness in Healthcare Applications
Ankita Agarwal, Tanvi Banerjee, William Romine, Mia Cajita
MCL: Multi-view Enhanced Contrastive Learning for Chest X-ray Report Generation
Kang Liu, Zhuoqi Ma, Kun Xie, Zhicheng Jiao, Qiguang Miao
Masked Image Contrastive Learning for Efficient Visual Conceptual Pre-training
Xiaoyu Yang, Lijian Xu
Partial Multi-View Clustering via Meta-Learning and Contrastive Feature Alignment
BoHao Chen
Long-Tailed Object Detection Pre-training: Dynamic Rebalancing Contrastive Learning with Dual Reconstruction
Chen-Long Duan, Yong Li, Xiu-Shen Wei, Lin Zhao
Towards Neural Foundation Models for Vision: Aligning EEG, MEG, and fMRI Representations for Decoding, Encoding, and Modality Conversion
Matteo Ferrante, Tommaso Boccato, Grigorii Rashkov, Nicola Toschi
Reducing Distraction in Long-Context Language Models by Focused Learning
Zijun Wu, Bingyuan Liu, Ran Yan, Lei Chen, Thomas Delteil
Enhancing Cardiovascular Disease Prediction through Multi-Modal Self-Supervised Learning
Francesco Girlanda, Olga Demler, Bjoern Menze, Neda Davoudi
Predicting Stroke through Retinal Graphs and Multimodal Self-supervised Learning
Yuqing Huang, Bastian Wittmann, Olga Demler, Bjoern Menze, Neda Davoudi