Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
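Since the summary above names contrastive learning as one of the core SSL techniques, a minimal sketch of a SimCLR-style NT-Xent contrastive loss is shown below. This is an illustrative assumption of how such a pretext objective can be written, not the method of any specific paper listed here; the function name `nt_xent_loss`, the temperature of 0.5, and the random embeddings standing in for encoder outputs are all hypothetical.

```python
# Hedged sketch: a SimCLR-style NT-Xent contrastive loss in PyTorch.
# z1 and z2 are embeddings of two augmented views of the same batch;
# each sample's other view is its positive, all other samples are negatives.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    # L2-normalize and stack both views: shape (2n, d).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities scaled by temperature: shape (2n, 2n).
    sim = torch.matmul(z, z.T) / temperature
    # Exclude self-similarity from the softmax denominator.
    sim.fill_diagonal_(float("-inf"))
    # For row i, the positive is the embedding of the other view of sample i.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Random tensors stand in for encoder outputs on two augmented views.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

In practice the two views come from data augmentations of the same inputs passed through a shared encoder, and the loss pulls matching views together while pushing apart the rest of the batch.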
Papers
Instance Image Retrieval by Learning Purely From Within the Dataset
Zhongyan Zhang, Lei Wang, Yang Wang, Luping Zhou, Jianjia Zhang, Peng Wang, Fang Chen
Contrastive Learning for Object Detection
Rishab Balasubramanian, Kunal Rathore
Contrastive Learning for OOD in Object detection
Rishab Balasubramanian, Rupashree Dey, Kunal Rathore
Non-Contrastive Self-supervised Learning for Utterance-Level Information Extraction from Speech
Jaejin Cho, Jesús Villalba, Laureano Moro-Velazquez, Najim Dehak
Consistency-based Self-supervised Learning for Temporal Anomaly Localization
Aniello Panariello, Angelo Porrello, Simone Calderara, Rita Cucchiara