Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
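To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective used by SimCLR-style methods: embeddings of two augmented views of the same batch are pulled together while all other pairs are pushed apart. This is an illustrative implementation, not code from any of the listed papers; the function name and batch shapes are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (SimCLR-style).

    z1, z2: (N, d) embeddings of two augmented views of the same N samples.
    Positive pair for row i in one view is row i in the other view;
    all remaining 2N - 2 rows act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Index of each row's positive: its counterpart in the other view.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over candidates
    loss = -(sim[np.arange(2 * n), pos] - log_denom)  # cross-entropy per row
    return loss.mean()
```

In practice, pulling the loss lower requires the two views of a sample to embed closer together than unrelated samples, which is what drives the encoder to learn augmentation-invariant representations without labels.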
Papers
Using Self-supervised Learning Can Improve Model Fairness
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar
Towards Supervised Performance on Speaker Verification with Self-Supervised Learning by Leveraging Large-Scale ASR Models
Victor Miara, Theo Lepage, Reda Dehak
Strengthening Network Intrusion Detection in IoT Environments with Self-Supervised Learning and Few Shot Learning
Safa Ben Atitallah, Maha Driss, Wadii Boulila, Anis Koubaa
Rapid Wildfire Hotspot Detection Using Self-Supervised Learning on Temporal Remote Sensing Data
Luca Barco, Angelica Urbanelli, Claudio Rossi
Rethinking Spectral Augmentation for Contrast-based Graph Self-Supervised Learning
Xiangru Jian, Xinjian Zhao, Wei Pang, Chaolong Ying, Yimu Wang, Yaoyao Xu, Tianshu Yu
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation
Kimia Hamidieh, Haoran Zhang, Swami Sankaranarayanan, Marzyeh Ghassemi
Self-Supervised Learning Based Handwriting Verification
Mihir Chauhan, Mohammad Abuzar Hashemi, Abhishek Satbhai, Mir Basheer Ali, Bina Ramamurthy, Mingchen Gao, Siwei Lyu, Sargur Srihari
In-Context Symmetries: Self-Supervised Learning through Contextual World Models
Sharut Gupta, Chenyu Wang, Yifei Wang, Tommi Jaakkola, Stefanie Jegelka
Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach
Huy V. Vo, Vasil Khalidov, Timothée Darcet, Théo Moutakanni, Nikita Smetanin, Marc Szafraniec, Hugo Touvron, Camille Couprie, Maxime Oquab, Armand Joulin, Hervé Jégou, Patrick Labatut, Piotr Bojanowski
Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
Khanh-Binh Nguyen, Chae Jung Park
Benchmarking Hierarchical Image Pyramid Transformer for the classification of colon biopsies and polyps in histopathology images
Nohemi Sofia Leon Contreras, Marina D'Amato, Francesco Ciompi, Clement Grisi, Witali Aswolinskiy, Simona Vatrano, Filippo Fraggetta, Iris Nagtegaal
GeoMask3D: Geometrically Informed Mask Selection for Self-Supervised Point Cloud Learning in 3D
Ali Bahri, Moslem Yazdanpanah, Mehrdad Noori, Milad Cheraghalikhani, Gustavo Adolfo Vargas Hakim, David Osowiechi, Farzad Beizaee, Ismail Ben Ayed, Christian Desrosiers
Towards Graph Contrastive Learning: A Survey and Beyond
Wei Ju, Yifan Wang, Yifang Qin, Zhengyang Mao, Zhiping Xiao, Junyu Luo, Junwei Yang, Yiyang Gu, Dongjie Wang, Qingqing Long, Siyu Yi, Xiao Luo, Ming Zhang
Self-Supervised Learning of Visual Servoing for Low-Rigidity Robots Considering Temporal Body Changes
Kento Kawaharazuka, Naoaki Kanazawa, Kei Okada, Masayuki Inaba