Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
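To make the contrastive objective mentioned above concrete, here is a minimal sketch of the commonly used InfoNCE (NT-Xent) loss in PyTorch. Function and variable names are illustrative and not taken from any of the papers listed below; this is a simplified example under the assumption that two augmented views of each sample are encoded into embeddings of the same shape.

import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Sketch of an InfoNCE / NT-Xent contrastive loss.

    z1, z2: embeddings of two augmented views of the same batch, shape (N, D).
    z1[i] and z2[i] form a positive pair; all other rows in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit hypersphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) cosine-similarity logits
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    z_a = torch.randn(8, 128)
    z_b = torch.randn(8, 128)
    print(info_nce_loss(z_a, z_b).item())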
Papers
Self-Supervised Pretraining on Satellite Imagery: a Case Study on Label-Efficient Vehicle Detection
Jules Bourcier, Thomas Floquet, Gohar Dashyan, Tugdual Ceillier, Karteek Alahari, Jocelyn Chanussot
Evidence of Vocal Tract Articulation in Self-Supervised Learning of Speech
Cheol Jun Cho, Peter Wu, Abdelrahman Mohamed, Gopala K. Anumanchipalli
Self-Supervised Learning via Maximum Entropy Coding
Xin Liu, Zhongdao Wang, Yali Li, Shengjin Wang
Self-Supervised Learning with Masked Image Modeling for Teeth Numbering, Detection of Dental Restorations, and Instance Segmentation in Dental Panoramic Radiographs
Amani Almalki, Longin Jan Latecki
A survey on Self Supervised learning approaches for improving Multimodal representation learning
Naman Goyal
Towards Sustainable Self-supervised Learning
Shanghua Gao, Pan Zhou, Ming-Ming Cheng, Shuicheng Yan
Does Learning from Decentralized Non-IID Unlabeled Data Benefit from Self Supervision?
Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, Russ Tedrake
Towards Efficient and Effective Self-Supervised Learning of Visual Representations
Sravanti Addepalli, Kaushal Bhogale, Priyam Dey, R. Venkatesh Babu
Depth Contrast: Self-Supervised Pretraining on 3DPM Images for Mining Material Classification
Prakash Chandra Chhipa, Richa Upadhyay, Rajkumar Saini, Lars Lindqvist, Richard Nordenskjold, Seiichi Uchida, Marcus Liwicki
SUPERB @ SLT 2022: Challenge on Generalization and Efficiency of Self-Supervised Speech Representation Learning
Tzu-hsun Feng, Annie Dong, Ching-Feng Yeh, Shu-wen Yang, Tzu-Quan Lin, Jiatong Shi, Kai-Wei Chang, Zili Huang, Haibin Wu, Xuankai Chang, Shinji Watanabe, Abdelrahman Mohamed, Shang-Wen Li, Hung-yi Lee
Learning Self-Regularized Adversarial Views for Self-Supervised Vision Transformers
Tao Tang, Changlin Li, Guangrun Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, Shouling Ji, Yuan Yao, Ting Wang
The Hidden Uniform Cluster Prior in Self-Supervised Learning
Mahmoud Assran, Randall Balestriero, Quentin Duval, Florian Bordes, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Nicolas Ballas
Visual Reinforcement Learning with Self-Supervised 3D Representations
Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, Xiaolong Wang
On Compressing Sequences for Self-Supervised Speech Models
Yen Meng, Hsuan-Jui Chen, Jiatong Shi, Shinji Watanabe, Paola Garcia, Hung-yi Lee, Hao Tang
Self-Supervised Learning of Linear Precoders under Non-Linear PA Distortion for Energy-Efficient Massive MIMO Systems
Thomas Feys, Xavier Mestre, François Rottenberg
Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery
Jules Bourcier, Gohar Dashyan, Jocelyn Chanussot, Karteek Alahari