Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
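To make the contrastive-learning technique named above concrete, here is a minimal sketch of the NT-Xent objective popularized by SimCLR, in which two augmented views of the same input form a positive pair and all other samples in the batch serve as negatives. The function name `nt_xent_loss`, the temperature value, and the random tensors standing in for encoder outputs are illustrative choices, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    Each pair (z1[i], z2[i]) is a positive; every other sample in the
    concatenated batch acts as a negative.
    """
    n = z1.size(0)
    # Unit-normalize so the dot product equals cosine similarity.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity logits
    # Mask self-similarity so a sample is never its own negative.
    sim.fill_diagonal_(float('-inf'))
    # For row i, the positive sits N rows away (view 1 <-> view 2).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors standing in for encoder outputs of a batch of 8.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice the embeddings would come from an encoder (e.g. a CNN or transformer) followed by a projection head, and the temperature is a tuned hyperparameter; the loss shown here is only the objective that drives representation learning.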
Papers
DenseDINO: Boosting Dense Self-Supervised Learning with Token-Based Point-Level Consistency
Yike Yuan, Xinghe Fu, Yunlong Yu, Xi Li
Supervised Knowledge May Hurt Novel Class Discovery Performance
Ziyun Li, Jona Otholt, Ben Dai, Di Hu, Christoph Meinel, Haojin Yang
Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine
On the Robustness of Arabic Speech Dialect Identification
Peter Sullivan, AbdelRahim Elmadany, Muhammad Abdul-Mageed
Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression
Runtian Zhai, Bingbin Liu, Andrej Risteski, Zico Kolter, Pradeep Ravikumar
A Novel Driver Distraction Behavior Detection Method Based on Self-supervised Learning with Masked Image Modeling
Yingzhi Zhang, Taiguo Li, Chao Li, Xinghong Zhou
SSL-CPCD: Self-supervised learning with composite pretext-class discrimination for improved generalisability in endoscopic image analysis
Ziang Xu, Jens Rittscher, Sharib Ali
Feature Learning in Image Hierarchies using Functional Maximal Correlation
Bo Hu, Yuheng Bu, José C. Príncipe
There is more to graphs than meets the eye: Learning universal features with self-supervision
Laya Das, Sai Munikoti, Mahantesh Halappanavar
Self-supervised Learning to Bring Dual Reversed Rolling Shutter Images Alive
Wei Shang, Dongwei Ren, Chaoyu Feng, Xiaotao Wang, Lei Lei, Wangmeng Zuo
Spectral Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning
Marina Munkhoeva, Ivan Oseledets
Exploration of Efficient End-to-End ASR using Discretized Input from Self-Supervised Learning
Xuankai Chang, Brian Yan, Yuya Fujita, Takashi Maekaku, Shinji Watanabe
MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
Calum Heggan, Tim Hospedales, Sam Budgett, Mehrdad Yaghoobi