Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
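Of the pretext tasks mentioned above, contrastive learning is perhaps the simplest to sketch: two augmented views of the same sample are pulled together in embedding space while all other samples in the batch act as negatives. Below is a minimal, illustrative NumPy implementation of an InfoNCE-style contrastive loss; the function name, temperature value, and toy data are assumptions for demonstration, not taken from any specific paper listed here.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Toy InfoNCE-style contrastive loss.

    z1[i] and z2[i] are embeddings of two augmented views of sample i;
    every other row in the batch serves as a negative for row i.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Softmax cross-entropy with the diagonal (matching pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
positive = anchor + 0.05 * rng.normal(size=(8, 16))  # slightly perturbed views
unrelated = rng.normal(size=(8, 16))                 # random, unrelated embeddings

aligned_loss = info_nce_loss(anchor, positive)
misaligned_loss = info_nce_loss(anchor, unrelated)
# Matching views should yield a much lower loss than random pairings,
# which is the learning signal SSL exploits without any labels.
```

The key design point is that no labels appear anywhere: the "supervision" comes entirely from knowing which pairs of views originate from the same sample.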
Papers
WSSL: Weighted Self-supervised Learning Framework For Image-inpainting
Shubham Gupta, Rahul Kunigal Ravishankar, Madhoolika Gangaraju, Poojasree Dwarkanath, Natarajan Subramanyam
Ladder Siamese Network: a Method and Insights for Multi-level Self-Supervised Learning
Ryota Yoshihashi, Shuhei Nishimura, Dai Yonebayashi, Yuya Otsuka, Tomohiro Tanaka, Takashi Miyazaki
Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment
Yuchen Ma, Yanbei Chen, Zeynep Akata
Self-Supervised Learning based on Heat Equation
Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Lu Yuan, Zicheng Liu, Youzuo Lin
Reason from Context with Self-supervised Learning
Xiao Liu, Ankur Sikarwar, Gabriel Kreiman, Zenglin Shi, Mengmi Zhang
Robust Alzheimer's Progression Modeling using Cross-Domain Self-Supervised Deep Learning
Saba Dadsetan, Mohsen Hejrati, Shandong Wu, Somaye Hashemifar
Homomorphic Self-Supervised Learning
T. Anderson Keller, Xavier Suau, Luca Zappella
Masked Reconstruction Contrastive Learning with Information Bottleneck Principle
Ziwen Liu, Bonan Li, Congying Han, Tiande Guo, Xuecheng Nie
Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning
Yi Liu, Song Guo, Jie Zhang, Qihua Zhou, Yingchun Wang, Xiaohan Zhao
MT4SSL: Boosting Self-Supervised Speech Representation Learning by Integrating Multiple Targets
Ziyang Ma, Zhisheng Zheng, Changli Tang, Yujin Wang, Xie Chen