Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures such as transformers and convolutional neural networks. These advances are significant because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring optimal model and data sizes for a given computational budget.
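To make the contrastive objective mentioned above concrete, here is a minimal sketch of a SimCLR-style NT-Xent (normalized temperature-scaled cross-entropy) loss in PyTorch. It illustrates the general technique only; it is not code from any of the papers listed below, and the function and variable names are our own.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two batches of embeddings.

    z1, z2: (N, D) projections of two augmented views of the same N samples.
    """
    n = z1.size(0)
    # L2-normalize and stack both views: rows 0..N-1 are view 1, rows N..2N-1 are view 2.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by the temperature.
    sim = z @ z.t() / temperature
    # Mask self-similarity so a sample cannot match itself.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

The temperature hyperparameter controls how sharply the softmax concentrates on the hardest negatives; values in roughly the 0.1 to 0.5 range are common in the contrastive SSL literature.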
Papers
Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity
Yiyue Chen, Haris Vikalo, Chianing Wang
SelfEEG: A Python library for Self-Supervised Learning in Electroencephalography
Federico Del Pup, Andrea Zanola, Louis Fabrice Tshimanga, Paolo Emilio Mazzon, Manfredo Atzori
FusDom: Combining In-Domain and Out-of-Domain Knowledge for Continuous Self-Supervised Learning
Ashish Seth, Sreyan Ghosh, S. Umesh, Dinesh Manocha
Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation
Jiaming Liu, Ran Xu, Senqiao Yang, Renrui Zhang, Qizhe Zhang, Zehui Chen, Yandong Guo, Shanghang Zhang
Self-supervised Learning for Enhancing Geometrical Modeling in 3D-Aware Generative Adversarial Network
Jiarong Guo, Xiaogang Xu, Hengshuang Zhao
Evaluation of Barlow Twins and VICReg self-supervised learning for sound patterns of bird and anuran species
Fábio Felix Dias, Moacir Antonelli Ponti, Mílton Cezar Ribeiro, Rosane Minghim
Self-Supervised Learning for Image Super-Resolution and Deblurring
Jérémy Scanvic, Mike Davies, Patrice Abry, Julián Tachella
Efficiency-oriented approaches for self-supervised speech representation learning
Luis Lugo, Valentin Vielzeuf