Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
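To make the pretext-task idea concrete, the sketch below shows a minimal SimCLR-style contrastive (NT-Xent) loss in PyTorch: two augmented views of each sample are embedded, each embedding's positive is its counterpart from the other view, and every other embedding in the batch serves as a negative. This is an illustrative sketch under assumed names (nt_xent_loss, z1, z2), not code from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Row i of z1 and row i of z2 form a positive pair; the remaining 2N - 2
    embeddings in the batch act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm rows
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))            # exclude self-similarity
    # Target index of the positive for each row: i <-> i + n
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for two augmented views.
if __name__ == "__main__":
    z_a = torch.randn(8, 128)
    z_b = torch.randn(8, 128)
    print(nt_xent_loss(z_a, z_b).item())
```

In practice the embeddings would come from an encoder (and usually a projection head) applied to two random augmentations of each input; the loss above is the piece that turns unlabeled data into a supervised-looking objective.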
Papers
One-bit Supervision for Image Classification: Problem, Solution, and Beyond
Hengtong Hu, Lingxi Xie, Xinyue Huo, Richang Hong, Qi Tian
Predicting Gradient is Better: Exploring Self-Supervised Learning for SAR ATR with a Joint-Embedding Predictive Architecture
Weijie Li, Yang Wei, Tianpeng Liu, Yuenan Hou, Yuxuan Li, Zhen Liu, Yongxiang Liu, Li Liu
Contrastive Left-Right Wearable Sensors (IMUs) Consistency Matching for HAR
Dominique Nshimyimana, Vitor Fortes Rey, Paul Lukowicz
Echocardiogram Foundation Model -- Application 1: Estimating Ejection Fraction
Adil Dahlan, Cyril Zakka, Abhinav Kumar, Laura Tang, Rohan Shad, Robyn Fong, William Hiesinger
Automatized Self-Supervised Learning for Skin Lesion Screening
Vullnet Useini, Stephanie Tanadini-Lang, Quentin Lohmeyer, Mirko Meboldt, Nicolaus Andratschke, Ralph P. Braun, Javier Barranco García
PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment
Amirhossein Dadashzadeh, Shuchao Duan, Alan Whone, Majid Mirmehdi