Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that push the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, using techniques such as contrastive learning, masked autoencoders, and generative models within architectures ranging from convolutional networks to transformers. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
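To make the contrastive objective mentioned above concrete, here is a minimal PyTorch sketch of an NT-Xent (normalized temperature-scaled cross-entropy) loss in the style of SimCLR; the function name, batch size, and embedding dimension are illustrative choices, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (B, D) embeddings; row i of z1 and row i of z2 come from
    two random augmentations of the same input (a positive pair).
    """
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit-norm rows
    sim = z @ z.t() / temperature                       # (2B, 2B) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view cannot be its own positive
    # The positive for row i is the other view of the same image: i + B (mod 2B).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)                # every other row acts as a negative

# Toy usage with random embeddings standing in for an encoder's outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))
```

Masked-autoencoder and generative pretext tasks follow the same pattern: define a label-free target (e.g., reconstructing masked patches) and train the encoder against it before fine-tuning on a downstream task.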
Papers
GenView: Enhancing View Quality with Pretrained Generative Model for Self-Supervised Learning
Xiaojie Li, Yibo Yang, Xiangtai Li, Jianlong Wu, Yue Yu, Bernard Ghanem, Min Zhang
Unsupervised End-to-End Training with a Self-Defined Target
Dongshu Liu, Jérémie Laydevant, Adrien Pontlevy, Damien Querlioz, Julie Grollier
VANP: Learning Where to See for Navigation with Self-Supervised Vision-Action Pre-Training
Mohammad Nazeri, Junzhe Wang, Amirreza Payandeh, Xuesu Xiao
Intra-video Positive Pairs in Self-Supervised Learning for Ultrasound
Blake VanBerlo, Alexander Wong, Jesse Hoey, Robert Arntfield
AACP: Aesthetics assessment of children's paintings based on self-supervised learning
Shiqi Jiang, Ning Li, Chen Shi, Liping Guo, Changbo Wang, Chenhui Li
Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models
Philip Harris, Michael Kagan, Jeffrey Krupa, Benedikt Maier, Nathaniel Woodward
Joint-Embedding Masked Autoencoder for Self-supervised Learning of Dynamic Functional Connectivity from the Human Brain
Jungwon Choi, Hyungi Lee, Byung-Hoon Kim, Juho Lee
Augmentations vs Algorithms: What Works in Self-Supervised Learning
Warren Morningstar, Alex Bijamov, Chris Duvarney, Luke Friedman, Neha Kalibhat, Luyang Liu, Philip Mansfield, Renan Rojas-Gomez, Karan Singhal, Bradley Green, Sushant Prakash
SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection
Yahao Lu, Yupei Lin, Han Wu, Xiaoyu Xian, Yukai Shi, Liang Lin
Self-Supervised Multiple Instance Learning for Acute Myeloid Leukemia Classification
Salome Kazeminia, Max Joosten, Dragan Bosnacki, Carsten Marr
Self-Supervision in Time for Satellite Images (S3-TSS): A novel method of SSL technique in Satellite images
Akansh Maurya, Hewan Shrestha, Mohammad Munem Shahriar
Reducing self-supervised learning complexity improves weakly-supervised classification performance in computational pathology
Tim Lenz, Omar S. M. El Nahhas, Marta Ligero, Jakob Nikolas Kather
Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning
Haoyu Chen, Wenbo Li, Jinjin Gu, Jingjing Ren, Haoze Sun, Xueyi Zou, Zhensong Zhang, Youliang Yan, Lei Zhu
Pooling Image Datasets With Multiple Covariate Shift and Imbalance
Sotirios Panagiotis Chytas, Vishnu Suresh Lokhande, Peiran Li, Vikas Singh