Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
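To make the contrastive objective mentioned above concrete, here is a minimal sketch of an NT-Xent (SimCLR-style) contrastive loss in PyTorch. The function name nt_xent_loss, the temperature of 0.5, and the random toy embeddings are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N inputs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # (2N, D) stacked views
    sim = z @ z.t() / temperature         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))     # a sample is never its own positive
    n = z1.size(0)
    # Row i's positive is the other augmented view of the same input.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random "encoder outputs" for a batch of 8 inputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))
```

In practice, z1 and z2 come from the same encoder applied to two random augmentations of each input, so the loss pulls the two views of an input together while pushing apart all other samples in the batch; masked-autoencoder methods instead reconstruct hidden patches and require no negatives.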
Papers
Refining Latent Representations: A Generative SSL Approach for Heterogeneous Graph Learning
Yulan Hu, Zhirui Yang, Sheng Ouyang, Yong Liu
DORec: Decomposed Object Reconstruction and Segmentation Utilizing 2D Self-Supervised Features
Jun Wu, Sicheng Li, Sihui Ji, Yifei Yang, Yue Wang, Rong Xiong, Yiyi Liao
Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video
Shashanka Venkataramanan, Mamshad Nayeem Rizve, João Carreira, Yuki M. Asano, Yannis Avrithis
Visual Self-supervised Learning Scheme for Dense Prediction Tasks on X-ray Images
Shervin Halat, Mohammad Rahmati, Ehsan Nazerfard
Computational Pathology at Health System Scale -- Self-Supervised Foundation Models from Three Billion Images
Gabriele Campanella, Ricky Kwan, Eugene Fluder, Jennifer Zeng, Aryeh Stock, Brandon Veremis, Alexandros D. Polydorides, Cyrus Hedvat, Adam Schoenfeld, Chad Vanderbilt, Patricia Kovatch, Carlos Cordon-Cardo, Thomas J. Fuchs
Self-Supervised Representation Learning for Online Handwriting Text Classification
Pouya Mehralian, Bagher BabaAli, Ashena Gorgan Mohammadi
Self-Supervised Dataset Distillation for Transfer Learning
Dong Bok Lee, Seanie Lee, Joonho Ko, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
Antenna Response Consistency Driven Self-supervised Learning for WIFI-based Human Activity Recognition
Ke Xu, Jiangtao Wang, Hongyuan Zhu, Dingchang Zheng
Pain Forecasting using Self-supervised Learning and Patient Phenotyping: An attempt to prevent Opioid Addiction
Swati Padhee, Tanvi Banerjee, Daniel M. Abrams, Nirmish Shah
Revisiting the Temporal Modeling in Spatio-Temporal Predictive Learning under A Unified View
Cheng Tan, Jue Wang, Zhangyang Gao, Siyuan Li, Lirong Wu, Jun Xia, Stan Z. Li
Self-supervised Learning for Anomaly Detection in Computational Workflows
Hongwei Jin, Krishnan Raghavan, George Papadimitriou, Cong Wang, Anirban Mandal, Ewa Deelman, Prasanna Balaprakash
Unconstrained Stochastic CCA: Unifying Multiview and Self-Supervised Learning
James Chapman, Lennie Wells, Ana Lawry Aguila
GhostEncoder: Stealthy Backdoor Attacks with Dynamic Triggers to Pre-trained Encoders in Self-supervised Learning
Qiannan Wang, Changchun Yin, Zhe Liu, Liming Fang, Run Wang, Chenhao Lin
Self-supervised Learning of Contextualized Local Visual Embeddings
Thalles Santos Silva, Helio Pedrini, Adín Ramírez Rivera