Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling, with particular impact in fields such as medical imaging, speech processing, and autonomous driving.
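As a concrete illustration of the pretext-task idea, the sketch below implements a simple InfoNCE-style contrastive objective, one common SSL formulation: two augmented "views" of the same unlabeled example should embed closer together than views of different examples. This is a minimal numpy sketch for intuition only; the function name and toy data are illustrative, not drawn from any of the papers listed here.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two batches of embeddings,
    where z1[i] and z2[i] are two augmented views of the same unlabeled
    example (the self-supervised pretext task). Illustrative sketch."""
    # L2-normalize so similarities are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise similarity matrix
    # Row i's positive is column i; every other column acts as a negative.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: matched views should incur a lower loss than mismatched ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)                       # identical views
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))  # unrelated views
print(loss_aligned < loss_random)
```

In a real SSL pipeline the two views come from data augmentation (crops, noise, masking) passed through a shared encoder, and the loss is backpropagated to train that encoder; the learned representations are then reused for downstream tasks.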
Papers
AlignDet: Aligning Pre-training and Fine-tuning in Object Detection
Ming Li, Jie Wu, Xionghui Wang, Chen Chen, Jie Qin, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan
Sequential Multi-Dimensional Self-Supervised Learning for Clinical Time Series
Aniruddh Raghu, Payal Chandak, Ridwan Alam, John Guttag, Collin M. Stultz
The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning
Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella
Rician likelihood loss for quantitative MRI using self-supervised deep learning
Christopher S. Parker, Anna Schroder, Sean C. Epstein, James Cole, Daniel C. Alexander, Hui Zhang
DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection
Jaemin Yoo, Yue Zhao, Lingxiao Zhao, Leman Akoglu
Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays
Rajib Bhattacharjea, Nathan West
Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation
José Morano, Guilherme Aresta, Dmitrii Lachinov, Julia Mai, Ursula Schmidt-Erfurth, Hrvoje Bogunović
Self-supervised learning with diffusion-based multichannel speech enhancement for speaker verification under noisy conditions
Sandipana Dowerah, Ajinkya Kulkarni, Romain Serizel, Denis Jouvet
Prompting Diffusion Representations for Cross-Domain Semantic Segmentation
Rui Gong, Martin Danelljan, Han Sun, Julio Delgado Mangas, Luc Van Gool
KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation
Weijie Xu, Xiaoyu Jiang, Jay Desai, Bin Han, Fuqin Yan, Francis Iannacci
SelfFed: Self-supervised Federated Learning for Data Heterogeneity and Label Scarcity in IoMT
Sunder Ali Khowaja, Kapal Dev, Syed Muhammad Anwar, Marius George Linguraru