Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data: it improves performance on downstream tasks and reduces reliance on expensive, time-consuming manual labeling, with particular impact on fields such as medical imaging, speech processing, and autonomous driving.
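As a concrete illustration of a pretext task, the sketch below shows a SimCLR-style contrastive objective (NT-Xent): two augmented views of the same unlabeled sample are pulled together while all other samples in the batch act as negatives. This is a minimal, self-contained example under assumed names (SmallEncoder, nt_xent_loss, the noise-based "augmentation", and the temperature value are all illustrative) and is not taken from any of the papers listed below.

```python
# Minimal sketch of a contrastive SSL pretext task (SimCLR-style NT-Xent loss).
# SmallEncoder, nt_xent_loss, and the noise "augmentation" are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder plus projection head producing embeddings for contrastive learning."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.projector = nn.Linear(256, emb_dim)

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent: each view's positive is the other view of the same sample;
    every other embedding in the batch is a negative."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2n, d) unit vectors
    sim = (z @ z.t()) / temperature                                # cosine similarity logits
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # index of positives
    return F.cross_entropy(sim, targets)

# Usage: two lightly perturbed "views" of the same unlabeled batch stand in for real augmentations.
encoder = SmallEncoder()
x = torch.randn(32, 784)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
```

In practice the random-noise views would be replaced by task-appropriate augmentations (crops, color jitter, masking, etc.), and the pretrained backbone would then be fine-tuned or probed on the labeled downstream task.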
Papers
Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation
Jamshid Hassanpour, Vinkle Srivastav, Didier Mutter, Nicolas Padoy
Self-supervised Visualisation of Medical Image Datasets
Ifeoma Veronica Nwabufo, Jan Niklas Böhm, Philipp Berens, Dmitry Kobak
GAM-Depth: Self-Supervised Indoor Depth Estimation Leveraging a Gradient-Aware Mask and Semantic Constraints
Anqi Cheng, Zhiyuan Yang, Haiyue Zhu, Kezhi Mao
MAL: Motion-Aware Loss with Temporal and Distillation Hints for Self-Supervised Depth Estimation
Yue-Jiang Dong, Fang-Lue Zhang, Song-Hai Zhang
Thyroid ultrasound diagnosis improvement via multi-view self-supervised learning and two-stage pre-training
Jian Wang, Xin Yang, Xiaohong Jia, Wufeng Xue, Rusi Chen, Yanlin Chen, Xiliang Zhu, Lian Liu, Yan Cao, Jianqiao Zhou, Dong Ni, Ning Gu
Scalable Graph Self-Supervised Learning
Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Raika Karimi, Ali Ghodsi
Affine transformation estimation improves visual self-supervised learning
David Torpey, Richard Klein
DUEL: Duplicate Elimination on Active Memory for Self-Supervised Class-Imbalanced Learning
Won-Seok Choi, Hyundo Lee, Dong-Sig Han, Junseok Park, Heeyeon Koo, Byoung-Tak Zhang
ESPnet-SPK: full pipeline speaker embedding toolkit with reproducible recipes, self-supervised front-ends, and off-the-shelf models
Jee-weon Jung, Wangyou Zhang, Jiatong Shi, Zakaria Aldeneh, Takuya Higuchi, Barry-John Theobald, Ahmed Hussen Abdelaziz, Shinji Watanabe
SpeechBERTScore: Reference-Aware Automatic Evaluation of Speech Generation Leveraging NLP Evaluation Metrics
Takaaki Saeki, Soumi Maiti, Shinnosuke Takamichi, Shinji Watanabe, Hiroshi Saruwatari