Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling, with particular impact on fields such as medical imaging, speech processing, and autonomous driving.
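As a concrete illustration of a pretext task, the sketch below implements a SimCLR-style contrastive objective (NT-Xent) in PyTorch: two augmented views of the same unlabeled batch are pulled together while all other samples in the batch are pushed apart. The function name, batch size, embedding dimension, and temperature are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projections of two augmentations of the same N inputs.
    Each pair (z1[i], z2[i]) is a positive; every other sample is a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    # The positive for row i is row (i + n) mod 2N: the other view of the same input.
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)


# Usage: stand-ins for encoder + projection-head outputs of two augmented views.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice the two views come from random augmentations of the same unlabeled image (or audio clip, point cloud, etc.), and minimizing this loss trains the encoder to produce representations that transfer to downstream tasks without manual labels.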
Papers
Go-tuning: Improving Zero-shot Learning Abilities of Smaller Language Models
Jingjing Xu, Qingxiu Dong, Hongyi Liu, Lei Li
Image Segmentation-based Unsupervised Multiple Objects Discovery
Sandra Kara, Hejer Ammar, Florian Chabot, Quoc-Cuong Pham
Exploring Effective Fusion Algorithms for Speech Based Self-Supervised Learning Models
Changli Tang, Yujin Wang, Xie Chen, Wei-Qiang Zhang
Noise2Contrast: Multi-Contrast Fusion Enables Self-Supervised Tomographic Image Denoising
Fabian Wagner, Mareike Thies, Laura Pfaff, Noah Maul, Sabrina Pechmann, Mingxuan Gu, Jonas Utz, Oliver Aust, Daniela Weidner, Georgiana Neag, Stefan Uderhardt, Jang-Hwan Choi, Andreas Maier
Benchmarking Self-Supervised Learning on Diverse Pathology Datasets
Mingu Kang, Heon Song, Seonwook Park, Donggeun Yoo, Sérgio Pereira
Progressive Multi-Scale Self-Supervised Learning for Speech Recognition
Genshun Wan, Tan Liu, Hang Chen, Jia Pan, Cong Liu, Zhongfu Ye
Improved Self-Supervised Multilingual Speech Representation Learning Combined with Auxiliary Language Information
Fenglin Ding, Genshun Wan, Pengcheng Li, Jia Pan, Cong Liu