Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures, such as transformers and CNNs, for a range of modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this particularly benefits fields such as medical imaging, speech processing, and autonomous driving.
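To make the idea of a pretext task concrete, below is a minimal sketch of one common formulation: a contrastive objective (SimCLR-style NT-Xent) implemented in PyTorch. The `encoder`, the augmentation step, and the `temperature` value are illustrative assumptions and are not tied to any of the papers listed below.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Assumes an `encoder` that maps inputs to projection vectors; no labels are used.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same unlabeled batch.

    z1, z2: (N, D) projections of two random augmentations of the same N inputs.
    Positive pairs are (z1[i], z2[i]); all other samples in the batch act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # Each sample's positive sits at index i <-> i + n in the concatenated batch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage sketch: given two augmented views x1, x2 of the same unlabeled images,
#   loss = nt_xent_loss(encoder(x1), encoder(x2)); loss.backward()
```

The key property is that the supervisory signal (which pairs match) is generated from the data itself via augmentation, so no manual labels are required.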
Papers
PointCMC: Cross-Modal Multi-Scale Correspondences Learning for Point Cloud Understanding
Honggu Zhou, Xiaogang Peng, Jiawei Mao, Zizhao Wu, Ming Zeng
YZR-net : Self-supervised Hidden representations Invariant to Transformations for profanity detection
Vedant Sandeep Joshi, Sivanagaraja Tatinati, Yubo Wang
Compressing Transformer-based self-supervised models for speech processing
Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang
Self-Supervised Visual Representation Learning via Residual Momentum
Trung X. Pham, Axi Niu, Zhang Kang, Sultan Rizky Madjid, Ji Woo Hong, Daehyeok Kim, Joshua Tian Jin Tee, Chang D. Yoo