Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs for various modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling. This is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
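To make the idea of a pretext task concrete, below is a minimal sketch of one common SSL objective: a SimCLR-style contrastive (NT-Xent) loss, where two augmented views of the same input are pulled together and all other examples in the batch act as negatives. This is a generic illustration, not the method of any paper listed here; the function name `nt_xent_loss`, the toy encoder, and the noise-based "augmentation" are illustrative stand-ins.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# All names are illustrative; real SSL pipelines use stronger augmentations,
# a projection head, large batches, and learning-rate schedules.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs.
    Positive pairs are (z1[i], z2[i]); every other embedding in the batch is a negative."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2B, D), unit-norm rows
    sim = z @ z.t() / temperature                            # scaled cosine similarities
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # ignore self-similarity
    # For row i, the positive sits `batch` positions away in the concatenated tensor.
    targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage: embed two noisy "views" with the same encoder and minimize the loss.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x = torch.randn(8, 3, 32, 32)                                # stand-in for a batch of images
view1, view2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                                              # gradients for the encoder, with no labels involved
```

The key design point is that the supervisory signal comes entirely from the data itself: the "labels" in the cross-entropy are just indices identifying which embedding is the other view of the same input.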
878 papers
Papers
October 11, 2024
On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning
Bokun Wang, Yunwen Lei, Yiming Ying, Tianbao Yang
SegGrasp: Zero-Shot Task-Oriented Grasping via Semantic and Geometric Guided Segmentation
Haosheng Li, Weixin Mao, Weipeng Deng, Chenyu Meng, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Hongan Wang, Xiaoming Deng
September 26, 2024
September 23, 2024
CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification
Junyi Peng, Ladislav Mošner, Lin Zhang, Oldřich Plchot, Themos Stafylakis, Lukáš Burget, Jan Černocký
Robust Training Objectives Improve Embedding-based Retrieval in Industrial Recommendation Systems
Matthew Kolodner, Mingxuan Ju, Zihao Fan, Tong Zhao, Elham Ghazizadeh, Yan Wu, Neil Shah, Yozen Liu