Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs for various modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, leading to improved performance on downstream tasks and reduced reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields like medical imaging, speech processing, and autonomous driving.
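The pretext-task idea can be made concrete with a small contrastive example. The sketch below is a minimal illustration, assuming PyTorch, with a toy CNN encoder and a SimCLR-style NT-Xent loss; the encoder, augmentations, and hyperparameters are placeholders and are not taken from any paper listed on this page. It learns representations by pulling two augmented views of the same image together and pushing views of different images apart.

```python
# Minimal contrastive pretext-task sketch (SimCLR-style), assuming PyTorch.
# All module and function names here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy CNN backbone followed by a projection head."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, embed_dim))

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss: each view's positive is the other view of the same image."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: additive noise stands in for real image augmentations.
encoder = Encoder()
x = torch.randn(8, 3, 64, 64)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")
```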
878 papers

Papers
March 22, 2024
Self-Supervised Backbone Framework for Diverse Agricultural Vision Tasks
Exploring the Task-agnostic Trait of Self-supervised Learning in the Context of Detecting Mental Disorders
Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation
Trajectory Regularization Enhances Self-Supervised Geometric Representation
March 21, 2024
Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning
CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
Exploring Task Unification in Graph Representation Learning via Generative Approach