Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs for various modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields like medical imaging, speech processing, and autonomous driving.
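To make the idea of a pretext task concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive objective, one common SSL pretext task: two augmented views of the same input form a positive pair, and all other samples in the batch serve as negatives. The function name and the synthetic data are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss between two views of the same batch.

    z1, z2: (n, d) embedding arrays; row i of z1 and row i of z2 are the
    two augmented views of input i (a positive pair), while every other
    row acts as a negative.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (n, n) pairwise similarities
    # Cross-entropy with the diagonal (matching pairs) as the target class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
view2 = z + 0.05 * rng.normal(size=(8, 16))   # slightly perturbed "augmentation"
aligned = info_nce_loss(z, view2)             # views correctly paired
shuffled = info_nce_loss(z, rng.permutation(view2))  # pairings broken
print(aligned, shuffled)
```

Minimizing such a loss pulls representations of the same input together and pushes different inputs apart, which is why correctly paired views yield a much lower loss than shuffled ones.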
Papers
Improved baselines for vision-language pre-training
Enrico Fini, Pietro Astolfi, Adriana Romero-Soriano, Jakob Verbeek, Michal Drozdzal
Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning
Songning Lai, Jiakang Li, Guinan Guo, Xifeng Hu, Yulong Li, Yuan Tan, Zichen Song, Yutong Liu, Zhaoxia Ren, Chun Wan, Danmin Miao, Zhi Liu
Self-supervised Pre-training with Masked Shape Prediction for 3D Scene Understanding
Li Jiang, Zetong Yang, Shaoshuai Shi, Vladislav Golyanik, Dengxin Dai, Bernt Schiele
SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding
Hezhen Hu, Weichao Zhao, Wengang Zhou, Houqiang Li
Additive Class Distinction Maps using Branched-GANs
Elnatan Kadar, Jonathan Brokman, Guy Gilboa
Disentangled Contrastive Collaborative Filtering
Xubin Ren, Lianghao Xia, Jiashu Zhao, Dawei Yin, Chao Huang
Self-Supervised 3D Scene Flow Estimation Guided by Superpoints
Yaqi Shen, Le Hui, Jin Xie, Jian Yang
CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation
Wenbin He, Suphanut Jamonnak, Liang Gou, Liu Ren
SelfDocSeg: A Self-Supervised vision-based Approach towards Document Segmentation
Subhajit Maity, Sanket Biswas, Siladittya Manna, Ayan Banerjee, Josep Lladós, Saumik Bhattacharya, Umapada Pal
Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning
Mingyang Wang, Zhenshan Bing, Xiangtong Yao, Shuai Wang, Hang Su, Chenguang Yang, Kai Huang, Alois Knoll
Regularizing Self-training for Unsupervised Domain Adaptation via Structural Constraints
Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma Strubell, Jose Moura
S$^2$MAT: Simultaneous and Self-Reinforced Mapping and Tracking in Dynamic Urban Scenarios
Tingxiang Fan, Bowen Shen, Yinqiang Zhang, Chuye Zhang, Lei Yang, Hua Chen, Wei Zhang, Jia Pan
Lightweight, Pre-trained Transformers for Remote Sensing Timeseries
Gabriel Tseng, Ruben Cartuyvels, Ivan Zvonkov, Mirali Purohit, David Rolnick, Hannah Kerner