Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across a range of modalities (images, audio, point clouds, fMRI data). SSL matters because it can exploit the vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
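To make the idea of a pretext task concrete, the sketch below shows one common SSL objective: a SimCLR-style contrastive loss (NT-Xent) that pulls two augmented views of the same unlabeled sample together in embedding space while pushing other samples apart. This is a minimal illustrative example, not the method of any paper listed below; the function name, tensor shapes, and temperature value are assumptions for illustration.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Two augmented "views" of the same unlabeled batch are embedded; each
# embedding's positive is its other view, and all remaining embeddings in
# the batch act as negatives.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, D) stacked views
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))         # a sample is never its own positive
    n = z1.size(0)
    # the positive for row i is its other view: i <-> i + n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


# Toy usage with random "projections"; in practice z1 and z2 come from an
# encoder plus projection head applied to two augmentations of each input.
if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

After pretraining with such an objective, the projection head is typically discarded and the encoder's representations are reused (frozen or fine-tuned) for the downstream task.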
Papers
Relax, it doesn't matter how you get there: A new self-supervised approach for multi-timescale behavior analysis
Mehdi Azabou, Michael Mendelson, Nauman Ahad, Maks Sorokin, Shantanu Thakoor, Carolina Urzay, Eva L. Dyer
SeqCo-DETR: Sequence Consistency Training for Self-Supervised Object Detection with Transformers
Guoqiang Jin, Fan Yang, Mingshan Sun, Ruyi Zhao, Yakun Liu, Wei Li, Tianpeng Bao, Liwei Wu, Xingyu Zeng, Rui Zhao
DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha
Learning Cross-lingual Visual Speech Representations
Andreas Zinonos, Alexandros Haliassos, Pingchuan Ma, Stavros Petridis, Maja Pantic
Lightweight feature encoder for wake-up word detection based on self-supervised speech representation
Hyungjun Lim, Younggwan Kim, Kiho Yeom, Eunjoo Seo, Hoodong Lee, Stanley Jungkyu Choi, Honglak Lee
Functional Knowledge Transfer with Self-supervised Representation Learning
Prakash Chandra Chhipa, Muskaan Chopra, Gopal Mengi, Varun Gupta, Richa Upadhyay, Meenakshi Subhash Chippa, Kanjar De, Rajkumar Saini, Seiichi Uchida, Marcus Liwicki
Fine-tuning Strategies for Faster Inference using Speech Self-Supervised Models: A Comparative Study
Salah Zaiem, Robin Algayres, Titouan Parcollet, Slim Essid, Mirco Ravanelli
CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation
Arthur Moreau, Nathan Piasco, Moussab Bennehar, Dzmitry Tsishkou, Bogdan Stanciulescu, Arnaud de La Fortelle
Centroid-centered Modeling for Efficient Vision Transformer Pre-training
Xin Yan, Zuchao Li, Lefei Zhang, Bo Du, Dacheng Tao
Ultra-High-Resolution Detector Simulation with Intra-Event Aware GAN and Self-Supervised Relational Reasoning
Baran Hashemi, Nikolai Hartmann, Sahand Sharifzadeh, James Kahn, Thomas Kuhr
Improving Self-Supervised Learning for Audio Representations by Feature Diversity and Decorrelation
Bac Nguyen, Stefan Uhlich, Fabien Cardinaux
MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors
Chen Huang, Hanlin Goh, Jiatao Gu, Josh Susskind
Self-Supervised Few-Shot Learning for Ischemic Stroke Lesion Segmentation
Luca Tomasetti, Stine Hansen, Mahdieh Khanmohammadi, Kjersti Engan, Liv Jorunn Høllesli, Kathinka Dæhli Kurz, Michael Kampffmeyer
Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning
Huiwon Jang, Hankook Lee, Jinwoo Shin