Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage the vast amounts of unlabeled data that are readily available, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
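To make the pretext-task idea concrete, below is a minimal sketch of a contrastive objective of the kind used by SimCLR- and MoCo-style methods (several of the papers listed here build on this family). It is an illustrative PyTorch example, not the implementation of any specific paper below; the encoder, augmentations, and hyperparameters are placeholders.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Names and hyperparameters are illustrative, not taken from any paper listed below.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projections of two augmentations of the same N samples.
    Each sample's positive is its counterpart in the other view; the
    remaining 2N - 2 samples in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # (2N, 2N) cosine similarities
    n = z1.size(0)

    # Mask out self-similarity so a sample is never its own negative.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))

    # Positive pair indices: i <-> i + n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: encode two random augmentations of the same unlabeled batch and
# minimize the loss; no labels are required.
if __name__ == "__main__":
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    images = torch.randn(16, 3, 32, 32)                  # stand-in for an unlabeled batch
    view1 = images + 0.1 * torch.randn_like(images)      # stand-in augmentations
    view2 = images + 0.1 * torch.randn_like(images)
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

The trained encoder is then reused on a downstream task (classification, segmentation, detection), where only a small labeled set is needed for fine-tuning.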
Papers
SimCURL: Simple Contrastive User Representation Learning from Command Sequences
Hang Chu, Amir Hosein Khasahmadi, Karl D. D. Willis, Fraser Anderson, Yaoli Mao, Linh Tran, Justin Matejka, Jo Vermeulen
Transfer Learning for Segmentation Problems: Choose the Right Encoder and Skip the Decoder
Jonas Dippel, Matthias Lenga, Thomas Goerttler, Klaus Obermayer, Johannes Höhne
Fast-MoCo: Boost Momentum-based Contrastive Learning with Combinatorial Patches
Yuanzheng Ci, Chen Lin, Lei Bai, Wanli Ouyang
Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation
Banafshe Felfeliyan, Abhilash Hareendranathan, Gregor Kuntze, David Cornell, Nils D. Forkert, Jacob L. Jaremko, Janet L. Ronsky
LAVA: Language Audio Vision Alignment for Contrastive Video Pre-Training
Sumanth Gurram, Andy Fang, David Chan, John Canny
SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection
Antonio Barbalau, Radu Tudor Ionescu, Mariana-Iuliana Georgescu, Jacob Dueholm, Bharathkumar Ramachandra, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B. Moeslund, Mubarak Shah