Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and adapting architectures such as transformers and CNNs to diverse modalities, including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, which improves performance on downstream tasks and reduces reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
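To make the pretext-task idea concrete, here is a minimal sketch of one classic example: training an encoder to predict which rotation was applied to an unlabeled image, so the "labels" are generated automatically from the data itself. The tiny encoder, random stand-in data, and hyperparameters below are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal sketch of a self-supervised pretext task: rotation prediction.
# The encoder never sees human labels; the targets are the rotations we apply ourselves.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Tiny CNN encoder; in practice this would be a ResNet or ViT backbone."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def make_rotation_batch(images):
    """Build the pretext task: rotate each image by 0/90/180/270 degrees."""
    rotated, targets = [], []
    for k in range(4):  # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        targets.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(targets)

encoder = SmallEncoder()
head = nn.Linear(128, 4)  # predicts which of the four rotations was applied
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)

# Stand-in for unlabeled data: random tensors shaped like small RGB images.
unlabeled = torch.rand(32, 3, 32, 32)

for step in range(10):  # a few illustrative pretraining steps
    x, y = make_rotation_batch(unlabeled)
    logits = head(encoder(x))
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After pretraining, `encoder` can be fine-tuned or probed on a labeled downstream task.
```

The same pattern underlies other pretext tasks (masked prediction, contrastive views): a transformation of the raw data supplies the training signal, and only the learned encoder is carried over to the downstream task.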
Papers
Unsupervised Segmentation of Colonoscopy Images
Heming Yao, Jérôme Lüscher, Benjamin Gutierrez Becker, Josep Arús-Pous, Tommaso Biancalani, Amelie Bigorgne, David Richmond
DMT: Comprehensive Distillation with Multiple Self-supervised Teachers
Yuang Liu, Jing Wang, Qiang Zhou, Fan Wang, Jun Wang, Wei Zhang
FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification
Anthony Zhou, Amir Barati Farimani
TriDeNT: Triple Deep Network Training for Privileged Knowledge Distillation in Histopathology
Lucas Farndale, Robert Insall, Ke Yuan
Multimodal Speech Emotion Recognition Using Modality-specific Self-Supervised Frameworks
Rutherford Agbeshi Patamia, Paulo E. Santos, Kingsley Nketia Acheampong, Favour Ekong, Kwabena Sarpong, She Kun
Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training
Yefan Zhou, Tianyu Pang, Keqin Liu, Charles H. Martin, Michael W. Mahoney, Yaoqing Yang
Learning Anatomically Consistent Embedding for Chest Radiography
Ziyu Zhou, Haozhe Luo, Jiaxuan Pang, Xiaowei Ding, Michael Gotway, Jianming Liang