Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage them to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
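To make the notion of a pretext task concrete, here is a minimal, hedged sketch of a contrastive SSL objective (a SimCLR-style NT-Xent loss) in PyTorch. It is illustrative only and not the method of any paper listed below; the names `SmallEncoder` and `nt_xent`, the toy Gaussian "augmentations", and all hyperparameters are assumptions for the example.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Shows how SSL derives a training signal from unlabeled data alone:
# two augmented views of the same example are pulled together in embedding space.
# SmallEncoder, nt_xent, and the noise augmentation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder mapping flattened inputs to an embedding."""
    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, d) unit embeddings
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))             # exclude self-similarity
    # The positive for view i of sample k is the other view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: two random "augmentations" of the same unlabeled batch serve as positive pairs.
encoder = SmallEncoder()
x = torch.randn(32, 784)                                   # unlabeled batch (no labels used)
v1 = x + 0.1 * torch.randn_like(x)
v2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent(encoder(v1), encoder(v2))
loss.backward()
print(f"pretext loss: {loss.item():.3f}")
```

The learned encoder would then typically be reused (frozen or fine-tuned) on a downstream labeled task, which is where the reduced reliance on manual labeling pays off.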
Papers
Infinite Width Limits of Self Supervised Neural Networks
Maximilian Fleissner, Gautham Govind Anil, Debarghya Ghoshdastidar
Multi-Modal Self-Supervised Learning for Surgical Feedback Effectiveness Assessment
Arushi Gupta, Rafal Kocielnik, Jiayun Wang, Firdavs Nasriddinov, Cherine Yang, Elyssa Wong, Anima Anandkumar, Andrew Hung
PFML: Self-Supervised Learning of Time-Series Data Without Representation Collapse
Einari Vaaras, Manu Airaksinen, Okko Räsänen
Deep learning robotics using self-supervised spatial differentiation drive autonomous contact-based semiconductor characterization
Alexander E. Siemenn, Basita Das, Kangyu Ji, Fang Sheng, Tonio Buonassisi
A Self-Supervised Model for Multi-modal Stroke Risk Prediction
Camille Delgrange, Olga Demler, Samia Mora, Bjoern Menze, Ezequiel de la Rosa, Neda Davoudi
VPBSD: Vessel-Pattern-Based Semi-Supervised Distillation for Efficient 3D Microscopic Cerebrovascular Segmentation
Xi Lin, Shixuan Zhao, Xinxu Wei, Amir Shmuel, Yongjie Li
Self Supervised Networks for Learning Latent Space Representations of Human Body Scans and Motions
Emmanuel Hartman, Nicolas Charon, Martin Bauer
MA^2: A Self-Supervised and Motion Augmenting Autoencoder for Gait-Based Automatic Disease Detection
Yiqun Liu, Ke Zhang, Yin Zhu
Multi-modal NeRF Self-Supervision for LiDAR Semantic Segmentation
Xavier Timoneda, Markus Herb, Fabian Duerr, Daniel Goehring, Fisher Yu
Self-Supervised Multi-View Learning for Disentangled Music Audio Representations
Julia Wilkins, Sivan Ding, Magdalena Fuentes, Juan Pablo Bello
MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision
Ruicheng Wang, Sicheng Xu, Cassie Dai, Jianfeng Xiang, Yu Deng, Xin Tong, Jiaolong Yang
Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study
Tianze Wang, Maryam Honari-Jahromi, Styliani Katsarou, Olga Mikheeva, Theodoros Panagiotakopoulos, Oleg Smirnov, Lele Cao, Sahar Asadi