Self-Supervised
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling. This is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
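As a concrete illustration of a pretext task, the sketch below shows a contrastive objective in which two augmented views of the same unlabeled sample are pulled together and all other samples in the batch are pushed apart (a SimCLR-style NT-Xent loss). This is a generic, minimal example: the encoder, augmentation, and function names (SmallEncoder, nt_xent) are illustrative assumptions and are not drawn from any of the papers listed below.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallEncoder(nn.Module):
        """Toy encoder with a projection head; real SSL systems use CNNs or transformers."""
        def __init__(self, in_dim=784, hidden=256, out_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, out_dim))

        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)   # unit-norm embeddings

    def nt_xent(z1, z2, temperature=0.5):
        """Contrastive loss: the two views of each sample are positives,
        every other sample in the batch acts as a negative."""
        n = z1.size(0)
        z = torch.cat([z1, z2], dim=0)                       # (2n, d) embeddings
        sim = (z @ z.t()) / temperature                      # pairwise cosine similarities
        mask = torch.eye(2 * n, dtype=torch.bool)
        sim = sim.masked_fill(mask, float('-inf'))           # ignore self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # Usage: additive noise stands in for real augmentations (crops, flips, etc.)
    # applied to an unlabeled batch; no labels are needed anywhere.
    encoder = SmallEncoder()
    x = torch.randn(32, 784)
    v1 = x + 0.1 * torch.randn_like(x)
    v2 = x + 0.1 * torch.randn_like(x)
    loss = nt_xent(encoder(v1), encoder(v2))
    loss.backward()

After pretraining with such an objective, the encoder's representations are typically reused for downstream tasks, which is where the reduced reliance on manual labeling pays off.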
Papers
STORM: Spatio-Temporal Reconstruction Model for Large-Scale Outdoor Scenes
Jiawei Yang, Jiahui Huang, Yuxiao Chen, Yan Wang, Boyi Li, Yurong You, Apoorva Sharma, Maximilian Igl, Peter Karkus, Danfei Xu, Boris Ivanovic, Yue Wang, Marco Pavone
SS-CTML: Self-Supervised Cross-Task Mutual Learning for CT Image Reconstruction
Gaofeng Chen, Yaoduo Zhang, Li Huang, Pengfei Wang, Wenyu Zhang, Dong Zeng, Jianhua Ma, Ji He
Beyond Pairwise Correlations: Higher-Order Redundancies in Self-Supervised Representation Learning
David Zollikofer, Béni Egressy, Frederik Benzing, Matthias Otth, Roger Wattenhofer
Gen-SIS: Generative Self-augmentation Improves Self-supervised Learning
Varun Belagali, Srikar Yellapragada, Alexandros Graikos, Saarthak Kapse, Zilinghan Li, Tarak Nath Nandi, Ravi K Madduri, Prateek Prasanna, Joel Saltz, Dimitris Samaras