Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to exploit vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling, with particular impact on fields such as medical imaging, speech processing, and autonomous driving.
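To make the "pretext task" idea above concrete, the following is a minimal sketch of one common SSL objective: a SimCLR-style contrastive (NT-Xent) loss in PyTorch, where two augmented views of the same image are pulled together and all other samples in the batch act as negatives. The toy encoder, noise-based "augmentations", and hyperparameters are illustrative assumptions only and are not taken from any of the papers listed below.

```python
# Minimal sketch of a self-supervised pretext task (SimCLR-style contrastive
# learning). Names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: each sample's two augmented views are positives;
    every other sample in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D) stacked views
    sim = z @ z.T / temperature                          # cosine similarity matrix
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # exclude self-similarity
    # The positive for sample i is its other view: index i+n (or i-n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy encoder; in practice this would be a CNN or vision transformer backbone.
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    x = torch.randn(16, 3, 32, 32)                       # a batch of unlabeled images
    # Stand-in "augmentations": small additive noise (real pipelines use crops, color jitter, etc.).
    view1, view2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()                                      # gradients flow without any labels
    print(float(loss))
```

The key property illustrated here is that the supervisory signal comes entirely from the data itself: the learned representations can then be transferred to downstream tasks with far fewer labels.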
Papers
Semantic Graph Consistency: Going Beyond Patches for Regularizing Self-Supervised Vision Transformers
Chaitanya Devaguptapu, Sumukh Aithal, Shrinivas Ramasubramanian, Moyuru Yamada, Manohar Kaul
Enhancing Single-Slice Segmentation with 3D-to-2D Unpaired Scan Distillation
Xin Yu, Qi Yang, Han Liu, Ho Hin Lee, Yucheng Tang, Lucas W. Remedios, Michael E. Kim, Rendong Zhang, Shunxing Bao, Yuankai Huo, Ann Zenobia Moore, Luigi Ferrucci, Bennett A. Landman
Occam's Razor for Self Supervised Learning: What is Sufficient to Learn Good Representations?
Mark Ibrahim, David Klindt, Randall Balestriero
Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition
Weichao Zhao, Wengang Zhou, Hezhen Hu, Min Wang, Houqiang Li
Self-Supervised and Few-Shot Learning for Robust Bioaerosol Monitoring
Adrian Willi, Pascal Baumann, Sophie Erb, Fabian Gröger, Yanick Zeder, Simone Lionetti
MMM: Multi-Layer Multi-Residual Multi-Stream Discrete Speech Representation from Self-supervised Learning Model
Jiatong Shi, Xutai Ma, Hirofumi Inaguma, Anna Sun, Shinji Watanabe
Industrial Language-Image Dataset (ILID): Adapting Vision Foundation Models for Industrial Settings
Keno Moenck, Duc Trung Thieu, Julian Koch, Thorsten Schüppstuhl
Higher-Order Spatial Information for Self-Supervised Place Cell Learning
Jared Deighton, Wyatt Mackey, Ioannis Schizas, David L. Boothe, Vasileios Maroulas
Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Yipeng Zhang, Haitao Mi, Helen Meng