Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving the performance and generalization of SSL methods across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures such as transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
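To make the contrastive branch mentioned above concrete, the sketch below shows a minimal SimCLR-style NT-Xent (normalized temperature-scaled cross-entropy) loss in PyTorch. The function name nt_xent_loss, the temperature value, and the toy batch shapes are illustrative assumptions, not taken from any of the papers listed here.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 come from
    two augmentations of the same input, forming a positive pair.
    """
    n = z1.size(0)
    # Stack both views and project onto the unit sphere so the dot
    # product below is cosine similarity.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                            # (2N, 2N) logits
    # Exclude self-similarity from the softmax candidates.
    sim.fill_diagonal_(float('-inf'))
    # Each row's positive is its counterpart view: i -> i+N, i+N -> i.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for an encoder's output:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)

In a full SSL pipeline, z1 and z2 would come from passing two random augmentations of each input through a shared encoder and projection head; the labels the loss needs are generated by the pairing itself, which is what lets training proceed without annotations.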
Papers
Self-Supervised Learning of Color Constancy
Markus R. Ernst, Francisco M. López, Arthur Aubret, Roland W. Fleming, Jochen Triesch
An Effective Automated Speaking Assessment Approach to Mitigating Data Scarcity and Imbalanced Distribution
Tien-Hong Lo, Fu-An Chao, Tzu-I Wu, Yao-Ting Sung, Berlin Chen
Encoding Urban Ecologies: Automated Building Archetype Generation through Self-Supervised Learning for Energy Modeling
Xinwei Zhuang, Zixun Huang, Wentao Zeng, Luisa Caldas
LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression
Rachid Zeghlache, Pierre-Henri Conze, Mostafa El Habib Daho, Yihao Li, Hugo Le Boité, Ramin Tadayoni, Pascal Massin, Béatrice Cochener, Alireza Rezaei, Ikram Brahim, Gwenolé Quellec, Mathieu Lamard
How to Craft Backdoors with Unlabeled Data Alone?
Yifei Wang, Wenhan Ma, Stefanie Jegelka, Yisen Wang
Label-Efficient Sleep Staging Using Transformers Pre-trained with Position Prediction
Sayeri Lala, Hanlin Goh, Christopher Sandino
ADAPT^2: Adapting Pre-Trained Sensing Models to End-Users via Self-Supervision Replay
Hyungjun Yoon, Jaehyun Kwak, Biniyam Aschalew Tolera, Gaole Dai, Mo Li, Taesik Gong, Kimin Lee, Sung-Ju Lee
Exploring the Task-agnostic Trait of Self-supervised Learning in the Context of Detecting Mental Disorders
Rohan Kumar Gupta, Rohit Sinha
Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation
Shaowei Wei, Zhengwei Wu, Xin Li, Qintong Wu, Zhiqiang Zhang, Jun Zhou, Lihong Gu, Jinjie Gu