Federated Self-Supervised Learning
Federated self-supervised learning (FSSL) trains machine learning models collaboratively across decentralized devices without sharing raw data, exploiting the unlabeled data that resides on each client. Current research focuses on challenges such as data heterogeneity and limited client resources, using techniques like layer-wise training, momentum contrast, and spectral contrastive objectives, sometimes paired with deep reinforcement learning for resource optimization. FSSL's significance lies in enabling privacy-preserving model development in applications such as medical imaging, autonomous driving, and speech recognition, where labeled data is scarce or expensive to obtain.
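The core loop described above can be sketched in a few lines: each client runs local self-supervised updates on its own unlabeled data, and a server averages the resulting weights (FedAvg-style) without ever seeing the data. This is a toy illustration, not any specific paper's method: the "augmentation" is Gaussian noise, and the objective is an alignment-only term (it pulls two views of a sample together, omitting the repulsion term a full contrastive loss would add). All names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, EMB = 8, 4            # toy input and embedding dimensions
ROUNDS = 5                 # number of federated communication rounds

def augment(x):
    """Toy 'augmentation': small Gaussian noise (stand-in for real views)."""
    return x + 0.1 * rng.standard_normal(x.shape)

def local_update(W, data, lr=0.05, steps=10):
    """Local self-supervised training: pull the embeddings of two augmented
    views of the same unlabeled sample together (alignment-only objective)."""
    W = W.copy()
    for _ in range(steps):
        x = data[rng.integers(len(data))]
        x1, x2 = augment(x), augment(x)
        diff = W @ x1 - W @ x2                # gap between view embeddings
        grad = 2.0 * np.outer(diff, x1 - x2)  # d/dW of ||W x1 - W x2||^2
        W -= lr * grad
    return W

# Unlabeled, non-IID client datasets: heterogeneity simulated by mean shifts.
client_data = [rng.standard_normal((20, DIM)) + shift
               for shift in (0.0, 1.0, -1.0)]

W_global = 0.1 * rng.standard_normal((EMB, DIM))
for _ in range(ROUNDS):
    # Each client trains locally on its own unlabeled data ...
    local_models = [local_update(W_global, d) for d in client_data]
    # ... and the server averages the weights; raw data never leaves a client.
    W_global = np.mean(local_models, axis=0)

print("global model shape:", W_global.shape)
```

Real FSSL systems replace the noise augmentation with domain transforms, the linear map with a deep encoder, and plain averaging with heterogeneity-aware aggregation, but the communication pattern is the same.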