Federated Representation Learning

Federated representation learning (FRL) trains shared data representations collaboratively across decentralized devices without exchanging sensitive raw data, preserving privacy while still benefiting from the pooled information of all participants. Current research focuses on challenges such as non-IID data distributions and limited communication bandwidth, exploring techniques including self-supervised learning, bilevel optimization, and novel loss functions (e.g., maximal coding rate reduction) to improve efficiency and accuracy. These advances matter for applications that require privacy-preserving collaborative learning, such as personalized medicine and speech recognition, where data is inherently distributed and sensitive.
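The core idea above — clients jointly fitting one representation while exchanging only aggregate statistics, never raw rows — can be illustrated with a minimal sketch. This is a simplified stand-in for the methods surveyed, not any specific paper's algorithm: it learns a shared linear subspace by distributed power iteration, where each client sends only a small covariance-times-basis summary and the server averages and re-orthonormalizes. All names and the three-client setup here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clients hold different slices of data that share a
# common low-dimensional structure. Raw rows never leave a client.
d, k = 20, 3
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # true shared latent subspace
clients = [
    rng.normal(size=(50, k)) @ basis.T + 0.01 * rng.normal(size=(50, d))
    for _ in range(3)
]

def client_step(X, W):
    """Local update: a d x k summary statistic (X^T X W), not the data X."""
    return X.T @ (X @ W)

def server_aggregate(messages):
    """Sum the client summaries and re-orthonormalize via QR."""
    Q, _ = np.linalg.qr(sum(messages))
    return Q

# Communication rounds: broadcast W, collect summaries, aggregate.
W = np.linalg.qr(rng.normal(size=(d, k)))[0]       # shared representation
for _ in range(30):
    W = server_aggregate([client_step(X, W) for X in clients])
```

After the rounds, the columns of `W` span (approximately) the top principal subspace of the pooled data, even though no client ever shared its rows — a toy instance of the privacy-preserving collaboration the survey describes. Non-IID splits and bandwidth limits, the challenges noted above, show up here as skewed client summaries and the per-round cost of the d x k message.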

Papers