Paper ID: 2401.11647
LW-FedSSL: Resource-efficient Layer-wise Federated Self-supervised Learning
Ye Lin Tun, Chu Myaet Thwal, Le Quang Huy, Minh N. H. Nguyen, Choong Seon Hong
Many studies integrate federated learning (FL) with self-supervised learning (SSL) to take advantage of raw data distributed across edge devices. However, edge devices often struggle with the high computation and communication costs imposed by SSL and FL algorithms. To address this challenge, we propose LW-FedSSL, a layer-wise federated self-supervised learning approach that allows edge devices to incrementally train a single layer of the model at a time. We introduce server-side calibration and representation alignment mechanisms to ensure that LW-FedSSL delivers performance on par with conventional federated self-supervised learning (FedSSL) while significantly lowering resource demands. In a pure layer-wise training scheme, updating only one layer at a time can limit effective interaction between the layers of the model. The server-side calibration mechanism leverages the resource-rich FL server to ensure smooth collaboration between the layers of the global model. During local training, the representation alignment mechanism encourages closeness between the representations of local models and those of the global model, thereby preserving the layer cohesion established by server-side calibration. With the proposed mechanisms, LW-FedSSL achieves a $3.3\times$ reduction in memory usage, $2.1\times$ fewer computational operations (FLOPs), and a $3.2\times$ lower communication cost while maintaining the same level of performance as its end-to-end training counterpart. Additionally, we explore a progressive training strategy called Prog-FedSSL, which matches end-to-end training in memory requirements but offers a $1.8\times$ reduction in FLOPs and communication costs. Although Prog-FedSSL is not as resource-efficient as LW-FedSSL, its performance improvements make it a suitable candidate for FL environments with more lenient resource constraints.
Submitted: Jan 22, 2024
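The client-side idea in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy encoder, the `forward_up_to` and `local_update` helpers, the stand-in SSL loss, and the `align_weight` hyperparameter are all hypothetical. It shows only the two ingredients named above: freezing all but the active layer during layer-wise local training, and a representation-alignment term that pulls local representations toward those of the (frozen) global model. A real FedSSL system would use an actual contrastive objective and server-side aggregation and calibration across many clients.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(dim=32, depth=4):
    """Toy encoder: a stack of blocks intended to be trained one at a time."""
    return nn.ModuleList(
        [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)]
    )

def forward_up_to(layers, x, stage):
    """Run the input through blocks 0..stage (inclusive)."""
    for layer in layers[: stage + 1]:
        x = layer(x)
    return x

def local_update(local, global_frozen, batches, stage, align_weight=0.1, lr=1e-2):
    """One client update at a given layer-wise stage.

    Only block `stage` is trainable; earlier blocks stay frozen. The loss
    combines a placeholder SSL objective with a representation-alignment
    term that pulls local representations toward the global model's.
    """
    for i, layer in enumerate(local):
        layer.requires_grad_(i == stage)  # freeze every block except the active one
    opt = torch.optim.SGD(local[stage].parameters(), lr=lr)

    for x in batches:
        z_local = forward_up_to(local, x, stage)
        with torch.no_grad():  # the global model serves only as a fixed anchor
            z_global = forward_up_to(global_frozen, x, stage)
        # Stand-in for a real SSL loss (e.g., a contrastive objective in practice).
        ssl_loss = z_local.pow(2).mean()
        # Representation alignment: keep local representations close to global ones.
        align_loss = 1.0 - F.cosine_similarity(z_local, z_global, dim=-1).mean()
        loss = ssl_loss + align_weight * align_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return local

# One round for a single client at stage 1 on random toy data.
global_model = make_encoder()
client_model = copy.deepcopy(global_model)
batches = [torch.randn(8, 32) for _ in range(3)]
client_model = local_update(client_model, global_model, batches, stage=1)
```

Cosine similarity is used here as one plausible closeness measure between local and global representations; since only the active block's parameters receive gradients, both the activation memory and the per-round upload shrink, which is the source of the resource savings the abstract reports.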