Paper ID: 2305.11633
Goal-Oriented Communications in Federated Learning via Feedback on Risk-Averse Participation
Shashi Raj Pandey, Van Phuc Bui, Petar Popovski
We treat the problem of client selection in a Federated Learning (FL) setup, where the learning objective and the local incentives of the participants are used to formulate a goal-oriented communication problem. Specifically, we incorporate the risk-averse nature of participants and obtain communication-efficient on-device performance, relying on feedback from the Parameter Server (\texttt{PS}). Each client must decide its transmission plan, i.e., when not to participate in FL, based on its intrinsic incentive: the value of the trained global model upon this client's participation. Poor updates not only degrade the performance of the global model at added communication cost but also propagate the performance loss to other participating devices. We cast the relevance of local updates as \emph{semantic information} for developing local transmission strategies, i.e., deciding when ``not to transmit". In each aggregation period, the devices use feedback about the state of the \texttt{PS} to evaluate their contributions to training the learning model, which eventually lowers the number of occupied connections. Simulation results validate the efficacy of our proposed approach, with up to a $1.4\times$ gain in communication link utilization compared with the baselines.
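The abstract describes a client-side gating rule: a device estimates the value of its local update using feedback from the \texttt{PS} and skips transmission when that value is too low. The following is a minimal illustrative sketch of such a rule, not the authors' formulation; the contribution metric, the PS feedback signal (global loss), and the threshold `risk_tolerance` are all assumptions introduced here for clarity.

```python
# Sketch of a risk-averse "decide when not to transmit" rule for an FL client.
# All names (estimate_contribution, should_transmit, risk_tolerance) are
# illustrative assumptions, not the paper's exact incentive model.
import numpy as np

def estimate_contribution(local_model, global_model, X_val, y_val):
    """Proxy for the value of this client's update: improvement in squared-error
    loss of the locally refined model over the current global model on held-out
    local data (illustrative metric only)."""
    def mse(w):
        return np.mean((X_val @ w - y_val) ** 2)
    return mse(global_model) - mse(local_model)  # positive => the update helps

def should_transmit(contribution, ps_feedback_loss, risk_tolerance=0.05):
    """Risk-averse gating: transmit only if the estimated contribution exceeds
    a tolerance scaled by the PS-reported global loss (assumed feedback signal)."""
    return contribution > risk_tolerance * ps_feedback_loss

# Toy usage: a linear-regression client deciding whether to occupy a channel.
rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)
global_model = rng.normal(size=d)                 # current PS model
X = rng.normal(size=(100, d))
y = X @ w_true + 0.1 * rng.normal(size=100)

# One local gradient step stands in for local training.
grad = 2 * X.T @ (X @ global_model - y) / len(y)
local_model = global_model - 0.01 * grad

contribution = estimate_contribution(local_model, global_model, X, y)
ps_feedback_loss = np.mean((X @ global_model - y) ** 2)  # stand-in for PS feedback
print("transmit this round:", should_transmit(contribution, ps_feedback_loss))
```

In this sketch, rounds in which the client's update adds little value are silently skipped, which is the mechanism by which the paper reports fewer occupied connections.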
Submitted: May 19, 2023