Byzantine-Robust Methods
Byzantine-robust methods aim to secure distributed machine learning, particularly federated learning (FL), against malicious participants ("Byzantine" nodes) that may send corrupted updates or poisoned data to derail training. Current research focuses on aggregation rules that tolerate such attacks without sacrificing efficiency, often employing techniques such as gradient normalization, norm clipping, or multi-filter aggregation to identify and suppress malicious inputs, even under non-identical (non-IID) data distributions and partial client participation. These advances are crucial for the reliability and security of FL systems, which underpin privacy-preserving AI applications and large-scale distributed training.
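To make the aggregation idea concrete, here is a minimal sketch (an illustration, not drawn from any specific paper on this topic) of a server-side rule combining two of the techniques mentioned above: per-client norm clipping and a coordinate-wise median. The function names and the max_norm threshold are assumptions for the example.

```python
import numpy as np

def clip_update(update, max_norm):
    # Scale an update down so its L2 norm is at most max_norm,
    # bounding the influence any single client can exert.
    norm = np.linalg.norm(update)
    if norm > max_norm:
        return update * (max_norm / norm)
    return update

def robust_aggregate(updates, max_norm=1.0):
    # Clip each client update, then take the coordinate-wise median.
    # The median tolerates a minority of arbitrarily corrupted
    # (Byzantine) updates that the mean would not.
    clipped = np.stack([clip_update(u, max_norm) for u in updates])
    return np.median(clipped, axis=0)

# Example: 8 honest clients plus 2 Byzantine clients sending huge values.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=5) for _ in range(8)]
byzantine = [np.full(5, 100.0) for _ in range(2)]
agg = robust_aggregate(honest + byzantine, max_norm=1.0)
print(agg)  # stays close to the honest mean despite the outliers
```

A plain average of the same ten updates would be dragged far from the honest consensus by the two outliers; clipping first also prevents a malicious client from dominating even the median's breakdown point with a single extreme coordinate.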