Robust Aggregation Rule

Robust aggregation rules (AGRs) are crucial for secure and reliable distributed machine learning, particularly in federated learning, where training is decentralized and individual clients may be malicious or hold dissimilar data distributions. Current research focuses on developing AGRs that are resilient to Byzantine attacks (e.g., poisoned or arbitrarily corrupted model updates) and that remain effective on non-IID (not independent and identically distributed) data, often employing techniques such as gradient splitting or distance-based outlier suppression to identify and down-weight faulty updates. These advances are vital for ensuring the accuracy and trustworthiness of models trained on sensitive data across multiple sources, with applications ranging from medical imaging to other privacy-sensitive domains.
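As a concrete illustration of a distance-based AGR, the sketch below implements a Krum-style selection rule (Blanchard et al., 2017) in NumPy: each client update is scored by the summed squared distances to its closest neighbours, and the most tightly clustered update is chosen as the aggregate. The function name `krum` and its parameters are illustrative, not taken from any specific paper listed here.

```python
import numpy as np

def krum(updates: np.ndarray, num_byzantine: int) -> np.ndarray:
    """Select the client update whose nearest neighbours are most tightly
    clustered (Krum with m = 1). `updates` has shape (num_clients, num_params)."""
    n = updates.shape[0]
    # Each candidate is scored against its n - f - 2 closest neighbours.
    k = n - num_byzantine - 2
    if k < 1:
        raise ValueError("Krum requires more than f + 2 clients.")
    # Pairwise squared Euclidean distances between all client updates.
    diffs = updates[:, None, :] - updates[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    scores = []
    for i in range(n):
        # Drop the self-distance and sum the k smallest remaining distances.
        others = np.delete(sq_dists[i], i)
        scores.append(np.sum(np.sort(others)[:k]))
    return updates[int(np.argmin(scores))]

# Toy usage: 8 honest updates near zero, 2 Byzantine outliers far away.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))
byzantine = rng.normal(10.0, 0.1, size=(2, 5))
aggregated = krum(np.vstack([honest, byzantine]), num_byzantine=2)
```

Because the score ignores the largest distances, the outlier updates cannot pull the selected aggregate toward themselves, which is the core idea behind distance-based outlier suppression.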

Papers