Robust Aggregation

Robust aggregation in machine learning focuses on combining predictions or model updates from multiple sources even when some of those sources are unreliable or malicious. Current research emphasizes algorithms that remain dependable under adversarial or faulty participants, including Byzantine failures, selfish clients, and backdoor poisoning, often employing techniques such as the geometric median, the coordinate-wise trimmed mean, and adaptive weighting schemes to filter out or downweight suspect inputs. This work is crucial for securing distributed learning paradigms like federated learning, improving their reliability and applicability in sensitive domains such as healthcare and finance, where data privacy and model integrity are paramount.
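
To make two of the mentioned techniques concrete, the following is a minimal NumPy sketch of a coordinate-wise trimmed mean and a geometric median (approximated with Weiszfeld's algorithm) applied to flattened client updates; the function names, parameters, and toy data are illustrative assumptions rather than the method of any particular paper.

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: for each coordinate, drop the largest and
    smallest `trim_ratio` fraction of client values, then average the rest.
    `updates` is an (n_clients, n_params) array of flattened model updates."""
    updates = np.asarray(updates, dtype=float)
    n = updates.shape[0]
    k = int(np.floor(trim_ratio * n))          # values trimmed from each end
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    if k > 0:
        sorted_updates = sorted_updates[k:n - k]
    return sorted_updates.mean(axis=0)

def geometric_median(updates, n_iters=100, eps=1e-8):
    """Approximate geometric median via Weiszfeld's algorithm: the point
    minimizing the sum of Euclidean distances to all client updates."""
    updates = np.asarray(updates, dtype=float)
    median = updates.mean(axis=0)              # initialize at the plain mean
    for _ in range(n_iters):
        dists = np.linalg.norm(updates - median, axis=1)
        dists = np.clip(dists, eps, None)      # avoid division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * updates).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

# Toy example: 8 honest clients near the true update, 2 Byzantine outliers.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
byzantine = np.full((2, 4), 50.0)              # adversarial updates far from the truth
all_updates = np.vstack([honest, byzantine])

print("plain mean:      ", all_updates.mean(axis=0))   # pulled toward the outliers
print("trimmed mean:    ", trimmed_mean(all_updates, trim_ratio=0.2))
print("geometric median:", geometric_median(all_updates))
```

In this toy setup the plain mean is dragged far from the honest updates by the two outliers, while both robust aggregators stay close to the honest cluster, which is the basic property these schemes trade a small amount of statistical efficiency to obtain.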

Papers