Robust Aggregator

Robust aggregators aim to improve the reliability of distributed machine learning by mitigating the impact of malicious or faulty contributions from individual participants. Current research compares aggregators such as the plain mean and robust variants like the truncated (trimmed) mean under attack models including label poisoning and Byzantine attacks, often in high-dimensional settings. A key challenge is balancing robustness against effective learning, since overly conservative aggregation can discard useful updates and hinder model accuracy. These advances are crucial for securing collaborative machine learning systems and ensuring the trustworthiness of models trained on decentralized data.
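
To make the idea concrete, below is a minimal sketch of a coordinate-wise trimmed-mean aggregator compared against plain mean aggregation. It is not taken from any specific paper on this topic; the function name `trimmed_mean`, the `trim_ratio` parameter, and the toy Byzantine scenario are illustrative assumptions.

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean over client updates.

    updates: array of shape (n_clients, dim), one model update per row.
    trim_ratio: fraction of the largest and smallest values discarded
                per coordinate before averaging (illustrative parameter).
    """
    n = updates.shape[0]
    k = int(n * trim_ratio)                    # values trimmed from each tail
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    if k > 0:
        sorted_updates = sorted_updates[k:n - k]
    return sorted_updates.mean(axis=0)


# Toy example: 8 honest clients plus 2 Byzantine clients sending large outliers.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
byzantine = np.full((2, 4), 100.0)             # adversarial updates far from the truth
all_updates = np.vstack([honest, byzantine])

print("plain mean:  ", all_updates.mean(axis=0))                     # pulled off by outliers
print("trimmed mean:", trimmed_mean(all_updates, trim_ratio=0.2))    # stays close to 1.0
```

In this sketch, trimming 20% from each tail removes the two adversarial updates per coordinate, so the aggregate stays near the honest clients' values, while the plain mean is dragged far away; setting `trim_ratio` too high illustrates the robustness-versus-accuracy trade-off, since honest updates also get discarded.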

Papers