Robust Aggregator
Robust aggregators aim to improve the reliability of distributed machine learning by limiting the influence of malicious or faulty contributions from individual participants. Current research compares aggregation rules, from the plain mean to robust variants such as the trimmed (truncated) mean, under attack models including label poisoning and Byzantine attacks, often in high-dimensional settings. A key challenge is balancing robustness against learning performance, since overly conservative aggregation can degrade model accuracy. These advances are important for securing collaborative machine learning systems and for the trustworthiness of models trained on decentralized data.
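To make the idea concrete, below is a minimal sketch of a coordinate-wise trimmed-mean aggregator of the kind discussed above. It assumes client updates arrive as NumPy arrays; the function name, parameters, and example values are illustrative and not drawn from any particular paper.

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean of client updates.

    updates: array of shape (n_clients, dim), one gradient/update per row.
    trim_ratio: fraction of clients trimmed from each tail per coordinate.
    """
    n_clients = updates.shape[0]
    k = int(np.floor(trim_ratio * n_clients))  # values dropped from each tail
    # Sort each coordinate independently across clients.
    sorted_updates = np.sort(updates, axis=0)
    # Drop the k smallest and k largest values per coordinate, then average the rest.
    return sorted_updates[k:n_clients - k].mean(axis=0)

# Example: 8 honest clients plus 2 Byzantine clients sending extreme updates.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
byzantine = np.full((2, 5), 100.0)              # poisoned updates
all_updates = np.vstack([honest, byzantine])

print("plain mean:  ", all_updates.mean(axis=0))                    # pulled far off by attackers
print("trimmed mean:", trimmed_mean(all_updates, trim_ratio=0.2))   # stays close to 1.0
```

Trimming discards the most extreme values per coordinate before averaging, which bounds the influence of a minority of Byzantine clients; the trade-off is that aggressive trimming also discards information from honest clients, illustrating the robustness-versus-accuracy tension noted above.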