Paper ID: 2405.18752
Resilient Average Consensus with Adversaries via Distributed Detection and Recovery
Liwei Yuan, Hideaki Ishii
We study the problem of resilient average consensus in multi-agent systems where some of the agents are subject to failures or attacks. The objective of resilient average consensus is for the non-faulty (normal) agents to converge to the average of their initial values despite the erroneous effects of malicious agents. To this end, we propose a distributed iterative resilient average consensus algorithm for multi-agent networks with general directed topologies. The proposed algorithm has two parts at each iteration: detection and averaging. For the detection part, we propose two distributed algorithms, one of which can detect malicious agents using only information from direct in-neighbors. For the averaging part, we extend an existing averaging algorithm so that, once malicious agents are detected, normal agents can remove the effects these agents have injected up to the time of detection. Another important feature of our method is that it can handle the case where malicious agents are neighbors of each other and collaborate to prevent the normal agents from reaching the correct average, a case that cannot be handled by existing detection approaches in the related literature. Moreover, our algorithm is storage-efficient, especially for large-scale networks, since each agent only requires the values of neighbors within two hops. Lastly, numerical examples are given to verify the efficacy of the proposed algorithms.
Submitted: May 29, 2024
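
The abstract describes a per-iteration structure of detection followed by averaging with a recovery (correction) step. The following is a minimal Python sketch of that generic structure only, under assumed simplified rules: the network, the attack model, the `looks_inconsistent` test, the step size, and the rollback bookkeeping are all illustrative placeholders, not the detection and recovery algorithms proposed in the paper, and the sketch does not reproduce the paper's convergence guarantees.

```python
# Minimal sketch of a "detect then average" consensus loop.
# The detection rule, attack model, and graph below are assumptions for
# illustration; they are NOT the paper's algorithms.

import numpy as np

rng = np.random.default_rng(0)

n = 6                                  # number of agents
x = rng.uniform(0, 10, n)              # initial values; their mean is the target
true_avg = x.mean()

# Directed graph: in_neighbors[i] lists agents j with an edge j -> i.
in_neighbors = {i: [(i - 1) % n, (i - 2) % n] for i in range(n)}

malicious = {3}                        # assumed attack: agent 3 adds a constant bias
detected = {i: set() for i in range(n)}                                   # flagged neighbors
received_sum = {i: {j: 0.0 for j in in_neighbors[i]} for i in range(n)}   # influence bookkeeping

def transmit(j):
    """Value agent j broadcasts (malicious agents inject a bias)."""
    return x[j] + (50.0 if j in malicious else 0.0)

def looks_inconsistent(i, value):
    """Placeholder detection test: flag a neighbor whose value is far from
    agent i's own state. The paper's tests use neighbor information instead."""
    return abs(value - x[i]) > 20.0

eps = 0.1                              # consensus step size
for k in range(200):
    msgs = {i: {j: transmit(j) for j in in_neighbors[i]} for i in range(n)}
    x_new = x.copy()
    for i in range(n):
        if i in malicious:
            continue                   # malicious agents do not follow the protocol
        for j, v in msgs[i].items():
            if j in detected[i]:
                continue               # ignore neighbors already flagged
            if looks_inconsistent(i, v):
                detected[i].add(j)
                # "Recovery": roll back the accumulated influence of agent j.
                x_new[i] -= eps * received_sum[i][j]
            else:
                received_sum[i][j] += (v - x[i])
                x_new[i] += eps * (v - x[i])
    x = x_new

normal = [i for i in range(n) if i not in malicious]
print("target average:", round(true_avg, 3))
print("normal agents :", np.round(x[normal], 3))
```

Note that this sketch uses only direct in-neighbor values and a crude consistency check; the paper's method additionally exploits values of neighbors within two hops and handles collaborating malicious neighbors, which the placeholder test above cannot.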