Adversarial Communication

Research on adversarial communication in multi-agent systems studies how agents can cooperate robustly and achieve shared goals despite malicious actors that disrupt communication channels or inject manipulated information. Current work emphasizes resilient communication strategies, often learned with multi-agent reinforcement learning and supported by techniques such as generative adversarial imitation learning or theory-of-mind models of other agents, that filter out or down-weight adversarial messages. Such robustness is a prerequisite for trustworthy and secure multi-agent systems, with applications ranging from autonomous robotics and search-and-rescue operations to collaborative AI systems more broadly.
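
A recurring design pattern behind such filtering is a gated aggregation step: before acting on peer messages, a receiver scores each message for consistency with its own observations and down-weights those that look anomalous. The sketch below is a minimal illustration of that mechanism using a fixed distance-based trust heuristic; in the literature the gate is typically learned (e.g., via multi-agent RL or a theory-of-mind model of the sender), and the function names, dimensions, and weighting scheme here are illustrative assumptions rather than any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


def gated_aggregate(own_estimate: np.ndarray,
                    messages: np.ndarray,
                    temperature: float = 1.0):
    """Fuse peer messages, down-weighting those inconsistent with own estimate.

    In practice this gate would be learned; a fixed consistency heuristic
    stands in for it here purely for illustration.
    """
    # Messages far from the receiver's own estimate get exponentially less trust.
    dists = np.linalg.norm(messages - own_estimate, axis=1)
    trust = softmax(-dists / temperature)
    return trust @ messages, trust


# Toy run: two honest senders report values close to the truth, while one
# adversary injects a large, inconsistent payload.
truth = np.array([1.0, 0.5, -0.2, 0.0])
honest = truth + rng.normal(scale=0.05, size=(2, 4))
adversarial = truth + rng.normal(scale=5.0, size=(1, 4))

fused, trust = gated_aggregate(truth, np.vstack([honest, adversarial]))
print("trust:", np.round(trust, 3))   # adversary's weight collapses toward 0
print("fused:", np.round(fused, 3))   # fused message tracks the honest consensus
```

In the toy run the adversarial message's trust weight collapses toward zero, so the fused message tracks the honest consensus; learned gates aim for the same effect without assuming the receiver already knows the ground truth.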

Papers