Adversarial Environment
Adversarial environments in machine learning are scenarios where agents or models face unpredictable, potentially malicious inputs or interactions; research in this area aims to develop systems that remain robust and resilient under such conditions. Current work focuses on hardening a range of models, including reinforcement learning agents, federated learning systems, and deep neural networks, often through techniques such as minimax optimization, adversarial training, and robust aggregation rules. This research is crucial for deploying reliable AI in real-world applications, such as cybersecurity, autonomous driving, and online advertising, where adversarial attacks pose significant risks. The ultimate goal is algorithms and architectures that maintain performance and safety despite unpredictable or malicious interference.
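To make the minimax framing concrete, here is a minimal sketch (not from any of the papers listed below; all names and parameters are illustrative) of adversarial training for a logistic-regression classifier: an inner step finds a worst-case input perturbation via the fast gradient sign method (FGSM), and an outer step updates the weights on those perturbed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, x, y, eps):
    # Inner maximization: move each input by eps (in the L-infinity sense)
    # in the direction that increases the logistic loss.
    grad_x = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    # Outer minimization: gradient descent on the loss evaluated at
    # the adversarially perturbed inputs, approximating
    # min_w max_{||delta||_inf <= eps} loss(w, x + delta, y).
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        x_adv = fgsm(w, x, y, eps)
        grad_w = x_adv.T @ (sigmoid(x_adv @ w) - y) / len(y)
        w -= lr * grad_w
    return w

# Toy linearly separable data: two clusters, labeled by the sign of x0 + x1.
x = rng.normal(0, 1, (200, 2)) + np.where(rng.random(200) < 0.5, -2.0, 2.0)[:, None]
y = (x[:, 0] + x[:, 1] > 0).astype(float)
w = adversarial_train(x, y)
```

The resulting classifier is trained only on worst-case inputs, which in practice trades a little clean accuracy for robustness inside the eps-ball; the same min-max structure underlies adversarial training for deep networks, with FGSM replaced by stronger multi-step attacks.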
Papers
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design
Matthew Thomas Jackson, Minqi Jiang, Jack Parker-Holder, Risto Vuorio, Chris Lu, Gregory Farquhar, Shimon Whiteson, Jakob Nicolaus Foerster
Expected flow networks in stochastic environments and two-player zero-sum games
Marco Jiralerspong, Bilun Sun, Danilo Vucetic, Tianyu Zhang, Yoshua Bengio, Gauthier Gidel, Nikolay Malkin