Multiple Adversary

Multiple adversary research focuses on enhancing the robustness of machine learning models, particularly deep neural networks, against several attacks mounted simultaneously. Current research explores diverse attack strategies, including multi-granular perturbations, Bayesian approaches, and multi-trigger backdoors, and employs techniques such as generative adversarial networks and reinforcement learning both to generate these attacks and to defend against them. This work is crucial for improving the security and reliability of AI systems in safety-critical applications, addressing vulnerabilities that could otherwise lead to catastrophic failures or malicious exploitation. Developing robust defense mechanisms remains a key objective, with recent efforts focusing on interpreter-based ensembles and uncertainty-aware techniques.
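
To make the "robustness against multiple simultaneous attacks" idea concrete, below is a minimal sketch of one common formulation: adversarial training against the worst case over an ensemble of attacks, here L∞ and L2 PGD in PyTorch. The attack implementations, hyperparameters, and the per-example max-loss selection are illustrative assumptions for a sketch, not the method of any particular paper listed here.

```python
# Sketch: adversarial training against multiple attacks at once.
# Per batch, craft one adversarial example per attack type and train
# on whichever maximizes the loss (a "max over attacks" strategy).
# Assumes 4D image inputs in [0, 1]; all names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent under an L-infinity budget."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x  # keep pixels valid
        delta.grad.zero_()
    return (x + delta).detach()


def pgd_l2(model, x, y, eps=0.5, alpha=0.1, steps=10):
    """Projected gradient descent under an L2 budget."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        g = delta.grad
        g_norm = g.flatten(1).norm(dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
        delta.data = delta.data + alpha * g / g_norm  # normalized gradient step
        d_norm = delta.data.flatten(1).norm(dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
        delta.data = delta.data * (eps / d_norm).clamp(max=1.0)  # project to L2 ball
        delta.data = (x + delta.data).clamp(0, 1) - x
        delta.grad.zero_()
    return (x + delta).detach()


def worst_case_batch(model, x, y, attacks):
    """Per example, keep the adversarial input with the highest loss."""
    candidates = torch.stack([atk(model, x, y) for atk in attacks])  # (A, B, ...)
    with torch.no_grad():
        losses = torch.stack([
            F.cross_entropy(model(adv), y, reduction="none") for adv in candidates
        ])  # (A, B)
    idx = losses.argmax(dim=0)  # worst attack index per example
    return candidates[idx, torch.arange(x.size(0))]


# Usage: one training step on a toy model and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))

x_adv = worst_case_batch(model, x, y, attacks=[pgd_linf, pgd_l2])
opt.zero_grad()  # clears param grads accumulated while crafting attacks
F.cross_entropy(model(x_adv), y).backward()
opt.step()
```

Taking the per-example maximum over attacks is only one aggregation choice; averaging the losses over all attacks, or sampling a single attack per batch, trades worst-case coverage for cheaper training.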

Papers