Adversarial Regularization
Adversarial regularization is a technique for improving the robustness and generalization of machine learning models by training them to be resilient against adversarial examples, that is, inputs deliberately crafted to mislead the model. Current research applies the technique across diverse domains, including image processing, medical image analysis, and reinforcement learning, often employing generative adversarial networks (GANs) or other adversarial training methods within architectures such as transformers and graph neural networks. The approach improves model performance by mitigating overfitting, reducing sensitivity to noise and data uncertainty, and making predictions more reliable in real-world settings where data may be imperfect or subject to manipulation.
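As a concrete illustration of the idea, the sketch below adds an adversarial-regularization term to an ordinary logistic-regression loss: each input is perturbed by a small FGSM-style step in the direction that increases the loss, and the clean and adversarial losses are mixed with a weight `lam`. The function names, the choice of FGSM, and the hyperparameters are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def adversarial_regularized_loss(w, X, y, epsilon=0.1, lam=0.5):
    """Logistic loss plus an adversarial-regularization term.

    Hypothetical minimal sketch: each row of X is perturbed by an
    FGSM-style step of size epsilon in the sign of the input gradient,
    and the clean and adversarial losses are mixed with weight lam.
    Labels y are in {-1, +1}.
    """
    def logistic_loss(Xb):
        # Numerically stable mean of log(1 + exp(-y * (x . w)))
        m = -y * (Xb @ w)
        return np.mean(np.log1p(np.exp(-np.abs(m))) + np.maximum(m, 0.0))

    # Gradient of the per-example loss w.r.t. the input x is
    # -y * sigmoid(-y * (x . w)) * w; FGSM perturbs by its sign.
    z = X @ w
    sig = 1.0 / (1.0 + np.exp(y * z))
    grad_x = (-y * sig)[:, None] * w[None, :]
    X_adv = X + epsilon * np.sign(grad_x)

    return (1.0 - lam) * logistic_loss(X) + lam * logistic_loss(X_adv)
```

Setting `lam=0` recovers standard training on clean inputs, while `lam=1` trains purely on the perturbed inputs; values in between trade off clean accuracy against robustness, which is the regularization effect described above.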
Papers
Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms
Alexander Bukharin, Yan Li, Yue Yu, Qingru Zhang, Zhehui Chen, Simiao Zuo, Chao Zhang, Songan Zhang, Tuo Zhao
Passive Inference Attacks on Split Learning via Adversarial Regularization
Xiaochen Zhu, Xinjian Luo, Yuncheng Wu, Yangfan Jiang, Xiaokui Xiao, Beng Chin Ooi