Adversarial Samples
Adversarial samples are inputs deliberately crafted to mislead machine learning models, typically by adding small, imperceptible perturbations to otherwise correctly classified data. Current research focuses on building more robust models through techniques such as adversarial training and purification methods based on generative models (e.g., GANs), and on probing the vulnerabilities of diverse architectures, including convolutional neural networks, recurrent neural networks, and large language models. Understanding and mitigating the impact of adversarial samples is crucial for the reliability and security of machine learning systems across applications ranging from cybersecurity to medical diagnosis.
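As a concrete illustration of such a perturbation (not drawn from any of the papers listed below), here is a minimal sketch of the classic fast gradient sign method (FGSM) of Goodfellow et al., assuming a PyTorch classifier with inputs in [0, 1]; the function name `fgsm_perturb`, the toy model, and the `epsilon` budget are illustrative choices, not part of this page.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """FGSM sketch: step in the direction of the sign of the
    loss gradient w.r.t. the input, within an epsilon budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One step of size epsilon along the gradient sign, then
    # clamp back to the valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage on random "images"; a real attack would use a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)       # batch of inputs in [0, 1]
y = torch.randint(0, 10, (4,))     # ground-truth labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())     # perturbation is bounded by epsilon
```

Adversarial training, mentioned above as a defense, amounts to generating such perturbed batches on the fly during training and fitting the model on them alongside (or instead of) the clean inputs.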
Papers
Addressing Key Challenges of Adversarial Attacks and Defenses in the Tabular Domain: A Methodological Framework for Coherence and Consistency
Yael Itzhakev, Amit Giloni, Yuval Elovici, Asaf Shabtai
A Parametric Approach to Adversarial Augmentation for Cross-Domain Iris Presentation Attack Detection
Debasmita Pal, Redwan Sony, Arun Ross
Adversarial Transferability in Deep Denoising Models: Theoretical Insights and Robustness Enhancement via Out-of-Distribution Typical Set Sampling
Jie Ning, Jiebao Sun, Shengzhu Shi, Zhichang Guo, Yao Li, Hongwei Li, Boying Wu
DeMem: Privacy-Enhanced Robust Adversarial Learning via De-Memorization
Xiaoyu Luo, Qiongxiu Li