Adversarial Sample
Adversarial samples are inputs designed to intentionally mislead machine learning models, typically by introducing small, imperceptible perturbations to otherwise correctly classified data. Current research focuses on building more robust models through techniques such as adversarial training and purification methods based on generative models (e.g., GANs), as well as on probing the vulnerabilities of diverse architectures, including convolutional neural networks, recurrent networks, and large language models. Understanding and mitigating the impact of adversarial samples is crucial for ensuring the reliability and security of machine learning systems across diverse applications, from cybersecurity to medical diagnosis.
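To make the perturbation idea concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common way such adversarial samples are crafted: the input is nudged along the sign of the loss gradient so that a small, bounded change pushes the model toward a misclassification. The model, epsilon value, and input shown here are illustrative assumptions, not taken from any particular paper referenced on this page.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a perturbed copy of x that increases the loss on `label`.

    The perturbation is bounded by `epsilon` in the L-infinity norm, which is
    what keeps it visually imperceptible for small epsilon.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid
    # input range (here assumed to be [0, 1] pixel intensities).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy setup for illustration only: an untrained linear classifier on a
    # random 32x32 RGB "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)
    print("max perturbation:", (x_adv - x).abs().max().item())  # <= epsilon
```

Adversarial training, mentioned above as a defense, reuses this same attack loop during training: perturbed inputs are generated on the fly and the model is optimized to classify them correctly.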