Adversarial Sample
Adversarial samples are inputs designed to intentionally mislead machine learning models, typically by introducing small, imperceptible perturbations to otherwise correctly classified data. Current research focuses on building more robust models through techniques such as adversarial training and purification with generative models (e.g., GANs), and on probing the vulnerabilities of various architectures, including convolutional neural networks, recurrent networks, and large language models. Understanding and mitigating the impact of adversarial samples is crucial for ensuring the reliability and security of machine learning systems across diverse applications, from cybersecurity to medical diagnosis.
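The "small perturbation" idea above can be sketched with the classic Fast Gradient Sign Method (FGSM): nudge the input in the direction that most increases the model's loss. The toy logistic-regression weights below are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

# Toy logistic-regression "model" -- weights chosen for illustration only.
w = np.array([2.0, -3.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """FGSM: step of size eps along the sign of the loss gradient,
    which maximally increases the binary cross-entropy loss."""
    p = predict(x)
    # d(loss)/dx for cross-entropy through a sigmoid is (p - y) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.3, -0.2])          # clean input, confidently class 1 (~0.85)
x_adv = fgsm(x, y_true=1.0, eps=0.5)
print(predict(x))                   # high probability, class 1
print(predict(x_adv))               # drops below 0.5 -> prediction flips
```

Adversarial training, mentioned above as a defense, amounts to generating such perturbed inputs during training and including them (with their correct labels) in the loss, so the model learns to classify both clean and perturbed points.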