Adversarial Sample
Adversarial samples are inputs crafted to mislead machine learning models, typically by adding small, often imperceptible perturbations to data the model would otherwise classify correctly. Current research focuses on building more robust models through techniques such as adversarial training, purification methods based on generative models (e.g., GANs), and probing the vulnerabilities of various architectures, including convolutional neural networks, recurrent networks, and large language models. Understanding and mitigating the impact of adversarial samples is crucial for the reliability and security of machine learning systems across diverse applications, from cybersecurity to medical diagnosis.
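To make the idea concrete, here is a minimal sketch of one widely used attack, the fast gradient sign method (FGSM), which perturbs an input in the direction that most increases the model's loss. The toy logistic-regression model, weights, and data below are purely illustrative assumptions, not taken from any particular paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """FGSM for a binary logistic-regression model.

    Moves x by epsilon in the sign of the loss gradient:
    x_adv = x + epsilon * sign(dL/dx), where L is cross-entropy.
    """
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy model: classifies x by the sign of w @ x (illustrative values).
w = np.ones(4)
b = 0.0
x = np.array([0.3, 0.2, 0.4, 0.1])  # clean input, correctly classified as 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
print(sigmoid(w @ x + b) > 0.5)      # clean input: predicted class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial input: prediction flips
```

With a sufficiently large epsilon, the perturbed input crosses the decision boundary and the prediction flips, even though each feature changed by at most epsilon. Adversarial training counters this by including such perturbed inputs, labeled correctly, in the training set.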