Adversarial Image
Adversarial images are subtly altered images designed to deceive deep learning models, primarily image classifiers and, more recently, vision-language models, into making incorrect predictions while appearing normal to humans. Current research focuses on developing more robust and transferable adversarial attacks, exploring various attack methods (e.g., gradient-based, generative, and frequency-domain manipulations) and defense mechanisms (e.g., adversarial training, purification, and anomaly detection). This field is crucial for understanding and mitigating the vulnerabilities of AI systems to malicious manipulation, affecting the security and reliability of applications ranging from autonomous driving to medical image analysis.
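To make the gradient-based family concrete, below is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), one canonical gradient-based attack: the input is nudged in the direction of the sign of the loss gradient with respect to the input. The toy logistic "classifier", its weights, and the epsilon value are illustrative assumptions, not taken from any particular paper; real attacks operate on deep networks and images, not a three-dimensional vector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. x.

    For binary cross-entropy loss with a linear logit z = w.x + b,
    the gradient of the loss w.r.t. x is (sigmoid(z) - y_true) * w.
    """
    z = np.dot(w, x) + b
    grad_x = (sigmoid(z) - y_true) * w  # gradient ascent direction on the loss
    return x + eps * np.sign(grad_x)

# Toy setup (hypothetical values): a "clean" input the model assigns
# high probability to class 1.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.6, 0.3])
y_true = 1.0

p_clean = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, w, b, y_true, eps=0.3)
p_adv = sigmoid(np.dot(w, x_adv) + b)
# The perturbed input lowers the model's confidence in the true class.
```

With a larger epsilon the perturbation can flip the predicted class outright; attacks on images additionally clip the result to the valid pixel range so the change stays visually imperceptible.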