Adversarial Image
Adversarial images are subtly perturbed images designed to deceive deep learning models, primarily image classifiers and, more recently, vision-language models, into making incorrect predictions while appearing unchanged to humans. Current research focuses on developing more robust and transferable adversarial attacks, exploring various attack methods (e.g., gradient-based, generative, and frequency-domain manipulations) and defense mechanisms (e.g., adversarial training, purification, and anomaly detection). This field is crucial for understanding and mitigating the vulnerabilities of AI systems to malicious manipulation, with direct consequences for the security and reliability of applications ranging from autonomous driving to medical image analysis.
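To make the gradient-based attack family concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM): perturb each pixel by a small step ε in the direction of the sign of the loss gradient with respect to the input. To stay self-contained, the "model" below is a hypothetical toy two-class linear classifier rather than a deep network; weights and the flattened 4-pixel "image" are made up for illustration.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logits_of(W, b, x):
    # Linear classifier: logits = W @ x + b (toy stand-in for a deep net).
    return [sum(w * xi for w, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def input_gradient(W, b, x, y):
    # Gradient of cross-entropy loss w.r.t. the input x:
    # dL/dlogits = softmax(logits) - onehot(y), then chain rule dL/dx = W^T (...)
    p = softmax(logits_of(W, b, x))
    p[y] -= 1.0
    return [sum(W[j][i] * p[j] for j in range(len(W))) for i in range(len(x))]

def fgsm(x, grad, eps):
    # FGSM step: x_adv = clip(x + eps * sign(grad), 0, 1),
    # an L-infinity-bounded perturbation that keeps pixels in a valid range.
    def sign(g):
        return (g > 0) - (g < 0)
    return [min(1.0, max(0.0, xi + eps * sign(gi))) for xi, gi in zip(x, grad)]

# Toy 2-class linear "classifier" on a 4-pixel image (illustrative values).
W = [[ 1.0, -0.5,  0.8, -1.2],
     [-0.9,  0.7, -0.6,  1.1]]
b = [0.0, 0.0]
x = [0.5, 0.4, 0.6, 0.5]          # "clean image", pixels in [0, 1]

clean = logits_of(W, b, x)
y = 0 if clean[0] > clean[1] else 1          # model's clean prediction

x_adv = fgsm(x, input_gradient(W, b, x, y), eps=0.3)
adv = logits_of(W, b, x_adv)
y_adv = 0 if adv[0] > adv[1] else 1

print(y, y_adv)                   # prediction flips from class 0 to class 1
```

Even on this toy model, a single gradient-sign step of ε = 0.3 flips the predicted class while every perturbed pixel stays within the valid [0, 1] range; against deep networks the same idea works with much smaller, visually imperceptible ε, and iterated variants (e.g., PGD) are the standard stronger baseline.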