Adversarial Image
Adversarial images are subtly altered images designed to deceive deep learning models, primarily image classifiers and, more recently, vision-language models, into making incorrect predictions while appearing normal to humans. Current research focuses on developing more robust and transferable adversarial attacks, exploring various attack methods (e.g., gradient-based, generative, and frequency-domain manipulations) and defense mechanisms (e.g., adversarial training, purification, and anomaly detection). This field is crucial for understanding and mitigating the vulnerability of AI systems to malicious manipulation, affecting the security and reliability of applications ranging from autonomous driving to medical image analysis.
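To make the gradient-based family of attacks concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest such attacks: it perturbs each pixel by a small amount ε in the direction that increases the classifier's loss. The linear softmax "classifier", random weights, and toy 8×8 image below are illustrative stand-ins, not drawn from any particular paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_linear(W, b, x, y, epsilon):
    """FGSM for a linear softmax classifier: step epsilon in the sign of the
    loss gradient w.r.t. the input, then clip to the valid pixel range [0, 1]."""
    p = softmax(W @ x + b)
    p[y] -= 1.0                  # d(cross-entropy)/d(logits) = softmax - one-hot
    grad_x = W.T @ p             # chain rule through the linear layer
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(10, 64)), np.zeros(10)
x = rng.random(64)               # a flattened 8x8 "image" with pixels in [0, 1]
y = int(np.argmax(W @ x + b))    # attack the currently predicted class
x_adv = fgsm_linear(W, b, x, y, epsilon=0.05)
# The perturbation stays within an L-infinity ball of radius epsilon,
# which is what keeps the altered image visually indistinguishable.
print(np.max(np.abs(x_adv - x)) <= 0.05 + 1e-9)  # True
```

For a deep network the same recipe applies, with the input gradient obtained via backpropagation rather than in closed form; stronger attacks (e.g., PGD) simply iterate this step with projection back into the ε-ball.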