Adversarial Image
Adversarial images are subtly altered images designed to deceive deep learning models, primarily image classifiers and, more recently, vision-language models, into making incorrect predictions while appearing normal to humans. Current research focuses on developing more robust and transferable adversarial attacks, exploring a range of attack methods (e.g., gradient-based, generative, and frequency-domain manipulations) and defense mechanisms (e.g., adversarial training, purification, and anomaly detection). This field is crucial for understanding and mitigating the vulnerability of AI systems to malicious manipulation, affecting the security and reliability of applications ranging from autonomous driving to medical image analysis.
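To make the gradient-based family concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks. The toy linear softmax classifier, its weights, and the epsilon value are illustrative assumptions, not drawn from any particular paper above; real attacks target deep networks via automatic differentiation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, y, W, b, eps):
    """FGSM: x_adv = clip(x + eps * sign(dL/dx)) for cross-entropy loss L."""
    probs = softmax(W @ x + b)          # forward pass of the toy classifier
    grad_logits = probs.copy()
    grad_logits[y] -= 1.0               # gradient of cross-entropy w.r.t. logits
    grad_x = W.T @ grad_logits          # backpropagate to the input pixels
    # One signed step of size eps, keeping pixels in the valid [0, 1] range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Hypothetical setup: random weights standing in for a trained model
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.uniform(size=8)                 # "image" as a flat pixel vector
y = int(np.argmax(softmax(W @ x + b)))  # model's clean prediction
x_adv = fgsm_perturb(x, y, W, b, eps=0.1)
```

The key property, shared by the stronger iterative variants, is that the perturbation is bounded (here, each pixel moves by at most eps), which is why the altered image still looks normal to a human observer.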