Adversarial Image
Adversarial images are subtly altered images designed to deceive deep learning models, primarily image classifiers and, more recently, vision-language models, into making incorrect predictions while appearing normal to humans. Current research focuses on developing more robust and transferable adversarial attacks, exploring various attack methods (e.g., gradient-based, generative, and frequency-domain manipulations) and defense mechanisms (e.g., adversarial training, purification, and anomaly detection). This field is crucial for understanding and mitigating the vulnerability of AI systems to malicious manipulation, which affects the security and reliability of applications ranging from autonomous driving to medical image analysis.
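To make the gradient-based attack family concrete, here is a minimal sketch of a one-step FGSM-style (Fast Gradient Sign Method) perturbation. To stay self-contained it uses a toy linear softmax classifier with an analytic gradient rather than a real deep network; the function name, the linear model, and the epsilon value are illustrative assumptions, not from the papers listed on this page.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_linear(W, b, x, y, epsilon=0.03):
    """One-step FGSM-style attack on a toy linear softmax classifier.

    For cross-entropy loss, the gradient w.r.t. the input x is
    W.T @ (softmax(W @ x + b) - onehot(y)).  The attack takes a fixed
    L-infinity step of size epsilon in the sign of that gradient, then
    clips back to the valid pixel range [0, 1].
    """
    p = softmax(W @ x + b)
    p[y] -= 1.0                      # p - onehot(y)
    grad_x = W.T @ p                 # dLoss/dx for cross-entropy
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
```

The perturbation is imperceptible by construction (each pixel moves by at most epsilon), which is exactly what makes such images "adversarial": the input looks unchanged to a human while the model's loss increases in the steepest admissible direction. Real attacks apply the same idea to deep networks via automatic differentiation, often iterating the step (PGD) or adding momentum for transferability.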