Adversarial Data
Adversarial data, meaning inputs that have been maliciously perturbed to mislead machine learning models, poses a significant threat to the reliability of AI systems. Current research focuses on building robust models through techniques such as adversarial training, which incorporates adversarial examples into the training set, and on detection methods that flag adversarial instances via distributional discrepancies or feature analysis, often using diffusion models or ensemble approaches. This work is crucial for the trustworthiness and security of AI applications across domains such as medical diagnosis, autonomous driving, and natural language processing, where successful attacks can have serious consequences.
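Adversarial training, mentioned above, augments training batches with perturbed copies of their inputs; a classic way to craft such perturbations is the fast gradient sign method (FGSM), which steps each input component by a small epsilon in the sign of the loss gradient. A minimal sketch, assuming a hypothetical two-feature logistic-regression model (the weights, input, and epsilon below are purely illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method (FGSM).

    For cross-entropy loss, the gradient of the loss with respect to
    input component x_i is (p - y) * w_i, so the attack moves each
    component by eps in the sign of that gradient to increase the loss.
    """
    p = predict(x, w, b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy model and a correctly classified input (values are hypothetical).
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0          # score w.x + b = 1.5, so p is about 0.82

x_adv = fgsm_perturb(x, y, w, b, eps=0.8)
p_clean, p_adv = predict(x, w, b), predict(x_adv, w, b)
# perturbed score is 1.5 - 0.8 * (2 + 1) = -0.9, flipping the prediction
```

Adversarial training would feed pairs like `(x_adv, y)` back into the optimizer alongside the clean data, so the model learns to classify both correctly.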