Adversarial Defense
Adversarial defense in machine learning aims to create models robust against adversarial attacks—maliciously crafted inputs designed to cause misclassification. Current research focuses on developing both training-based defenses, such as adversarial training and techniques leveraging optimal transport or energy-based models, and test-time defenses, including input preprocessing and model reprogramming. These efforts are crucial for ensuring the reliability and security of machine learning systems across diverse applications, from image classification and natural language processing to structural health monitoring and malware detection, where vulnerabilities could have significant consequences.
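To make the attack side of this concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks that adversarial training defends against. The toy logistic-regression weights, inputs, and the `fgsm_perturb` helper are illustrative assumptions, not drawn from any specific paper above; the point is only that a small, gradient-aligned perturbation can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: move x a step of size eps in the sign of the input gradient
    of the logistic loss, i.e. the direction that increases the loss."""
    p = sigmoid(x @ w + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # d(logistic loss)/dx for a linear model
    return x + eps * np.sign(grad_x)

# Toy linear classifier and a correctly classified point (made-up values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, -0.2])       # logit = 1.0 -> confidently class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
print(sigmoid(x @ w + b))       # high score for the true class
print(sigmoid(x_adv @ w + b))   # much lower score after the perturbation
```

Adversarial training, the most widely studied training-based defense, simply generates such perturbed examples on the fly during training and minimizes the loss on them instead of (or alongside) the clean inputs.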