Adversarial Defense
Adversarial defense in machine learning aims to make models robust to adversarial attacks: maliciously crafted inputs designed to cause misclassification. Current research spans training-based defenses, such as adversarial training and techniques leveraging optimal transport or energy-based models, and test-time defenses, including input preprocessing and model reprogramming. These efforts are crucial for the reliability and security of machine learning systems across diverse applications, from image classification and natural language processing to structural health monitoring and malware detection, where vulnerabilities can have serious consequences.
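As a concrete illustration of the training-based approach, the sketch below shows one step of FGSM-based adversarial training in PyTorch: an adversarial example is crafted from each batch with a single signed-gradient step, and the model is then fit on the perturbed inputs. This is a minimal sketch, not the method of any particular paper; the function names, the epsilon value, and the assumption that inputs lie in [0, 1] are illustrative choices.

import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    # Craft an FGSM adversarial example: take one step in the direction
    # of the sign of the input gradient of the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp back to the assumed valid input range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.03):
    # One step of adversarial training: generate adversarial examples
    # from the current batch, then update the model on them.
    model.train()
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, stronger inner attacks (e.g., multi-step PGD) are commonly substituted for the single FGSM step, trading training cost for robustness; the outer training loop stays the same.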