Adversarial Defense
Adversarial defense in machine learning aims to create models robust against adversarial attacks—maliciously crafted inputs designed to cause misclassification. Current research focuses on developing both training-based defenses, such as adversarial training and techniques leveraging optimal transport or energy-based models, and test-time defenses, including input preprocessing and model reprogramming. These efforts are crucial for ensuring the reliability and security of machine learning systems across diverse applications, from image classification and natural language processing to structural health monitoring and malware detection, where vulnerabilities could have significant consequences.
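The core idea of adversarial training mentioned above is to generate perturbed inputs against the current model during training and optimize on those instead of (or alongside) the clean data. The sketch below illustrates this with the Fast Gradient Sign Method (FGSM) on a tiny logistic-regression classifier; it is a minimal NumPy illustration under assumed settings, and the names `fgsm_perturb` and `adversarial_train`, the synthetic dataset, and all hyperparameters are illustrative, not taken from any specific paper or library.

```python
import numpy as np

# Minimal sketch of FGSM-based adversarial training on logistic regression.
# All function names and hyperparameters here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w;
    # FGSM steps in the sign direction of that gradient, then clips
    # back to the valid input range [0, 1].
    grad_x = (sigmoid(x @ w + b) - y)[:, None] * w[None, :]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def adversarial_train(x, y, eps=0.1, lr=0.5, epochs=200):
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=x.shape[1]), 0.0
    for _ in range(epochs):
        # Inner step: craft adversarial examples against the current
        # parameters. Outer step: gradient descent on those examples.
        x_adv = fgsm_perturb(x, y, w, b, eps)
        err = sigmoid(x_adv @ w + b) - y
        w -= lr * x_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Tiny separable toy dataset: the label follows the first feature.
x = np.array([[0.1, 0.5], [0.2, 0.4], [0.8, 0.5], [0.9, 0.6]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = adversarial_train(x, y)
preds = (sigmoid(x @ w + b) > 0.5).astype(float)
```

The min-max structure is what distinguishes this from standard training: the inner `fgsm_perturb` call approximates the worst-case perturbation within an epsilon-ball, and the outer update minimizes loss on that worst case, which is why the resulting model tolerates small input perturbations that would flip an undefended classifier.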