Adversarial Defense
Adversarial defense in machine learning aims to build models that remain robust against adversarial attacks: maliciously crafted inputs designed to cause misclassification. Current research focuses on two complementary directions. Training-based defenses, such as adversarial training and techniques leveraging optimal transport or energy-based models, harden the model itself, while test-time defenses, including input preprocessing and model reprogramming, intervene at inference without retraining. These efforts are crucial for ensuring the reliability and security of machine learning systems across diverse applications, from image classification and natural language processing to structural health monitoring and malware detection, where vulnerabilities could have significant consequences.
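To make the training-based direction concrete, below is a minimal sketch of adversarial training with a projected gradient descent (PGD) inner attack, in the spirit of Madry et al. This is an illustrative implementation under assumed settings, not the method of any specific paper surveyed here; the model, optimizer, and the `eps`/`alpha`/`steps` values (common CIFAR-10 choices) are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples with PGD: start from a random point
    in the L-inf eps-ball around x, take signed-gradient ascent steps
    on the loss, and project back into the ball after each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One minibatch of adversarial training: generate PGD examples
    on the fly and minimize the loss on them instead of the clean inputs."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this step would be looped over a data loader; the key design choice is that the inner maximization (the attack) and the outer minimization (the weight update) alternate every minibatch, which is what distinguishes adversarial training from ordinary data augmentation.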