Adversarial Risk
Adversarial risk concerns the vulnerability of machine learning models to malicious inputs crafted to cause misclassification or other undesirable outputs. Current research investigates attack methods such as training-time data poisoning and test-time evasion attacks, including attacks targeting specific model components (e.g., face detectors, language models), and explores defense strategies such as adversarial training, robust model architectures (e.g., Vision Transformers), and novel loss functions. Understanding and mitigating adversarial risk is essential for deploying reliable machine learning systems in security-sensitive applications, motivating ongoing work on more robust models and rigorous evaluation methods.
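A canonical illustration of a test-time evasion attack, and of adversarial training as a defense, is the Fast Gradient Sign Method (FGSM): perturb each input a small amount along the sign of the loss gradient, then train the model on those perturbed inputs. The sketch below is a minimal NumPy version for a toy linear softmax classifier; all function names and hyperparameters are illustrative assumptions, not taken from any of the papers surveyed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def input_grad(W, b, x, y):
    """Gradient of mean cross-entropy w.r.t. the input x for a linear
    softmax classifier with logits = x @ W + b."""
    p = softmax(x @ W + b)
    p[np.arange(len(y)), y] -= 1.0        # dL/dlogits = probs - onehot(y)
    return p @ W.T / len(y)               # chain rule back to the input

def fgsm(W, b, x, y, eps):
    """FGSM evasion attack: move each input eps along the sign of its
    loss gradient, so the perturbation is bounded by eps per coordinate."""
    return x + eps * np.sign(input_grad(W, b, x, y))

def adv_train_step(W, b, x, y, eps=0.1, lr=0.5):
    """One adversarial-training update: craft FGSM examples from the
    current model, then take a gradient step on those perturbed inputs."""
    x_adv = fgsm(W, b, x, y, eps)
    p = softmax(x_adv @ W + b)
    p[np.arange(len(y)), y] -= 1.0
    W = W - lr * x_adv.T @ p / len(y)     # descend the loss in W
    b = b - lr * p.mean(axis=0)           # descend the loss in b
    return W, b

# Toy demo: a linearly separable 2-class problem (illustrative only).
x = rng.normal(size=(64, 4))
y = (x[:, 0] > 0).astype(int)
W = rng.normal(scale=0.1, size=(4, 2))
b = np.zeros(2)
for _ in range(100):
    W, b = adv_train_step(W, b, x, y)
```

The key property of the attack is the hard per-coordinate bound: every adversarial input stays within an L-infinity ball of radius eps around the original, which is why evaluation of defenses is usually reported at a fixed eps.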