Adversarial Capability
Adversarial capability research studies how to craft, and defend against, malicious inputs (adversarial examples) designed to deceive machine learning models. Current efforts focus on developing more effective attack methods, including those that employ reinforcement learning, disentangled feature spaces, and techniques tailored to specific data types such as tabular data and hardware power traces. By identifying vulnerabilities and guiding the development of more resilient models, this research is crucial for improving the robustness and security of machine learning systems across applications ranging from malware detection to safety-critical systems. The ultimate goal is to build models that are not only accurate but also resistant to manipulation.
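To make the idea of a malicious input concrete, the sketch below crafts an adversarial example with the classic fast gradient sign method (FGSM): the input is nudged by a small amount in the direction that most increases the model's loss. The logistic-regression weights, input, and perturbation budget `epsilon` here are all hypothetical stand-ins, not taken from any of the papers above; a minimal sketch assuming a simple differentiable model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Shift x by epsilon in the direction that increases the log-loss.

    For logistic regression, the gradient of the log-loss with respect
    to the input x is (p - y_true) * w, where p is the model's
    predicted probability.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained model and benign input with true label 1.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 1.0])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.3)
print(sigmoid(w @ x + b))      # confidence on the clean input (~0.85)
print(sigmoid(w @ x_adv + b))  # confidence drops on the perturbed input (~0.62)
```

Even this tiny, bounded perturbation noticeably erodes the model's confidence; attacks on deep networks follow the same principle, computing the input gradient by backpropagation instead of in closed form.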