Adaptive Attack
Adaptive attacks, which exploit knowledge of a system's defenses to craft more effective adversarial examples, are a growing concern across machine learning applications. Current research pursues both stronger adaptive attacks and more robust defenses, with defenses often relying on optimization-based methods, ensemble models, and self-supervised learning to improve detection and resilience. This work is crucial for the security and reliability of machine learning systems in high-stakes domains such as healthcare, finance, and autonomous systems, where vulnerability to adaptive attacks can have significant consequences.
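To make the idea concrete, the sketch below shows one common form of adaptive attack: a projected gradient descent (PGD) attacker that differentiates through the defended pipeline (defense preprocessing plus classifier) rather than the undefended model. This is an illustrative sketch in PyTorch, not a reference implementation from any particular paper; the `defense` module, the function name `adaptive_pgd`, and the attack parameters (`eps`, `alpha`, `steps`) are assumptions chosen for the example.

```python
# Illustrative sketch: an adaptive PGD attack that folds a hypothetical
# differentiable defense (e.g., a denoising/preprocessing step) into the
# attacker's loss, so gradients flow through the defense itself.
import torch
import torch.nn.functional as F


def adaptive_pgd(model, defense, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """Run PGD against defense(model(...)) instead of the bare model,
    which is the hallmark of a defense-aware (adaptive) attack.

    model   : any torch.nn.Module classifier
    defense : a differentiable preprocessing defense assumed known to the attacker
    x, y    : clean inputs in [0, 1] and integer class labels
    """
    # random start inside the L_inf eps-ball, standard for PGD
    x_adv = x.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # attack the full defended pipeline: defense -> classifier
        logits = model(defense(x_adv))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        with torch.no_grad():
            # ascend the loss, then project back into the eps-ball around x
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

A non-adaptive attacker would instead compute gradients of `model(x_adv)` alone; evaluating a defense only against that weaker threat model is a common source of overstated robustness claims.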