Robust Backdoor Attack
Robust backdoor attacks aim to implant malicious, yet hard-to-detect, vulnerabilities in machine learning models, primarily by poisoning training data with subtly altered inputs (triggers). Current research focuses on making such attacks increasingly stealthy across a range of targets, including federated learning systems, speech recognition models, and object detectors, often using techniques such as steganography, Bayesian formulations, and diffusion models to generate imperceptible triggers and evade defenses. The significance lies in the potential for widespread compromise of AI systems across diverse applications, underscoring the urgent need for robust defense mechanisms and improved model security.
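To make the poisoning mechanism concrete, below is a minimal sketch of a classic patch-trigger data poisoning step (in the spirit of BadNets-style attacks). It is not taken from any specific paper listed here: the function names (add_trigger, poison_dataset), the patch trigger, and the assumption that images are HxWxC uint8 NumPy arrays are all illustrative choices.

```python
# Minimal sketch of trigger-based training-data poisoning.
# Assumptions (not from the source): images are HxWxC uint8 arrays in [0, 255],
# and the attacker controls a small fraction of the training set.
import numpy as np


def add_trigger(image: np.ndarray, patch_size: int = 3, value: int = 255) -> np.ndarray:
    """Stamp a small bright patch in the bottom-right corner as the trigger."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned


def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, poison_rate: float = 0.05,
                   seed: int = 0):
    """Poison a fraction of the training set: stamp the trigger on selected
    examples and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = add_trigger(poisoned_images[i])
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels, idx
```

A model trained on the returned set typically behaves normally on clean inputs but predicts the attacker's target class whenever the trigger patch appears at test time; the stealthier attacks surveyed above replace this visible patch with imperceptible perturbations.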