Universal Adversarial Attacks
Universal adversarial attacks craft a single, input-agnostic perturbation that deceives machine learning models, particularly deep neural networks, across many inputs and modalities, including images, text, and LiDAR data. Current research focuses both on crafting effective attacks (e.g., via gradient-based optimization, generative models, and optimized noise patterns) and on building robust defenses (e.g., randomized smoothing, adversarial training, and diffusion-based purification), often under black-box settings where model internals remain private. This work is crucial for evaluating and improving the reliability and security of AI systems deployed in high-stakes applications such as autonomous driving, healthcare, and language processing, where model robustness is paramount.
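As a concrete illustration of the gradient-based flavor of these attacks, the sketch below optimizes one shared perturbation against a white-box image classifier. It is a minimal sketch, not any specific paper's method: `model` and `loader` stand in for any PyTorch classifier and DataLoader, and the function name, hyperparameters, and epsilon budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=8 / 255, lr=1e-2, epochs=5, device="cpu"):
    """Optimize one shared perturbation `delta` that raises the model's loss
    on every batch: a gradient-based universal adversarial perturbation."""
    model.eval()
    x0, _ = next(iter(loader))
    # A single perturbation with the shape of one sample; it broadcasts
    # over the batch dimension when added to the inputs.
    delta = torch.zeros(*x0.shape[1:], device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # Ascend the classification loss by minimizing its negative.
            loss = -F.cross_entropy(model(x + delta), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Project back into the L-infinity ball of radius eps.
                delta.clamp_(-eps, eps)
    return delta.detach()
```

Because `delta` is optimized over the whole dataset rather than per input, the same returned tensor can be added to unseen samples to probe robustness; defenses such as adversarial training would, in turn, retrain the model on batches perturbed this way.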