Imperceptible Adversarial Attacks
Imperceptible adversarial attacks subtly alter input data (images, audio, 3D point clouds, or time series) to deceive machine learning models without producing changes a human would notice. Current research focuses on efficient algorithms for crafting these attacks, often built on gradient-based methods, invertible neural networks, or multi-objective optimization, and on evaluating their effectiveness across model architectures including CNNs, RNNs, and graph neural networks. The field matters because such attacks can undermine the reliability of AI systems in critical applications like medical diagnosis, autonomous driving, and security, underscoring the need for robust and resilient models.
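To make the gradient-based approach concrete, here is a minimal sketch of an FGSM-style attack in PyTorch: the perturbation is confined to an L-infinity ball of radius epsilon, which is what keeps it visually imperceptible. The names `model`, `x`, `label`, and the epsilon value are illustrative assumptions, not taken from any specific paper above.

```python
# Minimal FGSM-style sketch: perturb an input within an L-infinity ball
# so the change stays imperceptible while increasing the model's loss.
# `model`, `x`, and `label` are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarial copy of `x` with ||x_adv - x||_inf <= epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp so
    # pixel values stay valid; the step size itself caps the perturbation.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In practice, iterative variants such as PGD repeat this step with a smaller step size and re-project onto the epsilon ball after each update, which typically yields stronger attacks at the same perceptibility budget.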